Artificial intelligence’s meteoric rise could soon bring a surge in legal challenges. According to new research from Gartner, Inc., AI regulatory violations will lead to a 30% increase in legal disputes for technology companies by 2028.
A Gartner survey conducted between May and June 2025, which gathered insights from 360 IT leaders involved in deploying generative AI (GenAI) tools, revealed that over 70% rank regulatory compliance among their top three challenges when scaling GenAI productivity assistants across the enterprise.
Notably, only 23% of respondents said they are “very confident” in their organization’s ability to manage security and governance as they integrate GenAI into enterprise applications.
Geopolitical Pressures Intensify AI Deployment Challenges
The research also highlighted growing geopolitical risks. Among non-U.S. IT leaders surveyed, 57% reported that the geopolitical climate moderately or significantly affects their GenAI strategy and deployment, including 19% who cited a significant impact.
Despite these pressures, nearly 60% of respondents said they were either unable or unwilling to adopt non-U.S. GenAI alternatives, underscoring continued reliance on U.S.-developed AI platforms.
AI Sovereignty Becomes a Strategic Imperative
In a separate Gartner poll conducted during a September 2025 webinar, 40% of 489 respondents described their organization’s sentiment toward AI sovereignty—the ability of nations to control AI development and governance within their jurisdictions—as positive, viewing it as an opportunity. Another 36% reported a neutral, “wait-and-see” stance.
Importantly, two-thirds (66%) said they are already proactive or engaged in responding to sovereign AI strategies, and over half (52%) indicated that their organizations are making strategic or operating model changes in response.
What IT Leaders Should Do Now
With GenAI assistants becoming ubiquitous amid shifting geopolitical and legal frameworks, Gartner recommends that IT and risk leaders immediately strengthen AI output moderation through the following actions:
- Engineer self-correction: Train models to recognize and decline inappropriate prompts, responding instead with messages like "beyond the scope" (a minimal sketch follows this list).
- Implement rigorous use-case reviews: Evaluate AI use cases from legal, ethical, safety, and user-impact perspectives, with control testing aligned to the organization's risk tolerance.
- Expand model testing and sandboxing: Create cross-functional fusion teams, including data scientists, legal counsel, and decision engineers, to pre-test and validate model outputs against risk benchmarks (see the second sketch below).
- Apply robust content moderation: Integrate safeguards like "report abuse" buttons and AI warning labels to mitigate misuse.
As AI governance grows increasingly fragmented across jurisdictions, organizations that fail to embed compliance and risk frameworks into GenAI rollouts may face not just regulatory scrutiny but escalating legal exposure.