Gartner Predicts Less Than 25% of Government Organizations Will Have Generative AI-Enabled Citizen-Facing Services by 2027


Less than 25% of government organisations will have generative AI (GenAI)-enabled citizen-facing services by 2027, according to Gartner, Inc. Fear of public failure and a lack of community trust in government use of the technology will slow adoption for external use with citizens.

Like organisations across all industries over the past 15 months, governments have been exploring the opportunities and risks associated with the emergence of GenAI. Gartner’s annual global survey of over 2,400 CIOs and technology executives found that 25% of governments have deployed, or plan to deploy, GenAI in the next 12 months. A further 25% plan to deploy it in the next 24 months. Early focus has been on establishing an initial governance framework to support experimentation and narrow adoption.

“While governments have been benefiting from the use of more mature AI technologies for years, risk and uncertainty are slowing GenAI’s adoption at scale, especially the lack of traditional controls to mitigate drift and hallucinations,” said Dean Lacheca, VP Analyst at Gartner. “In addition, a lack of empathy in service delivery and a failure to meet community expectations will undermine public acceptance of GenAI’s use in citizen-facing services.”

Align Adoption with Risk Appetite

To address this, Gartner recommends governments continue to actively deploy GenAI solutions that will improve internal aspects of citizen services.

“GenAI adoption by government organisations should move at a pace that is aligned to their risk appetite, to ensure that early missteps in the use of AI don’t undermine community acceptance of the technology in government service delivery,” said Lacheca. “This will mean back-office opportunities will progress more rapidly than uses of the technology to serve citizens directly.”

According to Gartner, government organisations can accelerate GenAI adoption by focusing on use cases that predominantly affect internal resources, avoiding the perceived risks associated with citizen-facing services while building knowledge and skills with the technology. They should also build trust and mitigate associated risks by establishing transparent AI governance and assurance frameworks for both internally developed and procured AI capabilities.

“These frameworks need to specifically address risks associated with citizen-facing service delivery use cases, such as inaccurate or misleading results, data privacy and secure conversations,” said Lacheca. “This can be done by ensuring governance processes specifically address each risk both before and after initial implementation.”
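The point about covering each risk both before and after implementation can be pictured as a simple assurance register. The sketch below is a hypothetical Python illustration, not a Gartner artifact: the risk names come from the quote above, while the RiskEntry structure and the example control descriptions are assumptions made purely for illustration.

```python
# Hypothetical illustration only: a minimal risk-and-controls register showing how a
# governance process might record, for each citizen-facing GenAI risk named above,
# the controls applied both before and after initial implementation.
from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    risk: str  # e.g. "inaccurate or misleading results"
    pre_implementation_controls: list[str] = field(default_factory=list)
    post_implementation_controls: list[str] = field(default_factory=list)

    def is_covered(self) -> bool:
        # A risk only counts as addressed when controls exist for both phases.
        return bool(self.pre_implementation_controls) and bool(self.post_implementation_controls)

register = [
    RiskEntry(
        "inaccurate or misleading results",
        pre_implementation_controls=["red-team testing of prompts", "accuracy thresholds signed off"],
        post_implementation_controls=["ongoing output sampling and review"],
    ),
    RiskEntry(
        "data privacy",
        pre_implementation_controls=["privacy impact assessment"],
        post_implementation_controls=[],  # gap: no post-go-live control recorded yet
    ),
]

# A simple assurance gate: flag any risk lacking controls in either phase.
uncovered = [entry.risk for entry in register if not entry.is_covered()]
if uncovered:
    print("Risks not yet fully addressed:", ", ".join(uncovered))
```

In this sketch, a service would not pass the assurance gate until every listed risk has at least one control recorded for both the pre- and post-implementation phases.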

In addition, government organisations should apply an empathy-focused practice of human-centred design when designing citizen- and workforce-facing AI solutions. This ensures the solutions remain in line with community expectations about how and when the technology should be used in citizen-facing services.
