Finding: 77% of organisations with GCP Vertex AI Workbench set up have at least one notebook configured with the overprivileged default Compute Engine service account
Finding: 70% of cloud AI workloads contain at least one unremediated critical vulnerability
From a cloud perspective, the exciting news about AI is that the cloud is AI's natural home. The cloud makes it far easier to manage the enormous compute and data loads required to train and test AI models and to run generative AI workloads. On the other hand, if compromised, AI workloads can have a profound impact not only on an organisation's cloud environment but on its business as a whole.
In our Cloud AI Risk Report 2025, we surfaced two concerning AI risk factors in developer tools and services that bear repeating:
Overprivileged default service accounts. 77% of the organisations that had GCP's Vertex AI Workbench set up had at least one notebook instance configured with the overprivileged default Compute Engine service account. Misconfigured defaults in AI service building blocks create high-risk exposure paths that can lead to privilege escalation, lateral movement and compliance violations. Such risks are amplified as AI becomes an increasingly attractive target for threat actors. In this case, the default service account had permissions beyond those needed for the AI service and notebook, increasing the risk impact across the broader environment and other services.
Workloads with critical vulnerabilities. Our research found that 70% of AI cloud workloads across Azure, AWS and GCP had at least one unremediated critical vulnerability, compared with "only" 50% in non-AI workloads. A critical vulnerability in an AI environment can serve as a launchpad for unauthorised access to sensitive training data, model manipulation, data poisoning, or even lateral movement within the broader cloud infrastructure. When part of a toxic risk combination, such as overly permissive access and/or public exposure, critical CVEs can amplify attacker success and persistence. Given that the goal of many cloud breaches is to obtain sensitive data, and the sheer volume of AI data means a significant portion of it is likely to be sensitive, AI workloads need special security consideration.
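The default service account risk above can be checked mechanically: GCP's default Compute Engine service account always has an email of the form `PROJECT_NUMBER-compute@developer.gserviceaccount.com`. A minimal sketch, assuming a hypothetical notebook inventory (the `flag_default_sa_notebooks` function and the inventory format are illustrative, not a real API):

```python
import re

# The default Compute Engine service account follows a fixed email pattern.
DEFAULT_COMPUTE_SA = re.compile(r"^\d+-compute@developer\.gserviceaccount\.com$")

def flag_default_sa_notebooks(notebooks):
    """Return notebook records (hypothetical inventory dicts) that run as
    the overprivileged default Compute Engine service account."""
    return [nb for nb in notebooks
            if DEFAULT_COMPUTE_SA.match(nb.get("service_account", ""))]

inventory = [
    {"name": "nb-train",
     "service_account": "123456789012-compute@developer.gserviceaccount.com"},
    {"name": "nb-eval",
     "service_account": "vertex-minimal@my-project.iam.gserviceaccount.com"},
]
flagged = flag_default_sa_notebooks(inventory)
print([nb["name"] for nb in flagged])  # ['nb-train']
```

The remediation is to create notebooks with a dedicated, least-privilege service account rather than the default.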
Organisations using AI developer tools and services would do well to understand and mitigate cloud-based AI risks as early as possible in their development lifecycle. Security must be implemented in lockstep with an organisation's AI initiatives. The good news is that the best practices for securing cloud environments apply to securing AI environments.
Mitigation strategies
It takes a lot to shore up a cloud environment against a determined and highly motivated attacker. We suggest the following mitigating actions for the security threats identified in this report.
Monitor and minimise public exposure. Not everyone managing cloud assets is familiar with secure storage practices, increasing the risk of unintended exposure. Continuously monitor for public access, including by third parties (often a weak link in the cloud security chain), and reduce sensitive data exposure by automating detection of misconfigured storage services, enforcing least privilege and assessing posture on an ongoing basis. Use exposure management tools to map complex asset, identity and risk relationships across hybrid environments, to spot and prioritise cross-cloud attack paths.
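Automated detection of public exposure can be as simple as scanning storage policies for public principals; in GCP these are `allUsers` and `allAuthenticatedUsers`. A sketch over a hypothetical bucket-policy export (the inventory format is an assumption for illustration):

```python
def find_public_buckets(buckets):
    """Flag storage buckets whose IAM bindings grant access to GCP's
    public principals. Bucket dicts are a hypothetical inventory format."""
    PUBLIC_PRINCIPALS = {"allUsers", "allAuthenticatedUsers"}
    return [b["name"] for b in buckets
            if PUBLIC_PRINCIPALS & set(b.get("members", []))]

buckets = [
    {"name": "training-data", "members": ["allUsers"]},
    {"name": "model-registry", "members": ["user:mlops@example.com"]},
]
print(find_public_buckets(buckets))  # ['training-data']
```

A continuous monitoring pipeline would run a check like this against fresh policy exports and alert on any newly public asset.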
Safeguard secrets through continuous visibility into where sensitive data resides. Make secrets management one of the core pillars of your data governance strategy. The major CSPs offer mature, native secrets management tools that integrate easily with their identity and access management (IAM) frameworks: use them! Leveraging these tools is not just a best practice, it’s essential to enforcing least privilege, reducing sprawl and improving auditability.
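Alongside native secrets managers, scanning code and configuration for hardcoded credentials helps curb secrets sprawl. A minimal sketch using two well-known token shapes (a real scanner would use a maintained ruleset; these patterns are illustrative):

```python
import re

# Two widely recognised secret shapes: AWS access key IDs and PEM private keys.
PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key_block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_for_secrets(text):
    """Return the names of secret patterns found in a text blob."""
    return sorted(name for name, pat in PATTERNS.items() if pat.search(text))

print(scan_for_secrets('aws_key = "AKIAABCDEFGHIJKLMNOP"'))  # ['aws_access_key_id']
```

Anything such a scan catches should be rotated and moved into the CSP's secrets manager, with the application reading it at runtime via IAM-scoped access.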
Prioritise vulnerabilities for remediation by combining context with likelihood of exploitation. Correlate identity, vulnerability and network configuration data across your entire cloud stack to uncover toxic cloud trilogies — risky combinations that expose sensitive data and cloud infrastructure. Use vulnerability intelligence to assess the potential risk impact and understand how specific exposures could affect your environment and business.
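Context-based prioritisation can be sketched as a ranking that puts toxic combinations first, then sorts by severity (CVSS) and exploit likelihood (here EPSS, FIRST's Exploit Prediction Scoring System, is used as the likelihood signal). The workload fields and function names are hypothetical:

```python
def toxic_combination(workload):
    """A workload forms a 'toxic trilogy' when it is publicly exposed,
    critically vulnerable and highly privileged (fields are illustrative)."""
    return (workload["public"]
            and workload["max_cvss"] >= 9.0
            and workload["privileged"])

def prioritise(workloads):
    # Toxic combinations first, then by severity, then exploit likelihood.
    return sorted(workloads,
                  key=lambda w: (toxic_combination(w), w["max_cvss"], w["epss"]),
                  reverse=True)

workloads = [
    {"name": "web",   "public": True,  "privileged": True,  "max_cvss": 9.8, "epss": 0.9},
    {"name": "batch", "public": False, "privileged": True,  "max_cvss": 9.8, "epss": 0.9},
    {"name": "dev",   "public": True,  "privileged": False, "max_cvss": 5.0, "epss": 0.1},
]
print([w["name"] for w in prioritise(workloads)])  # ['web', 'batch', 'dev']
```

Note that `web` and `batch` share the same CVSS and EPSS scores; only the exposure context separates them, which is the point of combining signals.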
Secure your identities to secure your cloud. Educate your IAM and security teams on the critical role of entitlements management in reducing excessive permissions. Build on your adoption of an identity provider (IdP), which CSPs have made easier to use, to take identity security one step further by implementing just-in-time (JIT) access to eliminate standing permissions and enforce time-bound access. Seek out solutions that offer JIT for IdP groups and deliver it via your go-to collaboration tools.
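The core of JIT access is that every grant carries an expiry, so no permission stands indefinitely. A minimal sketch of the idea, not a real IdP integration (the class and its fields are illustrative):

```python
from datetime import datetime, timedelta, timezone

class JitGrant:
    """A time-bound access grant: access is checked against an expiry
    timestamp on every use, so permissions never stand indefinitely."""

    def __init__(self, principal, role, minutes=60):
        self.principal = principal
        self.role = role
        self.expires_at = datetime.now(timezone.utc) + timedelta(minutes=minutes)

    def is_active(self, now=None):
        """True only while the grant's time window is still open."""
        now = now or datetime.now(timezone.utc)
        return now < self.expires_at

grant = JitGrant("user:dev@example.com", "roles/viewer", minutes=30)
print(grant.is_active())  # True (just created)
```

In a production setup the grant and revocation would be enacted through the IdP's group membership, with the request and approval flowing through a collaboration tool.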
Secure your sensitive data in this age of AI. Inventory, classify and track where your sensitive data resides across the cloud, including any AI or developer services that handle it. Know the sensitivity level and who has access and when, so you have the context needed to apply the necessary controls to protect the data, and to understand and prioritise related risk.
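Classification at scale starts with pattern-based labelling of records. A deliberately simple sketch (production classifiers use far richer rules, validation and ML-assisted detection; these two patterns are illustrative only):

```python
import re

CLASSIFIERS = {
    # Illustrative patterns only: a loose email matcher and a 16-digit
    # card-number shape with optional spaces or dashes.
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number_like": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
}

def classify(record):
    """Return the sorted sensitivity labels that match a text record."""
    return sorted(label for label, pat in CLASSIFIERS.items() if pat.search(record))

print(classify("contact: alice@example.com"))  # ['email']
```

The resulting labels feed the access and exposure controls described above: a record tagged as sensitive warrants tighter IAM scoping and exclusion from publicly reachable storage.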
Conclusion
With today’s cloud environments offering fertile ground for attackers, automating risk management across cloud infrastructure, workloads, identities, storage, data and AI resources is essential. A mature cloud-native application protection platform (CNAPP) integrates with cloud-native tools, IdPs and collaboration platforms to reveal and prioritise risk — from secrets exposure and data sensitivity to CVEs and access misconfigurations — empowering security teams to stay focused, effective and ahead of evolving threats.