Tenable released research detailing the successful jailbreak of Microsoft Copilot Studio. The findings underscore how the democratisation of AI creates severe, yet overlooked, enterprise risks.
Organisations are rapidly adopting “no-code” platforms to enable employees to build their own AI agents. The premise is harmless, efficiency without needing developers. While well-intentioned, automation without strict governance opens the door to catastrophic failure.
To demonstrate how easily AI agents can be manipulated, Tenable Research created an AI travel agent in Microsoft Copilot Studio to manage customer travel reservations, including creating new reservations and modifying existing ones, all without human intervention. The AI travel agent was provided with demo data that included the names, contact information, and credit card details of demo customers and was given strict instructions to verify the customer’s identity before sharing information or modifying bookings.
Using a technique called prompt injection, Tenable Research successfully hijacked the AI agent’s workflow to book a free vacation and extracted sensitive credit card information.
The findings of this research could have significant business implications, including:
- Data Breaches and Regulatory Exposure: Tenable Research coerced the agent into bypassing identity verification and leaking payment card information (PCI) of other customers. The agent, designed to handle sensitive data, was easily manipulated into exposing full customer records.
- Revenue Loss and Fraud: Because the agent had broad “edit” permissions intended for updating travel dates, it could also be manipulated into changing critical financial fields. Tenable Research successfully instructed the agent to change a trip’s price to $0, effectively granting free services without authorisation.
“AI agent builders, like Copilot Studio, democratise the ability to build powerful tools, but they also democratise the ability to execute financial fraud, thereby creating significant security risks without even knowing it,” said Keren Katz, Senior Group Manager of AI Security Product and Research at Tenable. “That power can easily turn into a real, tangible security risk.”
AI Governance and Enforcement are Mission Critical for Safe and Secure AI Usage
A key takeaway is that AI agents often possess excessive permissions that are not immediately visible to the non-developers building them. To mitigate this, business leaders must implement robust governance and enforce strict security protocols before deploying these tools.
To avoid data leakage, Tenable recommends:
- Preemptive Visibility: Map exactly which systems and data stores an agent can interact with before deployment.
- Least Privilege Access: Minimise write and update capabilities to only what is absolutely necessary for the agent’s core use case.
- Active Monitoring: Track agent actions for signs of data leakage or deviations from intended business logic.









