Infopercept warns of rising AI-driven cyber threats in 2026


As organisations accelerate the adoption of artificial intelligence, cybercriminals are evolving at an equally rapid pace. In its newly released 2026 Threat Predictions Report, Infopercept outlines how AI will fundamentally reshape the global cyber risk landscape — not only as a defensive tool, but also as a powerful weapon in the hands of adversaries.

Titled “Attacks on AI & Attacks Using AI,” the report highlights a critical shift: artificial intelligence has now become both a primary target of attacks and a force multiplier for cybercriminals. Traditional skill barriers that once separated sophisticated attackers from novices are disappearing, creating a more volatile and unpredictable threat environment.

“Never in the history of cybersecurity have attackers and defenders shared equal access to the same source of power,” said Satyakam Acharya, Director of Exposure Management at Infopercept. “GenAI has erased traditional skill gaps. Attacks that once required deep technical expertise can now be executed by almost anyone. This will accelerate the speed, scale, and impact of cyberattacks in 2026.”

When AI Itself Becomes the Target

The report outlines several attack vectors aimed directly at AI systems, models, and pipelines. These include data poisoning attacks, in which malicious data is injected into training sets; tampering with the Model Context Protocol (MCP) to misguide AI systems; and bypass techniques in multi-LLM environments similar to firewall evasion in traditional networks.

Another major area of concern is the increasing reliance on autonomous AI agents in Security Operations Centers (SOCs). While these systems are designed to accelerate response times, attackers may attempt to manipulate them to disable monitoring tools or erase traces of intrusion. Identity and access management is also facing new challenges, with AI-based identity agents creating opportunities for token forgery and privilege escalation.

In addition, the growing use of on-premises and air-gapped AI systems — often thought to be more secure — may be undermined by the data transfer bridges used for model updates. Infopercept also warns of the rise of shadow AI, where unsanctioned tools operate outside governance frameworks, quietly creating backdoors into enterprise environments.

AI as a Force Multiplier for Cybercrime

The second half of the report focuses on how attackers will use AI as an offensive tool. From deepfake-driven fraud and voice cloning to AI-powered vulnerability discovery, malicious actors will be able to identify and exploit weaknesses in unprecedented timeframes.

Advanced threats will become more common, including polymorphic malware, which continuously mutates to evade detection, and cognitive overload attacks, which flood SOC teams with fake but convincing alerts. In some cases, attackers may even attempt dual-layer decision hijacking — manipulating both human operators and AI systems simultaneously.

According to Infopercept, these developments will put intense pressure on organisations to rethink their security strategies, moving toward continuous exposure management, AI model protection, and stricter governance for internal AI usage.

A Call for Proactive AI Security

The report is a product of Infopercept’s Threat Research Team, which brings together expertise in red-teaming, AI security, and threat intelligence. Leveraging insights from its Invinsense platform, the team has created one of the most forward-looking assessments of how adversarial behaviour will evolve in an AI-first world.

For CISOs and IT leaders, the message is clear: securing traditional infrastructure is no longer enough. Enterprises must now secure their AI ecosystems with the same rigour as networks, endpoints, and identities — or risk becoming the next victim of AI-native cyberattacks.
