In 2025, cyber adversaries infiltrated more than 90 organizations by injecting malicious prompts into legitimate AI tools, which were then used to steal credentials and cryptocurrency. Notably, those compromised tools could only read data; they could not rewrite firewall rules. The latest autonomous SOC agents, however, can rewrite infrastructure, a significant escalation in what a compromised tool could do.
Although exploitation at this level has not yet been widely observed in production, the conditions for it are developing quickly. A compromised SOC agent can manipulate firewall rules, modify IAM policies, and quarantine endpoints using its own privileged credentials, all through approved API calls that traditional security monitoring may log as authorized activity. That combination of autonomy and privilege is what makes these agents such a serious risk to organizations.
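One common defensive pattern is to interpose an explicit policy gate between the agent and its privileged APIs, so that an out-of-policy action fails even when the agent's credentials would technically permit it. The sketch below is a minimal, hypothetical illustration of that idea; all names and the allowlist contents are assumptions, not any vendor's actual API.

```python
# Minimal sketch (all names hypothetical): gate an autonomous agent's
# privileged actions through an explicit allowlist before execution,
# so an out-of-policy call is blocked even if the agent's own
# credentials would allow it.

ALLOWED_ACTIONS = {
    ("firewall", "read_rules"),
    ("endpoint", "quarantine"),
}


class PolicyViolation(Exception):
    """Raised when an agent requests an action outside its approved policy."""


def gate(resource: str, action: str) -> str:
    """Execute an agent action only if (resource, action) is allowlisted."""
    if (resource, action) not in ALLOWED_ACTIONS:
        raise PolicyViolation(f"blocked: {resource}.{action}")
    return f"executed: {resource}.{action}"


print(gate("endpoint", "quarantine"))   # approved action passes
try:
    gate("firewall", "rewrite_rules")   # privileged write is blocked
except PolicyViolation as exc:
    print(exc)
```

The design point is that the allowlist lives outside the agent's reasoning loop, so a hijacked goal or injected prompt cannot talk the gate into approving an action it was never granted.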
Major cybersecurity vendors, including Cisco and Ivanti, have introduced technologies aimed at these threats. Cisco's AgenticOps for Security offers autonomous firewall remediation and PCI-DSS compliance features, while Ivanti's Continuous Compliance and Neurons AI self-service agent focus on policy enforcement and data-context validation at the platform level.
State-sponsored use of AI in offensive operations grew 89% over the previous year, a concerning trend in cyber warfare. The attack surface is also expanding as malicious actors exploit vulnerabilities in AI workflows by impersonating trusted services. The U.K. National Cyber Security Centre has warned that prompt injection attacks against AI applications may be difficult to fully mitigate.
Governance is central to mitigating these risks. The OWASP Agentic Top 10 catalogs categories of attacks against autonomous AI systems, including Agent Goal Hijacking, Tool Misuse, and Identity and Privilege Abuse. Organizations must pair any agent deployment with strict governance controls that prevent unauthorized access and manipulation.
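A least-privilege control is one concrete answer to the Tool Misuse and Identity and Privilege Abuse categories: each tool an agent can invoke is bound to the minimal scope it needs, and a session holding only read scopes cannot trigger write actions through an unrelated tool. The sketch below is purely illustrative; the registry, tool names, and scope strings are all assumptions.

```python
# Hypothetical sketch of least-privilege tool invocation for an agent:
# each tool declares the one scope it requires, and invocation is denied
# unless the current session was explicitly granted that scope.

from dataclasses import dataclass


@dataclass(frozen=True)
class Tool:
    name: str
    required_scope: str


# Assumed tool registry; real deployments would define many more entries.
REGISTRY = {
    "read_logs": Tool("read_logs", "logs:read"),
    "quarantine_host": Tool("quarantine_host", "endpoint:write"),
}


def invoke(tool_name: str, granted_scopes: set[str]) -> str:
    """Run a registered tool only if the session holds its required scope."""
    tool = REGISTRY[tool_name]
    if tool.required_scope not in granted_scopes:
        return f"denied: {tool_name} needs {tool.required_scope}"
    return f"ok: {tool_name}"


# An agent session holding only read scopes cannot reach write actions.
session_scopes = {"logs:read"}
print(invoke("read_logs", session_scopes))
print(invoke("quarantine_host", session_scopes))
```

Because the scope check happens per tool call rather than per agent, a hijacked goal cannot widen the blast radius beyond what the session was granted at the start.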
As the threat landscape evolves, organizations should treat governance and security as prerequisites for deploying autonomous agents, not afterthoughts. Continuous compliance and robust policy enforcement, backed by regular audits of the governance controls themselves, give defenders a proactive footing against these exploits.
