In a hospital exam room, a physician looks on as a medical transcription agent updates electronic health records, suggests prescription options, and surfaces patient history in real time. Meanwhile, on a manufacturing line, a computer vision agent conducts quality control at speeds beyond human capability. Despite their impressive capabilities, both agentic AI technologies face a fundamental challenge that is hindering widespread enterprise adoption: identity governance.
According to Cisco President Jeetu Patel, a staggering 85% of enterprises are currently running agent pilots, but only 5% have successfully transitioned to full-scale production. The primary obstacle is trust: most organizations are unable to effectively manage the non-human identities these AI agents generate.
IANS Research has found that many businesses lack the role-based access control mechanisms needed to manage human identities, let alone the added complexity introduced by AI agents. The 2026 IBM X-Force Threat Intelligence Index has likewise highlighted a rise in attacks exploiting vulnerabilities in AI systems left exposed by inadequate security measures.
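The role-based access control gap can be illustrated with a minimal sketch. The roles, permission strings, and agent names below are hypothetical, not drawn from any vendor's product: the point is simply that an agent's identity must resolve to an explicit set of permissions, with everything else denied by default.

```python
from dataclasses import dataclass

# Hypothetical role-to-permission mapping; role and permission names
# are illustrative only.
ROLE_PERMISSIONS = {
    "transcription-agent": {"ehr:read", "ehr:update"},
    "qc-vision-agent": {"line:read"},
}

@dataclass(frozen=True)
class AgentIdentity:
    agent_id: str
    role: str

def is_authorized(identity: AgentIdentity, permission: str) -> bool:
    """Grant access only if the agent's role explicitly lists the
    requested permission; unknown roles get an empty set (deny)."""
    return permission in ROLE_PERMISSIONS.get(identity.role, set())
```

Under this model, a transcription agent could read and update records but would be denied anything outside its role, such as issuing prescriptions, which keeps a human in that loop.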
Michael Dickman, SVP and GM of Cisco’s Campus Networking business, emphasizes the importance of establishing a trust framework for agentic AI. He stresses the value of actual system-to-system communications data provided by the network, rather than inferred activity, in enabling organizations to enforce policies at machine speed.
Dickman argues that trust should not be an afterthought in AI deployment but a fundamental requirement from the outset. He identifies four key conditions for building trust in agentic AI: secure delegation, cultural readiness, token economics, and human judgment.
One of the critical aspects highlighted by Dickman is the need for cross-domain visibility, which is often lacking in enterprises due to siloed observability tools. By unifying network, security, and application telemetry into a shared data fabric, organizations can gain a comprehensive view of their AI systems’ activities.
To address the trust gap in agentic AI, Dickman recommends a strategic approach that includes implementing microsegmentation, enhancing governance-to-enforcement pipelines, and ensuring cultural and workflow readiness. By prioritizing these initiatives, organizations can build a solid foundation of trust for their AI deployments.
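Microsegmentation, the first of those initiatives, amounts to a default-deny posture between workload segments. A minimal sketch, assuming a hypothetical `segment/workload` tagging scheme and invented segment names:

```python
# Default-deny segment policy: a flow is allowed only if an explicit
# rule permits the (source segment, destination segment, port) tuple.
# Segment names and ports here are illustrative assumptions.
ALLOW_RULES = {
    ("agents", "ehr-api", 443),
    ("agents", "vector-db", 8080),
}

def segment_of(workload: str) -> str:
    """Extract the segment from a hypothetical 'segment/name' tag."""
    return workload.split("/", 1)[0]

def is_flow_allowed(src: str, dst: str, port: int) -> bool:
    return (segment_of(src), segment_of(dst), port) in ALLOW_RULES
```

The governance-to-enforcement pipeline Dickman describes would then reduce to keeping a rule set like `ALLOW_RULES` generated from policy and pushed to enforcement points, so a compromised agent cannot reach segments its policy never mentioned.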
In conclusion, the key to successful adoption of agentic AI lies in establishing a robust trust architecture that encompasses identity governance, cross-domain visibility, and policy enforcement. Organizations that focus on these critical areas will be able to deploy AI agents with confidence and accelerate their digital transformation journey.
