OpenClaw, an open-source AI assistant previously known as Clawdbot and Moltbot, has gained significant popularity, amassing more than 180,000 GitHub stars and drawing 2 million visitors in a single week, according to creator Peter Steinberger. The project has been rebranded twice in recent weeks because of trademark disputes.
However, security concerns have arisen: researchers discovered more than 1,800 exposed OpenClaw instances leaking sensitive information such as API keys, chat histories, and account credentials.
The rise of grassroots agentic AI presents a major challenge for enterprise security teams, because traditional security tools struggle to detect and contain threats posed by autonomous AI agents. These agents operate within authorized permissions, pull context from sources an attacker can influence, and execute actions autonomously, so their activity rarely registers with typical security measures.
Carter Rees, VP of Artificial Intelligence at Reputation, highlighted the semantic nature of AI runtime attacks, emphasizing the need for a new approach to security. Simon Willison, a software developer and AI researcher, warned about the “lethal trifecta” for AI agents: access to private data, exposure to untrusted content, and the ability to communicate externally. When all three are present, an attacker who controls the untrusted content can steer the agent into exfiltrating the private data.
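To make the trifecta concrete, the sketch below shows one way a deployment might gate an agent's tool calls: if the agent has both private-data access and untrusted content in its context, outbound communication is refused. The `AgentContext` structure and the tool names are hypothetical and not part of OpenClaw; this is a minimal illustration of the principle, not a vetted control.

```python
from dataclasses import dataclass, field

# Hypothetical illustration of the "lethal trifecta":
# private data + untrusted content + external communication.
# None of these names come from OpenClaw itself.

EXTERNAL_TOOLS = {"send_email", "http_post", "post_webhook"}

@dataclass
class AgentContext:
    has_private_data: bool = False        # e.g. files, credentials, or chat history loaded
    has_untrusted_content: bool = False   # e.g. a fetched web page or inbound message
    requested_tools: list[str] = field(default_factory=list)

def allow_tool_call(ctx: AgentContext, tool: str) -> bool:
    """Refuse external communication when the other two trifecta legs are present."""
    if tool in EXTERNAL_TOOLS and ctx.has_private_data and ctx.has_untrusted_content:
        return False  # all three legs present: block and require human review
    return True

if __name__ == "__main__":
    ctx = AgentContext(has_private_data=True, has_untrusted_content=True)
    print(allow_tool_call(ctx, "http_post"))   # False: potential exfiltration path
    print(allow_tool_call(ctx, "summarize"))   # True: no external communication involved
```

The point of the guard is that none of the three conditions is dangerous on its own; only the combination opens an exfiltration path, which is why Willison frames it as a trifecta.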
IBM Research scientists Kaoutar El Maghraoui and Marina Danilevsky analyzed OpenClaw and found that it challenges the assumption that autonomous AI agents must be vertically integrated. The tool demonstrates that community-driven open-source platforms can be powerful, and that very power poses significant security risks for organizations.
Security researcher Jamieson O’Reilly identified exposed OpenClaw servers using Shodan, uncovering leaked API keys, chat histories, and other sensitive data. Cisco’s AI Threat & Security Research team labeled OpenClaw a “security nightmare,” pointing to the combination of the tool’s broad capabilities and its vulnerabilities, and highlighting the need for enhanced security measures.
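O’Reilly has not published his exact queries, but the general workflow is straightforward with Shodan's Python client. The search string below is a placeholder, not the researcher's actual fingerprint; it only illustrates how exposed instances of a self-hosted tool are typically enumerated.

```python
import shodan

API_KEY = "YOUR_SHODAN_API_KEY"        # requires a Shodan account
QUERY = 'http.title:"OpenClaw"'        # placeholder fingerprint, not the actual query used

api = shodan.Shodan(API_KEY)
results = api.search(QUERY)

print(f"Exposed hosts matching query: {results['total']}")
for match in results["matches"][:10]:
    # Each match includes the banner Shodan captured, which is where
    # leaked keys and unauthenticated endpoints surface in a real investigation.
    print(match["ip_str"], match.get("port"), match.get("org", "unknown org"))
```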
As OpenClaw-based agents form their own social networks, such as Moltbook, the security implications become more severe. Because these autonomous agents can communicate with one another independently, they pose a risk of data leakage and unauthorized actions.
Security leaders are advised to treat agents as production infrastructure, segment access aggressively, scan agent skills for malicious behavior, update incident response playbooks, and establish policies to regulate experimentation without hindering innovation.
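As a starting point for the recommendation to scan agent skills, the sketch below flags skill files that combine access to sensitive material with outbound network calls. The directory layout, file extension, and pattern list are assumptions made for illustration; a production scanner would need real signatures and allow-lists.

```python
import re
from pathlib import Path

# Hypothetical heuristics: patterns that, taken together, suggest a skill could
# read sensitive material and send it somewhere. Tune these for your environment.
SENSITIVE_READS = re.compile(r"(\.env\b|api[_-]?key|credentials|chat[_-]?history)", re.I)
OUTBOUND_CALLS = re.compile(r"(requests\.(get|post)|urllib|http\.client|socket\.connect)", re.I)

def scan_skill(path: Path) -> dict:
    text = path.read_text(errors="ignore")
    return {
        "skill": path.name,
        "reads_sensitive": bool(SENSITIVE_READS.search(text)),
        "calls_out": bool(OUTBOUND_CALLS.search(text)),
    }

def scan_directory(skills_dir: str) -> list[dict]:
    findings = []
    for path in Path(skills_dir).rglob("*.py"):   # adjust the glob to the skill format in use
        result = scan_skill(path)
        if result["reads_sensitive"] and result["calls_out"]:
            findings.append(result)               # both behaviors together warrant review
    return findings

if __name__ == "__main__":
    for finding in scan_directory("./agent_skills"):   # hypothetical skills directory
        print("REVIEW:", finding)
```

Pattern matching of this kind only catches the obvious cases, which is why the same guidance pairs it with aggressive access segmentation and updated incident response playbooks.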
In conclusion, OpenClaw serves as a warning sign of the security gaps in agentic AI deployments. Organizations must strengthen their defenses to prevent breaches and keep their data and systems safe.
