Clawdbot’s implementation of the Model Context Protocol (MCP) lacks mandatory authentication, leaves it open to prompt injection, and grants shell access by design. An article published by VentureBeat on Monday highlighted these architectural vulnerabilities. By Wednesday, security researchers had confirmed all three attack surfaces and uncovered additional ones.
The project was rebranded from Clawdbot to Moltbot on January 27, following a trademark request from Anthropic over the name’s similarity to “Claude.”
Commodity infostealers have already begun exploiting these weaknesses. RedLine, Lumma, and Vidar added the AI agent to their target lists before most security teams even knew it was present in their environments. Shruti Gandhi, a general partner at Array VC, reported 7,922 attack attempts against her firm’s Clawdbot instance.
These concerns prompted a closer evaluation of Clawdbot’s security posture. The key findings:
SlowMist warned on January 26 that hundreds of Clawdbot gateways were exposed to the internet, offering up API keys, OAuth tokens, and private chat histories without requiring credentials. Matvey Kukuy, CEO of Archestra AI, used prompt injection delivered by email to extract an SSH private key in just five minutes.
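Kukuy’s email-based injection illustrates why inbound content should be screened before an agent acts on it. Below is a minimal sketch of such a pre-filter; the patterns and function name are hypothetical, and real injections vary far too widely for this to serve as more than a coarse triage aid.

```python
import re

# Hypothetical patterns drawn from common injection phrasing; treat this
# as a coarse triage filter, not a security boundary.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"~/\.ssh/id_[a-z0-9]+",
    r"(send|forward|exfiltrate)\b.{0,40}(private key|credentials|token)",
]

def looks_like_prompt_injection(text: str) -> bool:
    """Flag inbound text (e.g. an email body) that matches a known pattern."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)
```

A filter like this catches only the crudest attacks; the more durable fix is to keep untrusted content out of the agent’s privileged context entirely.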
Hudson Rock calls the pattern Cognitive Context Theft: malware targeting Clawdbot steals not only passwords but also psychological profiles, work-related information, trust networks, and personal anxieties, handing attackers a wealth of material for effective social engineering.
Clawdbot, an open-source AI agent that automates tasks across various platforms, quickly became a popular personal assistant, collecting 60,000 GitHub stars in a short span of time. Many developers, however, deployed instances without fully understanding the security implications: the default configuration left port 18789 exposed to the public internet, making it vulnerable to exploitation.
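Given the default exposure of port 18789, one quick self-check is to test whether the port answers at all from an outside machine. A minimal sketch using only the standard library (the port number comes from the reporting above; everything else is illustrative):

```python
import socket

def port_open(host: str, port: int = 18789, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable.
        return False
```

If this returns True when run from outside your network against your public address, the gateway is reachable from the internet and should be re-bound to localhost or placed behind authentication.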
A red-teaming firm led by Jamieson O’Reilly scanned Shodan and found hundreds of exposed Clawdbot instances, some with no authentication at all, permitting full command execution. O’Reilly also demonstrated a supply-chain attack on ClawdHub’s skills library that reached multiple developers across different countries in a short timeframe.
Although Peter Steinberger, Clawdbot’s creator, promptly patched the gateway authentication bypass, fundamental architectural issues persist: plaintext memory-file storage, unverified supply-chain components, and open pathways for prompt injection, all ingrained in the system’s design.
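While plaintext memory storage awaits a real fix, the files can at least be confined to their owner. A minimal hardening sketch (the function and path handling are hypothetical; tightening permissions limits local exposure but is no substitute for encryption at rest):

```python
import os
import stat

def harden_memory_file(path: str) -> None:
    """Restrict a plaintext agent memory file to owner read/write (0600)."""
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
```

On a multi-user host this stops other local accounts from reading the agent’s accumulated context; it does nothing against an attacker who already controls the owner’s session.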
AI agents like Clawdbot pose a significant risk due to their extensive permissions across various platforms. A minor prompt injection can quickly escalate into substantial actions without detection, highlighting the expanding attack surface that security teams struggle to monitor effectively.
Security experts emphasize the need for a shift in mindset regarding the treatment of AI agents, urging organizations to view them as critical production infrastructure rather than mere productivity tools. The lack of visibility into where agents are deployed, their actions, and data access permissions poses a significant challenge for security teams.
As the threat landscape evolves, security leaders must take proactive steps to address the vulnerabilities associated with AI agents like Clawdbot. Implementing inventory management, enforcing least privilege, and enhancing runtime visibility are crucial measures to mitigate potential risks.
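The least-privilege measure above can be made concrete by denying agent tool calls by default. A minimal sketch (the tool names and wrapper are hypothetical, not Clawdbot’s actual API):

```python
# Hypothetical tool names; a real agent's tool registry will differ.
ALLOWED_TOOLS = {"calendar.read", "email.draft"}

def invoke_tool(name: str, handler, *args, **kwargs):
    """Run a tool only if it is explicitly allowlisted; deny everything else."""
    if name not in ALLOWED_TOOLS:
        raise PermissionError(f"tool {name!r} is not allowlisted")
    return handler(*args, **kwargs)
```

Deny-by-default means a prompt injection that tricks the agent into calling an unapproved tool, such as a shell, fails loudly instead of executing silently.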
Clawdbot’s rapid rise and the security fallout that followed underscore the urgency of securing AI agents proactively. With exploitation already underway, security teams must stay ahead of attackers through robust controls and sustained vigilance over critical data and infrastructure.
