Russia’s APT28 is actively deploying LLM-powered malware against Ukraine, while similar capabilities are sold on the dark web for as little as $250 per month.
In a recent report, Ukraine’s CERT-UA highlighted LAMEHUG, the first confirmed instance of LLM-powered malware deployed in the wild. The malware, attributed to APT28, uses stolen Hugging Face API tokens to query hosted AI models, generating attack commands in real time while distracting victims with irrelevant decoy content.
Vitaly Simonovich, a researcher at Cato Networks, emphasized that these attacks are not isolated incidents, arguing that APT28 is using the method to probe Ukrainian cyber defenses. He drew parallels between the threats facing Ukraine and those confronting enterprises worldwide.
Of particular concern is Simonovich’s demonstration that virtually any enterprise AI tool can be repurposed into a malware development platform within six hours. Using a technique that bypasses built-in safety measures, he converted several commercial LLMs into functional password stealers.
The adoption of AI-powered malware by nation-state actors coincides with the rapid uptake of AI in the business sector. The 2025 Cato CTRL Threat Report documents a significant increase in AI adoption across thousands of enterprises, signaling AI’s shift into mainstream production use.
APT28’s LAMEHUG represents a new form of AI-enabled warfare. Delivered through phishing emails carrying decoy documents, the malware relies on AI-generated commands for reconnaissance while the decoys keep victims distracted, a measure of the sophistication of APT28’s approach.
Simonovich’s demonstration at Black Hat underscored how easily consumer AI tools can be turned into malware factories. By exploiting weaknesses in LLM safety controls, he produced a functional Chrome password stealer in just six hours, highlighting the urgent need for stronger safeguards.
The availability of underground platforms offering unrestricted AI capabilities, such as Xanthrox AI and Nytheon AI, further exacerbates the cybersecurity threat. These platforms provide access to AI tools without safety controls, enabling malicious actors to carry out sophisticated attacks with ease.
As enterprises continue to adopt AI at a rapid pace, the attack surface expands, creating new challenges for security leaders. The inconsistent responses of major AI companies to reported security vulnerabilities raise concerns about the industry’s readiness to address emerging threats.
Simonovich’s research serves as a stark reminder that the barrier to entry for nation-state-grade attacks using AI tools has dropped sharply. With $250 per month for an underground platform, or a few hours of storytelling aimed at a mainstream LLM, threat actors can turn enterprise AI tools to malicious ends, underscoring the need for robust cybersecurity measures in the age of AI-driven attacks.
