Russia’s APT28 has been actively using LLM-powered malware against Ukraine, while on the dark web, platforms are selling similar capabilities for $250 per month.
Recently, Ukraine’s CERT-UA documented LAMEHUG, the first confirmed instance of LLM-powered malware in the wild. The malware, attributed to APT28, uses stolen Hugging Face API tokens to query AI models for attack commands in real time while distracting victims with decoy content.
According to Cato Networks researcher Vitaly Simonovich, these incidents are not isolated: APT28 is using the technique to probe Ukrainian cyber defenses, and the threats Ukraine faces today, Simonovich argues, preview what enterprises will face tomorrow.
A notable discovery by Simonovich is how quickly an enterprise AI tool can be converted into a malware development platform: in under six hours, he demonstrated how popular AI models from OpenAI and Microsoft, as well as DeepSeek-V3 and DeepSeek-R1, can be coaxed into producing functional password stealers, bypassing their built-in safety controls.
The convergence of nation-state actors deploying AI-powered malware and researchers exposing vulnerabilities in enterprise AI tools comes amid explosive AI adoption: the 2025 Cato CTRL Threat Report, drawing on traffic from more than 3,000 enterprises, documents sharply rising organizational use of Copilot, ChatGPT, Gemini, Perplexity, and Claude.
APT28’s LAMEHUG malware showcases a new dimension of AI warfare. It spreads through phishing emails carrying payloads disguised as legitimate government documents; once running, it connects to Hugging Face’s API with stolen tokens and executes commands generated by AI models, all while showing victims AI-generated decoy content.
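Because the malware authenticates to Hugging Face’s API with stolen tokens, defenders can hunt for those indicators in proxy logs or the strings of suspicious binaries. A minimal sketch, assuming plain-text input and relying on the documented `hf_` prefix of Hugging Face user access tokens (the host list is an illustrative assumption, not an exhaustive IOC set):

```python
import re

# Indicators worth hunting for: Hugging Face API hostnames in egress
# traffic, and user access tokens, which carry a documented "hf_" prefix.
HF_HOSTS = ("huggingface.co", "api-inference.huggingface.co")
HF_TOKEN_RE = re.compile(r"\bhf_[A-Za-z0-9]{20,}\b")

def find_hf_indicators(text: str) -> dict:
    """Return Hugging Face hostnames and token-like strings found in text."""
    hosts = sorted({h for h in HF_HOSTS if h in text})
    tokens = HF_TOKEN_RE.findall(text)
    return {"hosts": hosts, "tokens": tokens}
```

A hit is not proof of compromise, since legitimate developer tooling also talks to these endpoints, but unexpected Hugging Face traffic from non-developer hosts is worth triaging.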
Simonovich’s demonstration at Black Hat illustrates how little effort such capabilities now require. Using an “Immersive World” narrative technique, he turned consumer AI tools into malware factories despite having no prior malware-coding experience: the method exploits weaknesses in LLM safety controls, coaxing models into producing functional attack code without tripping their guardrails.
Simonovich’s research also uncovered underground platforms selling unrestricted AI capabilities for as little as $250 per month, showing that the infrastructure for AI-powered attacks is already commoditized. These services offer ChatGPT-like interfaces stripped of safety controls, enabling malicious activity that mainstream AI guardrails would block.
The rapid adoption of AI in enterprises is expanding the attack surface, as Cato Networks’ analysis of customer network flows shows. AI usage has grown sharply across industries, creating new security challenges for CISOs and security leaders; yet even as deployments accelerate, major AI companies’ responses to reported security concerns have been inconsistent, exposing a gap in security readiness.
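Security teams can approximate this kind of flow-based visibility by tagging egress flows whose destination matches known AI-service domains. A minimal sketch; the domain list and the `(source, destination-host)` flow format are illustrative assumptions, not Cato’s actual methodology:

```python
# Hypothetical AI-service domain inventory; extend to match your environment.
AI_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "copilot.microsoft.com": "Copilot",
    "gemini.google.com": "Gemini",
    "www.perplexity.ai": "Perplexity",
    "claude.ai": "Claude",
}

def tag_ai_flows(flows):
    """Label each (src_ip, dest_host) flow with the AI tool it reaches, if any."""
    tagged = []
    for src, host in flows:
        tool = AI_DOMAINS.get(host.lower())
        if tool:
            tagged.append({"src": src, "host": host, "tool": tool})
    return tagged
```

Even this crude inventory gives a CISO a first answer to “which hosts are talking to which AI tools,” the visibility question the flow analysis above is about.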
The ease and affordability of deploying AI-powered malware, as APT28’s LAMEHUG operation demonstrates, make this threat immediate rather than theoretical. Enterprises must secure their AI tools and infrastructure against exploitation, and the evolving nature of AI warfare demands a proactive approach to cybersecurity.