Anthropic, a leading company in the artificial intelligence industry, made headlines recently by accusing three prominent Chinese AI labs – DeepSeek, Moonshot AI, and MiniMax – of coordinated efforts to extract capabilities from its Claude models through fraudulent means. The San Francisco-based company revealed that these labs conducted over 16 million exchanges with Claude through fake accounts, violating Anthropic’s terms of service and circumventing its regional access restrictions.
This revelation sheds light on a troubling trend in the AI industry, where foreign competitors use a technique called distillation to shortcut years of research and investment. Distillation extracts the knowledge of a large AI model (the “teacher”) to train a smaller, more efficient one (the “student”). While distillation is a legitimate training method when applied to one’s own models, aimed at a rival’s model it becomes a vehicle for appropriating intellectual property.
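In its textbook form, distillation trains the student to match the teacher’s output distribution rather than just the ground-truth labels. The sketch below is illustrative only, assuming PyTorch and the standard soft-target loss from Hinton et al.; it is not code from any of the parties involved:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      labels: torch.Tensor,
                      temperature: float = 2.0,
                      alpha: float = 0.5) -> torch.Tensor:
    """Blend the teacher's soft targets with ordinary hard-label training."""
    # Temperature softens both distributions so the student learns from
    # the teacher's full ranking of outputs, not just its top choice.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2  # rescale gradients to match the hard-loss magnitude
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```

An API customer never sees the teacher’s logits, only sampled text, so an extraction campaign of the kind Anthropic describes would instead collect millions of prompt-response pairs and fine-tune the student on them (sequence-level distillation); the effect is the same transfer of capability.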
The issue of distillation came to the forefront in 2025, when DeepSeek released its R1 reasoning model, which rivaled leading American models at a lower cost. The release sparked a wave of replication efforts by other labs and raised concerns that frontier capabilities were being copied rather than independently developed.
Anthropic detailed the sophisticated methods used by DeepSeek, Moonshot AI, and MiniMax to extract capabilities from Claude. These labs targeted specific features of Claude, such as agentic reasoning and tool use, using fraudulent accounts and coordinated tactics to evade detection.
One key aspect of the operation was the use of proxy networks and “hydra cluster” architectures to bypass Anthropic’s restrictions on access to Claude in China. These networks distributed traffic across multiple accounts, making it difficult to trace the origin of the attacks.
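Anthropic has not published how it untangled this traffic, but one plausible defender-side approach, sketched below with invented names and thresholds, is to correlate accounts that issue structurally identical prompts: a hydra cluster spreads volume across accounts, yet the scripted prompts themselves tend to share a template.

```python
import hashlib
import re
from collections import defaultdict

def template_signature(prompt: str) -> str:
    """Normalize variable content so structurally identical prompts collide."""
    normalized = re.sub(r"\d+", "<NUM>", prompt.lower())
    normalized = re.sub(r"\s+", " ", normalized).strip()
    return hashlib.sha256(normalized.encode()).hexdigest()[:16]

def correlated_accounts(requests, min_accounts=20):
    """requests: iterable of (account_id, prompt) pairs.
    Flag prompt templates shared across unusually many accounts,
    the pattern a hydra cluster would produce."""
    accounts_by_template = defaultdict(set)
    for account_id, prompt in requests:
        accounts_by_template[template_signature(prompt)].add(account_id)
    return {sig: accts for sig, accts in accounts_by_template.items()
            if len(accts) >= min_accounts}
```

In practice a provider would combine a signal like this with network-level indicators (shared proxy ranges, payment details, timing), since any single heuristic is easy to evade.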
Anthropic framed distillation as a national security crisis, highlighting the risks posed by unauthorized extraction of AI capabilities. The company warned that models built through illicit means replicate a frontier system’s capabilities without inheriting its safeguards, leaving them open to misuse by authoritarian governments.
In response to these attacks, Anthropic has implemented various defenses, including classifiers and behavioral fingerprinting systems to detect distillation patterns. The company is also working with industry partners and policymakers to address the issue on a broader scale.
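Anthropic has not disclosed what its fingerprinting looks at. As a hypothetical sketch, a per-account risk score might weigh traits that separate bulk extraction from ordinary use; every feature and threshold below is an assumption for illustration, not Anthropic’s actual system:

```python
from dataclasses import dataclass
import statistics

@dataclass
class AccountStats:
    requests_per_day: float
    unique_prompt_ratio: float          # distinct templates / total requests
    inter_request_seconds: list         # gaps between consecutive calls
    max_length_completion_ratio: float  # share of responses hitting the token cap

def distillation_risk_score(s: AccountStats) -> float:
    """Heuristic score in [0, 1]; higher = more extraction-like.
    All thresholds are illustrative, not Anthropic's."""
    score = 0.0
    if s.requests_per_day > 5_000:        # bulk, machine-driven volume
        score += 0.3
    if s.unique_prompt_ratio > 0.95:      # scripted sweep over a dataset
        score += 0.2
    if s.inter_request_seconds and statistics.pstdev(s.inter_request_seconds) < 0.5:
        score += 0.3                      # metronomic timing, unlike a human
    if s.max_length_completion_ratio > 0.8:  # harvesting full-length outputs
        score += 0.2
    return min(score, 1.0)
```

A score like this would typically gate rather than decide: accounts above a threshold get rate-limited or routed to human review, which keeps false positives on legitimate heavy users manageable.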
The disclosure by Anthropic has far-reaching implications for the AI industry, raising questions about how secure API access to frontier models can be made and whether any single provider can police distillation alone. The company’s call for coordinated action among labs and policymakers underscores the urgency of this growing threat across the AI ecosystem.
