Technology

Anthropic says DeepSeek, Moonshot, and MiniMax used 24,000 fake accounts to rip off Claude

February 24, 2026
Anthropic, a leading company in the artificial intelligence industry, made headlines recently by accusing three prominent Chinese AI labs (DeepSeek, Moonshot AI, and MiniMax) of coordinated efforts to extract capabilities from its Claude models through fraudulent means. The San Francisco-based company revealed that these labs conducted over 16 million exchanges with Claude through some 24,000 fake accounts, a practice that violates Anthropic's terms of service and regional access restrictions.

This revelation sheds light on a troubling trend in the AI industry, where foreign competitors are using a technique called distillation to bypass years of research and investment. Distillation involves extracting knowledge from a larger AI model (the “teacher”) to create a smaller, more efficient one (the “student”). While distillation is a legitimate training method, it can be exploited by competitors to steal intellectual property.
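In its standard form, the teacher-student training described above works by having the student match the teacher's temperature-softened output distribution rather than a single hard label. A minimal sketch of that idea, with all logits and vocabulary sizes hypothetical:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities, softened by a temperature."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q): how far the student distribution q is from the teacher p."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Hypothetical logits over a 4-token vocabulary.
teacher_logits = [4.0, 1.0, 0.5, 0.2]
student_logits = [2.0, 1.5, 1.0, 0.8]

T = 3.0  # a higher temperature spreads probability onto non-top tokens
teacher_soft = softmax(teacher_logits, temperature=T)
student_soft = softmax(student_logits, temperature=T)

# In training, the student's weights are updated to minimize this loss;
# here we only compute it once for a single example.
loss = kl_divergence(teacher_soft, student_soft)
```

The same mechanics explain why API access is the attack surface: a competitor only needs the teacher's outputs, not its weights, to build the training signal.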

The issue of distillation came to the forefront in 2025 when DeepSeek released its R1 reasoning model, which rivaled leading American models at a lower cost. This sparked a wave of replication efforts by other labs, raising concerns about the integrity of AI development.

Anthropic detailed the sophisticated methods used by DeepSeek, Moonshot AI, and MiniMax to extract capabilities from Claude. These labs targeted specific features of Claude, such as agentic reasoning and tool use, using fraudulent accounts and coordinated tactics to evade detection.

One key aspect of the operation was the use of proxy networks and “hydra cluster” architectures to bypass Anthropic’s restrictions on access to Claude in China. These networks distributed traffic across multiple accounts, making it difficult to trace the origin of the attacks.

Anthropic framed the issue of distillation as a national security crisis, highlighting the risks posed by the unauthorized extraction of AI capabilities. The company warned that models built through illicit means lack necessary safeguards, making them vulnerable to misuse by authoritarian governments.

In response to these attacks, Anthropic has implemented various defenses, including classifiers and behavioral fingerprinting systems to detect distillation patterns. The company is also working with industry partners and policymakers to address the issue on a broader scale.
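Anthropic has not published how its classifiers or fingerprinting work, but one simple version of the idea is to reduce each API request to coarse behavioral features and flag fingerprints shared by many distinct accounts. The features below (prompt-length bucket, tool use, sampling temperature) are purely illustrative assumptions, not Anthropic's actual signals:

```python
from collections import defaultdict

def fingerprint(request):
    """Reduce a request to coarse behavioral features.

    The chosen features are illustrative only: real systems would use far
    richer signals than length buckets and sampling parameters.
    """
    return (
        len(request["prompt"]) // 500,           # prompt-length bucket
        request.get("tools_enabled", False),     # agentic/tool-use requests
        round(request.get("temperature", 1.0), 1),
    )

def flag_coordinated_accounts(requests, min_accounts=3):
    """Return fingerprints shared by suspiciously many distinct accounts."""
    accounts_by_fp = defaultdict(set)
    for req in requests:
        accounts_by_fp[fingerprint(req)].add(req["account_id"])
    return {fp: accts for fp, accts in accounts_by_fp.items()
            if len(accts) >= min_accounts}

# Five hypothetical accounts issuing near-identical requests, plus one
# ordinary user who should not be flagged.
requests = [
    {"account_id": f"acct-{i}", "prompt": "x" * 1200,
     "tools_enabled": True, "temperature": 0.0}
    for i in range(5)
] + [
    {"account_id": "acct-legit", "prompt": "hello", "temperature": 0.7},
]
suspicious = flag_coordinated_accounts(requests)
```

A heuristic this crude would of course produce false positives in practice; it only shows why distributing traffic across many accounts (as in the "hydra cluster" setups described above) still leaves a detectable shared signature.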

The disclosure by Anthropic has far-reaching implications for the AI industry, raising questions about the security of API access and the need for greater collaboration to combat distillation attacks. The company’s call for coordinated action underscores the urgent need to address this growing threat in the AI ecosystem.
