
Anthropic Accuses Chinese Unicorns of Stealing AI Capabilities


American artificial intelligence company Anthropic has accused three major Chinese AI labs of illegally copying its technology. In a blog post published Monday, the firm claims DeepSeek, Minimax, and Moonshot AI used a technique called “distillation” to extract capabilities from its Claude model to boost their own products.


Anthropic alleges these companies created more than 24,000 fake accounts to access Claude, which is not officially available in China, and used them to conduct over 16 million exchanges. Essentially, they harvested the American AI's answers to train their own software, bypassing the expensive, painstaking work normally required to build advanced models.

Developers routinely use distillation to make their own internal models smaller and faster; using a competitor's proprietary outputs to build a commercial product, however, is a serious violation of the provider's terms of service. It effectively lets a company skip the massive computing costs of training an AI from scratch.
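At a technical level, distillation simply means treating one model's answers as training data for another. As a rough, hypothetical illustration (the function names and file below are ours, not anything described in Anthropic's post), a "student" builder might collect prompt-and-response pairs from a "teacher" model and save them as a fine-tuning dataset:

```python
# Hypothetical sketch of response-based distillation. The teacher call is a
# stand-in for any proprietary chat API; nothing here reflects the actual
# methods alleged in the article.
import json

def ask_teacher(prompt: str) -> str:
    # Placeholder for querying the "teacher" model's API.
    return f"Teacher's answer to: {prompt}"

prompts = [
    "Explain how HTTPS encryption works in two sentences.",
    "Write a short Python function that reverses a string.",
]

# Collect prompt/response pairs and write them out as a fine-tuning
# dataset for a smaller "student" model.
with open("distillation_dataset.jsonl", "w") as f:
    for prompt in prompts:
        record = {"prompt": prompt, "completion": ask_teacher(prompt)}
        f.write(json.dumps(record) + "\n")
```

Done at industrial scale, against a model whose terms forbid it, this is the pattern Anthropic describes.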

This accusation follows a similar report from OpenAI earlier this month. The creator of ChatGPT told a US House committee that Chinese companies have been “free-riding” on American innovation for over a year. Both companies strictly forbid this kind of data scraping in their terms of service.

The controversy centers largely on DeepSeek. The Chinese startup shocked the tech world last year when it released a powerful AI model that rivaled top US competitors but cost much less to build. At the time, experts questioned how they achieved this despite strict US export controls on high-end computer chips. These new allegations suggest they may have taken a shortcut.


Anthropic warns that this “theft” is more than just a business dispute; it poses a national security risk. When companies distill models illicitly, they often strip away the safety guardrails that US firms painstakingly build into their systems.

The firm fears that these unchecked models could help bad actors launch cyberattacks, create biological weapons, or spread disinformation. Anthropic urged officials to act quickly, stating that authoritarian governments could use these tools for mass surveillance and offensive operations, and the window to stop this trend is narrowing.
