anthropic.com | ksl
Anthropic published a detailed technical account of how DeepSeek, Moonshot AI, and MiniMax systematically extracted Claude’s capabilities through roughly 24,000 fraudulent accounts and over 16 million exchanges. These were not casual scraping operations: DeepSeek targeted reasoning and reward modeling, Moonshot focused on agentic coding and tool use across 3.4 million exchanges, and MiniMax routed 13 million exchanges through commercial proxy networks, operating hydra-style cluster architectures with more than 20,000 simultaneous fake accounts. Anthropic says it detected the campaigns through behavioral fingerprinting and chain-of-thought elicitation patterns in API traffic. The national security framing is deliberate: distilled models strip away safety guardrails, and Anthropic explicitly ties that risk to bioweapons, offensive cyber operations, and surveillance. OpenAI raised similar distillation concerns about DeepSeek earlier this year, but no lab had previously published detection methods or operational scale at this level of detail.
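To make the "behavioral fingerprinting" idea concrete: one simple class of signals is how often an account's prompts try to elicit the model's reasoning, and how templated its prompt stream looks (bulk extraction tends to reuse a small number of prompt templates). The sketch below is purely illustrative; Anthropic's actual detection pipeline is not public, and the phrase list, features, and thresholds here are invented assumptions for exposition.

```python
from collections import Counter
import math

# Hypothetical chain-of-thought elicitation phrases (assumed, not from
# any published detector).
COT_PHRASES = (
    "think step by step",
    "show your reasoning",
    "explain your chain of thought",
)

def cot_elicitation_rate(prompts):
    """Fraction of an account's prompts that ask the model to expose its reasoning."""
    if not prompts:
        return 0.0
    hits = sum(1 for p in prompts if any(ph in p.lower() for ph in COT_PHRASES))
    return hits / len(prompts)

def template_entropy(prompts, prefix_len=20):
    """Shannon entropy (bits) over prompt prefixes.

    Near-zero entropy means almost every prompt starts the same way,
    which is typical of templated bulk extraction rather than organic use.
    """
    counts = Counter(p[:prefix_len] for p in prompts)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def flag_account(prompts, cot_threshold=0.5, entropy_threshold=1.0):
    """Flag an account whose traffic is both CoT-heavy and highly templated.

    Thresholds are arbitrary placeholders for illustration.
    """
    return (cot_elicitation_rate(prompts) >= cot_threshold
            and template_entropy(prompts) <= entropy_threshold)
```

A real system would combine many more signals (request timing, proxy/IP reputation, cross-account prompt overlap) and score them jointly rather than with hard thresholds, but the structure is the same: per-account features over API traffic, then a decision rule.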
