3 min read · Mar 12, 2026, 06:16 AM IST
Anthropic and the Department of War (DoW) are locked in a standoff over how far the military should be allowed to go with frontier AI. At the centre of this clash are two red lines Anthropic insists on for its models: no mass domestic surveillance and no fully autonomous weapons. In response, the DoW has designated the company a “supply chain risk”, a label previously reserved for US adversaries and never before applied to an American company.
The US Secretary of War Pete Hegseth stated on X that “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic”. However legally untenable that sounds, being dropped by major cloud service providers would pose an existential risk to Anthropic. While the company could challenge the designation in court, a negotiated settlement allowing the DoW to use the technology for “all lawful purposes”, in line with what the department has been advocating, is more likely.
First, this episode has set a precedent: the Trump administration is willing to repurpose tools designed for foreign adversaries against domestic companies. In doing so, it is trying to establish that private companies spearheading general-purpose technologies with vast civilian applications have obligations to the state beyond their terms of service when the technology is “spun on” for military use.
Second, it is only a matter of time before Anthropic falls in line. Amid its clash with the DoW, the company’s rival OpenAI has been quick to accept a permissive contract for military-use cases, leaving Anthropic under mounting competitive pressure to follow.
Third, the irony here is hard to miss. Anthropic and other frontier AI companies spent years building the case that the technology is so powerful that it compares to nuclear weapons and requires non-proliferation controls. Now the US government is using that very logic to argue that such a powerful technology cannot be left to a private company’s discretion.
Finally, despite the company’s safety guardrails, there are reports that Claude, its family of language models, was used in US operations in Venezuela and Iran. Because generative AI is hyped as a “country of geniuses in a data centre” that may surpass human intelligence within a year or two, such reports can suggest that the technology is already making lethal decisions in military operations. In reality, the most likely use cases for generative AI in the military today are augmenting text-, code-, analysis-, and simulation-heavy work, not serving as an autonomous entity making kill decisions.
Given how widely the current state of military AI is misunderstood, both Anthropic’s red lines and the Pentagon’s heavy-handedness seem somewhat performative. Anthropic has positioned itself as a responsible AI lab, and standing up to the DoW appears to be an effort to stay true to that brand. The real fight, however, is less about present use cases than about who gets to set the terms for the future.
The writers are researchers in technology geopolitics with the Takshashila Institution, Bengaluru. Views are personal
