Anthropic CEO Dario Amodei made a public apology to the US government and the Pentagon last week. In a complete U-turn, Amodei apologised for the memo he sent to employees soon after the company's talks with the Pentagon (Department of War) broke down. Amodei said he was very sorry for the way he handled the crisis, which he described as one of the most "disorienting" in Anthropic's history. The apology was even posted on Anthropic's website. In his first-ever interview with The Economist, Amodei likewise apologised for the message, though he stressed that he would not characterise it as a formal memo. He said he wanted to offer a complete apology "for the tone" of the memo, adding that it had been written within hours of a chaotic series of announcements and did not reflect his "careful or considered views." The apology came after the Trump administration blacklisted Anthropic and deemed the company a threat to US national security.

On the first working day after the apology, Anthropic filed a lawsuit against the Trump administration. In the suit, Anthropic asks the court to vacate the supply chain risk designation and to grant the company a stay on the action. The complaint claims that the government's actions could jeopardize "hundreds of millions of dollars" in revenue for the company, calls them "unprecedented and unlawful," and says they are "harming Anthropic irreparably." Anthropic filed the complaint in the US District Court for the Northern District of California. In a related filing, CFO Krishna Rao said the government's decision could jeopardize "hundreds of millions of dollars" in revenue in the near term. "Across Anthropic's entire business, and adjusting for how likely any given customer is to take a maximal reading, the government's actions could reduce Anthropic's 2026 revenue by multiple billions of dollars," Rao said.
Key details of Anthropic CFO Krishna Rao's court filing against the US government
As the CFO for Anthropic, I have personal knowledge of the contents of this declaration, or have knowledge of the matters based on my review of information and records gathered by Anthropic personnel, and could testify thereto.

The Uncertainty Created By The Government's Actions Is Concretely Harming Anthropic

Recent actions by the government have created significant uncertainty in the market. For example, over the weekend after President Trump's and Secretary Hegseth's social media posts, a major investor in Anthropic informed me that the Department of War (the "Department") had contacted several of its portfolio companies about their use of Claude. Those companies have grown worried and uncertain about their ability to use Claude. A different Anthropic investor forwarded me market analysis that called Anthropic's supposed designation as a supply chain risk a "sanction" and opined that any company that wants to serve as a federal contractor cannot do business with Anthropic. I have seen multiple client alerts from law firms describing the potentially far-reaching nature of the government's actions and suggesting that Department contractors may be best served by reevaluating their relationship with Anthropic. Public reporting has been similarly confused. For example, a Yahoo article mistakenly asserts that Secretary Hegseth "ordered the Pentagon to bar its contractors and their partners from any commercial activity with Anthropic."

The uncertainty the government's actions have created is already inflicting a wide range of challenges and problems on Anthropic. My team has estimated revenue exposure across a range of potential customer interpretations of the government's actions. Insofar as customers adopt a narrow reading, where the government's actions are understood to prohibit only the use of Claude in work performed for the Department, we estimate that hundreds of millions of dollars in 2026 revenue are at risk.
Insofar as significant swaths of customers' risk calculations sweep broader, where customers believe that doing business with Anthropic at all would jeopardize their ability to contract with U.S. agencies, the impact is significantly larger and will vary by customer type. Defense contractors and others with financial dependence on the Department are most likely to adopt the maximal interpretation, and we estimate it could reduce revenue from those customers by 50–100 percent. Across Anthropic's entire business, and adjusting for how likely any given customer is to take a maximal reading, the government's actions could reduce Anthropic's 2026 revenue by multiple billions of dollars.

When it comes to Anthropic's investors, even if they recognize the true scope of the relevant authorities being invoked by the government, the mere fact of the purported designation, combined with President Trump's and Secretary Hegseth's public statements, risks substantially undermining market confidence and Anthropic's ability to raise the capital critical to train next-generation models and maintain its position in a very competitive race at the AI frontier. Compounding the problem is the threat of mounting fear and uncertainty among customers. These effects reinforce one another: investors withdraw from companies losing customers, and customers avoid companies perceived as struggling to attract capital or sustain growth. In a field as competitive as frontier AI, this feedback loop, if allowed to persist, could result in harm well beyond the immediate consequences of the government's actions.

Training and serving frontier-level models like Claude requires extraordinary computational resources. Anthropic has already spent over $10 billion on model training and inference (serving the model to end users) and expects to spend many billions more in the coming years.
Although the company has generated substantial revenue since entering the commercial market, exceeding $5 billion to date, it has nonetheless had to raise more than $60 billion in outside capital to fund its operations. Anthropic has raised this capital by issuing investors equity stakes in the company. We have also funded substantial critical technology infrastructure ("compute") purchases via long-term financing arrangements, where it is critical that our counterparties believe that Anthropic can and will repay or otherwise fulfill its financial obligations. This need for capital is inherent to the frontier AI business: although commercial adoption is continuing to grow, even companies with strong commercial traction cannot yet self-fund the required infrastructure. Investors supply this capital because they believe Anthropic's models will remain at or near the frontier.

The uncertainty created by the government's actions undermines investors' confidence in Anthropic. Faltering investor confidence, if it persists long enough, will increase Anthropic's cost of raising the funds it needs to operate. This will put us at a competitive disadvantage relative to other frontier AI labs because of an escalating inability on Anthropic's part to acquire sufficient compute for research and development, serve our customers, and satisfy investor expectations. If investors opt not to invest in Anthropic in the future, Anthropic will be unable to train the next generation of models, further eroding its commercial position and investor confidence. If the government's actions are allowed to stand, and if the ripple effect described above comes to pass, it would be almost impossible to reverse.
