US warns it will axe all Anthropic agreements without Pentagon deal

The US government has warned it will rip up its agreements with Anthropic if the AI start-up fails to reach a deal with the Pentagon, with a senior official accusing chief executive Dario Amodei of having a “God-complex” after he rejected a “final offer” to continue working with the military.
An administration official told the FT on Thursday that a deal signed in August to offer Anthropic’s Claude chatbot to all three branches of government would be axed if the start-up did not agree to give the Pentagon unrestricted use of its technology.
Later on Thursday evening, under-secretary of defence for research and engineering Emil Michael, who has been part of the negotiations with Anthropic, called Amodei a liar with a “God-complex”.
“He wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk,” Michael wrote on X.
The escalating dispute is likely to prompt a legal fight between one of America’s leading AI labs and the Trump administration.
The maker of the Claude model has refused to allow the defence department full use of its technology, citing concerns over lethal autonomous weapons and mass domestic surveillance.
US defence secretary Pete Hegseth on Tuesday summoned Amodei to Washington and demanded that Anthropic permit any legal use of its model by Friday, or face being cut from Pentagon supply chains or having its technology co-opted.
Pentagon officials wrote to Anthropic on Wednesday with a final offer, according to people with knowledge of the matter. But Amodei said those terms were unacceptable to his company, which bills itself as more responsible and focused on safety than its rivals.
“These threats do not change our position: we cannot in good conscience accede to their request,” Amodei wrote in a blog post on Thursday.
“In a narrow set of cases, we believe AI can undermine, rather than defend, democratic values,” he added. “Some uses are also simply outside the bounds of what today’s technology can safely and reliably do.”
The administration official told the FT the government considered Claude to be a “fantastic product” and was hopeful a resolution would be reached. But they added that if this failed to materialise, the chatbot would no longer be offered to federal agencies through the General Services Administration.
Amodei said Anthropic still hoped to reach an agreement. “Given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider,” he wrote.
The Pentagon did not respond to a request for comment. Its chief spokesperson Sean Parnell wrote on X earlier on Thursday that the military had “no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement”.
“We will not let ANY company dictate the terms regarding how we make operational decisions,” he added.
If the government follows through on its ultimatum, Anthropic — which has partnered with Palantir and is the only AI lab whose models have been used in classified work for the Pentagon — could be cut from the US defence supply chain and lose a $200mn contract.
A supply chain risk designation could trigger a legal challenge from the $380bn company, according to people with knowledge of the matter. Companies previously deemed a supply chain risk under US law, such as China’s Huawei, have typically been from adversary nations.
Alan Rozenshtein, associate professor at the University of Minnesota Law School and a writer on AI and the military, said the attack on Anthropic was “pretty far outside what the statute possibly constitutes” and that he believed the company had “strong legal defences if it’s designated a supply chain risk”.
Hegseth has also threatened to invoke the Defense Production Act, a cold war-era measure allowing the president to control domestic industry in the national interest.
The law was used by Joe Biden and Donald Trump during the coronavirus pandemic to boost production of medical supplies and would enable the Pentagon to use Anthropic’s tools without a contractual agreement.