Anthropic cannot manipulate its Claude generative AI model once the U.S. military puts it into service, an executive wrote in a court filing Friday, responding to the Trump administration’s accusations that the company could tamper with its AI tools during wartime.
“Anthropic has never had the ability to prevent Claude from functioning, modify its functionality, shut down access, or otherwise influence or jeopardize military operations,” wrote Thiyagu Ramasamy, Anthropic’s head of public sector. “Anthropic does not have the required access to disable the technology or modify the behavior of the model before or during ongoing operations.”
The Pentagon has been battling the leading AI lab for months over how its technology can be used for national security — and what the limits of that use should be. This month, Defense Secretary Pete Hegseth designated Anthropic a supply chain risk, a designation that will bar the Department of Defense from using the company’s software, including through contractors, for the next few months. Other federal agencies are also abandoning Claude.
Anthropic has filed two lawsuits challenging the constitutionality of the ban and is seeking an emergency order to overturn it. In the meantime, customers have already begun canceling contracts. A hearing in one of the cases is scheduled for March 24 in federal district court in San Francisco, and the judge could rule on a temporary reversal soon after.
In a document filed earlier this week, government lawyers wrote that the Department of Defense “is not required to tolerate the risk that critical military systems may be compromised at pivotal times for national defense and active military operations.”
The Pentagon uses Claude to analyze data, write memos, and help generate battle plans, WIRED reported. The government argues that Anthropic could disrupt active military operations by disabling access to Claude or releasing harmful updates if the company disapproved of certain uses.
Ramasamy rejected this possibility. “Anthropic does not operate any backdoors or remote ‘kill switches,’” he wrote. “Anthropic personnel cannot, for example, log into a DoW system to change or disable models during an operation; the technology simply does not work that way.”
He added that Anthropic can only push updates with approval from the government and its cloud provider, Amazon Web Services, though he did not name the provider directly. Ramasamy also said that Anthropic cannot access prompts or other data that military users enter into Claude.
Anthropic executives argue in court filings that the company does not want veto power over tactical military decisions. Sarah Heck, a policy manager at Anthropic, wrote in a court filing Friday that the company was willing to guarantee as much in a contract proposal made March 4. “For the avoidance of doubt, [Anthropic] understands that this license does not grant or confer any rights of control or veto over lawful operational decision-making of the War Department,” the proposal said, according to the filing, which used an alternative name for the Pentagon.
The company was also willing to accept language that would address its concerns about Claude being used to help carry out deadly strikes without human oversight, Heck claimed. But the negotiations ultimately failed.
For now, the Defense Department said in court filings, it is “taking additional steps to mitigate the supply chain risk” posed by the company by “working with third-party cloud service providers to ensure that Anthropic management cannot make unilateral changes” to the Claude systems currently in place.