The Trump administration argued in a court filing Tuesday that it did not violate Anthropic’s First Amendment rights by designating the AI developer as a supply chain risk, and predicted that the company’s lawsuit against the government will fail.
“The First Amendment is not a license to unilaterally impose contractual terms on the government, and Anthropic cites nothing to support such a sweeping conclusion,” U.S. Justice Department lawyers wrote.
The response was filed in federal court in San Francisco, where Anthropic is challenging the Pentagon’s decision to sanction the company with a label that can bar companies from defense contracts over concerns about potential security vulnerabilities. Anthropic contends that the Trump administration exceeded its authority by applying the label and barring the company’s technologies from use within the department. If the designation stands, Anthropic could lose up to billions of dollars in expected revenue this year.
Anthropic is asking to resume business as usual until the litigation is resolved. Rita Lin, the judge presiding over the San Francisco case, has scheduled a hearing next Tuesday to decide whether to grant Anthropic’s request.
Justice Department lawyers, writing for the Defense Department and other agencies in Tuesday’s filing, described Anthropic’s concerns about a potential loss of business as “legally insufficient to constitute irreparable harm” and called on Lin to deny the company a reprieve.
The lawyers also wrote that the Trump administration was motivated to act because of “concerns about Anthropic’s potential future conduct if it retained access” to government technology systems. “No one has claimed to restrict Anthropic’s expressive activity,” they wrote.
The government says Anthropic’s efforts to limit how the Pentagon can use its AI technology led Defense Secretary Pete Hegseth to “reasonably” determine that “Anthropic personnel could sabotage, maliciously introduce undesirable functions, or otherwise subvert the design, integrity, or operation of a national security system.”
The Department of Defense and Anthropic are feuding over potential restrictions on the company’s Claude AI models. Anthropic believes its models should not be used to facilitate large-scale surveillance of Americans and are not currently reliable enough to power fully autonomous weapons.
Several legal experts previously told WIRED that Anthropic has a strong case that the supply chain action amounts to illegal retaliation. But courts often favor the government’s national security arguments, and Pentagon officials have portrayed Anthropic as a contractor gone rogue whose technologies cannot be trusted.
“In particular, the DoW expressed concern that allowing Anthropic continued access to the DoW’s technical and operational warfare infrastructure would introduce unacceptable risk into the DoW’s supply chains,” Tuesday’s filing said. “AI systems are extremely vulnerable to manipulation, and Anthropic may attempt to disable its technology or preemptively modify the behavior of its model before or during ongoing war operations, if Anthropic – at its discretion – believes that its business ‘red lines’ are crossed.”
The Defense Department and other federal agencies are working to replace Anthropic’s AI tools with products from competing technology companies in the coming months. One of the military’s main uses of Claude is through Palantir’s data analysis software, people familiar with the matter told WIRED.
In their Tuesday filing, the lawyers argued that the Pentagon “cannot simply flip a switch” given that Anthropic’s is “currently the only AI model authorized for use” on the department’s classified systems while high-intensity combat operations are underway. The department is working to deploy AI systems from Google, OpenAI, and xAI as alternatives.
A number of companies and groups, including AI researchers, Microsoft, a union of federal employees, and former military leaders, filed legal briefs in support of Anthropic. None were filed in support of the government.
Anthropic has until Friday to file a counter-response to the government’s arguments.




























