A federal judge on Thursday temporarily blocked the Trump administration from labeling Anthropic a “supply chain risk” and cutting off the artificial intelligence company’s access to federal contracts.
U.S. District Judge Rita Lin granted Anthropic’s request for a preliminary injunction, finding that the Trump administration’s “broad punitive measures” against the company “were likely unlawful” and could “cripple Anthropic.”
“Nothing in the applicable law supports the Orwellian notion that an American company can be characterized as a potential adversary and saboteur of the United States for expressing disagreement with the government,” Lin wrote in her ruling.
(Disclosure: Ziff Davis, CNET’s parent company, filed a lawsuit in 2025 against OpenAI, alleging that it violated Ziff Davis’ copyrights in the training and operation of its AI systems.)
The dispute centers on the Pentagon’s demand to use Anthropic’s Claude AI for “any lawful purposes,” while Anthropic sought to bar the military from using it for mass domestic surveillance or for fully autonomous weapons systems. After Anthropic refused to comply with the government’s demands, President Donald Trump and Defense Secretary Pete Hegseth said they would declare the company a “supply chain risk,” banning the use of its products in defense contracts.
Anthropic responded with a lawsuit filed earlier this month in federal court challenging the designation, calling it an “unprecedented and unlawful” attack on the company’s right to free speech.
Lin wrote that the administration’s measures do not appear to reflect the government’s national security interests, but rather appear punitive in nature.
“If the concern is about the integrity of the operational chain of command, the War Department could simply stop using Claude. Instead, these measures appear designed to punish Anthropic,” Lin wrote.
Lin also stayed her order for a week to give the Pentagon time to seek relief.
Anthropic said in a statement that it was “grateful to the court for acting quickly, and pleased that they agree that Anthropic is likely to succeed on the merits. While this matter is necessary to protect Anthropic, our customers, and our partners, our goal remains to work productively with the government to ensure that all Americans benefit from safe and trustworthy AI.”
The White House and Pentagon did not immediately respond to a request for comment.