Anthropic “has not met the strict requirements” to temporarily lift the supply chain risk designation imposed by the Pentagon, a US appeals court in Washington, DC ruled Wednesday. The decision contradicts one published last month by a judge in a lower court in San Francisco, and it was not immediately clear how the conflicting preliminary rulings would be resolved.
The government sanctioned Anthropic under two different supply chain laws with similar effects, and the courts in San Francisco and Washington, DC are each ruling on only one of them. Anthropic said it was the first US company to be designated under the two laws, which are typically used to punish foreign companies that pose a national security risk.
“Granting a stay would force the U.S. military to prolong its relationship with an undesirable provider of critical AI services in the midst of an ongoing significant military conflict,” the three-judge appellate panel wrote Wednesday in what they described as an unprecedented case. The panel said that while Anthropic could suffer financial harm from the pending designation, they did not want to risk “substantial judicial imposition on military operations” or “lightly overriding” the military’s national security judgments.
The San Francisco judge had ruled that the Defense Department likely acted in bad faith against Anthropic, motivated by frustration with the AI company’s proposed limits on how its technology could be used and its public criticism of those restrictions. The judge ordered the supply chain risk label removed last week, and the Trump administration complied by restoring access to Anthropic’s AI tools within the Pentagon and the rest of the federal government.
Anthropic spokeswoman Danielle Cohen said the company was grateful to the Washington, D.C., court “for recognizing that these issues needed to be resolved quickly” and remained confident that “the courts will ultimately agree that these supply chain designations were unlawful.”
The Department of Defense did not immediately respond to a request for comment, but Acting Attorney General Todd Blanche said in a statement: “Our position has been clear from the beginning: Our military requires full access to Anthropic’s models if their technology is integrated into our sensitive systems. Military authority and operational control resides with the Commander in Chief and the War Department, not a technology company.”
These cases test the power of the executive branch over the conduct of technology companies. The battle between Anthropic and the Trump administration also plays out as the Pentagon deploys AI in its war against Iran. The company argued that it was being illegally punished for insisting that its Claude AI tool lacked the precision needed for certain sensitive operations, such as carrying out deadly drone attacks without human supervision.
Several experts in public procurement and business law have told WIRED that Anthropic has a strong case against the government, but that courts sometimes refuse to overturn White House decisions on issues related to national security. Some AI researchers said the Pentagon’s actions against Anthropic “chill professional debate” on the performance of AI systems.
Anthropic claimed in court that it lost business because of the designation, which government lawyers say prohibits the Pentagon and its contractors from using the company’s Claude AI on military projects. And as long as Trump remains in office, Anthropic may not be able to regain the important place it once held in the federal government.
Final decisions in the two lawsuits filed by the company could be months away. The Washington court is scheduled to hear oral arguments on May 19.
So far, the parties have revealed few details about exactly how the Department of Defense has used Claude or what progress has been made in transitioning personnel to other AI tools from Google DeepMind, OpenAI, or others. The military, which under President Trump calls itself the War Department, said it has taken steps to ensure Anthropic cannot deliberately try to sabotage its AI tools during the transition.
Updated 4/8/26 at 7:27 a.m. EDT: This story has been updated to include a statement from Acting Attorney General Todd Blanche.




























