Politics / March 11, 2026
But make no mistake: the company is not one of the good guys.
Anthropic CEO Dario Amodei, Chief Product Officer Mike Krieger and Communications Manager Sasha de Marigny give a press conference on May 22, 2025.
(Julie Jammot/AFP via Getty Images)

Anthropic, creator of the “Claude” AI model, has sued the Department of Defense in two separate lawsuits, one of which alleges the government is violating its First Amendment rights. The conflict erupted last week when the Trump administration called the company a “supply chain risk” and banned government agencies, or any entity working with the U.S. military, from using the Claude system. The Trump administration now considers Claude a national security risk. (The second lawsuit challenges that designation, which, until now, has never been used against a U.S. company.)
The blacklisting follows months of fighting between Anthropic and the government. Anthropic wants to maintain “guardrails” on Claude that prevent the system from being used to power autonomous weapons – essentially, killing machines capable of conducting military operations without human involvement – or to conduct widespread surveillance of Americans. The Trump administration wants the company to relax these safeguards. Clearly, War Crimes Secretary Pete Hegseth wants the killer robots now, and he doesn’t like Anthropic getting in his way.
The government has repeatedly threatened Anthropic with consequences if it does not remove its safety restrictions. It would appear that the supply chain risk designation and the associated blacklisting are those consequences.
All of this should make Anthropic’s case a sure-fire winner, at least on the First Amendment claim, assuming there are still judges and justices willing to hold the Trump administration constitutionally accountable, even in the area of national security. Anthropic’s complaint makes a fairly clear case that the First Amendment has been violated (I’m less familiar with the other claim, although my assumption, based on prior history, is that the Trump administration is indeed violating all the laws it is accused of violating).
The simple facts are these: The government wanted Anthropic to make its AI do something. Anthropic didn’t want its AI to do this, because of its beliefs, and those beliefs are protected by the First Amendment. The government punished Anthropic with an adverse national security designation, because the company would not do what the government wanted. This is a violation of freedom of expression.
It would have been one thing if the government had simply decided to use another AI vendor or, God forbid, stopped using AI for military purposes. That would not violate the First Amendment; it would simply be the government choosing to use a different service. But the government was not content to take its business elsewhere: it decided to punish Anthropic by declaring the company a threat to national security.
As so often happens, Donald Trump’s chronic inability to stay silent even when he violates the Constitution should help Anthropic make its case. On social media, he called Anthropic “out of control” and “RADICAL LEFT, WAKE UP CORPORATE” and “left-wing weirdo jobs.” He is not saying that the company can no longer provide a useful service to the government; he is saying the government is blacklisting the company because of its political views.
Hegseth doubled down on those comments. According to the complaint, when Hegseth issued the blacklisting order, he “denounced what he called ‘Silicon Valley ideology,’ ‘flawed altruism,’ ‘corporate virtue signaling,’ and a ‘master class in arrogance.’” And he criticized Anthropic for not being “more patriotic.”
All of this violates the First Amendment. The DOD can use any service provider it wants, but it cannot slap a company with an unfavorable legal designation for a lack of “patriotism.” Punishing people who don’t wave the flag enthusiastically enough is one of the things the First Amendment was designed to prevent.
There is recent case law, from the Trump-controlled Supreme Court no less, that should also help Anthropic’s cause. In National Rifle Association v. Vullo, the NRA successfully argued that New York State Department of Financial Services Superintendent Maria Vullo had pressured banks and insurance companies to stop doing business with the NRA and other pro-gun groups in the wake of the Parkland shooting. The Supreme Court ruled that this violated the NRA’s First Amendment rights, essentially saying that New York State was using its power to take business away from the NRA because New York didn’t like what the NRA stood for.
That decision, by the way, was 9-0. The unanimous opinion was written by Justice Sonia Sotomayor, who is not exactly on the ammosexual side of the spectrum. But trying to crush a company because the government doesn’t like what the company stands for is a classic violation of the First Amendment. I suspect the justices who treat Trump like God on national security issues (Chief Justice John Roberts and Justices Clarence Thomas, Sam Alito, and alleged rapist Brett Kavanaugh) will find a way around Vullo and decide that the First Amendment doesn’t matter when Trump wants your company to automate the killing of people, but that still only gets the Trump administration to four votes.
Anthropic should win, but here’s the thing: it’s not exactly one of the good guys. Yes, the current generation of war criminals running the government wants horrible things, but Anthropic mostly wants to provide them. It’s not, after all, as if the company hasn’t pursued the $200 million in contracts the government is now trying to take away from it. And company executives have gone out of their way to say how “patriotic” they are and how much they believe in using AI for national security. They’re basically saying they’ll let Claude do anything short of pulling the trigger:
Anthropic therefore worked proactively to deploy our models to the Department of War and the Intelligence Community. We were the first cutting-edge AI company to deploy our models on classified US government networks, the first to deploy them at national laboratories, and the first to provide custom models for national security customers. Claude is widely deployed within the Department of War and other national security agencies for critical applications, such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.
The company wants to help the Trump administration do almost every bad thing it wants to do. And it’s happy to play along in ways both large and very small (see its repeated, ingratiating references to the “War Department”).
Here’s my reading: I feel like Anthropic is simply trying to maintain plausible deniability for when, inevitably, its system is used in the most blatantly horrific way possible. Think of it this way: when Claude kills the “wrong” person (or, more likely, a village full of people), the lawsuit won’t just be against the U.S. government; it will also be a business-destroying lawsuit filed against Anthropic. And I’ll bet all of Anthropic’s venture capital funding that the government will try to blame any violent incidents on Anthropic and not the drunk guys who run the Department of Defense. All the company’s rhetoric and safety protocols about what Claude should not be used for reads to me, more than anything, like a preemptive liability shield.
Anthropic seems to me to be the guys who split the atom and then said, “But we’re only going to use this for scientific purposes, not to make… bombs that could destroy all of human civilization, right? Right, Robbie Oppenheimer?” Sure, you may want your technology to “only be used for good,” but… that’s not how technology works. And that’s certainly not how the American war machine works.
The best outcome would be for the DOD to be barred from using lethal autonomous AI and from surveilling the American public by an act of Congress, not by a defense of Anthropic’s First Amendment rights. This situation calls for legislation, not a 5-4 Supreme Court decision on whether the government can blacklist companies that won’t do what it wants.
The Trump administration should not be able to designate a company as a national security threat because it won’t build Terminators. But even if Anthropic (for now) doesn’t want its technology used this way, the next company won’t have a problem with it. OpenAI, creator of ChatGPT, is already trying to fill the void left by Claude.
Eventually, we will be told that we simply must make autonomous killer robots because the Chinese, the Russians, or the Klingons are already doing it and we can’t fall behind.
As per usual, Terminator 2 predicted all of this.
John Connor: “We’re not going to make it, are we? People, I mean.”
Terminator: “It’s in your nature to destroy yourselves.”
Elie Mystal is The Nation’s justice correspondent and columnist. He is also an Alfred Knobler Fellow at the Type Media Center. He is the author of two books, the New York Times bestseller Allow Me to Retort: A Black Guy’s Guide to the Constitution and Bad Law: Ten Popular Laws That Are Ruining America, both published by The New Press. You can subscribe to his Nation newsletter, “Elie v. U.S.,” here.




























