Sam Altman tells the government and Anthropic to ease tensions and work together

Sam Altman, CEO of OpenAI
(Image credit: Getty Images/Bloomberg)

  • Sam Altman urged the government and Anthropic to ease tensions and work together on AI governance.
  • He argued that governments should hold power over decisions regarding AI and national security.
  • He said he still largely trusts the government, while admitting many do not.

Relations between Anthropic and the U.S. government have become an unusually combustible flashpoint in the broader fight over the regulation and control of AI. The fight escalated when negotiations with the Pentagon over how Anthropic’s Claude AI model could be used broke down over the company’s refusal to remove safeguards against fully autonomous weapons and mass domestic surveillance.

Washington’s responses, including an executive directive banning federal agencies from using Anthropic’s technology and calling the company a “supply chain risk,” led to lawsuits alleging constitutional violations, and a federal judge has since temporarily blocked the Pentagon’s actions.

OpenAI CEO Sam Altman apparently sees harmony as necessary on both sides of the debate.


“Find a way to work together. Like stop, stop things on both sides, stop the escalation on both sides and find a way to work together,” Altman said in an interview with Laurie Segall.

AI security requirements


AI companies have touted the technology’s potential in areas like national security, even as they push for a light regulatory touch. Altman apparently concluded that businesses can’t have it both ways. If AI has as many geopolitical consequences as everyone claims, then governments will want to have their hands on the wheel.

“I don’t think it works for our industry to say, Hey, this is the most powerful technology that humanity has ever built,” Altman said. “It’s going to be a big deal in geopolitics. It’s going to be the greatest cyberweapon the world has ever built. It’s going to, you know, be instrumental in future wars and defense. And we’re not giving that to you.”

Of course, whether people feel comfortable with the government controlling such important technology is another question. Altman said he still largely trusts the system of checks and balances, although he acknowledged that many people currently “really don’t trust the government to follow the law.”


It’s a position that stands out from that of some AI leaders who are more wary of government. However, he believes it would be a mistake not to help the government with national security, particularly cyberinfrastructure.

“I think we need to work with the government, but I was miscalibrated about the intensity of the current climate of mistrust and I understand something now,” he said.

Trust AI control

Essentially, Altman and others aligned with him want to work with governments, even as public distrust of AI misuse increases.

“One of the most important questions the world will have to answer next year is: Are AI companies or governments more powerful? And I think it’s very important that governments are more powerful,” Altman said. “The future of the world and decisions about the most important elements of national security must be made through a democratically elected process, and by the people appointed through that process, not by me or the CEO of another lab.”

Altman kept returning to the problem that the power of AI is arriving faster than institutions, governments, or most people can adapt to it. Systems are becoming more capable, and their potential for misuse is growing at the same pace.

The stakes keep getting higher. A central problem is the standoff between those responsible for crafting safe regulations and the companies trying, at least in theory, to steer the technology in an ethical direction.

A diplomatic shrug urging diametrically opposed parties to “find a way to work together” is unlikely to solve these problems on its own. Still, it at least suggests Altman knows the answer won’t be obvious, even if he phrased it like a request to ChatGPT.




Eric Hal Schwartz is a freelance writer for TechRadar with over 15 years of experience covering the intersection of the world and technology. For the past five years, he served as editor-in-chief of Voicebot.ai and was at the forefront of reporting on generative AI and large language models. Since then, he has become an expert on generative AI products such as OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and other synthetic media tools. His experience spans the gamut of media, including print, digital, broadcast, and live events. Today, he continues to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York.