Sam Altman, CEO of OpenAI, addresses the gathering at the AI Impact Summit, in New Delhi, India, February 19, 2026.
Bhavika Chhabra | Reuters
OpenAI CEO Sam Altman said Monday that the company “should not have rushed” its recent agreement with the U.S. Department of Defense and would revise some of its terms.
His comments came after the ChatGPT maker announced Friday that it had reached a new deal with the Defense Department, just hours after the White House ordered federal agencies to stop using tools from rival AI company Anthropic, and hours before Washington carried out strikes against Iran.
In a post, Altman said the Department of Defense had asserted that OpenAI’s tools would not be used by intelligence agencies such as the NSA.
“There are a lot of things that the technology is simply not ready for, and in many areas we don’t yet understand the security tradeoffs required,” Altman said, adding that the company would work with the Pentagon on technical safeguards.
The CEO also acknowledged that he had made a mistake and “shouldn’t have rushed” to close the deal on Friday.
“We were genuinely trying to de-escalate the situation and avoid a much worse outcome, but I think it just seemed opportunistic and sloppy,” he said.
The acknowledgment follows a public dispute between Anthropic and Washington over safeguards for its Claude AI systems. Defense Secretary Pete Hegseth had also said the company would be designated a supply chain threat.
Anthropic had sought guarantees that its tools would not be used for purposes such as domestic surveillance in the United States, or to develop and operate autonomous weapons without human control.
The dispute began after it was revealed that Anthropic’s Claude was used by the U.S. military during a raid to capture Venezuelan President Nicolás Maduro in January, although the company has not publicly objected to that use case.
OpenAI’s deal with the Pentagon came just after negotiations between Anthropic and the Department of Defense broke down, sparking a public backlash online, with many users reportedly abandoning ChatGPT for Claude on the app stores.
In his post, Altman addressed the controversy by stating, “In my conversations over the weekend, I reiterated that Anthropic should not be referred to as a [supply chain risk] and that we hope the [Department of Defense] offers them the same conditions that we accepted.”