OpenAI has thrown its support behind an Illinois state bill that would shield AI labs from liability in cases where AI models are used to cause severe harm to society, such as the death or serious injury of 100 or more people, or at least $1 billion in property damage.
The effort appears to mark a shift in OpenAI’s legislative strategy. Until now, OpenAI has largely played defense, opposing bills that could hold AI labs liable for the harms of their technology. Several AI policy experts told WIRED that SB 3444, which could set a new standard for the industry, is a more extreme measure than the bills OpenAI has supported in the past.
The bill would shield AI developers from liability for “critical harm” caused by their frontier models, so long as they did not intentionally or recklessly cause the incident and published safety, security, and transparency reports on their websites. It defines a frontier model as any AI model trained at a computational cost of more than $100 million, a threshold that would likely cover America’s largest AI labs, including OpenAI, Google, xAI, Anthropic, and Meta.
“We support approaches like this because they focus on what matters most: reducing the risk of serious harm from the most advanced AI systems while getting this technology into the hands of Illinois individuals and businesses, small and large,” OpenAI spokesperson Jamie Radice said in an emailed statement. “They also help avoid a patchwork of state-by-state rules and move toward clearer and more consistent national standards.”
In its definition of critical harm, the bill lists a few areas of concern common across the AI industry, such as a bad actor using AI to create a chemical, biological, radiological, or nuclear weapon. An AI model that, on its own, engaged in conduct that would constitute a criminal offense if committed by a human, and that led to these extreme outcomes, would also count as causing critical harm. Under SB 3444, if an AI model committed any of these acts, the AI lab behind it could not be held liable, so long as the harm was not intentional or reckless and the lab had published its reports.
U.S. federal and state legislatures have yet to pass laws specifically addressing whether the developers of AI models, like OpenAI, could be held liable for these kinds of harms caused by their technology. But as AI labs continue to release more powerful models that raise new safety and cybersecurity challenges, such as Anthropic’s Claude, these questions feel increasingly pressing.
In her testimony in support of SB 3444, Caitlin Niedermeyer, a member of OpenAI’s global affairs team, also advocated for a federal framework for regulating AI. Niedermeyer delivered a message consistent with the Trump administration’s crackdown on state AI safety laws, saying it is important to avoid “a patchwork of inconsistent state requirements that could create friction without significantly improving safety.” That also tracks with the broader view in Silicon Valley in recent years, which has generally held it paramount that AI legislation not hinder America’s position in the global AI race. Although SB 3444 is itself a state-level safety law, Niedermeyer argued that such laws can be effective if they “strengthen the path toward harmonization with federal frameworks.”
“At OpenAI, we believe the North Star of frontier regulation should be the safe deployment of the most advanced models in a way that also preserves American leadership in innovation,” Niedermeyer said.
Scott Wisor, policy director of the Secure AI Project, told WIRED he thinks this bill is unlikely to pass, given Illinois’ reputation for aggressively regulating technology. “We surveyed people in Illinois to ask if they think AI companies should be exempt from liability, and 90% of people are opposed. There is no reason for existing AI companies to face reduced liability,” Wisor says.
Wisor notes that Illinois lawmakers have also introduced bills that would increase liability for AI model developers. Last August, the state became the first in the country to pass legislation limiting the use of AI in mental health services. Illinois was also the first to regulate the collection of biometric data, passing the Biometric Information Privacy Act in 2008.
While SB 3444 focuses on mass-casualty events and large financial catastrophes, AI labs also face questions about the harm their models can cause at the individual level. Several family members of children who died by suicide after allegedly developing unhealthy relationships with ChatGPT sued OpenAI last year.
The federal AI legislation that Niedermeyer advocated for in her testimony remains an elusive goal for Congress. While the Trump administration has issued executive orders and frameworks in an attempt to catalyze federal AI legislation, its talk of enacting such a measure doesn’t seem to be going anywhere. In the absence of federal guidance, states such as California and New York have passed bills, including SB 53 and the RAISE Act, that require developers of AI models to submit safety and transparency reports.
Years into the AI boom, what happens if an AI model causes a catastrophic event remains an open legal question.


























