California Governor Gavin Newsom signed an executive order Monday requiring AI companies doing business with the state to implement security and privacy guidelines.
The order aims to ensure that companies contracting with the state adhere to rigorous standards and develop responsible policies to prevent misuse of their technology while protecting consumer security and privacy, according to Newsom’s office.
“California is leading in AI, and we will use every tool we have to ensure that companies protect people’s rights, without exploiting or putting them at risk,” Newsom said in a statement. “While others in Washington design policies and create contracts in the shadow of abuse, we strive to do it the right way.”
The executive order comes as the Trump administration maintains that the federal government should be responsible for regulating the AI industry — and that requiring AI companies to comply with 50 different sets of state laws would prevent the United States from “winning” the global AI race.
The White House recently released a new policy framework for regulating generative AI that focuses on some of people’s biggest concerns about AI: job loss, copyright chaos for creators, rapid expansion of infrastructure like data centers, and protection of vulnerable groups like children. But critics say it doesn’t go far enough to regulate the fast-growing AI sector.
Some states have passed laws making it a crime to create sexual images of people without their consent, while others have placed restrictions on insurance companies that use AI to approve or deny healthcare requests. Companies including Google, Meta, OpenAI and Andreessen Horowitz have called for national AI standards rather than a patchwork of differing rules across all 50 states.