SACRAMENTO, Calif. (AP) — California Gov. Gavin Newsom on Monday signed a law that aims to prevent people from using powerful artificial intelligence models for potentially catastrophic activities like building a bioweapon or shutting down a bank system.
The move comes as Newsom touted California as a leader in AI regulation and criticized the inaction at the federal level in a recent conversation with former President Bill Clinton. The new law will establish some of the first-in-the-nation regulations on large-scale AI models without hurting the state's homegrown industry, Newsom said. Many of the world's top AI companies are located in California and will have to follow the requirements.
“California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive. This legislation strikes that balance," Newsom said in a statement.
The legislation requires AI companies to implement and publicly disclose safety protocols to prevent their most advanced models from being used to cause major harm. The rules are designed to cover AI systems if they meet a "frontier" threshold that signals they run on a huge amount of computing power.
Such thresholds are based on how many calculations the computers are performing. Those who crafted the regulations have acknowledged the numerical thresholds are an imperfect starting point to distinguish today’s highest-performing generative AI systems from the next generation that could be even more powerful. The existing systems are largely made by California-based companies like Anthropic, Google, Meta Platforms and OpenAI.
The legislation defines a catastrophic risk as something that would cause at least $1 billion in damage or more than 50 injuries or deaths. It's designed to guard against AI being used for activities that could cause mass disruption, such as hacking into a power grid.
Companies also have to report to the state any critical safety incidents within 15 days. The law creates whistleblower protections for AI workers and establishes a public cloud for researchers. It includes a fine of $1 million per violation.
It drew opposition from some tech companies, which argued that AI legislation should be done at the federal level. But Anthropic said the regulations are “practical safeguards” that make official the safety practices many companies are already doing voluntarily.
"While federal standards remain essential to avoid a patchwork of state regulations, California has created a strong framework that balances public safety with continued innovation,” Jack Clark, co-founder and head of policy at Anthropic, said in a statement.
The signing comes after Newsom last year vetoed a broader version of the legislation, siding with tech companies that said the requirements were too rigid and would have hampered innovation. Newsom instead asked a group of several industry experts, including AI pioneer Fei-Fei Li, to develop recommendations on guardrails around powerful AI models.
The new law incorporates recommendations and feedback from Newsom’s group of AI experts and the industry, supporters said. The legislation also doesn't put the same level of reporting requirements on startups to avoid hurting innovation, said state Sen. Scott Wiener of San Francisco, the bill's author.
“With this law, California is stepping up, once again, as a global leader on both technology innovation and safety,” Wiener said in a statement.
Newsom's decision comes as President Donald Trump in July announced a plan to eliminate what his administration sees as “onerous” regulations to speed up AI innovation and cement the U.S.' position as the global AI leader. Republicans in Congress earlier this year unsuccessfully tried to ban states and localities from regulating AI for a decade.
Without stronger federal regulations, states across the country have spent the last few years trying to rein in the technology, tackling everything from deepfakes in elections to AI “therapy.” In California, the Legislature this year passed a number of bills to address safety concerns around AI chatbots for children and the use of AI in the workplace.
California has also been an early adopter of AI technologies. The state has deployed generative AI tools to spot wildfires and address highway congestion and road safety, among other things.
Associated Press reporter Matt O'Brien contributed to the report.
Copyright 2025 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.