California considers unique safety regulations for AI companies, but faces tech firm opposition
SACRAMENTO, California — California lawmakers are considering legislation that would require companies that develop artificial intelligence to test their systems and add safeguards so they can’t be manipulated to take out the state’s power grid or create chemical weapons. Experts say such scenarios are possible in the future as the technology advances at a rapid pace.
Lawmakers plan to vote Tuesday on the first-of-its-kind bill, which aims to reduce the risks created by AI. It is fiercely opposed by tech companies including Google and Meta, the parent company of Facebook and Instagram. They say the regulation targets developers and should instead target those who use and exploit AI systems to cause harm.
Democratic Sen. Scott Wiener, who is authoring the bill, said the proposal would provide reasonable safety standards by preventing “catastrophic harm” from extremely powerful AI models that might be created in the future. The requirements would apply only to systems that require more than $100 million in computing power to train. As of July, no AI model had reached that threshold.
“This is not about smaller AI models,” Wiener said at a recent legislative hearing. “This is about incredibly large and powerful models that, to our knowledge, do not exist today, but will exist in the near future.”
Democratic Gov. Gavin Newsom has praised California as an early AI adopter and regulator, saying the state could soon deploy generative AI tools to tackle traffic congestion, make roads safer and provide tax advice. At the same time, his administration is considering new rules against AI discrimination in hiring. He declined to comment on the bill but warned that overregulation could put the state in a “dangerous position.”
The proposal, backed by some of the most renowned AI researchers, would also create a new state agency to oversee developers and provide best practices. The state attorney general would also be able to take legal action for violations.
A growing coalition of tech companies argues the requirements would prevent companies from developing large AI systems or keeping their technology open source.
“The bill will make the AI ecosystem less secure, jeopardize open-source models that startups and small businesses rely on, rely on standards that don’t exist, and create regulatory fragmentation,” Rob Sherman, Meta’s vice president and deputy chief privacy officer, wrote in a letter to lawmakers.
The state Chamber of Commerce said the proposal could also lead to companies leaving the state to avoid regulations.
Opponents want to wait for more guidance from the federal government. Supporters of the bill said California can’t wait, citing hard lessons learned from not moving quickly enough to rein in social media companies.
State lawmakers also considered another ambitious measure Tuesday aimed at fighting automation discrimination when companies use AI models to screen resumes and applications for rental apartments.