California governor vetoes bill to create first-in-nation AI safety measures
SACRAMENTO, California — California Governor Gavin Newsom vetoed a landmark bill aimed at establishing the first-in-the-nation safety measures for large artificial intelligence models Sunday.
The decision is a major blow to efforts to rein in the homegrown industry, which is developing rapidly and with little oversight. The bill would have established some of the first regulations on large-scale AI models in the nation and paved the way for AI safety rules across the country, supporters said.
Earlier this month, the Democratic governor told an audience at Dreamforce, an annual conference hosted by software giant Salesforce, that California must take the lead in regulating AI in the face of federal inaction, but that the proposal “could have a chilling effect on the industry.”
The proposal, which drew fierce opposition from startups, tech giants and several Democratic House members, could have hurt the domestic industry by imposing strict requirements, Newsom said.
“While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making, or uses sensitive data,” Newsom said in a statement. “Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.”
Newsom instead announced Sunday that the state will work with several industry experts, including AI pioneer Fei-Fei Li, to develop guardrails around powerful AI models. Li opposed the AI safety proposal.
The measure, aimed at reducing the potential risks posed by AI, would have required companies to test their models and publicly disclose their safety protocols to prevent the models from being manipulated to, for example, wipe out the state’s electric grid or help build chemical weapons. Experts say those scenarios could be possible in the future as the industry continues to advance rapidly. It would also have given whistleblower protections to workers.
The bill’s author, Democratic Senator Scott Wiener, called the veto “a setback for everyone who believes in oversight of massive corporations that are making critical decisions that affect the safety and the welfare of the public and the future of the planet.”
“The companies developing advanced AI systems recognize that the risks these models pose to the public are real and rapidly increasing. While the major AI labs have made admirable commitments to monitor and mitigate these risks, the truth is that voluntary industry commitments are unenforceable and rarely work out well for the public,” Wiener said in a statement Sunday afternoon.
Wiener said the debate surrounding the bill has dramatically advanced the issue of AI safety, and he will continue to push that point.
The legislation is among a host of bills passed by the Legislature this year to regulate AI, fight deepfakes and protect workers. State lawmakers said California must act this year, citing the hard lessons learned from failing to rein in social media companies when they might have had a chance.
Proponents of the measure, including Elon Musk and Anthropic, said the proposal could have injected some levels of transparency and accountability around large-scale AI models, as developers and experts say they still don’t have a full understanding of how AI models behave and why.
The bill targeted systems that cost more than $100 million to build. No current AI models have hit that threshold, but some experts said that could change within the next year.
“This is because of the massive investment scale-up within the industry,” said Daniel Kokotajlo, a former OpenAI researcher who resigned in April over what he saw as the company’s disregard for AI risks. “This is a crazy amount of power to have any private company control unaccountably, and it’s also incredibly risky.”
The United States is already behind Europe in regulating AI to limit risks. California’s proposal wasn’t as comprehensive as regulations in Europe, but it would have been a good first step to set guardrails around the rapidly growing technology that is raising concerns about job loss, misinformation, invasions of privacy and automation bias, supporters said.
A number of leading AI companies last year voluntarily agreed to follow safeguards set by the White House, such as testing and sharing information about their models. The California bill would have mandated that AI developers follow requirements similar to those commitments, the measure’s supporters said.
But critics, including former Speaker of the U.S. House of Representatives Nancy Pelosi, argued the bill would “kill California technology” and stifle innovation. It would have discouraged AI developers from investing in large models or sharing open source software, they said.
Newsom’s decision to veto the bill marks another win in California for big tech companies and AI developers, many of whom spent the past year lobbying alongside the California Chamber of Commerce to sway the governor and lawmakers from advancing AI regulations.
Two other sweeping AI proposals, which also faced mounting opposition from the tech industry and others, died ahead of a legislative deadline last month. The bills would have required AI developers to label AI-generated content and banned discrimination by AI tools used to make employment decisions.
The governor said earlier this summer that he wanted to protect California’s status as a global leader in AI, noting that 32 of the world’s 50 largest AI companies are based in the state.
He has promoted the state as an early adopter that could soon deploy generative AI tools to address highway congestion, provide tax guidance and streamline homelessness programs. The state also announced last month a voluntary partnership with AI giant Nvidia to help train students, college faculty, developers and data scientists. California is also considering new rules against AI discrimination in hiring practices.
Earlier this month, Newsom signed some of the toughest laws in the country to crack down on election deepfakes and measures to protect Hollywood workers from unauthorized AI use.
But even with Newsom’s veto, California’s safety proposal is inspiring lawmakers in other states to take up similar measures, said Tatiana Rice, deputy director of the Future of Privacy Forum, a nonprofit that works with lawmakers on technology and privacy proposals.
“They may copy it or do something similar in the next legislative session,” Rice said. “So it’s not going away.”
—-
The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of AP’s text archives.