California advances measures targeting AI discrimination and deepfakes

SACRAMENTO, California — As companies increasingly integrate artificial intelligence technologies into Americans’ daily lives, California lawmakers want to build public trust, combat algorithmic discrimination and ban deepfakes involving elections or pornography.

The efforts in California – home to many of the world’s largest AI companies – could pave the way for AI regulations across the country. The United States is already behind Europe in regulating AI to limit risks, lawmakers and experts say, and the fast-growing technology is raising concerns about job losses, disinformation, privacy breaches and automation bias.

A slew of proposals aimed at addressing these concerns were put forward last week, but each must first win approval from the other chamber before reaching Gov. Gavin Newsom’s desk. The Democratic governor has promoted California as an early adopter and regulator, saying the state could soon deploy generative AI tools to tackle traffic congestion, make roads safer and provide tax advice, even as his administration considers new rules against AI discrimination in hiring practices. On Wednesday, he announced at an AI summit in San Francisco that the state is considering at least three more tools, including one to address homelessness.

With strong privacy laws already in place, California is better positioned to enact impactful regulations than other states with big AI interests, such as New York, said Tatiana Rice, deputy director of the Future of Privacy Forum, a nonprofit organization that works with legislators on technology and privacy proposals.

“You need a data privacy law to pass an AI law,” Rice said. “We’re still paying a little bit of attention to what New York is doing, but I would bet more on California.”

California’s leaders said the state can’t afford to wait to act, citing the hard lessons learned from failing to govern social media companies when they might have had a chance. But they also want to continue attracting AI companies to the state.

“We want to dominate this space, and I’m too competitive to suggest otherwise,” Newsom said at the event on Wednesday. “I think in many ways the world looks to us as a leader in this area, and so we feel a deep sense of responsibility to get this right.”

Here’s a closer look at California’s proposals:

Some companies, including hospitals, are already using AI models to make decisions about hiring, housing and medical options for millions of Americans without much oversight. According to the U.S. Equal Employment Opportunity Commission, as many as 83% of employers use AI to assist with hiring. How those algorithms work remains largely a mystery.

One of the most ambitious AI measures in California this year would pull back the curtain on these models by establishing an oversight framework to prevent bias and discrimination. It would require companies that use AI tools in consequential decisions to inform the people affected when AI is used. AI developers would have to routinely review their models internally for bias. And the attorney general would have the authority to investigate reports of discriminatory models and impose fines of $10,000 per violation.

AI companies could also soon be required to disclose what data they use to train their models.

Inspired by last year’s months-long strike by Hollywood actors, a California lawmaker wants to protect workers from being replaced by their AI-generated clones – a major point of contention in contract negotiations.

The proposal, backed by the California Labor Federation, would allow artists to withdraw from existing contracts if vague language might allow studios to freely use AI to digitally clone their voices and likenesses. It would also require artists to be represented by an attorney or union representative when signing new “voice and likeness” contracts.

California could also impose penalties for digitally cloning dead people without the consent of their estates, citing the case of a media company that produced a fake, hour-long, AI-generated comedy special imitating the style and material of the late comedian George Carlin without permission from his estate.

Risks in the real world abound as generative AI creates new content such as text, audio and photos in response to prompts. So lawmakers are considering requiring guardrails around “extremely large” AI systems that have the potential to spit out instructions for causing disasters – such as building chemical weapons or assisting in cyberattacks – that could cause at least $500 million in damage. Among other things, the measure would require such models to have a built-in “kill switch.”

The measure, backed by some of the most renowned AI researchers, would also create a new government agency to oversee developers and provide best practices, including for even more powerful models that don’t yet exist. The attorney general could also take legal action in case of violations.

A bipartisan coalition is seeking to facilitate the prosecution of people who use AI tools to create images of child sexual abuse. Current law does not allow prosecutors to go after people who possess or distribute AI-generated child sexual abuse images if the material does not depict a real person, law enforcement said.

A host of Democratic lawmakers are also backing a bill to crack down on deepfakes in elections, citing concerns after AI-generated robocalls imitated President Joe Biden’s voice ahead of the recent presidential primaries in New Hampshire. The proposal would ban “materially misleading” election-related deepfakes in political mailers, robocalls and TV ads 120 days before Election Day and 60 days afterward. Another proposal would require social media platforms to label all election-related posts created by AI.
