California laws cracking down on AI election deepfakes face legal challenges

SACRAMENTO, California — California now has some of the strictest laws in the United States to tackle election deepfakes ahead of the 2024 elections, after Governor Gavin Newsom signed three groundbreaking proposals at an artificial intelligence conference in San Francisco this week.

The state could be among the first to test such legislation, which bans the use of AI to create and distribute fake images and videos in political ads close to Election Day.

But two of the three laws, including one meant to curb the practice in the 2024 elections, are already being challenged in a lawsuit filed Tuesday in Sacramento.

One of them took effect immediately and allows anyone to sue for damages over election deepfakes. The other requires large online platforms, such as X, to remove misleading material starting next year.

The lawsuit, filed by an individual who made parody videos with altered audio of Vice President and Democratic presidential candidate Kamala Harris, says the laws censor free speech and allow anyone to take legal action against content they don’t like. At least one of his videos was shared by Elon Musk, owner of the social media platform X, prompting Newsom to promise in a post on X to ban such content.

The governor’s office said the law does not prohibit satire and parody. Instead, it requires that altered videos or images disclose that AI was used to create them.

“It is unclear why this conservative activist is suing California,” Newsom spokeswoman Izzy Gardon said in a statement. “This new election misinformation disclosure law is no more onerous than laws already passed in other states, including Alabama.”

Theodore Frank, an attorney representing the plaintiff, said California’s laws go too far and are designed to “force social media companies to censor and harass people.”

“I’m not familiar with Alabama law. On the other hand, the governor of Alabama did not threaten our client the way the governor of California did,” he told The Associated Press.

The lawsuit appears to be one of the first legal challenges to such legislation in the U.S. Frank told AP he plans to file another lawsuit over similar laws in Minnesota.

Lawmakers in more than a dozen states have introduced similar proposals as the rise of AI has heightened the threat of election disinformation worldwide.

Of the three bills Newsom signed Tuesday, one took effect immediately and is the most far-reaching in scope, aimed at keeping deepfakes out of the 2024 election. It targets not only materials that could influence how people vote, but also any video or image that could misrepresent election integrity. It also covers materials depicting election workers and voting machines, not just political candidates.

The law makes it illegal to create and publish false election-related materials in the 120 days before Election Day and the 60 days after. It also allows courts to halt distribution of the materials, and violators can face civil penalties. The law exempts parody and satire.

The goal, Newsom and lawmakers said, is to avoid eroding public confidence in U.S. elections amid a “charged political climate.”

But critics, including free speech advocates and Musk, called California’s new law unconstitutional and an infringement on the First Amendment. Hours after it was signed into law, Musk on Tuesday night boosted a post on X sharing an AI-generated video with altered audio of Harris.

“The Governor of California just declared this parody video illegal, in violation of the United States Constitution. It would be a shame if it went viral,” Musk wrote of the AI-generated video, which has a caption identifying the video as a parody.

It’s not clear how effective such laws will be at stopping election deepfakes, said Ilana Beller of Public Citizen, a nonprofit consumer advocacy group that tracks state laws on the issue. None of the laws have yet been tested in a courtroom, Beller said.

The law’s effectiveness could be blunted by the courts’ slowness in dealing with a technology that can produce fake images for political ads and distribute them at lightning speed.

According to Beller, it could take several days for a court to issue an injunction to stop the content from spreading, by which time the damage to a candidate or an election could already have been done.

“In an ideal world, we would be able to take down the content as soon as it’s online,” she said. “Because the sooner you take down the content, the fewer people see it and the fewer people spread it through reposts and things like that.”

Still, such a law could act as a deterrent to potential violations, she said.

Assemblywoman Gail Pellerin declined to comment on the lawsuit, but said the law she drafted is a simple tool to prevent misinformation.

“What we’re saying is, hey, just mark that video as digitally edited for parody purposes,” Pellerin said. “And then it’s very clear that it’s for satire or parody.”

Newsom on Tuesday also signed a third law, which will require campaigns to publicly disclose their use of AI-generated materials starting next year, after the 2024 elections.