What to know about how lawmakers are addressing deepfakes like the ones that victimized Taylor Swift

Even before pornographic and violent deepfake images of Taylor Swift began circulating widely in recent days, U.S. lawmakers were searching for ways to crack down on such non-consensual images of both adults and children.

But the issue has received far more attention since Swift became the target of deepfakes, the computer-generated images that use artificial intelligence to appear real.

Here are things to know about what states have done and what they are considering.

Artificial intelligence entered the mainstream like never before last year, allowing people to create increasingly realistic deepfakes. Now they appear online more often, in different forms.

There is pornography, which takes advantage of celebrities like Swift to create false, compromising images.

There’s music — a song that sounded like Drake and The Weeknd performing together got millions of clicks on streaming services — but it wasn’t those artists. The song has been removed from platforms.

And there are political dirty tricks this election year. Just before New Hampshire’s presidential primary in January, some voters reported receiving robocalls purporting to be from President Joe Biden, telling them not to bother voting. The state attorney general’s office is investigating.

More common, though, is porn that uses the likenesses of non-famous people, including minors.

Deepfakes are just one area of the complicated world of AI that lawmakers are trying to decide whether, and how, to regulate.

At least ten states have already passed deepfake-related laws. More measures are being considered in legislatures across the country this year.

Georgia, Hawaii, Texas and Virginia have laws on the books that criminalize non-consensual deepfake porn.

California and Illinois have given victims the right to sue those who create images using their likenesses.

Minnesota and New York do both. Minnesota’s law also addresses the use of deepfakes in politics.

University at Buffalo computer science professor Siwei Lyu said there are several approaches being used, but none are perfect.

One is deepfake detection algorithms, which can be used to flag suspected deepfakes in places like social media platforms.

Another, which Lyu said is in development but not yet widely used, is embedding codes in the content people upload that would indicate whether it is later reused in an AI creation.

And a third mechanism would be to require companies offering AI tools to include digital watermarks to identify the content generated with their applications.
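To make the watermarking idea concrete, here is a purely illustrative sketch, not any company’s actual scheme: an AI tool could attach a signed provenance tag to the content it generates, and a platform could later check it. Real watermarking systems embed robust signals in the media itself; the key, model name and tag format below are hypothetical.

```python
# Toy provenance tag: the generator signs its output, a verifier checks the signature.
# Illustrative only; real watermarks are embedded in the pixels and survive edits.
import hashlib
import hmac
import json

GENERATOR_KEY = b"example-secret-held-by-the-ai-company"  # hypothetical key


def tag_content(content: bytes, model_name: str) -> dict:
    """Produce a tag tying generated content to the tool that made it."""
    digest = hmac.new(GENERATOR_KEY, content, hashlib.sha256).hexdigest()
    return {"model": model_name, "content_hmac": digest}


def verify_tag(content: bytes, tag: dict) -> bool:
    """Check whether the tag matches the content (requires the same key)."""
    expected = hmac.new(GENERATOR_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag.get("content_hmac", ""))


if __name__ == "__main__":
    fake_image_bytes = b"...generated image bytes..."
    tag = tag_content(fake_image_bytes, "example-image-model")
    print(json.dumps(tag))
    print("verified:", verify_tag(fake_image_bytes, tag))   # True
    print("tampered:", verify_tag(b"edited bytes", tag))    # False
```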

He said it makes sense to hold those companies accountable for how people use their tools, and that companies can in turn enforce user agreements against the creation of problematic deepfakes.

Model legislation proposed by the American Legislative Exchange Council addresses porn, not politics. The conservative and pro-business policy group is encouraging states to do two things: criminalize the possession and distribution of deepfakes depicting minors engaged in sexual acts, and allow victims to sue people who distribute non-consensual deepfakes depicting sexual conduct.

“I would encourage lawmakers to start with a small, prescriptive solution that can solve a tangible problem,” said Jake Morabito, who leads the communications and technology task force for ALEC. He warned that lawmakers should not target the technology that can be used to create deepfakes, since doing so could stifle innovation that has other important applications.

Todd Helmus, a behavioral scientist at RAND, a nonpartisan think tank, points out that leaving enforcement to individuals who file lawsuits is insufficient. Resources are needed to file a lawsuit, he said. And the result may not be worth it. “It’s not worth suing someone who doesn’t have money to give you,” he said.

Helmus calls for guardrails across the system and says making them work will likely require government involvement.

He said OpenAI and other companies whose platforms can be used to generate seemingly realistic content should make efforts to prevent deepfakes from being created; social media companies need better systems to stop them from spreading; and there should be legal consequences for those who create them.

Jenna Leventoff, a First Amendment attorney with the ACLU, said that while deepfakes can cause harm, free speech protections apply to them too, and lawmakers should be careful not to go beyond existing exceptions to freedom of expression, such as defamation, fraud and obscenity, when trying to regulate emerging technology.

Last week, White House press secretary Karine Jean-Pierre addressed the issue, saying social media companies should create and enforce their own rules to prevent the spread of misinformation and of images like the ones of Swift.

A bipartisan group of members of Congress introduced federal legislation in January that would give people ownership of their own likeness and voice, along with the ability to sue anyone who uses a deepfake to depict them deceptively for any reason.

Most states are considering some form of deepfake legislation during their sessions this year. They are being introduced by Democrats, Republicans and bipartisan coalitions of lawmakers.

The bills gaining traction include one in Republican-dominated Indiana that would make it a crime to create or distribute sexually explicit images of a person without their consent. It passed the state House unanimously in January.

A similar measure introduced this week in Missouri is called “The Taylor Swift Act.” Another cleared the state Senate this week in South Dakota, where Attorney General Marty Jackley said some investigations have been turned over to federal officials because the state lacks the AI-related laws needed to bring charges.

“If you go to someone’s Facebook page, steal their child’s image and put it in pornography, there is no First Amendment right to do that,” Jackley said.

For anyone with an online presence, it can be difficult to avoid becoming a deepfake victim.

But RAND’s Helmus says people who find they have been targeted can ask the social media platform where the images are shared to remove them; report it to police if they live in a place where a law applies; tell school or college officials if the suspected perpetrator is a student; and seek mental health help if needed.

___

Associated Press reporters from across the U.S. contributed to this article.