A manipulated video shared by Musk mimics Harris’ voice, raising concerns about AI in politics
NEW YORK — A manipulated video that contains the voice of Vice President Kamala Harris saying things she didn’t say raises concerns about the power of artificial intelligence to mislead, with Election Day only about three months away.
The video gained attention after tech billionaire Elon Musk shared it on his social media platform X on Friday evening, without explicitly mentioning that it was originally released as a parody.
The video uses much of the same footage as a real ad that Harris, the likely Democratic presidential nominee, released last week to launch her campaign. But it swaps out the voiceover audio for another voice that convincingly imitates Harris.
“I, Kamala Harris, am your Democratic nominee for President because Joe Biden finally showed his senility at the debate,” the voice says in the video. It claims Harris is a “diversity hire” because she is a woman and a person of color, and says she “doesn’t have a clue how to run this country.” The video retains the “Harris for President” branding and also adds some authentic past clips of Harris.
Mia Ehrenberg, a spokeswoman for the Harris campaign, said in an email to The Associated Press: “We believe the American people want the real freedom, opportunity and security that Vice President Harris provides; not the false, manipulated lies of Elon Musk and Donald Trump.”
The widely shared video is an example of how lifelike AI-generated images, videos and audio clips have been used both to mock and to mislead about politics as the United States moves closer to the presidential election. It also shows that, even as advanced AI tools become far more accessible, no significant action has been taken at the federal level to regulate their use, leaving the rules for AI in politics largely up to states and social media platforms.
The video also raises questions about how best to deal with content that blurs the lines of what is considered appropriate use of AI, especially if it falls into the category of satire.
The YouTuber who originally posted the video, known as Mr. Reagan, has disclosed on both YouTube and X that the manipulated video is a parody. But Musk’s post, which the platform says has been viewed more than 123 million times, includes only the caption “This is amazing” with a laughing face emoji.
Users familiar with X may know that clicking through Musk’s post would take them to the original user’s post, where the disclosure is visible. But Musk’s caption gives no indication to do so.
While some participants in X’s “community note” feature for adding context to posts have suggested labeling Musk’s post, no such label had been added as of Sunday afternoon. Some users online wondered whether his post might violate X’s policy, which states that users “may not share synthetic, manipulated, or decontextualized media that could mislead or confuse people and cause harm.”
The policy makes an exception for memes and satire, as long as they do not cause “significant confusion about the authenticity of the media.”
Musk earlier this month endorsed former President Donald Trump, the Republican nominee. Neither Reagan nor Musk immediately responded to emailed requests for comment Sunday.
Two experts specializing in AI-generated media reviewed the audio of the fake ad and confirmed that much of it was generated using AI technology.
According to Hany Farid, a digital forensics expert at the University of California, Berkeley, the video shows the power of generative AI and deepfakes.
“The AI-generated voice is very good,” he said in an email. “While most people won’t believe it’s VP Harris’ voice, the video is so much more powerful when the words are in her voice.”
He said generative AI companies that make voice cloning and other AI tools available to the public should be more careful that their services are not used in ways that could be harmful to people or democracy.
Rob Weissman, co-president of the advocacy group Public Citizen, disagreed with Farid, saying he thought many people would be misled by the video.
“I don’t think it’s clearly a joke,” Weissman said in an interview. “I’m sure most people who watch it don’t assume it’s a joke. The quality isn’t great, but it’s good enough. And precisely because it plays into pre-existing themes that are circulating around her, most people will believe it’s real.”
Weissman, whose organization advocates for Congress, federal agencies and states to regulate generative AI, said the video is “the kind of thing we warn about.”
Other generative AI deepfakes in both the U.S. and elsewhere have reportedly sought to influence voters with disinformation, humor or both. In Slovakia in 2023, fake audio clips impersonated a candidate discussing plans to rig an election and raise the price of beer days before the vote. In Louisiana in 2022, a satirical advertisement from a political action committee superimposed the face of a mayoral candidate onto an actor portraying him as a failing high school student.
Congress has yet to pass legislation on AI in politics, and federal agencies have taken only limited steps, leaving most existing U.S. regulations to the states. More than a third of states have created their own laws regulating the use of AI in campaigns and elections, according to the National Conference of State Legislatures.
In addition to X, other social media companies have established policies regarding synthetic and manipulated media shared on their platforms. For example, users on the video platform YouTube must disclose whether they used generative artificial intelligence to create videos or risk suspension.
___
The Associated Press receives support from several private foundations to enhance its explanatory reporting on elections and democracy. See more about AP’s Democracy Initiative here. The AP is solely responsible for all content.