AI experimentation is high risk, high reward for low-profile political campaigns

Adrian Perkins was running for re-election as mayor of Shreveport, Louisiana, when he was surprised by a hard campaign hit.

The satirical TV commercial, paid for by a rival political action committee, used artificial intelligence to portray Perkins as a high school student called to the principal’s office. Instead of a tongue-lashing for cheating on a test or getting into a fight, the principal berated Perkins for failing to keep communities safe and create jobs.

The video superimposed Perkins’ face over the body of an actor playing him. Although the ad was labeled as being made with “deep learning computer technology,” Perkins said it was powerful and resonated with voters. He didn’t have enough money or campaign staff to counter this, and thinks it was one of the many reasons he lost the 2022 race. A representative for the group behind the ad did not respond to a request for comment.

“One hundred percent the deepfake ad affected our campaign because we were a resource-poor campaign,” said Perkins, a Democrat. “You had to choose where to focus your efforts.”

While such attacks are a staple of raucous political campaigns, the ad targeting Perkins was notable: It was one of the first examples of an AI deepfake deployed in a US political race. It also foreshadowed a dilemma facing candidates in dozens of state and local races this year as generative AI has become more widespread and easier to use.

The technology – which can do everything from streamlining everyday campaign tasks to creating fake images, video or audio – is already deployed in certain national races across the country and has spread to elections around the world. Despite its power as a tool to deceive, efforts to regulate it have been piecemeal or delayed, a gap that could have the biggest impact on lower-profile elections.

Artificial intelligence is a double-edged sword for candidates running such campaigns. Cheap, easy-to-use AI models can help them save money and time on some of their daily tasks. But they often don’t have the staff or expertise to combat AI-generated falsehoods, raising fears that an eleventh-hour deepfake could fool enough voters to tilt races decided by narrow margins.

“AI-enabled threats impact close and low-profile races, where small shifts matter and where there are often fewer resources to correct misleading narratives,” said Josh Lawson, director of AI and democracy for the Aspen Institute.

Some local candidates have already faced criticism for using AI in deceptive ways, from a Republican Senate candidate in Tennessee who used an AI headshot to make himself look slimmer and younger, to the Democratic sheriff of Philadelphia, whose re-election campaign promoted fake news stories generated by ChatGPT.

One challenge in separating fact from fiction is the demise of local news outlets, which in many places has led to far less coverage of candidates running for state and local office, especially in reporting that delves into the backgrounds of the candidates and how their campaigns work. The lack of familiarity with candidates could make voters more open to believing false information, according to U.S. Senator Mark Warner of Virginia.

The Democrat, who has worked extensively on AI-related legislation as chairman of the Senate Intelligence Committee, said AI-generated disinformation is easier to spot and combat in high-profile races because they are under greater scrutiny. When an AI-generated robocall imitated President Joe Biden to discourage voters from going to the polls in this year’s New Hampshire primary, it was quickly reported in the media and investigated, with serious consequences for the players behind it.

More than a third of states have passed laws regulating artificial intelligence in politics, and legislation specifically aimed at combating election-related deepfakes received bipartisan support in every state where it has passed, according to the nonprofit consumer organization Public Citizen.

But Congress has yet to act, despite several bipartisan groups of lawmakers proposing such legislation.

“Congress is pathetic,” said Warner, who said he was pessimistic that Congress would pass legislation this year protecting elections from AI interference.

Travis Brimm, executive director of the Democratic Association of Secretaries of State, called the specter of AI disinformation in down-ballot races an evolving problem in which people are “still trying to figure out the best path forward.”

“This is a real challenge, and that’s why you’ve seen Democratic secretaries jump to address it and pass real legislation with real penalties around the misuse of AI,” Brimm said.

A spokesperson for the Republican Committee on Secretaries of State did not respond to the AP’s request for comment.

As experts and lawmakers worry about how generative AI attacks could skew the election, some candidates for state or local office say AI tools have proven invaluable to their campaigns. The technology refers to powerful computer systems, software or processes that can mimic aspects of human work and cognition.

Glenn Cook, a Republican running for a legislative seat in southeastern Georgia, is less known and has far less campaign money than the incumbent he faces in Tuesday’s runoff election. That’s why he invested in a digital consultant who creates much of his campaign’s content using low-cost, publicly available generative AI models.

On his website, AI-generated articles are interspersed with AI-generated images of community members smiling and chatting, none of whom actually exist. AI-generated podcast episodes use a cloned version of his voice to lay out his policy positions.

Cook said he reviews everything before it’s made public. The savings – both in time and money – allowed him to knock on more doors in the district and attend more in-person campaign events.

“My wife and I have done 4,500 doors here,” he said. “It gives you the freedom to do a lot.”

Cook’s opponent, Republican state Rep. Steven Sainz, said he thinks Cook is “hiding behind what amounts to a robot, rather than authentically communicating his views to voters.”

“I do not rely on artificially generated promises, but on real-world results,” Sainz said, adding that he does not use AI in his own campaign.

Republican voters in the district weren’t sure what to make of the use of AI in the race, but said they cared most about the candidates’ values and outreach on the campaign trail. Patricia Rowell, a retired Cook voter, said she likes that he has been to her community three or four times during his campaign, while Mike Perry, an independent Sainz voter, said he has felt more personal contact from Sainz.

Perry said the expanded use of AI in politics is inevitable, but wondered how voters would be able to distinguish between what is true and what is not.

“It’s freedom of expression, you know, and I don’t want to discourage freedom of expression, but it comes down to the integrity of the people who are putting it out there,” he said. “And I don’t know how you regulate integrity. It’s pretty tough.”

Digital companies that market AI models for political campaigns told the AP that most of the AI use in local campaigns so far has been minimal and designed to increase efficiency for tedious tasks such as analyzing poll data or drafting texts for social media that meet a certain word limit.

Political advisors are increasingly turning to AI tools to see what works, according to a new report from a team led by researchers at the University of Texas at Austin. More than two dozen political operatives from across the ideological spectrum told researchers that they experimented with generative AI models during this year’s campaigns, even as they feared less scrupulous actors would do the same.

“Elections at the local level will be so much more challenging because people will attack,” said Zelly Martin, lead author of the report and senior research fellow at the university’s Center for Media Engagement. “And what recourse do they have to fight back, unlike Biden and Trump, who have far more resources to fend off attacks?”

There are huge differences in staffing, money and expertise between down-ballot races – for state legislature, mayor, school board or any other local office – and races for federal office. While a local campaign may have just a handful of staffers, competing campaigns for the U.S. House of Representatives and Senate may have dozens, and presidential operations can number in the thousands by the end of the campaign.

The campaigns for Biden and former President Donald Trump are both experimenting with AI to improve fundraising and voter outreach efforts. Mia Ehrenberg, a spokesperson for the Biden campaign, said they also have a plan to debunk AI-generated disinformation. A Trump campaign spokesperson did not respond to the AP’s questions about their plans for dealing with AI-generated disinformation.

Perkins, the former mayor of Shreveport, had a small team that decided to ignore the attack and continue campaigning when the deepfake of him being dragged into the principal’s office appeared on local television. He said he thought the deepfake ad against him was a typical dirty trick at the time, but the rise of AI in just two years since his campaign has made him realize the power of the technology as a tool to deceive voters.

“In politics, people always go the extra mile to be effective,” he said. “We had no idea how important it would be.”

___

Burke reported from San Francisco, Merica from Washington and Swenson from New York.

___

This story is part of an Associated Press series, “The AI Campaign,” examining the influence of artificial intelligence in the 2024 election cycle.

___

The Associated Press receives support from several private foundations to improve explanatory reporting on elections and democracy, and from the Omidyar Network to support coverage of artificial intelligence and its impact on society. AP is solely responsible for all content. Find AP’s standards for working with charities, a list of supporters and funded areas of coverage at AP.org.