WASHINGTON — The Democratic National Committee watched earlier this year as campaigns across the country experimented with artificial intelligence. That is why the organization approached a handful of influential party campaign committees with a request: sign on to guidelines committing them to use the technology in a “responsible” manner.
The draft agreement, a copy of which was obtained by The Associated Press, was hardly full of revolutionary ideas. It called for campaigns to monitor the work of AI tools, protect against bias and avoid the use of AI to create misleading content.
“Our goal is to use this new technology both effectively and ethically, and in a way that advances – rather than undermines – the values we embrace in our campaigns,” the draft said.
The plan went nowhere.
Rather than producing an agreement, the guidelines sparked a debate about the value of such commitments, especially those tied to rapidly evolving technology. Among the concerns of Democratic campaign organizations: such a pledge could hinder their ability to deploy AI and deter donors with ties to the AI industry. Some committee officials were also annoyed that the DNC gave them only a few days to agree to the guidelines.
The proposal’s demise exposed internal divisions over campaign tactics and the party’s uncertainty about how best to use AI amid warnings from experts that the technology is accelerating the spread of misinformation.
Hannah Muldavin, a senior spokesperson for the Democratic National Committee, said the group is not giving up on finding a consensus.
The DNC, she said, “will continue to work with our sister committees to discuss ideas and issues important to Democratic campaigns and to American voters, including AI.”
“It is not unusual for ideas and plans to change, especially in the midst of a busy election year, and all documents on this topic reflect early and ongoing conversations,” Muldavin said, adding that the “DNC and our partners take seriously the opportunities and challenges presented by AI.”
The bickering comes as campaigns increasingly use artificial intelligence – computer systems, software or processes that emulate aspects of human work and cognition – to streamline their workloads. That includes using large language models to write fundraising emails, text supporters and build chatbots to answer voters’ questions.
This trend is expected to continue as the November general election approaches, with campaigns turning to powerful generative AI tools to create text and images, clone human voices and create video at lightning speed.
The Republican National Committee used AI-generated images last year in a television ad that predicted a dystopian future under President Joe Biden.
However, much of this adoption is overshadowed by concerns about how campaigns might use artificial intelligence in ways that mislead voters. Experts have warned that AI has become so powerful that it is now easy to generate “deepfake” videos, audio clips and other media targeting adversaries. Some states have passed legislation regulating how generative artificial intelligence can be used, but Congress has so far failed to pass bills regulating the technology at the federal level.
In the absence of regulations, the DNC looked for a set of guidelines it could point to as evidence that the party was taking the threat and promise of AI seriously. It sent the proposal in March to the five Democratic campaign committees that work to elect candidates to the House of Representatives, the Senate, governorships, state legislatures and attorneys general offices, according to the draft agreement.
The goal was to get every committee to agree to a set of AI guardrails, and the DNC proposed issuing a joint statement proclaiming that such guidelines would ensure campaigns could “use the tools they need to prevent the spread of misinformation and disinformation, while enabling campaigns to safely and responsibly use generative AI to engage more Americans in our democracy.”
The DNC had hoped the statement would be signed by its chairman, Jaime Harrison, and the leaders of the other organizations.
Democratic operatives said the proposal landed with a thud. Some senior leaders at the committees worried the agreement could have unforeseen consequences, limiting how campaigns could use AI, according to multiple Democratic operatives familiar with its scope.
And it could send the wrong message to tech companies and executives working on AI, many of whom help fill campaign coffers during election years.
Some of the Democratic Party’s most prolific donors are top tech entrepreneurs and AI evangelists, including Sam Altman, the CEO of OpenAI, and Eric Schmidt, the former CEO of Google.
According to Federal Election Commission data, Altman has donated more than $200,000 to the Biden campaign and its affiliated Democratic joint fundraising committee since early last year, and Schmidt’s contributions to those groups topped $500,000 during the same period.
Two other AI advocates, Dustin Moskovitz, the co-founder of Facebook, and Reid Hoffman, the co-founder of LinkedIn, donated more than $900,000 to Biden’s joint fundraising committee this cycle, according to the same data.
The DNC plan caught the committees by surprise because it contained little explanation other than a desire to get each committee to agree to the list of best practices within a few days, said multiple Democratic operatives who spoke on condition of anonymity because they weren’t authorized to discuss the matter publicly. Officials at the Democratic Congressional Campaign Committee and the Democratic Senatorial Campaign Committee said they felt rushed by a DNC timeline that urged them to sign quickly.
Representatives of the Democratic Attorneys General Association did not respond to the Associated Press’ request for comment. Spokespeople for the Democratic Governors Association and the Democratic Legislative Campaign Committee declined to comment.
The Republican National Committee did not respond to questions about its AI guidelines. The Biden campaign also declined to comment when asked about the DNC’s efforts.
The four-page agreement – “Guidelines for the Responsible Use of Generative AI in Campaigns” – covered everything from ensuring artificial intelligence systems were not trusted without a human monitoring their work, to informing voters when they interact with AI-generated content or systems.
“As the explosive rise of generative AI transforms every corner of public life – including political campaigns – it is more important than ever that we limit the potential threat this new technology poses to voters’ rights and instead use it to deliver innovative, efficient campaigns and build a stronger, more inclusive democracy,” the proposal says.
The guidelines were divided into five sections with titles such as “Providing human alternatives, consideration and pushback” and “Providing notice and explanation.” The proposed rules would have required committees to ensure that “a real person should be responsible for approving AI-generated content and accountable for how, where and for whom it is deployed.”
The guidance outlined how “users should always be aware when interacting with an AI bot” and emphasized that any images or videos created by AI “should be marked as such.” And it emphasized that campaigns should use AI to help staffers, not replace them.
“Campaigns are a people-driven, people-motivated business,” the agreement says. “Use efficiency gains to educate more voters and focus more on quality control and sustainability.”
It also urged campaigns not to “use generative AI to create misleading content. Period.”
___
This story is part of an Associated Press series, “The AI Campaign,” examining the influence of artificial intelligence in the 2024 election cycle.
___
The Associated Press receives funding from the Omidyar Network to support reporting on artificial intelligence and its impact on society. AP is solely responsible for all content. Find AP’s standards for working with philanthropies, a list of supporters and funded coverage areas at AP.org.
___
The Associated Press and OpenAI have a licensing and technology agreement that gives OpenAI access to part of the AP’s text archives.