WASHINGTON — With voters just five months away from going to the polls, the U.S. may be more vulnerable to foreign disinformation aimed at influencing voters and undermining democracy than it was before the 2020 election, the chairman of the Senate Intelligence Committee said Monday.
Senator Mark Warner, a Democrat from Virginia, based his warning on several factors: improved disinformation tactics by Russia and China; the rise of domestic candidates and groups that are themselves willing to spread disinformation; and the arrival of artificial intelligence programs that can quickly create images, audio and video that are difficult to distinguish from the real thing.
Moreover, tech companies have rolled back efforts to protect users from misinformation, even as the government’s own efforts to combat the problem have become mired in debates over surveillance and censorship.
As a result, the U.S. could face a greater threat from foreign disinformation in the run-up to the 2024 election than it did during the 2016 or 2020 presidential elections, Warner said.
“We may be less prepared 155 days from now in 2024 than we were under President Trump (in 2020),” Warner told The Associated Press in an interview Monday.
Similar campaigns in 2016 and 2020

Security officials, democracy activists and disinformation researchers have warned for years that Russia, China, Iran and domestic groups in the U.S. will use online platforms to spread false and polarizing content aimed at influencing the race between Trump, a Republican, and President Joe Biden, a Democrat.
Warner’s assessment of America’s vulnerability comes just weeks after top security officials told the intelligence committee that the U.S. had significantly improved its ability to combat foreign disinformation.
However, several new challenges will make securing the 2024 elections different from previous cycles.
AI programs have already been used to generate misleading content, such as a robocall that imitated Biden’s voice and told New Hampshire voters not to vote in that state’s primary. Deceptive deepfakes created with AI programs have also emerged ahead of elections in India, Mexico, Moldova, Slovakia and Bangladesh.
Efforts by federal agencies to communicate with tech companies about disinformation campaigns have been complicated by lawsuits and debates about the role of the government in monitoring political discourse.
Technology platforms have largely moved away from aggressive policies banning election disinformation. X, formerly Twitter, has fired most of its content moderators in favor of a hands-off approach that now allows neo-Nazi hate speech, Russian propaganda and disinformation.
Last year, Google-owned YouTube reversed its policy banning debunked election claims and now allows videos claiming the 2020 election was the result of widespread fraud.
Questions about China’s influence over TikTok prompted Congress to pass a law that would ban the popular app in the U.S. if its Beijing-based owner refuses to divest.
Meta, the owner of Facebook, WhatsApp and Instagram, bans content that interferes with election processes and regularly removes foreign influence operations when it identifies them. The platform also says it will label content created with AI. But the company also allows political advertisements claiming the 2020 election was rigged, which critics say undermines its promises.
“I’m not sure that these companies, other than the press release, have done anything in a meaningful way,” Warner said.
Representatives for X and TikTok did not immediately respond to messages on Monday.