In the past 11 months, thousands of fake, automated Twitter accounts — perhaps hundreds of thousands — have been created to provide a stream of praise for Donald Trump.
In addition to posting praise for the former president of the United States, the fake accounts have ridiculed Trump’s critics in both parties and attacked Nikki Haley, the former South Carolina governor and UN ambassador who is challenging her former boss for the 2024 Republican presidential nomination.
When it came to Ron DeSantis, the bots aggressively suggested that the Florida governor wouldn’t be able to beat Trump, but would make a great running mate.
As Republican voters size up their 2024 candidates, whoever created the bot network is trying to put a thumb on the scale, using online manipulation techniques pioneered by the Kremlin to sway the Twitter conversation about the candidates while exploiting the platform’s algorithms to maximize the content’s reach.
The sprawling bot network was uncovered by researchers at Cyabra, an Israeli technology company that shared its findings with The Associated Press. While the identities of those behind the network of fake accounts are unknown, Cyabra’s analysts determined it likely originated in the United States.
To identify a bot, researchers look for patterns in an account’s profile, list of followers, and the content it posts. Human users usually post on a variety of topics with a mix of original and reposted material, but bots often post repetitive content on the same topics.
That was true for many of the bots identified by Cyabra.
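Cyabra’s actual methodology is not public, but the repetitiveness heuristic described above can be sketched in a few lines. This is a minimal illustration with made-up posts and an arbitrary threshold, not the company’s detection system:

```python
from collections import Counter

def repetition_score(posts):
    """Fraction of an account's posts that duplicate an earlier post.
    0.0 means every post is unique; values near 1.0 suggest bot-like repetition."""
    if not posts:
        return 0.0
    counts = Counter(posts)
    duplicates = sum(c - 1 for c in counts.values())
    return duplicates / len(posts)

def looks_repetitive(posts, threshold=0.5):
    """Flag an account whose repetition score crosses the (hypothetical) threshold."""
    return repetition_score(posts) >= threshold

# A human-like account posts varied content; a bot-like one repeats itself.
human = ["Great game last night", "Trying a new recipe", "Traffic is awful today"]
bot = ["Trump was the best"] * 4 + ["Jan. 6 was a lie"]
print(looks_repetitive(human))  # False
print(looks_repetitive(bot))    # True
```

Real classifiers combine many such signals (profile completeness, follower overlap, posting cadence); exact duplication is only the crudest of them.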
“One account will say, ‘Biden is trying to take our guns; Trump was the best,’ and another will say, ‘Jan. 6 was a lie and Trump was innocent,’” said Jules Gross, the Cyabra engineer who first discovered the network, referring to the January 6, 2021 attack on the US Capitol by Trump supporters.
“Those voices aren’t people,” Gross said. “For the sake of democracy, I want people to know this is happening.”
Bots became notorious after Russia employed them in an attempt to interfere in the 2016 election, which Trump won. While major tech companies have improved their detection of fake accounts, the network identified by Cyabra shows they remain a powerful force in shaping online political discourse.
The pro-Trump network
The new pro-Trump network is actually three different networks of Twitter accounts, all created in bulk during April, October, and November. In all, researchers think hundreds of thousands of accounts could be involved.
Each account carried a personal photo of the purported account holder and a name. Some accounts post their own content, often in reply to real users, while others repost content from real users, amplifying it.
“McConnell…Traitor!” posted one of the accounts in response to an article in a conservative publication about Senate Republican leader Mitch McConnell, one of many Republican Trump critics targeted by the network.
One way to gauge the impact of bots is to measure the percentage of posts on a given topic that are generated by accounts that appear fake. In typical online debates that share is often in the low single digits; Twitter itself has said that fewer than 5 percent of its daily active users are fake or spam accounts.
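The metric itself is simple arithmetic: count how many posts on a topic come from accounts already flagged as fake. A minimal sketch, using invented account IDs and an invented flagged set:

```python
def inauthentic_share(post_authors, fake_accounts):
    """Percentage of posts whose author appears in the set of flagged fake accounts."""
    if not post_authors:
        return 0.0
    flagged = sum(1 for author in post_authors if author in fake_accounts)
    return 100.0 * flagged / len(post_authors)

# Hypothetical data: 8 posts on a topic, authors "b7" and "b9" previously flagged.
authors = ["a1", "b7", "b7", "c3", "b9", "b7", "a1", "b9"]
fakes = {"b7", "b9"}
print(inauthentic_share(authors, fakes))  # 62.5
```

By this measure, a topic where the share climbs far above the low single digits (such as the nearly three-quarters figure Cyabra reported for negative posts about Haley) stands out as anomalous.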
However, when Cyabra researchers investigated negative reports about specific Trump critics, they found much more inauthenticity. For example, almost three quarters of the negative posts about Haley could be traced back to fake accounts.
The network also helped popularize a call for DeSantis to join Trump as his vice presidential running mate, an outcome that would serve Trump well and spare him a potentially acrimonious matchup should DeSantis enter the race.
The same network of accounts shared overwhelmingly positive content about Trump and contributed to a widespread misperception of his support online, researchers found.
“Our understanding of what the prevailing Republican sentiment is for 2024 is being manipulated by the prevalence of bots online,” the Cyabra researchers conclude.
The triple network was discovered after Gross analyzed tweets about various national political figures and noticed that many of the accounts posting the content were created on the same day. Most accounts remain active, although they have a relatively modest number of followers.
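The tell that led Gross to the network, many accounts created on the same day, is itself easy to check for. A minimal sketch of that clustering idea, with fabricated dates and an arbitrary cluster size:

```python
from collections import Counter
from datetime import date

def creation_spikes(creation_dates, min_cluster=3):
    """Return the dates on which suspiciously many accounts were created.
    min_cluster is a hypothetical cutoff; real analysis would scale it
    to the size of the sample being examined."""
    counts = Counter(creation_dates)
    return {d for d, n in counts.items() if n >= min_cluster}

# Ten hypothetical accounts: two bulk-creation days and one lone signup.
dates = [date(2022, 4, 12)] * 5 + [date(2022, 10, 3)] * 4 + [date(2021, 7, 1)]
print(sorted(creation_spikes(dates)))  # [datetime.date(2022, 4, 12), datetime.date(2022, 10, 3)]
```

In practice this signal is combined with others, since legitimate accounts also cluster around events like a platform's launch in a new country.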
A message left with a Trump campaign spokesperson was not immediately returned.
Bots have an ‘absolute’ influence on the information flow
Most bots aren’t designed to convince people, but to amplify some content so more people see it, said Samuel Woolley, a professor and researcher of disinformation at the University of Texas whose most recent book focuses on automated propaganda.
When human users see a hashtag or piece of content from a bot and repost it, they do the network’s work for it, and they also signal Twitter’s algorithms to boost the content’s spread even further.
Bots can also succeed in convincing people that a candidate or idea is more or less popular than it actually is, he said. For example, more pro-Trump bots may lead people to exaggerate his popularity in general.
“Bots definitely affect the flow of information,” Woolley said. “They are made to give the illusion of popularity. Repetition is the nuclear weapon of propaganda, and bots are very good at repetition. They are very good at getting information in front of people.”
Until recently, most bots were easy to spot due to their clumsy spelling or account names containing nonsensical words or long strings of random numbers. As social media platforms got better at detecting these accounts, the bots got more sophisticated.
So-called cyborg accounts are an example of this. They are bots that are periodically taken over by a human user who can post original content and interact with users in human ways, making them much harder to sniff out.
Bots could soon become far sneakier thanks to advances in artificial intelligence. New AI programs can generate lifelike profile pictures and write posts that sound far more authentic. Bots that sound like real people and use deepfake video technology may challenge platforms and users in new ways, said Katie Harbath, a fellow at the Bipartisan Policy Center and a former Facebook public policy director.
“The platforms have gotten so much better at fighting bots since 2016,” said Harbath. “But the types we’re starting to see now, with AI, can create fake humans, fake videos.”
These technological advances are likely to give bots a long future in US politics, as digital foot soldiers in online campaigns and as potential problems for both voters and candidates trying to defend themselves against anonymous online attacks.
“There has never been so much noise online,” said Tyler Brown, a political consultant and former digital director for the Republican National Committee. “How much of it is malicious or even just unintentionally inaccurate? It is easy to imagine people manipulating that.”