ChatGPT accounts suspected of Iranian influence operation shut down by OpenAI
OpenAI has taken down a cluster of malicious ChatGPT accounts linked to a covert Iranian influence operation. The accounts were being used to generate content aimed at influencing voters.
The posted content discussed a number of topics, most notably the US elections, Israel’s presence at the Olympics, and the Gaza conflict. According to OpenAI, the content failed to generate any meaningful engagement, with most posts receiving few or no likes.
The content generated by ChatGPT also included long-form articles published on sites posing as both progressive and conservative news outlets, with names such as “Westland Sun”, “EvenPolitics” and “Nio Thinker”.
Election threats
“OpenAI is committed to preventing abuse and improving transparency around AI-generated content,” OpenAI said. “This includes our work to detect and stop covert influence operations (IO), which attempt to manipulate public opinion or influence political outcomes while concealing the true identities or intentions of the actors behind them. This is especially important in the context of the many elections coming up in 2024. We have been expanding our work in this area throughout the year, including by leveraging our own AI models to better detect and understand abuse.”
The group behind the campaign, Storm-2035, was identified by Microsoft as a threat activity cluster in a recent report examining Iran’s online influence operations targeting the US election.
Microsoft described the campaign as “actively engaging US voter groups at opposite ends of the political spectrum with polarizing messaging on issues such as the US presidential candidates, LGBTQ rights and the conflict between Israel and Hamas.”
Microsoft’s Threat Analysis Center (MTAC) predicted earlier this year that Iran, along with Russia and China, would step up their cyber influence campaigns as the US elections approached.
As the 2024 US presidential election approaches, a resurgence of malicious cyber activity from foreign threat actors has already been reported. Tactics have included misinformation campaigns, phishing attacks, and hacking operations.
The goal of these operations seems clear: disrupt the political process. By undermining public trust in information sources, public figures, and political institutions, foreign threat actors target the fabric of the American political system. Sowing distrust, chaos, and fear among voters further exacerbates the divisions that already run through the American public.
The rise of artificial intelligence has made it easier than ever to create and spread misinformation, with highly tailored content now generated at unprecedented scale. Our advice is to stay critical and check the source where possible.