OpenAI says it has stopped multiple campaigns using its systems for cybercrime

OpenAI, the company behind the well-known ChatGPT generative artificial intelligence (AI) solution, says it has recently blocked several malicious campaigns abusing its services.

In a report, the company said it has blocked more than two dozen operations and deceptive networks around the world so far in 2024.

These operations varied in nature, scale, and objectives. In some cases, threat actors used the models to debug malware; in others, they used them to generate content (website articles, fake bios for social media accounts, fake profile pictures, and so on).

Disrupting the disruptors

While this sounds sinister and dangerous, OpenAI says that threat actors have failed to gain any significant traction with these campaigns:

“Threat actors continue to evolve and experiment with our models, but we have seen no evidence that this is leading to meaningful breakthroughs in their ability to create substantial new malware or build a viral audience,” the report said.

But 2024 is an election year – not just in the United States, but elsewhere around the world – and OpenAI has seen ChatGPT abused by threat actors trying to influence pre-election campaigns. Several groups were mentioned, including one called ‘Zero Zeno’. This Israel-based commercial company generated “brief” social media commentary about elections in India – a campaign that was disrupted “less than 24 hours after it started.”

The company added that in June 2024, just before the European Parliament elections, it disrupted an operation called “A2Z,” which targeted Azerbaijan and its neighbors. Other notable mentions included generating commentary on the European Parliament elections in France and politics in Italy, Poland, Germany and the US.

Fortunately, none of these campaigns made any significant progress, and once OpenAI banned them, they were shut down completely:

“The majority of social media posts we identified as generated from our models received few or no likes, shares, or comments, although we identified some cases where real people responded to the posts,” OpenAI concluded. “After we blocked access to our models, the social media accounts of this operation that we identified during the election periods in the EU, UK and France stopped posting.”
