Many cybercriminals are skeptical about using AI-based tools like ChatGPT to automate their malicious campaigns.
A new Sophos study attempted to gauge cybercriminals' interest in AI by analyzing dark web forums. It appears that tools like ChatGPT have enough safeguards in place to prevent hackers from automating the creation of malicious landing pages, phishing emails, malware code, and more.
That has forced hackers to do one of two things: try to compromise premium ChatGPT accounts (which, the research suggests, have fewer restrictions), or turn to ChatGPT derivatives: cloned AI writers built to bypass those protections.
Bad results and a lot of skepticism
But many threat actors are wary of these derivatives, fearing they may be built just to deceive them.
“While there have been significant concerns about the misuse of AI and LLMs by cybercriminals since the release of ChatGPT, our research has shown that so far threat actors are more skeptical than enthusiastic,” said Ben Gelman, senior data scientist at Sophos. “On two of the four dark web forums we examined, we found only 100 posts about AI. Compare that to cryptocurrency, where we found 1,000 messages for the same period.”
Although the researchers observed attempts to create malware or other attack tools using AI-powered chatbots, the results were “rudimentary and often met with skepticism from other users,” said Christopher Budd, director of X-Ops Research at Sophos.
“In one case, a threat actor, eager to demonstrate ChatGPT’s potential, inadvertently revealed important information about his real identity. In fact, we found numerous ‘thought pieces’ about the potential negative effects of AI on society and the ethical implications of its use. In other words, it appears that, at least for now, cybercriminals are having the same debates about LLMs as the rest of us,” Budd added.