AI is making cyberattacks even smarter and more dangerous
Hackers have a lot to gain from using generative AI tools such as ChatGPT. While these tools are still too immature to run malicious campaigns with minimal human input, they can already be used to boost human-run campaigns in ways never seen before.
This is according to new analysis from IBM’s Security Intelligence X-Force team. The researchers detailed an experiment in which they compared human-written phishing emails against ones generated by ChatGPT. The goal was to see which would achieve a higher click-through rate, both for the emails themselves and for the malicious links within them.
In the end, the human-written content won, but by the narrowest of margins. The bottom line is that it’s only a matter of time before AI-generated content surpasses human writing in credibility and authenticity, and does all the hard work for cybercriminals.
Emotional intelligence
For now, humans still beat AI at emotional intelligence, personalization, and understanding the daily struggles of their targets. “Humans understand emotions in ways that AI can only dream of,” the researchers say. “We can weave narratives that tug at the heartstrings and sound more realistic, making recipients more likely to click on a malicious link.”
When it came to personalization, the human writers were able to reference legitimate organizations and offer tangible benefits to staff, making their emails more likely to be opened.
And finally, people understand what makes their targets suspicious: “The human-generated phish had an email subject line that was short and sweet, while the AI-generated phish had an extremely long subject line, potentially raising suspicion even before employees opened the email.”
All of these shortcomings, however, can be fixed with minimal human input, which makes AI’s output extremely valuable nonetheless. It’s also worth noting that the X-Force team was able to get a generative AI model to write a convincing phishing email in just five minutes, using only five prompts. Writing such an email manually would take the team about 16 hours.
Although X-Force has not yet witnessed wide-scale use of generative AI in live campaigns, tools built as unrestricted LLMs and advertising such capabilities have been spotted for sale on underground forums, “showing that attackers are testing the use of AI in phishing campaigns,” the researchers concluded.
“While even restricted versions of generative AI models can be tricked into phishing via simple prompts, these unrestricted versions may offer more efficient ways for attackers to scale sophisticated phishing emails in the future.”