Cybercriminals are exploiting AI tools like ChatGPT to craft more convincing phishing attacks, alarming cybersecurity experts
If you've noticed a spike in suspicious-looking emails over the past year, it could be partly due to one of our favorite AI chatbots: ChatGPT. I know – many of us have had intimate, even personal, conversations with ChatGPT, and we don't want to believe it would help scam us.
That's according to cybersecurity firm SlashNext, which reports that ChatGPT and its AI cohorts are being used to churn out phishing emails at an accelerated pace. The report draws on the company's threat intelligence and a survey of more than 300 cybersecurity professionals in North America. It claims that malicious phishing emails have increased by 1,265% since the fourth quarter of 2022, with credential phishing up 967%. Credential phishing is when an attacker impersonates a trusted person, group, or organization via email or a similar communication channel to trick victims into handing over their login details.
Malicious actors use generative AI tools such as ChatGPT to craft polished, tightly targeted phishing messages. Alongside phishing, business email compromise (BEC) messages are another common form of scam, aimed at defrauding companies of funds. The report concludes that these AI-powered threats are growing rapidly in both volume and sophistication.
The report found that an average of 31,000 phishing attacks occurred per day. Roughly half of the cybersecurity professionals surveyed reported receiving a BEC attack, and 77% reported being targeted by phishing attacks.
The experts weigh in
SlashNext CEO Patrick Harr said these findings “reinforce concerns about the use of generative AI contributing to exponential growth in phishing.” He explained that generative AI technology lets cybercriminals speed up their attacks while also increasing their variety: they can produce thousands of socially engineered attacks in thousands of variations, and you only have to fall for one.
Harr then points the finger at ChatGPT, which saw tremendous growth late last year. He argues that generative AI bots have made it much easier for beginners to get into the phishing and scamming game, and have become another tool in the arsenal of more skilled and experienced attackers, who can now scale up and target their attacks more easily and effectively. These tools help generate more persuasive, convincingly worded messages that scammers hope will phish victims on the spot.
Chris Steffen, research director at Enterprise Management Associates, confirmed as much when he told CNBC: “Gone are the days of the 'Nigerian prince' scam.” Emails now “sound extremely convincing and legitimate,” he added. Bad actors can convincingly mimic someone else's tone and style, or even craft official-looking correspondence that appears to come from government agencies and financial services companies. They can do this better than before by using AI tools to analyze the writing and public information of individuals or organizations and tailor their messages accordingly, making their emails and communications look genuine.
Moreover, there is evidence that these strategies are already paying off for bad actors. Harr points to the FBI's Internet Crime Report, which puts losses from BEC attacks at approximately $2.7 billion, along with $52 million in losses from other forms of phishing. With a payoff that lucrative, scammers have every incentive to multiply their phishing and BEC efforts.
What it takes to counter the threats
Some experts and tech giants are pushing back. Amazon, Google, Meta, and Microsoft have pledged to conduct testing to combat cybersecurity risks, and companies are also putting AI to defensive use, employing it to improve their detection systems, filters, and the like. Harr, however, reiterated that SlashNext's research shows why this is fully justified: cybercriminals are already using tools like ChatGPT to carry out these attacks.
In July, SlashNext observed a particular BEC attack that used ChatGPT, accompanied by WormGPT. WormGPT is a cybercrime tool marketed as “a black hat alternative to GPT models, specifically designed for malicious activities such as creating and launching BEC attacks,” according to Harr. Another malicious chatbot, FraudGPT, has also been reported to be in circulation; Harr says FraudGPT is advertised as an “exclusive” tool tailored to fraudsters, hackers, spammers, and the like, with an extensive list of features.
Some of SlashNext's research covers the development of AI 'jailbreaks': ingeniously designed attacks on AI chatbots that, when carried out, strip away the chatbots' safety and legality guardrails. This is also an active area of research at many AI-focused research institutions.
How companies and users should move forward
If you feel this could pose a serious threat professionally or personally, you're right, but it isn't hopeless. Cybersecurity experts are devising ways to counter and respond to these attacks. One measure many companies are taking is continual end-user education and training, testing whether employees and users are actually taken in by these emails.
The rising number of suspicious and targeted emails means a reminder here and there may no longer be enough; companies will have to work persistently to build security awareness among users. End users should not only be reminded, but encouraged to report emails that look fraudulent and to raise their security concerns. This applies not only to companies and company-wide security, but also to us as individual users: if tech giants want us to trust their email services with our personal mail, they will have to keep building out their defenses in ways like this.
In addition to this cultural shift within companies, Steffen also stresses the importance of email filtering tools that can incorporate AI capabilities and stop malicious messages before they ever reach users. It's an ongoing battle that demands regular testing and audits, because threats constantly evolve, and as the capabilities of AI software improve, so will the attacks that leverage them.
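To make the idea concrete, here is a deliberately tiny sketch, in Python with scikit-learn, of how AI-assisted mail filtering works at its core: train a text classifier on labeled messages, then score incoming mail for phishing likelihood. The training examples and labels below are invented purely for illustration; real filters, SlashNext's included, learn from vastly more data and from far richer signals than the message text alone.

```python
# Toy sketch of AI-assisted email filtering: a TF-IDF + logistic regression
# text classifier that scores incoming mail as "phishing" or "legit".
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hand-written, purely illustrative training data; a production filter would
# learn from millions of labeled messages plus headers, URLs, and reputation.
emails = [
    "Your account has been suspended, verify your password immediately",
    "Urgent wire transfer needed, reply with banking details today",
    "Invoice overdue, confirm your payment credentials via this link",
    "Lunch meeting moved to 1pm, see you in the usual room",
    "Here are the slides from yesterday's project review",
    "Reminder: the team offsite agenda is attached, no action needed",
]
labels = ["phishing", "phishing", "phishing", "legit", "legit", "legit"]

# Fit the classifier on the labeled examples.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(emails, labels)

# Score a new message; a high phishing probability would trigger
# quarantining or flagging rather than normal delivery.
incoming = "Please verify your password now or your account will be suspended"
scores = model.predict_proba([incoming])[0]
print(dict(zip(model.classes_, scores.round(2))))
```

The same pattern, classify first and deliver second, scales up to the far more capable LLM-based detectors that security vendors now ship.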
Businesses need to strengthen their security systems, and no single solution can fully address every danger posed by AI-generated email attacks. Steffen argues that a zero-trust strategy can help close the control gaps these attacks exploit and give most organizations a workable defense. Individual users, meanwhile, need to stay more alert to the possibility of being phished and deceived, given how sharply attack volumes have risen.
It can be easy to give in to pessimism about problems like these, but we can all be more careful about what we click. Take a moment, and then another, to look over all the details. You can even search the address an email came from and see whether anyone else has reported problems with it. It's a hall of mirrors online, and it increasingly pays to keep your wits about you.
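If you want to go a step beyond eyeballing the sender, here is a small sketch, using the third-party dnspython package, that checks whether a sender's domain publishes a DMARC policy, one of the standard email-authentication records. A missing record doesn't prove a scam and a present one doesn't prove legitimacy, but it's one more data point when a message looks off.

```python
# Look up a domain's DMARC policy record.
# Requires the third-party dnspython package: pip install dnspython
import dns.resolver

def dmarc_record(domain: str):
    """Return the domain's DMARC TXT record, or None if it has none."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        # TXT record data arrives as a tuple of byte strings.
        txt = b"".join(rdata.strings).decode()
        if txt.startswith("v=DMARC1"):
            return txt
    return None

# Well-run mail domains publish a policy; many throwaway scam domains do not.
print(dmarc_record("google.com"))
```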