For a long time, email phishing attacks have often been poorly worded, typo-ridden, desperate pleas for money that will, of course, be returned tenfold. Now that we’re letting our guard down, AI is here to make sure we don’t get too comfortable.
A new hyper-realistic scam is hitting Gmail users, and the AI-powered deceptions are capable of fooling even the most tech-savvy among us. This new wave of fraud combines the classic ‘Gmail account recovery’ phishing attack with an ultra-realistic voice call to send users into a panic.
In a recent blog post, Microsoft solutions consultant Sam Mitrovic explained how he almost fell victim to the elaborate scam, describing an account recovery notification that was followed by a very real-sounding phone call from ‘Google Assistant’.
Don’t get caught
Mitrovic revealed that the repeated emails and phone calls came from apparently legitimate addresses and numbers, and that he uncovered the scam by manually checking his recent account activity in Gmail.
This is part of a worrying broader trend of ‘deepfakes’, which are already targeting businesses and consumers more than ever. Criminals can use ultra-realistic fake video or audio to trick unsuspecting users into transferring money or handing over information.
By 2024, nearly half of companies had already experienced deepfake fraud, and this trend appears set to continue.
The key to staying safe from these types of scams is to stay vigilant and take your time. Criminals will almost always try to pressure you into making a snap decision or handing over money or details, but taking a step back to evaluate can give you perspective, and even an outside assessment from someone you trust.
Via Forbes