I’m a hacker… here are the five ways scammers are using AI to access your data

A hacker has revealed how cybercriminals are using artificial intelligence to clone people’s voices and steal thousands of pounds.

Dr. Katie Paxton-Fear is a cyber security lecturer at Manchester Metropolitan University and also an ‘ethical hacker’ who ‘hacks companies before the bad guys do’.

She is working with Vodafone Business on a new campaign to raise awareness of the growing threat of AI phishing to UK businesses.

New research from the company shows that juniors in the office are more at risk of AI phishing attacks than any other age group.

The research revealed an ‘age gap’ in awareness – with younger staff aged 18 to 24 appearing more likely to fall for the new breed of AI phishing than their older colleagues.

Gen Z workers also appear easier to hack: almost half (46%) haven't updated their work password in more than a year, compared with a third (33%) of workers overall.

Researchers surveyed 3,000 UK office workers and business leaders from small, medium and large companies on a range of cyber security issues, including awareness of AI phishing attacks.

The research shows that the vast majority of UK businesses (94%) do not feel adequately prepared to tackle the increasing threat of advanced AI-driven phishing attacks.

In a bid to raise awareness, Katie has revealed how easily cybercriminals can use AI to clone people’s voices and mimic them over the phone – often leaving the victim unaware.

Hackers need only 'three seconds of audio' – such as a voicemail – to clone someone's voice. They typically follow five simple steps to carry out their 'vishing' (voice-clone phishing) scam.

To demonstrate this, businessman and entrepreneur Chris Donnelly challenged Katie to hack into his company to see how easily criminals could use AI to defraud him.

Chris has been an entrepreneur for 15 years and is the founder of Lottie, a health technology platform for care homes.

Read on below as Katie explains the steps cybercriminals take to hack into a company using AI voice cloning.

1. Exploration

Recommendations to the UK government to ensure businesses remain protected from AI cyber scams

Launch a 'Cyber Safe' PR campaign: Develop a nationwide PR campaign to promote Cyber Resilience Centers (CRCs) and Cyber Essentials certification to companies of all sizes.

Redeploy funding for local cybersecurity training: Redeploy funds within the National Cybersecurity Strategy budget to support targeted local initiatives for businesses, with an emphasis on effective engagement programs.

Enhance cybersecurity skills to prevent AI-led cyber-attacks: Promote the development and adoption of AI-driven cybersecurity tools and provide training to companies on preventing AI-led cyber-attacks.

Expand Cyber Resilience Centers (CRCs): Establish additional CRCs in underserved regions and expand the capabilities of existing centers to provide tailored support to businesses.

Source: Vodafone Business

Katie said: 'Every hack starts with exploration.' A hacker will find a victim and scour their social media.

In this case, Chris is a public figure with thousands of followers on various social media platforms. His profiles reveal details about his staff and what tasks they do for him.

Now a hacker has both an unsuspecting boss and his equally unaware employee in his sights.

2. Voice cloning

Now the hacker browses the boss’s social media pages to find audio or video content.

Katie said: 'All we have to do is visit Chris's social media pages, download a video and copy his speaking style. We only need three seconds of audio.'

AI voice cloning software can use the recording to recreate Chris's voice. All the hacker has to do is type in what they want the cloned voice to say.

In this case, Katie types “Did you manage to pay the invoice I sent?” – and the message is repeated in Chris’s voice.

3. Make contact

The hacker sends a text message to the employee, pretending to be his boss. Even though it comes from an unknown number, the message tells the employee to expect a call.

In this case, Chris’s employee receives the text message and waits for the call from his boss.

4. The conversation

Now for the call. The hacker calls the employee from his computer using a piece of software and then simply types the message for the cloned Chris to say.

In the video, the employee hears his boss Chris ask him: 'Were you able to pay the invoice I sent? It is critical that this is addressed immediately.'

What should the employee do? He has received a direct order from his boss.

5. The waiting

The employee has been given specific instructions on how to make the payment. Now it remains to be seen whether they will do it.

Katie said: 'The final step is whether or not the victim takes action. Most hackers will know at the end of the phone call whether they have been successful.'

Chris Donnelly, entrepreneur and CEO of Lottie, said: 'Cyber security has always been a priority for my business. It's something we think about constantly, and we ensure we keep our security protocols as up to date as possible.

'You can imagine my surprise at how effortlessly the ethical hacker was able to breach our defenses using advanced AI phishing tactics such as voice cloning.

'As someone who runs a health technology platform where we manage vast amounts of personal and private data, this experience highlights the importance of staying one step ahead in cybersecurity, especially with evolving AI threats.'

Katie warned: 'AI allows attackers to tailor messages to look highly personalized, making it harder than ever for employees to distinguish a fake email from a legitimate one.

'It's a wake-up call for all businesses to strengthen their security measures and provide consistent training for their staff so they can protect themselves against even the most sophisticated forms of fraud. Today, remaining vigilant and adaptive is essential to protect our organization and customers.'

Katie added: 'Businesses, regardless of size, must understand the real risks and take proactive measures to defend against these threats.

'Strengthening cybersecurity practices, implementing advanced detection systems and training staff to recognize AI-driven scams are essential steps to protect valuable data and maintain trust.'