Why learning to combat AI fraud has never been more important
Artificial intelligence is not new. But the rapid pace of innovation over the past year means that consumers and businesses alike are more aware than ever of the technology’s potential, and are most likely using it themselves in some form.
The AI revolution also has a downside: it gives fraudsters more power. Rather than increased productivity and creativity in the workplace, this is one of the main effects we are witnessing. The evolution of large language models and generative AI gives fraudsters new tactics to explore, and their attacks now have a quality, depth and scale with the potential for increasingly disastrous consequences.
This increased risk is felt by both consumers and companies. Experian’s Identity and Fraud Report 2023 shows that just over half (52%) of UK consumers feel they are more likely to be a target of online fraud than they were a year ago, while more than 50% of businesses indicate that they are very concerned about the risk of fraud. It is critical that both businesses and consumers educate themselves about the types of attacks taking place and what they can do to combat them.
Become familiar with the new types of fraud attacks
There are two major trends emerging in AI fraud: the hyper-personalization of attacks and the subsequent increase in biometric attacks. Hyper-personalization means unsuspecting consumers are increasingly being scammed by targeted attacks that trick them into making instant transfers and real-time payments.
For businesses, email compromise attacks can now use generative AI to copy a particular company’s tone or writing style and make more authentic-looking requests, such as encouraging employees to carry out financial transactions or share confidential information.
Generative AI makes it easier for anyone to carry out these attacks, allowing them to create and manage fake banking, e-commerce, healthcare, government, and social media accounts and apps that look real.
These attacks will only increase. Historically, generative AI has not been powerful enough to be widely used to create a believable representation of someone else’s voice or face. Now, however, a deepfaked face or voice can be virtually impossible for the human eye or ear to distinguish from the real thing.
As companies implement more layers of identity verification controls, fraudsters will increasingly turn to these types of attacks to get around them.
Types of attacks to look out for
These include:
Imitating a human voice: There has been substantial growth in AI-generated voices that mimic real people. Consumers can be tricked into thinking they are talking to someone they know, while companies that use voice verification for services such as customer support can also be deceived.
Fake video or images: AI models can be trained, using deep learning techniques, on very large volumes of digital assets such as photos, images and videos to produce authentic-looking, high-quality videos or images that are virtually indistinguishable from the real thing. Once trained, these models can blend and overlay images onto other images and into video content at alarming speed.
Chatbots: Friendly, persuasive AI chatbots can be used to build relationships with victims and convince them to send money or share personal information. Following a prescribed script, these chatbots can extend a human-like conversation with a victim over longer periods of time to deepen the emotional bond.
Text messages: Generative AI allows fraudsters to mimic personal exchanges with someone a victim knows, using well-written scripts that appear authentic. They can then run multiple text-based conversations with multiple victims simultaneously, manipulating them into actions that could involve the transfer of money, goods or other fraudulent gains.
Combat AI by embracing AI
To combat AI, companies will need to use AI and other tools such as machine learning to ensure they stay one step ahead of criminals.
The most important steps you need to take include:
Identifying fraud with generative AI: Using generative AI to screen transactions for fraud or run identity theft checks proves more accurate at detecting fraud than previous generations of AI models.
Increasing use of verified biometric data: Currently, generative AI cannot convincingly replicate an individual’s retina, fingerprint, or the way someone uses their computer mouse, which makes these signals much harder to spoof.
Consolidation of fraud prevention and identity protection processes: All data and controls must feed systems and teams that can analyze signals and build models that are continuously trained on good and bad traffic. Knowing what a good actor looks like helps companies spot attempts to impersonate real customers (a toy sketch of such a model follows this list).
Educating customers and consumers: Proactively educating consumers in personalized ways across multiple communication channels helps ensure they are aware of the latest fraud attacks and of their role in preventing them. This supports a seamless, personalized experience for genuine consumers while blocking attempts by AI attackers.
Use customer vulnerability data to spot signs of social engineering: Vulnerable customers are much more likely to fall for deepfake scams. By processing this data and using it to inform fraud prevention and protect victims, the sector can help the people most at risk.
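To make the idea of models trained on good and bad traffic concrete, below is a minimal, purely illustrative Python sketch of a transaction-screening classifier. The features, synthetic data and thresholds are all hypothetical assumptions for demonstration; they are not drawn from Experian’s systems or any real product.

# Illustrative sketch only: a toy screening model trained on labelled
# "good" and "bad" traffic. Features and thresholds are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic history: [amount, payee_age_days, txns_last_hour, new_device]
good = np.column_stack([
    rng.normal(80, 40, 5000).clip(1),   # typical payment amounts
    rng.integers(30, 2000, 5000),       # long-established payees
    rng.poisson(1, 5000),               # low transaction velocity
    rng.binomial(1, 0.05, 5000),        # rarely a brand-new device
])
bad = np.column_stack([
    rng.normal(900, 300, 500).clip(1),  # unusually large transfers
    rng.integers(0, 3, 500),            # payee added minutes ago
    rng.poisson(6, 500),                # burst of activity
    rng.binomial(1, 0.7, 500),          # often an unseen device
])
X = np.vstack([good, bad])
y = np.concatenate([np.zeros(len(good)), np.ones(len(bad))])

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Score an incoming real-time payment; route high-risk ones to step-up checks.
incoming = np.array([[1200.0, 0, 5, 1]])
risk = model.predict_proba(incoming)[0, 1]
print(f"fraud risk score: {risk:.2f}")
print("action:", "hold for review" if risk > 0.8 else "approve")

In practice the signals would be far richer (device, behavioral and consortium data) and the model would be retrained continuously, but the pattern of labelled traffic in, risk score out, is the same.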
Why now?
The best companies take a multi-layered approach to fraud prevention (there is no single silver bullet), minimizing the holes that fraudsters try to exploit. For example, by using consortia and data exchanges to share fraud data, fraud teams can pool knowledge about new and emerging attacks.
A well-layered strategy that includes device, behavioral, consortium, document, and ID authentication drastically reduces weaknesses in the system, as sketched below.
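As a purely hypothetical illustration of how such layered signals might be combined, this short Python sketch blends per-layer scores into a single decision; the layer names, weights and thresholds are invented for the example, not a real vendor’s API.

# Hypothetical sketch of layering independent checks into one decision.
# Layer names, weights and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class LayerScores:
    device: float      # device fingerprint / reputation, 0 (bad) to 1 (good)
    behaviour: float   # typing, mouse and navigation patterns
    consortium: float  # shared fraud-intelligence lookups
    document: float    # document authenticity checks
    identity: float    # ID and biometric verification

WEIGHTS = {"device": 0.15, "behaviour": 0.20, "consortium": 0.20,
           "document": 0.20, "identity": 0.25}

def decide(s: LayerScores, approve_at: float = 0.80, review_at: float = 0.55) -> str:
    """Weighted blend of the layer scores; any weak layer drags the total down."""
    total = sum(w * getattr(s, name) for name, w in WEIGHTS.items())
    if total >= approve_at:
        return "approve"
    if total >= review_at:
        return "step-up verification"
    return "decline"

print(decide(LayerScores(0.9, 0.8, 0.95, 0.85, 0.9)))  # genuine customer
print(decide(LayerScores(0.2, 0.4, 0.1, 0.9, 0.6)))    # likely impersonation

Because every layer contributes, a fraudster who defeats one control, say with a convincing deepfake, still has to defeat the others to reach the approval threshold.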
Combating AI fraud will now be part of that strategy for all companies that take fraud prevention seriously. The attacks will become more frequent and sophisticated, requiring the implementation of a long-term protection strategy that covers every step in the fraud prevention process, from consumer to attacker. This is the only way for companies to protect themselves and their customers from the growing threat of AI-powered attacks.
This article was produced as part of Ny BreakingPro’s Expert Insights channel, where we profile the best and brightest minds in today’s technology industry. The views expressed here are those of the author and are not necessarily those of Ny BreakingPro or Future plc. If you are interested in contributing, you can read more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro