Identity fraud attacks using AI fool biometric security systems
- Deepfake selfies can now bypass traditional verification systems
- Fraudsters exploit AI to create synthetic identities
- Organizations must adopt advanced behavior-based detection methods
The latest Global Identity Fraud Report from AU10TIX reveals a new wave of identity fraud, largely driven by the industrialization of AI-based attacks.
With millions of transactions analyzed from July to September 2024, the report shows how digital platforms across industries, especially social media, payments and crypto, are facing unprecedented challenges.
Fraud tactics have evolved from simple document forgeries to sophisticated synthetic identities, deepfake images and automated bots that can bypass conventional authentication systems.
Social media platforms saw a dramatic escalation of automated bot attacks in the lead-up to the 2024 US presidential election. The report shows that social media attacks were responsible for 28% of all fraud attempts in the third quarter of 2024, a notable increase from just 3% in the first quarter.
These attacks focus on disinformation and the manipulation of public opinion on a large scale. AU10TIX says these bot-driven disinformation campaigns use advanced generative AI (GenAI) elements to avoid detection, an innovation that has allowed attackers to scale their operations while bypassing traditional authentication systems.
The GenAI-powered attacks began escalating in March 2024 and peaked in September. They are believed to be aimed at influencing public perception by spreading false narratives and inflammatory content.
One of the most striking discoveries in the report concerns the rise of 100% deepfake synthetic selfies: hyper-realistic images created to mimic authentic facial features with the intention of bypassing verification systems.
Traditionally, selfies were considered a reliable method of biometric authentication because the technology needed to convincingly spoof a facial image was beyond the reach of most fraudsters.
AU10TIX emphasizes that these synthetic selfies pose a unique challenge to traditional KYC (Know Your Customer) procedures. This shift suggests that organizations that rely solely on facial recognition technology may need to reevaluate and strengthen their detection methods in the future.
In addition, fraudsters are increasingly using AI to generate variations of synthetic identities through ‘image template’ attacks. These involve manipulating a single ID template to create multiple unique identities, complete with randomized photo elements, document numbers and other personal identifiers, allowing attackers to rapidly open fraudulent accounts across platforms by producing synthetic identities at scale.
In the payments sector, the fraud rate fell in the third quarter, from 52% in the second quarter to 39%. AU10TIX attributes this progress to increased regulatory oversight and law enforcement interventions. Despite the reduction in direct attacks, however, payments remains the most frequently targeted sector. Deterred by the increased security, many fraudsters are shifting their efforts to the crypto market, which accounted for 31% of all attacks in the third quarter.
AU10TIX recommends organizations go beyond traditional document-based verification methods. A crucial recommendation is to adopt behavior-based detection systems that go deeper than standard identity checks. By analyzing user behavior patterns, such as login routines, traffic sources, and other unique behavioral signals, companies can identify anomalies that indicate potentially fraudulent activity.
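The idea behind behavior-based detection can be sketched in a few lines. The following is a minimal, illustrative Python example, not AU10TIX's actual method: the signal names, weights, and thresholds are assumptions chosen to show how signals like login timing, traffic source, and account-creation velocity might combine into a single risk score.

```python
from dataclasses import dataclass

# Hypothetical sketch of behavior-based anomaly scoring.
# Signal names, weights, and thresholds are illustrative assumptions,
# not the detection logic described in the AU10TIX report.

@dataclass
class UserProfile:
    usual_login_hours: set       # hours of day this user typically logs in
    known_traffic_sources: set   # e.g. referrer domains or app channels
    avg_signups_per_hour: float  # baseline account-creation velocity

def anomaly_score(profile: UserProfile, login_hour: int,
                  traffic_source: str, signups_last_hour: int) -> float:
    """Combine simple behavioral signals into a 0..1 risk score."""
    score = 0.0
    if login_hour not in profile.usual_login_hours:
        score += 0.3   # unusual time of day for this user
    if traffic_source not in profile.known_traffic_sources:
        score += 0.3   # unfamiliar traffic source
    if signups_last_hour > 3 * max(profile.avg_signups_per_hour, 1.0):
        score += 0.4   # burst velocity typical of automated bots
    return min(score, 1.0)

profile = UserProfile({9, 10, 18}, {"organic", "ios_app"}, 1.0)
# A 3 a.m. login from an unknown source during a signup burst
# trips all three signals and maxes out the score.
print(anomaly_score(profile, 3, "bulk-proxy", 50))
```

In practice, production systems would learn these baselines and weights from historical data rather than hard-coding them, but the principle is the same: anomalies across several independent behavioral signals compound into a stronger fraud indicator than any single identity check.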
“Fraudsters are evolving faster than ever and using AI to scale and execute their attacks, especially in the social media and payments sectors,” said Dan Yerushalmi, CEO of AU10TIX.
“While companies use AI to increase security, criminals are weaponizing the same technology to create synthetic selfies and fake documents, making detection nearly impossible.”