Has identity fraud undergone a revolution? AI-generated synthetic media attacks, commonly known as deepfakes, have turned the fraud landscape upside down, spiking by 3,000% in 12 months and dominating discussions about what could destabilize digital integrity and trust in the coming years.
Deepfakes are fast becoming the fraud tactic du jour. The growing availability of generative AI and face-swap apps lets cybercriminals significantly expand their impersonation-based deepfake fraud attempts, manipulating a person’s facial expressions, voice, body language and even skin texture. And however advanced the technology already is, it will only keep evolving and becoming more convincing. Online fraud tactics behave like a virus, mutating to evade cyber defenses and inflict maximum damage.
That’s why companies must stay one step ahead with a robust prevention strategy that can identify and protect against emerging threats, keeping both end users and the business safe. At the same time, this strategy must be nuanced: it cannot introduce unnecessary friction for legitimate users who want to register for or access online services. It must be user-friendly and accessible. Achieving this will protect digital integrity amid rising identity theft while maintaining critical inclusion and trust in online businesses.
Head of Fraud Lab, Onfido.
How the automation age invented deepfake fraud
The global fraud landscape has changed significantly in recent years, in line with broader digital trends – particularly the mainstream accessibility of AI and automation tools. Before the pandemic, fraudsters followed the typical pattern of the working week: a nine-to-five shift, with activity dropping at weekends. Those habits have changed as fraudsters have realized that AI and automation let them scale their attacks around the clock and hit as many targets as possible. As a result, industries that have historically seen high fraud volumes due to large monetary incentives, such as gambling and gaming, have seen rates rise by up to 80% in the past year.
As companies take steps to protect their operations against these waves of fraud, bad actors have expanded their library of attack options. While most identity fraud still centers on physical ID forgeries, fraudsters are experimenting with AI to alter digital images and videos – creating deepfakes – to commit identity fraud, bypass cybersecurity systems and produce fake online media content.
Getting granular: deepfakes versus cheap fakes
When we think of deepfakes, we tend to picture sophisticated videos impersonating politicians or celebrities. But it’s important to clarify that not all deepfake approaches are the same, and fraudsters will deploy the technology at different scales depending on resources, technical skill and desired outcome. The moniker “cheap fakes” points to the key differentiator: these are significantly less sophisticated than what we would consider a typical deepfake. Think budget film versus blockbuster: same concept, very different execution. Cheap fakes, also known as shallow fakes or low-tech fakes, use simple video editing software to manipulate footage. They may involve minor adjustments such as altered subtitles or image cropping, and because they are less realistic they are much easier to detect, even for the untrained eye.
But the threat they pose should not be overlooked; they still account for a large share of identity fraud. Especially in the current climate, where tough economic conditions are pushing many toward amateur fraud, cheap fakes are the first port of call for novice fraudsters armed with little more than basic editing software. These attacks can still be used as part of larger fraud operations to impersonate legitimate customers and steal identities during onboarding or to take over existing accounts. They can also serve other purposes, such as powering a tailor-made disinformation campaign that reaches and deceives a large audience, as in the Martin Lewis deepfake investment scam.
A proactive approach to deepfake detection
In any form, deepfakes can disrupt access to online services, manipulate or mislead people, and damage companies’ reputations. Companies must therefore take a proactive approach to contain the threat. But they need to strike the right level of friction: letting customers register for and access services seamlessly while keeping bad actors out.
First, companies need to train their teams to spot a deepfake, and there are telltale signs to look out for. In video, for example, AI still struggles to mimic natural eye movement and blinking, and a closer look at a deepfaked individual can reveal facial abnormalities and unnatural stillness. Deepfake videos also often fail to sync audio and visuals seamlessly, so teams should listen closely, comparing the speaker’s mouth movements with the audio and noting any unnatural pauses. Colors and shadows suffer from the same shortcomings, and perfect accuracy is rare: look for shadows that sit oddly, especially when the person is moving, or colors that shift.
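To make the blink cue concrete, here is a minimal sketch in Python of one way such a check could be automated, using the eye aspect ratio (EAR), a standard blink measure. It assumes six eye landmarks per frame arrive from an upstream face-landmark model (the landmark extraction itself is not shown), and the thresholds are illustrative assumptions rather than production values.

```python
# Minimal blink-rate heuristic sketch: flags clips whose blink behavior
# falls outside the human norm. Landmark extraction is assumed upstream.
import math
from typing import Sequence, Tuple

Point = Tuple[float, float]

def eye_aspect_ratio(eye: Sequence[Point]) -> float:
    """EAR for six eye landmarks ordered p1..p6 (Soukupova & Cech, 2016).

    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it drops sharply when the
    eye closes, so sustained dips below a threshold indicate blinks.
    """
    def dist(a: Point, b: Point) -> float:
        return math.hypot(a[0] - b[0], a[1] - b[1])

    p1, p2, p3, p4, p5, p6 = eye
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

def count_blinks(ear_series: Sequence[float],
                 closed_thresh: float = 0.21,
                 min_closed_frames: int = 2) -> int:
    """Count blinks in a per-frame EAR series.

    A blink is a run of at least `min_closed_frames` consecutive frames
    with EAR below `closed_thresh`. Both values are illustrative.
    """
    blinks, closed_run = 0, 0
    for ear in ear_series:
        if ear < closed_thresh:
            closed_run += 1
        else:
            if closed_run >= min_closed_frames:
                blinks += 1
            closed_run = 0
    if closed_run >= min_closed_frames:
        blinks += 1
    return blinks

def blink_rate_suspicious(ear_series: Sequence[float], fps: float) -> bool:
    """Humans blink roughly 15-20 times a minute; the bounds here are
    deliberately loose, illustrative cutoffs for an automated first pass."""
    minutes = len(ear_series) / (fps * 60.0)
    rate = count_blinks(ear_series) / max(minutes, 1e-9)
    return rate < 5 or rate > 40
```

A heuristic like this would only ever be one weak signal among many; as the paragraph above notes, generators improve constantly, so no single cue should decide an outcome on its own.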
Second, companies must invest in their cyber defenses. Fraud is a cat-and-mouse game, and companies need the right partner and platform to strengthen their defenses and stay ahead. Because deepfakes are often delivered through web browsers, as opposed to applications native to a particular operating system, companies should look for a solution that aligns with web-native customer journeys and detects pre-recorded videos, emulators and fake webcams. There will also be times when the AI needs to refer more sensitive or complex cases to a human for review. So the right investment will combine the power of AI with human expertise for a blended, comprehensive security experience. This way, legitimate customers are not wrongly rejected, and convincing deepfake attempts can be identified by a trained expert.
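As an illustration of that blended AI-plus-human approach, the sketch below routes each verification attempt based on a model score and capture-environment flags. The score scale, threshold values and flag names are all assumptions made for the example, not a description of any particular vendor’s system.

```python
# Illustrative triage sketch: clear-cut scores are decided automatically,
# ambiguous or flagged cases are escalated to a trained human reviewer.
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    ACCEPT = "accept"
    REJECT = "reject"
    HUMAN_REVIEW = "human_review"

@dataclass
class VerificationResult:
    decision: Decision
    score: float        # model's estimated probability the media is fake
    reasons: list

def triage(fake_score: float,
           capture_flags: dict,
           accept_below: float = 0.10,
           reject_above: float = 0.90) -> VerificationResult:
    """Route one verification attempt given a fake score in [0, 1].

    Confident scores resolve automatically so legitimate users pass with
    minimal friction; everything in between goes to a human reviewer
    rather than being wrongly rejected.
    """
    reasons = [flag for flag, raised in capture_flags.items() if raised]

    # Hard signals from the capture environment (e.g. a virtual webcam
    # or an emulator) warrant a human look regardless of the score.
    if reasons:
        return VerificationResult(Decision.HUMAN_REVIEW, fake_score, reasons)
    if fake_score >= reject_above:
        return VerificationResult(Decision.REJECT, fake_score, ["high fake score"])
    if fake_score <= accept_below:
        return VerificationResult(Decision.ACCEPT, fake_score, [])
    return VerificationResult(Decision.HUMAN_REVIEW, fake_score, ["ambiguous score"])

# Example: a mid-range score plus a fake-webcam flag is escalated.
print(triage(0.55, {"virtual_webcam_detected": True,
                    "emulator_detected": False}).decision)
```

The design point is the middle band: tightening or widening the two thresholds is how an operator trades automation rate against reviewer workload and false rejections.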
Avoiding the iceberg
There is no doubt that deepfakes have changed the nature of identity fraud in today’s digital landscape. Deepfakes are prevalent and pose a significant threat to digital trust and integrity, and have the potential to destabilize the relationship between customers and online businesses. Companies must go on the offensive, train their teams to spot deepfake attempts and invest in advanced AI and biometric solutions that can help them stay one step ahead. That way, they will avoid the deepfake iceberg and set themselves up for long-term sustainable growth.
This article was produced as part of TechRadarPro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro