The impact of AI and deepfakes on identity verification

In the digital landscape, where identities are woven into every aspect of our online interactions, the rise of AI-powered deepfakes has become a disruptive force, challenging the very essence of identity verification. In navigating this ever-evolving terrain, CIOs and IT leaders must understand how these emerging technologies affect the integrity of their identity management processes.

Online identity verification today consists of two major steps. First, the user is asked to take a photo of their government-issued ID, which is checked for authenticity. Second, the user is asked to take a selfie, which is biometrically compared to the photo on the ID. Traditionally, identity verification was confined to regulated know-your-customer (KYC) use cases such as opening online bank accounts, but today it is used in a far wider range of contexts, from interactions with government services and integrity checks on online marketplace platforms to employee onboarding and securing password reset processes.
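
To make these two steps concrete, the flow might be sketched as follows. This is a minimal illustration only: the stubbed checks, the function names and the 0.85 match threshold are assumptions made for the example, not any particular vendor's API or default settings.

```python
from dataclasses import dataclass

MATCH_THRESHOLD = 0.85  # illustrative similarity cut-off, not a vendor default


def check_document_authenticity(id_photo: bytes) -> bool:
    """Placeholder for document checks (fonts, holograms, MRZ, signs of tampering)."""
    return True  # stub for illustration


def compare_faces(id_photo: bytes, selfie: bytes) -> float:
    """Placeholder for the biometric comparison; returns a similarity score in [0, 1]."""
    return 0.9  # stub for illustration


@dataclass
class VerificationResult:
    document_authentic: bool
    face_match_score: float
    verified: bool


def verify_identity(id_photo: bytes, selfie: bytes) -> VerificationResult:
    """Step 1: validate the government-issued ID. Step 2: match the selfie against it."""
    document_authentic = check_document_authenticity(id_photo)
    face_match_score = compare_faces(id_photo, selfie) if document_authentic else 0.0
    return VerificationResult(
        document_authentic=document_authentic,
        face_match_score=face_match_score,
        verified=document_authentic and face_match_score >= MATCH_THRESHOLD,
    )
```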

Thus, undermining the identity verification process through fraudulent identity presentation, for example by using a deepfake of an individual to bypass the selfie step, poses significant risks to an organization.


1. Mechanisms to undermine deepfake attacks

As attackers leverage GenAI’s relentless advances to create increasingly convincing deepfakes, CIOs and IT leaders must take a proactive stance and strengthen their defenses with a multi-pronged approach. The key to this is to ensure that your identity verification provider deploys robust liveness detection.

This capability is used during the second step, when the selfie is taken, to check that it is being captured of a living person who is genuinely present during the interaction. Liveness detection can be active, where the user responds to a prompt such as turning their head, or passive, where subtle cues such as micro-motions or depth perspective are assessed without the user having to move.
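
A rough sketch of how the two modes differ in practice is shown below. Every analyser here is a placeholder stub, an assumption for illustration, standing in for the trained models a liveness detection vendor would supply.

```python
from statistics import mean


# Placeholder analysers (illustrative stubs); real systems use trained models here.
def detect_action(frames: list, action: str) -> bool:
    return len(frames) > 1  # e.g. did the landmark track show the requested head turn?


def micro_motion_score(frame) -> float:
    return 0.7  # e.g. natural involuntary movement between captures


def depth_score(frame) -> float:
    return 0.8  # e.g. does the face show real 3D depth rather than a flat replay?


def active_liveness_check(frames: list, requested_action: str = "turn_head") -> bool:
    """Active mode: the user is prompted and their response is verified."""
    return detect_action(frames, requested_action)


def passive_liveness_check(frames: list) -> bool:
    """Passive mode: subtle cues are assessed without asking the user to do anything."""
    return (mean(micro_motion_score(f) for f in frames) > 0.5
            and mean(depth_score(f) for f in frames) > 0.5)  # illustrative thresholds
```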

The integration of active and passive liveness detection techniques, coupled with additional signals indicative of an attack, provides a holistic defense against evolving deepfake attacks. These additional signals can be surfaced through device profiling, behavioral analytics and location information. Identity verification vendors may develop some of these capabilities themselves or rely on partners to deliver them, but they should be packaged as a single solution that you can deploy.
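
One way to picture that packaging is a simple fusion of the liveness result with the contextual risk signals. The weights and threshold below are invented for the example, not recommended values, and a real product would tune its scoring on labelled attack data.

```python
from dataclasses import dataclass


@dataclass
class RiskSignals:
    liveness_passed: bool
    device_risk: float     # 0.0 (clean) to 1.0 (e.g. emulator or virtual camera detected)
    behaviour_risk: float  # 0.0 to 1.0, from interaction and behavioral analytics
    location_risk: float   # 0.0 to 1.0, e.g. impossible travel or a known-bad network


# Illustrative weights and threshold only.
WEIGHTS = {"device": 0.4, "behaviour": 0.3, "location": 0.3}
RISK_THRESHOLD = 0.5


def assess_session(signals: RiskSignals) -> str:
    """Combine the liveness outcome with contextual signals into a single decision."""
    if not signals.liveness_passed:
        return "reject"
    risk = (WEIGHTS["device"] * signals.device_risk
            + WEIGHTS["behaviour"] * signals.behaviour_risk
            + WEIGHTS["location"] * signals.location_risk)
    return "step-up review" if risk >= RISK_THRESHOLD else "accept"


decision = assess_session(RiskSignals(liveness_passed=True, device_risk=0.8,
                                      behaviour_risk=0.2, location_risk=0.4))
print(decision)  # "step-up review" — high device risk outweighs a passed liveness check
```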

2. Deploy GenAI to improve identity verification

GenAI’s versatility offers intriguing possibilities for defending against deepfake attacks. By leveraging GenAI’s ability to produce synthetic datasets, product leaders can reverse engineer attack variants and fine-tune their detection algorithms accordingly.
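
As a loose sketch of that idea, synthetic attack variants can be folded into a detector's training set alongside genuine captures. The random "features" and the scikit-learn classifier below are stand-ins chosen for brevity, not a description of how any real vendor trains its models.

```python
import random

from sklearn.linear_model import LogisticRegression  # any classifier with fit/predict would do

random.seed(0)


def capture_features() -> list[float]:
    """Stand-in for features extracted from a face capture (random for illustration)."""
    return [random.random() for _ in range(8)]


# Label convention: 0 = genuine selfie, 1 = deepfake / presentation attack
genuine_samples = [(capture_features(), 0) for _ in range(500)]

# Hypothetical GenAI step: synthesise fresh attack variants to enrich the training data
synthetic_attacks = [(capture_features(), 1) for _ in range(500)]

training_set = genuine_samples + synthetic_attacks
random.shuffle(training_set)

detector = LogisticRegression(max_iter=1000)
detector.fit([x for x, _ in training_set], [y for _, y in training_set])
```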

Beyond such cybersecurity applications, GenAI can also address demographic bias in facial biometric processes. Traditional methods of acquiring diverse training datasets are costly and labor-intensive, often resulting in biased machine learning algorithms. GenAI offers a solution: it can generate large datasets of synthetic faces, deliberately boosting the volume of training data for underrepresented demographics. This not only lowers the barrier to obtaining diverse datasets, but also helps minimize bias in biometric processes. Challenge your identity verification vendors to see whether they are innovating with GenAI for these positive purposes, rather than treating it only as a threat.
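
A simple way to express that augmentation step: count the samples per demographic group and top up the smaller groups with synthetic faces. Here generate_synthetic_face is a hypothetical stand-in for a call to a GenAI image model.

```python
from collections import Counter


def balance_with_synthetic_faces(dataset, generate_synthetic_face):
    """Top up underrepresented demographic groups to the size of the largest one.

    `dataset` is a list of (image, demographic_group) pairs; `generate_synthetic_face(group)`
    is a hypothetical GenAI call returning a synthetic face image for the requested group.
    """
    counts = Counter(group for _, group in dataset)
    target = max(counts.values())
    augmented = list(dataset)
    for group, count in counts.items():
        for _ in range(target - count):
            augmented.append((generate_synthetic_face(group), group))
    return augmented


# Usage with toy data and a placeholder generator:
toy_dataset = [("img_a", "group_1"), ("img_b", "group_1"), ("img_c", "group_2")]
balanced = balance_with_synthetic_faces(toy_dataset, lambda group: f"synthetic_{group}")
print(Counter(group for _, group in balanced))  # every group now has 2 samples
```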

Select vendors that have embraced this new world and taken proactive measures, such as introducing bounty programs that challenge hackers to beat their liveness detection processes. By encouraging individuals to identify and report potential vulnerabilities, vendors, and by extension the organizations that rely on them, can strengthen their defenses against deepfake attacks.

As we chart a course toward a secure digital future, collaboration will become the cornerstone of our collective defense against deepfake adversaries. By fostering dynamic partnerships and cultivating a culture of vigilance, CIOs and IT leaders can forge a resilient ecosystem that can withstand the relentless onslaught of AI-driven deception. Armed with insight, innovation and a steadfast commitment to authenticity, you will journey into a future where identities remain sacrosanct despite technological upheaval.

