Passwords may seem like a relatively recent phenomenon of the internet age, but the first digital password dates back to 1961. Other notable events that year included Soviet cosmonaut Yuri Gagarin becoming the first person to orbit the Earth, construction of the Berlin Wall in East Germany beginning, and the Beatles playing their first ever show at Liverpool’s Cavern Club. The world has come a long way since 1961. And yet, after more than half a century of technological and societal advances, the humble password remains our go-to, frontline defense against cybercriminals.
Passwords have never been a particularly reliable defense against family, nosy coworkers, or—least of all—ambitious fraudsters. But the advent of readily available, easy-to-use artificial intelligence (AI) tools has rendered the digital password as we know it virtually obsolete. While it was created to accelerate creativity and innovation, generative AI also allows bad actors to bypass password-based security and use social engineering (via deepfake videos, voice cloning, and incredibly personalized scams) to gain access to our digital bank accounts.
A new survey of 600 fraud management, anti-money laundering, and risk and compliance executives globally found that nearly 70% of respondents believed criminals are better able to use artificial intelligence to commit financial crimes than banks are able to use the technology to stop them.
To combat this threat, financial institutions and banks must innovate.
Global Vice President for BioCatch.
The state of fraud and financial crime in 2024
The UK government currently estimates the cost of cybercrime at £27 billion a year. Meanwhile, a new report from BioCatch reveals that more than half (58%) of businesses say their organizations spent between $5 million and $25 million combating AI-driven threats in 2023, and 56% of finance and security professionals surveyed saw an increase in financial crime last year. Worse still, almost half expect financial crime to increase in 2024, and expect the total value of losses due to fraud to rise with it.
With the cybercrime threat landscape evolving by the day, it’s no surprise that fraud fighters expect tougher challenges ahead. Cybercriminals are already launching sophisticated attacks on businesses, crafting convincing phishing emails, deepfake videos for social engineering, and fraudulent documents. They’re impersonating government officials and our loved ones with chatbots and voice clones. And they’re creating fake content to manipulate public opinion.
AI has rendered the senses we’ve relied on for thousands of years to distinguish the legitimate from the fraudulent nearly obsolete. Financial institutions must develop new approaches to keep up and fight back.
Aiming for zero trust
Over 70% of financial services and banking firms identified the use of fake identities when onboarding new customers last year, and 91% are already reconsidering voice authentication because of the risks of AI voice cloning. In this new era, we can no longer assume something is legitimate just because it looks and sounds right.
The first step to verification in the age of AI is greater internal collaboration. More than 40% of professionals say their company handles fraud and financial crime in separate departments that don’t work together. Nearly 90% also say financial institutions and government agencies need to share more information to combat fraud and financial crime. But simply sharing information likely won’t be enough. This new era of AI-driven cybercrime requires protective measures that can distinguish human from machine, and legitimate from fraudulent.
Meet behavioral biometric intelligence.
The difference is human
Behavioral biometric intelligence uses machine learning and artificial intelligence to analyze both physical behavior patterns (mouse movements and typing speed, for example) and cognitive signals (hesitation, segmented typing, etc.) in search of anomalies. An anomaly in user behavior, especially one that matches known patterns of criminal activity, is often a strong indication that the online session is fraudulent. Once detected, these solutions can block the transaction and alert the appropriate bank officials in real time.
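To make the idea concrete, here is a deliberately simplified sketch of anomaly scoring against a user’s own behavioral baseline. The feature names, the z-score approach, and the threshold are illustrative assumptions for this article, not BioCatch’s actual method; production systems use far richer signals and models.

```python
from dataclasses import dataclass
from statistics import mean, stdev

# Hypothetical per-session features; real products capture many more signals.
@dataclass
class SessionFeatures:
    avg_typing_interval_ms: float  # average time between keystrokes
    avg_mouse_speed_px_s: float    # average cursor speed
    hesitation_ratio: float        # share of actions preceded by a long pause

FEATURES = ("avg_typing_interval_ms", "avg_mouse_speed_px_s", "hesitation_ratio")

def anomaly_score(history: list, current: SessionFeatures) -> float:
    """Mean absolute z-score of the current session versus this user's history."""
    scores = []
    for attr in FEATURES:
        values = [getattr(s, attr) for s in history]
        mu, sigma = mean(values), stdev(values)
        if sigma == 0:
            continue  # no variation in history: skip this feature
        scores.append(abs(getattr(current, attr) - mu) / sigma)
    return mean(scores) if scores else 0.0

def is_suspicious(history: list, current: SessionFeatures, threshold: float = 3.0) -> bool:
    """Flag a session whose behavior deviates sharply from the user's baseline."""
    return anomaly_score(history, current) >= threshold
```

The key design point is that each user is compared against their own baseline rather than a global norm, which is what lets the check run continuously without interrupting the session.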
Behavioral biometric intelligence can also identify money mule accounts used in money laundering by monitoring behavioral anomalies and changes in activity trends. Research shows a 78% increase in money mule activity among people under the age of 21, while a third of financial institutions cite a lack of resources to monitor mule activity.
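A minimal sketch of the “changes in activity trends” idea: flag an account whose recent inflow activity jumps far above its own historical baseline, the sudden burst of incoming transfers often associated with mule accounts. The window size and threshold here are assumptions for illustration only.

```python
from statistics import mean, stdev

def trend_shift(daily_inflows: list, window: int = 7, threshold: float = 3.0) -> bool:
    """Return True if the mean of the last `window` days of incoming
    transfers sits far above the account's own earlier baseline."""
    baseline, recent = daily_inflows[:-window], daily_inflows[-window:]
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        sigma = 1.0  # avoid division by zero for a perfectly flat baseline
    z = (mean(recent) - mu) / sigma
    return z >= threshold
```

Again, the comparison is against the account’s own history, so a normally quiet account that suddenly starts funneling dozens of transfers a day stands out immediately.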
Best of all, behavioral biometric intelligence is a non-intrusive and continuous method of risk assessment. It doesn’t slow down or interrupt a user’s experience. It simply enhances security by looking at the different ways people perform daily actions. Traditional controls will still be necessary to combat fraud and financial crime, but adding behavioral biometric intelligence can help banks more effectively achieve both their fraud prevention and digital business objectives.
We may never completely do away with our trusted passwords, but on their own they are already relics of the past. It is imperative that we add new solutions to our online banking security stack to ensure the protection of our personal information and digital interactions. Behavioral biometric intelligence must be one of those solutions, helping us stay safe in this unpredictable new era.
This article was produced as part of TechRadarPro’s Expert Insights channel, where we showcase the best and brightest minds in the technology sector today. The views expressed here are those of the author and do not necessarily represent those of TechRadarPro or Future plc. If you’re interested in contributing, you can read more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro