How to protect your business in the age of deepfakes
Deepfake technology is not a new concept. In 1991, Terminator 2 became the first film to create a completely computer-generated character with realistic human movements (and, incidentally, one of cinema’s most iconic villains). Since then, films and video games have constructed thousands of realistic, likable, and engaging characters from pixels alone.
Previously, this process was incredibly time-consuming and expensive, reserved for experts. But in recent years, the technology needed to create photorealistic images of people has ended up in the hands of ordinary users. In 2021, a TikTok account called @deeptomcruise began posting humorous deepfake videos of Hollywood star Tom Cruise. It has since amassed more than 5.1 million followers, and its creators have gone on to launch a generative AI company. Meanwhile, as generative AI technology became more accessible through 2022 and 2023, deepfake videos began spreading like wildfire across social media.
With AI-generated images routinely fooling even the most skeptical internet users, developments in 2024 are expected to have an even greater impact on us. Unfortunately, deepfakes, like many initially innocent technologies, are now being exploited for nefarious ends, with the latest high-profile victim being Taylor Swift. And in a year of major world events like the British general election and the Paris Olympics, deepfakes could become a mainstream threat for consumers and businesses alike. So how can we protect ourselves from their negative effects?
The realism and risks of modern deepfakes
Today, nine in ten (90%) cybersecurity breaches are identity-related. Yet more than four in ten companies (44%) are still in the early stages of their identity security journey. The business value and importance of identity, especially in security, must be prioritized.
Identity is a core element of cybersecurity. In business terms, identity is about “who” has access to “what” information. In the past, the “who” was typically a person or group of people, and the “what” was a database or application. Today, the “whos” have expanded from internal employees to contractors, supply chain members, and perhaps even artificial intelligence. The “what” has also expanded, as more data moves through more systems – from emails to apps to the cloud and beyond. The more users and access points there are, the harder it is to screen all identities and protect all data from growing threats. Even security measures previously thought to be sophisticated and foolproof, such as voice recognition, are no longer a match for today’s AI-driven risks.
Director of Strategy & Standards, SailPoint.
At SailPoint, we examined the threat of identity impersonation in a recent experiment. We used an AI tool to listen to recordings of our CEO Mark McClain’s voice and then generate its own version. Both the tool and Mark then read a script in a blind test for SailPoint employees. Even though they knew it was an experiment, the AI-generated voice was convincing enough that one in three employees mistook it for Mark.
It’s no wonder that these types of impersonation scams are gaining popularity in the UK. Last summer, trusted consumer finance expert Martin Lewis fell victim to a deepfake video scam, in which a computer-generated likeness of him encouraged viewers to back a fake investment scheme. Lewis described it as “terrifying,” and it would be hard to disagree. As technology continues to evolve, cybercriminals will become increasingly able to breach people’s trust and overcome existing security hurdles with ease.
The broader, real-world impact of generative AI
Many experts are also concerned about the impact of deepfakes on political opinion. Over the past fifteen years, we’ve seen how the internet and social media can influence real-world events – from the Obama team’s groundbreaking use of Facebook in the run-up to the 2008 US presidential election to the Cambridge Analytica personal data scandal in 2018. Then there are the everyday algorithms that control our exposure to ideas and information and therefore unconsciously shape our views. But as AI technology advances, the internet may have an increasingly overt effect on politics, simply by spreading deepfake videos of politicians that become ever harder to distinguish from reality.
A 2023 Guardian article lists some notable examples of deepfake images and videos of political figures making striking, shocking or bizarre statements, which some viewers may have been led to believe were real. It argues that the technology risks dangerously distorting how the public, especially those unaware of its capabilities, perceive and trust what world leaders say. With both the US and UK elections due to take place in 2024, plus world events with a huge economic and social impact, such as the Paris Olympics, also on the horizon, we must remain increasingly wary of the impact of these deepfakes in the new year and beyond.
Take advantage of advanced security features
In 2023, we saw cybercriminals ramp up their use of AI deepfake technology through a range of attack vectors. So in 2024, the pressure on potential victims to identify real content in a sea of counterfeits will become even greater. To combat this escalation, companies will need to increase employee training on how to detect deepfakes, as well as review and strengthen digital access rights so that employees, partners, contractors and others only have access to the data their roles and responsibilities require (a simple illustration of this least-privilege principle follows below). Data minimization – collecting only what is necessary and sufficient – will also be essential.
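As a rough sketch of what that least-privilege principle can look like in practice, the example below denies every access by default and allows only what a role explicitly needs. The roles, resources, and function names are invented for illustration and do not represent any particular vendor’s product or API.

```python
# Minimal least-privilege sketch: access is denied unless a role
# explicitly needs that resource. Roles and resources are illustrative only.
ROLE_PERMISSIONS = {
    "finance-analyst": {"invoices", "payment-reports"},
    "contractor-dev":  {"staging-code-repo"},
    "hr-partner":      {"employee-records"},
}

def can_access(role: str, resource: str) -> bool:
    """Deny by default; allow only what the role's responsibilities require."""
    return resource in ROLE_PERMISSIONS.get(role, set())

# A contractor asking for payment data is refused, which limits the blast
# radius if that identity is ever spoofed or compromised.
print(can_access("contractor-dev", "staging-code-repo"))  # True
print(can_access("contractor-dev", "payment-reports"))    # False
```

The point of the sketch is the default-deny posture: even if a deepfake convinces someone to act on an attacker’s behalf, a tightly scoped identity can only reach the small slice of data its role genuinely needs.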
Going forward, it will be critical that companies adopt stronger forms of digital identity security. For example, verifiable credentials – cryptographically signed proofs that someone is who they say they are – can be used to “prove” a person’s identity rather than relying on sights and sounds. In the case of a deepfake scam, that proof can confirm whether the CEO or colleague really is who they claim to be. Some emerging security tools are now even using AI to defend against deepfakes, with the technology able to learn, detect and proactively highlight the signs of fake video and audio to successfully thwart potential breaches. Overall, we’ve seen that companies that use AI and machine learning tools, along with SaaS and automation, can scale as much as 30% faster and get more value from their security investments through greater capabilities.
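To make the underlying idea concrete, here is a minimal sketch of the cryptographic principle behind verifiable credentials, using Ed25519 keys from the Python cryptography library. The issuer, subject, and credential fields are invented for illustration; this is not SailPoint’s implementation, nor the full W3C Verifiable Credentials data model, just the sign-and-verify core under those assumptions.

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Issuer side: a trusted identity provider signs a credential payload.
# (Names and fields are illustrative, not a real credential format.)
issuer_key = Ed25519PrivateKey.generate()
issuer_public_key = issuer_key.public_key()

credential = json.dumps({
    "subject": "mark.mcclain",
    "claim": "is the CEO of ExampleCorp",
}, sort_keys=True).encode()
signature = issuer_key.sign(credential)

# Verifier side: instead of trusting a familiar face or voice on a call,
# the recipient checks the cryptographic proof against the issuer's key.
try:
    issuer_public_key.verify(signature, credential)
    print("Credential verified: issued by the trusted authority")
except InvalidSignature:
    print("Credential rejected: possible impersonation")
```

A deepfake can imitate a face or a voice on a video call, but it cannot produce a valid signature without the issuer’s private key, which is why this kind of proof holds up where sight and sound no longer do.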
Flying into the many faces of danger
Ultimately, stopping just one cybersecurity breach can save millions in lost revenue, fines and reputational damage. Yet more than nine in ten IT professionals (91%) say budget constraints are a barrier to identity security. However, with deepfakes making up a large part of the current threat landscape, now is not the time to try and save a few quid. Enterprise IT security teams must be given the tools they need to defend against these types of attacks.
Fortunately, as the accessibility of AI technology increases, so does that of security tools. Identity platforms that leverage automation and AI enable companies to scale identity-related capabilities up to 37% faster than those without. As we enter 2024, investing in these kinds of tools should be a given.
This article was produced as part of TechRadarPro’s Expert Insights channel, where we profile the best and brightest minds in today’s technology industry. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, you can read more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro