The growing threat of data breaches in the age of AI and data privacy
It is well known that artificial intelligence (AI) has contributed significantly to innovation within the cybersecurity industry. AI enhances existing security infrastructure through its ability to automate tasks, detect threats and analyze large amounts of data. In particular, it can be used to identify trends in phishing attacks and is effective at detecting critical coding errors missed by human oversight. The technology can also simplify complex technical concepts and even help develop resilient code. Ultimately, it helps keep cybercriminals at bay.
However, AI is a double-edged sword, as bad actors can access the same tools and use them for malicious purposes. Following the recent AI Safety Summit in the UK, questions around AI and its impact on data privacy have become more urgent than ever. As the technology evolves in real time, so does the fear surrounding it, not least because it is difficult to predict how AI will continue to develop.
AI-powered systems rely heavily on personal data to learn and make predictions, raising concerns about how such data is collected, processed and stored. With easy access to AI tools like chatbots and image generators, tactics such as using deepfake technology to circumvent data privacy regulations are becoming increasingly common. In fact, recent research from Trellix has shown that AI, and the integration of Large Language Models (LLMs), is dramatically changing the social engineering strategies used by bad actors.
With the rapidly growing adoption of AI by cybercriminals, organizations must stay ahead of the curve to avoid falling victim to these attacks and losing vital data. How can organizations do this?
Malicious use of generative AI
The internet is full of tools that use AI to make people’s lives easier, including ChatGPT, Bard and Perplexity AI, which come with built-in safeguards to prevent the chatbot from writing malicious code.
However, this is not the case for all tools, especially those developed on the dark web. The availability of these tools has led to the rise of ‘script kiddies’: individuals with little to no technical expertise who use pre-existing automated tools or scripts to carry out cyber attacks. They should not be dismissed as unskilled amateurs, as the rise of AI will only make it easier for them to carry out sophisticated attacks.
Clearly, today’s AI applications provide powerful and cost-effective tools for hackers, eliminating the need for extensive expertise, time and other resources. Recent developments in AI have led to the emergence of LLMs that can generate human-like text. Cybercriminals can use LLM tools to improve key stages of a phishing campaign, gathering background information and extracting data to tailor content. This allows threat actors to generate phishing emails quickly, at scale and at low marginal cost.
Infiltrating companies through AI voice fraud
According to our recent research, 45% of UK CISOs cite social engineering tactics as the leading cause of major cyber attacks. Cybercriminals are increasingly turning to technology to automate social engineering, using bots to collect data and trick victims into sharing sensitive information such as one-time passwords. AI-generated voices play a major role in this.
AI voice fraud mimics human speech patterns, making it difficult to distinguish between real and fake voices. The approach reduces the need for extensive human involvement and minimizes the traces left behind after an attack.
Scammers use these voices alongside psychological manipulation techniques to deceive individuals, instilling confidence and urgency in their victims to make them more susceptible. In our November threat report, we found that AI-generated voices can also be programmed to speak multiple languages, allowing scammers to target victims across different geographic regions and linguistic backgrounds.
As a result, phishing and vishing attacks continue to increase as threat actors use these tactics during live phone calls to manipulate companies into sharing corporate data. Amid these evolving threats, organizations must stay one step ahead of cybercriminals or risk having their systems, employees and valuable data exploited.
Security teams must build more resilient defenses
Organizations must adopt AI themselves, not only to counter the AI-driven tactics used by cybercriminals, but also to reap the benefits the technology brings to day-to-day processes. AI can be used to improve operational efficiency and productivity, support decision-making, and help organizations stay competitive in a rapidly evolving landscape.
Because AI-based cyber attacks are increasingly difficult for organizations to detect, it is vital that they implement technology capable of anticipating and responding to these threats. This is where an Extended Detection and Response (XDR) solution becomes critical.
XDR is revolutionizing threat detection and response at a fundamental level. Our research shows that when a data breach occurs, 44% of CISOs believe XDR can help identify and prioritize critical alerts, 37% believe it can speed up threat investigations, and 33% believe it can reduce false-positive threat alerts. It also gives security teams better visibility into the broader attack surface and clearer prioritization of risks. As a result, 76% of global CISOs who have suffered a data breach agree that, had they had XDR in place, the major cybersecurity incident would have had a smaller impact.
Stay informed about AI in the age of data privacy
In general, organizations should approach AI with caution. It’s no secret that its benefits have revolutionized the way people work today, for instance by simplifying complex technical concepts. However, given its dual nature, it must be implemented carefully. Cybercriminals are no strangers to AI and have been using it to their advantage, manipulating and creating fake data to cause confusion or to impersonate officials, as seen in the recent attacks on Booking.com.
To avoid falling victim to these AI-driven attacks, companies must embrace the evolving cybersecurity landscape and invest more in defenses against advanced cyber attacks. With the right combination of technology, talent and tactics, Security Operations (SecOps) teams will be well equipped to mitigate cyber threats from within their organization.
We’ve listed the best encryption software.
This article was produced as part of TechRadar Pro’s Expert Insights channel, where we profile the best and brightest minds in today’s technology industry. The views expressed here are those of the author and are not necessarily those of TechRadar Pro or Future plc. If you are interested in contributing, you can read more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro