The widespread use of generative AI has made phishing an even greater threat to organizations. Hyper-realistic emails, texts and deepfake voice notes can be produced with AI tools, and with flawless grammar and spelling making these messages more convincing, AI-powered phishing is raising major concerns.
This year we’ve seen an escalation in the complexity and variety of phishing methods, with attackers targeting people on new platforms they trust, beyond the standard email, phone call or text message. Concerns have reached the top of the business community. Accenture’s Pulse of Change research shows that nearly half (47%) of C-suite executives are concerned about the increased risk of cyber attacks and data breaches. Cyber threats based on misleading content, such as realistic phishing emails and messages, were seen as the biggest risk.
The attacks may not be simple, but the motivation often is: financial gain. Attackers send messages and build fraudulent websites requesting personal information to trick victims into handing over money or giving them access to corporate networks. They also know that by posing as senior leaders they can pressure people into sharing data, money or credentials.
Unfortunately, as phishing attempts become more realistic, employees are more likely to fall victim, causing serious disruption, financial loss and potential long-term reputational damage to their organization.
Education is the key
It is therefore crucial that employers provide the necessary education – including training and simulations – to prevent attackers from tricking employees into clicking something they shouldn’t.
Simulating an authentic phishing attack is not easy. Some companies have tried to educate their employees by imitating well-known public brands – delivery companies, for example – and their typical consumer and employee communications to create training content. These brands have many of the characteristics that make them ideal vehicles for social engineering: widespread brand recognition, regular requests for personal information, and routine sharing of tracking links. Because delivery companies send frequent email and text updates, the volume of communications – and the features that come with them – often goes unnoticed, and individuals are easily misled.
However, when organizations copy brands in simulations without permission to use the brand name and company information, this can raise legal issues around intellectual property infringement. It can also cause reputational damage to the brands themselves by associating them with cyber attacks (even simulated ones).
A company that wants to run such an exercise without involving a third-party brand can instead simulate internal emails from trusted departments such as finance, legal or HR. These will still appear credible to employees, because they resemble emails normally sent by internal teams, but they avoid the risk of landing the organization in legal hot water with external companies.
How can you protect your business?
In addition to training employees, companies can take preventative measures to stay protected – and turn the tables on attackers by using generative AI themselves.
Because AI increases the risk of people being deceived by realistic content, it is also an essential part of an organization’s technological armor. Many platform companies and hyperscalers, for example, are releasing AI security features within their own environments. Additionally, AI-powered “red teaming” – a cybersecurity technique that mimics an attack to see how individuals respond – can test defenses, and other measures such as penetration testing will become mandatory for organizations as regulations evolve. The key to gaining the upper hand in the generative AI era will be embedding security-by-design throughout the journey.
The personal touch
While security tools are critical, ultimately people are an important line of defense. Training programs play a central role in helping employees recognize and report suspicious communications, but they should also be encouraged to trust their instincts. Employees should always ask themselves: “Is this typical behavior of the sender? Is this a platform where they normally contact me? Would I normally verify my information this way?”
There are also cultural factors that underpin an organization’s defense – and that starts with companies prioritizing the way their people work and their wellbeing. Always-on, tired employees are more likely to click on suspicious links without thinking, so reducing alert fatigue and burnout also has cybersecurity benefits.
Just as there is a human behind the initial creation of a phishing attack, there is always a human recipient of a scam. The best defense always depends on the knowledge of an empowered employee who understands the risks and acts consciously. A healthy dose of human distrust, combined with a strong line of technology-enabled defenses, will put organizations on the right path to defending themselves against phishing attackers, without inadvertently damaging the reputation of other brands.