Bots – not every automation is there to help
For many, the mention of bots conjures up images of friendly website assistants eager to answer questions: obliging avatars programmed to make life easier.
But for those who specialize in cybersecurity, “How can I help you?” is one minor code change away from “How can I harm you?” In the hands of unscrupulous individuals, bots are increasingly being used for malicious gain. Their target? Any brand that transacts with customers through websites, APIs and mobile applications.
A uniquely exposed attack surface
Online commerce now serves more than 5 billion people worldwide; if it were a country, it would have the third largest gross domestic product (GDP) in the world at $6.3 trillion. These massive revenue streams are only possible because online businesses automate customer interactions at scale. An untold number of checkouts, logins, data requests, product searches, and more combine to drive the inexorable rise of digital businesses.
Unfortunately, threat actors have also noticed the value flowing through these interfaces.
Using malicious automation, threat actors compromise this exposed web attack surface. Attackers deploy bots with advanced, tailor-made capabilities, effectively equipping themselves with an army of fake website users that operate with extraordinary precision, speed, volume and stealth. This automated tooling allows attackers to abuse the underlying business logic, steal money and intellectual property, ultimately damaging the target company’s reputation and degrading website performance.
A bot for all reasons
Threat actors use bots to carry out a variety of attack techniques, the most disruptive of which are scalping, credential stuffing and scraping.
Scalping – Attackers unleash bots to swarm digital shelves. By rapidly buying up high-demand items such as event tickets and sneakers, they leave real customers empty-handed – then resell the goods en masse at inflated prices on secondary markets.
Credential stuffing – This technique targets the web attack surface with malicious automation to conduct volumetric identity attacks for fraud. Attackers bombard login and sign-up interfaces with stolen or synthetic credentials, ultimately gaining an illicit foothold in customer accounts or creating legions of fake identities for resale in dark corners of the internet.
Scraping – Unique content, pricing and inventory data residing on the web attack surface is scraped and extracted at scale by threat actors. Because malicious automations harvest an average of four months’ worth of IP before being detected, value is endlessly leached away.
Collapsing under the weight of these massive, automated volumetric attacks, websites are slow to recover, adding lost customers and infrastructure costs on top of what has already been stolen.
The impact of malicious automation is cumulative. A gradual, parasitic bleeding of financial, reputational and customer value that flies under the radar of traditional controls.
In total, this typically costs companies $85.6 million per year, dwarfing the average ransomware payment of $1.5 million.
Very real human impact
The impact on people is also cumulative. Research has shown that people, at the mercy of the scarcity created by large-scale scalping attacks, are willing to pay 13% more for goods and services – even if they fear being ripped off.
The normalization of bots also pushes some people into questionable behavior themselves. More than a quarter of under-35s admit to renting a bot to secure the goods and services they want, despite knowing they are operating in legally dubious territory. The result is a seemingly endless cycle of unethical behavior and fraud, amplified by technology and made easier by the distance of a keyboard.
An advanced solution for an advanced attack
The legality of bots is murky. Some, for example those that exploit stolen identities, are clearly illegal. Others operate in gray areas – breaching a website’s terms and conditions, for instance, without breaking the law.
Overall, official policy is still catching up. Some regulations – such as the Better Online Ticket Sales (BOTS) Act in the US and even EU laws that seek to limit and control the harms of AI – address some issues but provide only partial coverage.
For the brands under attack, mitigating the threat of malicious automation means overcoming a number of technical challenges. First, bot attacks span the entire web attack surface, requiring visibility into the massive volumes of traffic passing through websites, APIs and mobile applications. At this scale, major online brands struggle to detect sophisticated bots that use an arsenal of disguises to masquerade as real users. Older technologies fail under these conditions, either denying access to real customers or allowing bots to pass unchecked.
To effectively tackle the problem, strong regulation and technological innovation are needed. Driven by mounting consumer harm, forward-thinking politicians and lawmakers have realized the magnitude of the impact and are beginning to put pressure on perpetrators. Likewise, new technologies that can intelligently detect bots in massive data sets using machine learning are starting to gain trust among security teams.
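To illustrate what “detecting bots in massive data sets using machine learning” can look like in its simplest form, the sketch below scores per-client traffic with an unsupervised anomaly detector. The feature set (requests per minute, path diversity, login-failure ratio, header consistency) and the simulated data are hypothetical illustrations, not any vendor’s actual model or signals.

```python
# Minimal sketch: unsupervised bot detection over per-client traffic features.
# Features and data are simulated for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulate 10,000 "human" clients: modest request rates, varied browsing.
humans = np.column_stack([
    rng.normal(4, 2, 10_000).clip(0),     # requests per minute
    rng.normal(8, 3, 10_000).clip(1),     # distinct paths visited
    rng.beta(1, 20, 10_000),              # login-failure ratio
    rng.normal(0.9, 0.05, 10_000),        # header-consistency score
])

# Simulate 200 "bot" clients: high rate, narrow focus, many failed logins.
bots = np.column_stack([
    rng.normal(120, 30, 200).clip(0),
    rng.normal(2, 1, 200).clip(1),
    rng.beta(8, 2, 200),
    rng.normal(0.5, 0.1, 200),
])

X = np.vstack([humans, bots])

# Isolation Forest flags clients whose behaviour is statistically anomalous,
# without needing labeled examples of known bots.
model = IsolationForest(contamination=0.02, random_state=0)
labels = model.fit_predict(X)             # -1 = anomalous, 1 = normal

flagged = np.where(labels == -1)[0]
bot_indices = set(range(len(humans), len(X)))
caught = sum(1 for i in flagged if i in bot_indices)
print(f"Flagged {len(flagged)} clients; {caught}/{len(bots)} simulated bots caught")
```

The appeal of the unsupervised approach is that it does not require a labeled catalogue of known bots, which is why anomaly detection over behavioral signals features in many modern bot-management tools; production systems naturally combine far richer signals than this toy example.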
What would spur action, however, is greater awareness of the enormity of the problem. Bots are increasing exponentially in scale, speed and effectiveness. The question is: will we respond accordingly?
This article was produced as part of TechRadarPro’s Expert Insights channel, where we profile the best and brightest minds in today’s technology industry. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, you can read more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro