Dangerous online content is ubiquitous in the EU – and private chats are not the culprit
More than 1,700 websites in the EU may contain unreported child sexual abuse material (CSAM), a new report shows.
These worrying findings come from a recent study conducted by experts at Surfshark. Researchers also looked at the issue on a global scale and noted an increase in the number of CSAM reports filed with authorities. Between 2020 and 2022, approximately 83 million reports were filed, with EU countries accounting for 3.1 million of them.
This investigation comes a few days after a group of tech companies – including Surfshark – wrote an open letter urging EU ministers to withdraw a proposed anti-CSAM regulation that would allow authorities to scan all citizens’ private communications for dangerous content. By looking at the currently available technical solutions, the VPN service provider aims to ask critical questions about how to tackle this growing problem without infringing on people’s privacy.
Children’s online safety is at risk
“There could be thousands of unreported websites containing CSAM at any given time. Our research estimates as many as 1,720 websites in the EU alone. It’s scary to think how many CSAM-enabled websites are currently live in the rest of the world and have not yet been reported,” Surfshark spokesperson Lina Survila said when commenting on the findings.
As we mentioned, Surfshark researchers have investigated the scale of online child exploitation in the EU and around the world.
In Europe, Poland seems to have the biggest CSAM problem, as the country may be responsible for 16% of cases in the EU (269 unreported local malicious websites). France follows with 260 potentially dangerous websites, then Germany with 158, Hungary with 152, and Italy with 110.
Across the world, Asia is the top concern for children’s online safety, accounting for two-thirds of the 83 million CSAM reports filed between 2020 and 2022. According to the researchers, India accounts for almost 16% of these reports (over 13 million), followed by the Philippines with 7.1 million, Pakistan with 5.4 million, and Indonesia and Bangladesh with 4.7 million each.
To compile this worrying set of data, researchers used open-source information from the 2020-2022 period published by the National Center for Missing and Exploited Children (NCMEC), the US organization to which Big Tech companies are legally required to report such cases. These sources were then compared with data reported by the Communications Regulatory Authority of Lithuania (RRT). You can see more details about Surfshark’s methodology here.
Technical innovation for privacy-protecting solutions
Perhaps the most important part of Surfshark’s research lies in the RRT findings. In 2022, the national regulator conducted an experiment in collaboration with proxy service provider Oxylabs to demonstrate how new technology can help combat CSAM while protecting privacy.
The company developed an AI-powered tool that can search the internet to identify illegal content. It analyzes image metadata and checks for matches in the police database; the images then pass through a machine learning model trained to detect pornographic material.
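The report doesn’t detail the tool’s internals, but the two-stage flow described above – a fingerprint check against a known database, followed by an ML classifier – can be sketched roughly as follows. The hash set, the classify_image() stub, and the threshold are placeholders for illustration, not Oxylabs’ actual implementation.

```python
# Minimal sketch of a two-stage image-scanning pipeline, assuming the flow
# described in the article. NOT Oxylabs' actual code.
import hashlib
from typing import Iterable

# Hypothetical set of fingerprints of known illegal images; in practice this
# role is played by a law-enforcement database.
KNOWN_ILLEGAL_HASHES: set[str] = set()

def sha256_of(image_bytes: bytes) -> str:
    """Exact-match fingerprint; real systems typically also use perceptual hashes."""
    return hashlib.sha256(image_bytes).hexdigest()

def classify_image(image_bytes: bytes) -> float:
    """Placeholder for an ML model returning the probability that an image
    contains pornographic material; a trained model would be plugged in here."""
    return 0.0

def scan_images(images: Iterable[bytes], threshold: float = 0.9) -> list[int]:
    """Return indices of images flagged either by a database match or by the model."""
    flagged = []
    for i, data in enumerate(images):
        if sha256_of(data) in KNOWN_ILLEGAL_HASHES or classify_image(data) >= threshold:
            flagged.append(i)
    return flagged
```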
The pro bono project lasted two months and scanned approximately 300,000 Lithuanian websites. The tool identified 19 local websites that violated national or EU law, resulting in eight police reports and two preliminary investigations.
According to Survila, Oxylabs’ experiment should serve as an example of how technological innovation can support authorities’ efforts to stop child sexual abuse online. She told me, “While there is no one-size-fits-all solution, the proactive steps taken by some governments can serve as a guiding model for others in tackling these complex challenges.”
Did you know?
The EU Parliament reached a historic agreement in October last year, requesting the removal of the Chat Control clause from the proposal for scanning child sexual abuse material in the EU. The decision reiterates that privacy is a fundamental right and aims to ensure online security and encryption. However, it is now up to each EU member state to agree on its own position. Ministers expect to reach an agreement in March.
The so-called Chat Control proposal appears to be taking a completely different direction, which experts warn could be detrimental to the safety of citizens.
They emphasized that client-side scanning of chats is not only an attack on encryption that invades people’s privacy, but that it can also open a backdoor that criminals could exploit.
Even if this invasive approach is believed to address broader online dangers, “an individual’s right to privacy should not be negotiated, nor should such laws even be considered before every other possible tool is deployed to prevent abuse online,” said Survila.
She believes the first step for governments should be to try less invasive tools like web scraping to identify and combat publicly available dangerous material.
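Purely to illustrate what “web scraping” means in this context – collecting image URLs from publicly available pages so they can be fed into a scanning pipeline like the sketch above – here is a bare-bones example using only the Python standard library; the target URL is a placeholder.

```python
# Illustrative only: gather <img> URLs from a publicly accessible page.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class ImageLinkCollector(HTMLParser):
    """Collects the src attribute of every <img> tag on a page."""
    def __init__(self, base_url: str):
        super().__init__()
        self.base_url = base_url
        self.image_urls: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            for name, value in attrs:
                if name == "src" and value:
                    self.image_urls.append(urljoin(self.base_url, value))

def collect_image_urls(page_url: str) -> list[str]:
    with urlopen(page_url) as response:  # fetch a publicly available page
        html = response.read().decode("utf-8", errors="replace")
    parser = ImageLinkCollector(page_url)
    parser.feed(html)
    return parser.image_urls

# Example (placeholder URL):
# print(collect_image_urls("https://example.com"))
```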
Denas Grybauskas, head of legal affairs at Oxylabs, believes that the European Commission (EC) understands that such an invasion of citizens’ privacy should only be allowed as a last resort. However, he still considers it crucial to discuss technology-driven alternatives in more detail.
“I hope that examples like the Oxylabs pro bono project and a wider range of technology options will be openly discussed by the EC as it comes up with regulations that could potentially harm the privacy of all EU citizens,” he told me.
In the meantime, he says, the Oxylabs team will continue to work with RRT to improve the existing AI-powered web scraping tool. The company is also pursuing further initiatives with organizations, students, and researchers to develop more software solutions for today’s online threats.
On this point, Grybauskas said, “We are always open to new partnerships with researchers, academia, and public organizations looking to solve critical research questions and missions using public web data.”