Twitter ‘failed to pick up on 40 child porn images that had already been flagged as harmful’

Twitter failed to pick up 40 child porn images already flagged as harmful over a two-month period, Stanford research group claims

  • The Stanford Internet Observatory analyzed 100,000 tweets in two months
  • They claim to have found 40 images, all of which matched PhotoDNA – a hash database used to identify known child abuse imagery

Twitter failed to pick up 40 child sexual abuse images over a two-month period this year, according to a new report from the Stanford Internet Observatory (SIO).

The group, which monitors security on the internet and specifically on social media, found the images between March and May.

The photos were discovered among a trove of 100,000 tweets that were analyzed, and all of them already appeared in hash databases such as PhotoDNA, which companies and organizations use to screen for such content, according to The Wall Street Journal.
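The screening described here works by comparing a digital fingerprint (hash) of each uploaded image against a database of fingerprints of already-identified abuse material. A minimal sketch in Python, using an ordinary cryptographic hash as a hypothetical stand-in for PhotoDNA's proprietary perceptual-hash technology:

```python
import hashlib

# Hypothetical stand-in: PhotoDNA is a proprietary perceptual-hash system
# licensed by Microsoft, so a plain SHA-256 digest is used here purely to
# illustrate the workflow of matching uploads against a known-bad hash set.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"example-flagged-image-bytes").hexdigest(),
}

def is_flagged(image_bytes: bytes) -> bool:
    """Return True if the image's digest appears in the known-bad set."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_BAD_HASHES

# Screening uploads before publication:
print(is_flagged(b"example-flagged-image-bytes"))  # True: already in the database
print(is_flagged(b"benign-photo-bytes"))           # False: no match
```

In practice a perceptual hash is used instead of a cryptographic one, so that resized or lightly edited copies of a known image still match; the report's point is that Twitter failed even this basic check against previously flagged material.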

Twitter CEO Elon Musk has not commented on the claims in the new Stanford Internet Observatory report

“This is one of the most basic things you can do to prevent CSAM online, and it didn’t seem to work,” said the Observatory’s David Thiel.

Twitter has not yet responded to the claims.

The Observatory and its researchers fear that monitoring of Twitter's practices will be limited this year now that access to its application programming interface (API) costs $42,000 per month. Previously, access was free.

Musk has targeted the group in the past, calling it a left-wing “propaganda machine.”

The group was involved in flagging what it perceived as disinformation during the 2020 election on Twitter.

The group's involvement in the removal of some tweets came to light in the Twitter Files, documents Musk released to show the public how biased the site was before he took over, and how closely it was connected to the government.

The SIO report claims that part of the problem is that Twitter allows some adult nudity, making it more difficult to identify and ban child sexual abuse material.

According to the report, researchers told Twitter bosses in April that its systems for identifying malicious content were failing, but nothing was done until the end of May.

The Twitter findings were part of a larger project that the group says will be made public in The Wall Street Journal later this week.

In February, Twitter announced it had suspended 404,000 malicious accounts in the month of January alone, an increase of 112 percent.
