According to a report from the Financial Times, Google has been working on a tool that can moderate extremist material for smaller companies, such as start-ups, that may not have the resources to do so themselves.
The internal project is being built by Google’s Jigsaw division, the unit tasked with challenging threats to open societies, in partnership with the UN-backed Tech Against Terrorism initiative.
Google says the initiative is designed to help moderators find and remove potentially illegal content, including racist and other hateful comments, from a website.
Google anti-terrorism
The project was made possible by a database of terrorist content provided by the Global Internet Forum to Counter Terrorism, founded by a collection of tech giants including Google, Meta, Microsoft and Twitter.
It’s specifically designed to support smaller businesses that can’t afford the resources needed for effective moderation, whether it’s large teams of employees or expensive AI tools.
The tool is likely to prove valuable at a time when extremists banned from major networks are turning to smaller platforms to express their views. It could also serve as a protective measure for companies responding to the EU’s Digital Services Act and the upcoming UK online safety law, both of which will impose fines on companies that fail to remove such content.
For now, it looks like the tool will operate on an opt-in basis, meaning platforms that are willing to host such material in the first place can simply decline to use it, even at the risk of fines.
It is believed that two (unnamed) companies will test the tool later this year, suggesting a full rollout is still some way off.
Elsewhere, Meta has rolled out its own tool, called Hasher-Matcher-Actioner (HMA). Like Jigsaw’s project, it is designed to prevent the spread of hateful content, and it builds on the platform’s existing video and photo moderation tools.
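At its core, a hash-matching pipeline like HMA fingerprints each uploaded item and compares that fingerprint against a shared list of hashes of known-harmful content, then acts on any match. The sketch below is a minimal illustration of that idea, not Meta's or Jigsaw's actual code: it uses an exact SHA-256 match for simplicity (production systems typically use perceptual hashes that survive re-encoding and cropping), and the function and variable names are hypothetical.

```python
import hashlib

# Hypothetical shared database of hashes of known-harmful items,
# standing in for the hash lists that industry groups share.
# This entry is the SHA-256 digest of the bytes b"test".
KNOWN_HARMFUL_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(content: bytes) -> str:
    """Hasher: return a SHA-256 hex digest identifying the content exactly."""
    return hashlib.sha256(content).hexdigest()

def moderate(content: bytes) -> str:
    """Matcher + actioner: look the fingerprint up and decide what to do."""
    if fingerprint(content) in KNOWN_HARMFUL_HASHES:
        return "remove"   # matched a known item: take it down for review
    return "allow"        # no match: publish normally
```

The appeal of this design for small platforms is that the expensive part, deciding which content is harmful, is done once and shared as a hash list, while each platform only runs cheap lookups.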