Meta, Snapchat and TikTok launch new Thrive mental health initiative, and it’s high time

Meta, Snapchat, and TikTok are finally joining forces to address the harmful effects of some of the content hosted on their platforms. And it’s about time.

In collaboration with the Mental Health Coalition, the three companies are using a program called Thrive, which is designed to flag and safely share information about harmful content, with a focus on content depicting or promoting suicide and self-harm.

A Meta blog post states: “Like many other types of potentially problematic content, suicide and self-harm content isn’t confined to one platform… That’s why we partnered with the Mental Health Coalition to launch Thrive, the first signal-sharing program to share signals about violating suicide and self-harm content.

“Through Thrive, participating tech companies can share signals about violating suicide or self-harm content so that other companies can investigate and take action when the same or similar content is shared on their platforms. Meta provides the technical infrastructure that supports Thrive… allowing signals to be shared safely.”

When a participating company like Meta discovers violating content on its platform, it shares hashes (anonymized digital fingerprints of the suicide or self-harm content) with the other tech companies, who can then search their own databases for the same content, since it often spreads across multiple platforms.
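Thrive’s exact mechanics haven’t been published beyond what Meta describes above, but the hash-sharing idea is straightforward to sketch. Below is a minimal illustration in Python, assuming a shared set of fingerprints as the exchange mechanism; every name in it is hypothetical. Note that a plain cryptographic hash like SHA-256 only matches byte-identical copies, and industry signal-sharing programs typically rely on perceptual hashes (such as Meta’s open-source PDQ) so that re-encoded or lightly edited copies still match.

```python
import hashlib
from pathlib import Path

# Hypothetical shared pool of fingerprints flagged by participating platforms.
# In Thrive, Meta provides the infrastructure through which signals are
# exchanged; the names and structures here are illustrative only.
SHARED_FLAGGED_HASHES: set[str] = set()

def fingerprint(media_path: Path) -> str:
    """Return an anonymized fingerprint (SHA-256 hex digest) of a media file.

    A cryptographic hash only catches exact copies; production systems
    generally use perceptual hashing to tolerate small edits.
    """
    return hashlib.sha256(media_path.read_bytes()).hexdigest()

def share_signal(media_path: Path) -> None:
    """Called when a platform confirms a piece of violating content."""
    SHARED_FLAGGED_HASHES.add(fingerprint(media_path))

def matches_known_signal(media_path: Path) -> bool:
    """Lets another platform check an upload against the shared signals."""
    return fingerprint(media_path) in SHARED_FLAGGED_HASHES
```

In a scheme like this, only the fingerprints cross company lines: the receiving platform learns enough to find matching uploads in its own database, without user data or the content itself being exchanged.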

Analysis: A good start


As long as platforms rely on users to upload their own content, there will be people who break the rules and spread harmful material online, whether that’s scammers selling fake courses, inappropriate content on channels aimed at children, or posts depicting or promoting suicide and self-harm. Accounts posting this kind of content are often adept at skirting the rules and flying under the radar to reach their target audience, and the content is frequently removed too late.

It’s good to see that social media platforms – which use elaborate algorithms and casino-like design to keep users hooked, automatically serving up more of whatever they engage with – are taking some responsibility and working together. This kind of ethical cooperation between the most popular social media apps is desperately needed, but it should be only the first step.

The problem with user-generated content is that it needs constant monitoring. AI can certainly help flag harmful content automatically, but some of it will still slip through: much of this material is nuanced, with subtext that a human somewhere in the chain needs to see and flag. I’ll be keeping a close eye on Meta, TikTok, and the rest as their policies on harmful content evolve.
