Give Meta my facial recognition data? I’d rather lose my Instagram account

Meta just announced plans to bring facial recognition technology back to Facebook and Instagram. This time, it’s framed as a security measure to help combat ‘celeb-bait’ scams and to restore access to compromised accounts.

“We know about security issues, and that includes being able to monitor your social media accounts and protect yourself from scams,” the Big Tech giant wrote in a blog post published on Monday, October 21.

Meta wants to use facial recognition technology to detect scammers who abuse images of public figures to run their attacks. The company plans to compare faces in suspicious ads or accounts with the legitimate profile photos of the celebrities involved. Facial recognition will also let regular Facebook and Instagram users regain access to their own accounts if they have been locked out or hijacked: they can verify their identity with a video selfie, which is then compared with their profile photos. Convenient, sure, but can I trust Meta with my biometrics?

The Big Tech giant promises to take a “responsible approach”, including encrypting video selfies for secure storage, deleting facial data as soon as it is no longer needed, and not using that data for any other purpose. But given Meta’s track record of protecting, and misusing, its users’ information, I’m concerned.

Facebook’s parent company has repeatedly violated the privacy and trust of its users in the past.

The 2018 Cambridge Analytica scandal was probably the turning point. It revealed how the personal information of as many as 87 million Facebook users was misused for targeted political advertising, most notably during Donald Trump’s 2016 presidential campaign.

The company has since made significant changes around user data protection, but Meta’s privacy violations have continued.

This year alone, Meta admitted that it had used all public Australian Facebook posts dating back to 2007 to train its AI models, without offering users the option to opt out. The company was also handed a hefty fine (91 million euros) in Europe for storing social media account passwords in unencrypted databases. The year before, in January 2023, Meta received an even larger fine (390 million euros) for serving personalized ads without an opt-out and for unlawful data-processing practices.

It’s certainly enough to make me skeptical of Meta’s good intentions and big promises.

It is also worth noting that Meta itself decided to shut down its previous facial recognition system in 2021 over privacy concerns, promising to delete all of the collected “faceprints”. Now, three years later, the technology is back on the agenda.

“We want to help protect people and their accounts,” Meta wrote in its official announcement, “and while the adversarial nature of this space means we won’t always get it right, we believe facial recognition technology can help us move faster, more accurately and more effectively. We will continue to discuss our ongoing investments in this area with regulators, policymakers and other experts.”

“We won’t always get it right.” That’s not very reassuring. Does that mean something will inevitably go wrong at some point? If so, no thanks, Meta: I don’t trust you with my biometrics. I’d rather lose my Facebook or Instagram account. What is the point of solving one problem only to create an even bigger one?

One thing is certain: Mark Zuckerberg won’t have to lose any sleep over EU fines on this front for the time being. Meta’s facial recognition tests are not being conducted worldwide; the company has excluded the UK and the EU, where the GDPR imposes strict privacy rules on personal data.

Elsewhere, Meta’s testing will ultimately reveal whether the new security feature is the right answer to the growing problem of social media scams, or just another privacy nightmare in the making. In the name of my privacy, I’m not sure it’s worth finding out.