Facebook and Instagram used “aggressive tactics” targeting children, lawsuit claims

Meta knowingly used “aggressive tactics” that got children addicted to social media “in the name of growth,” according to a lawsuit alleging that children have suffered at the hands of Facebook and Instagram.

A Meta software engineer claimed that “it’s no secret” how Facebook and Instagram used meticulous algorithms to promote repeated and compulsive use among minors, regardless of whether the content was harmful — and that he was “quite unashamed about it.”

The disclosures were originally redacted in the lawsuit against Meta, but have since been unsealed and obtained by DailyMail.com.

Despite CEO Mark Zuckerberg publicly saying that claims his company prioritizes profit over safety and well-being are simply “not true,” the files allege the company knew about child sexual exploitation on both platforms and claim “Meta’s engagement-based algorithm exploited extreme content to drive more engagement,” the document reads.

The document states that 20 percent of nine- to 13-year-old users on Facebook and Instagram have had a sexual experience with an adult on the sites.

This is despite Meta’s “zero-tolerance policy prohibiting abuses such as child exploitation.”

DailyMail.com has obtained an unredacted version of a lawsuit against Meta brought by parents who allege children have suffered at the hands of its platforms

DailyMail.com has reached out to Meta, which has not commented on specific questions.

A spokesperson for the court-appointed lead plaintiffs’ counsel told DailyMail.com: “These never-before-seen documents show that social media companies are treating the youth mental health crisis as a public relations issue rather than a pressing social issue caused by their products.

“This includes burying internal research documenting these harms, blocking security measures because they reduce ‘engagement,’ and defunding teams focused on protecting young people’s mental health.”

More than a third of 13- to 17-year-olds report using one of the defendants’ apps “almost constantly” and admit that this is “too much,” say the parents involved in the lawsuit, which was filed Feb. 14 in California.

The complaints, which were later consolidated into several class actions, alleged that Meta’s social media platforms were designed to be dangerously addictive, causing children and teens to consume content that increases the risk of sleep disorders, eating disorders, depression and suicide.

The case also argues that teens and children are more vulnerable to the adverse effects of social media.

The unredacted version was released on March 10.

It states that Thorn, an international anti-trafficking organization, published a report in 2021 on the issues of sexual exploitation on Facebook and Instagram and “provided these insights to Meta.”

Thorn’s report shows that “neither blocking nor reporting [offenders] protects minors from continued harassment,” and 55 percent of participants in the report who had blocked or reported someone online said they were contacted again.

And younger boys are especially at risk from predators.

The unsealed complaint also alleges that 80 percent of “offending adult/minor connections” on Facebook resulted from the platform’s “People You May Know” feature.

The files allege that the company knew about child sexual exploitation on Facebook and Instagram and claim that ‘Meta’s engagement-based algorithm exploited extreme content to drive more engagement’

“An internal investigation conducted in or about June 2020 concluded that 500,000 underage Instagram accounts ‘receive’ ‘IIC’ — which stands for ‘improper interactions with children’ — on a daily basis,” reads a redacted statement on pages 135 and 136 of the document.

“Yet, at the time, ‘Child Safety [was] explicitly named as a nongoal. . . . So if we do something here, cool. But if we can’t do anything at all, that’s fine too.’”

Meta has since improved its ability to reduce inappropriate adult-young interactions.

The company has built technology that allows it to find accounts that may have engaged in suspicious behavior and prevent those accounts from coming into contact with youth accounts.

And Meta claims it doesn’t show young people’s accounts to these adults when they scroll through the list of people who have liked a post or when they look at an account’s followers or following list.

However, these changes were made after 2020.

The complaint also states that Meta had considered making teen user profiles “private” by default as early as July 2020, but decided against the move after weighing “security, privacy, and policy wins” against “growth impact.”

On page 135 of the lawsuit, a redacted section claims that Meta knew that allowing adults to contact children on Instagram “makes Apple so angry it threatens to remove us from the App Store,” and that the company had no timeline for “when we will prevent adults from messaging minors in IG Direct.”

“It remained that way even after Meta received reports that a 12-year-old minor solicited on its platform was ‘[the] daughter of [an] Apple Security Exec,’” the statement continued.

However, Meta moved to make teen user accounts private by default in November 2022.

A Meta spokesperson told DailyMail.com: “The claim that we have stopped work to support people’s well-being is false.”

The redacted version of the complaint reads that instead of “tak[ing] [this] seriously” and “launching new tools” to protect children, Meta did the opposite.

At the end of 2019, Meta’s “mental health team” ceased to do things, was “defunded,” and “stopped completely.” And, as noted, Meta allowed safety tools it knew were broken to be held out as fixes.

A Meta spokesperson told DailyMail.com that because this is a top priority for the company, “we’ve actually increased funding, as evidenced by the more than 30 tools we offer to support teens and families. Today, there are hundreds of employees across the company working to build features for this purpose.”

Other “shocking” information in the unsealed complaint notes the existence of Meta’s “rabbit hole project.”

“Someone who feels bad sees content that makes them feel bad, engages with it, and then their IG is flooded w[ith] it,” reads the unredacted version.

Meta acknowledges that Instagram users who are at risk of suicide or self-harm are more likely to “encounter more harmful suicide and self-harm content (through surveys, related follower suggestions).”

The document cites Molly Russell, a London teenager who committed suicide in 2017.

“Meta had done internal research that warned there was a risk of ‘similar incidents like Molly Russell’ because algorithmic product features were ‘[l]eading users to disturbing content,’” reads page 84 of the document.

“Our recommendation algorithms will push you down a rabbit hole of more blatant content.”

“They have been clear about possible solutions: targeted changes to the algorithm do lead to a ‘meaningful reduction in exposure’ to problematic content.

“But they have resisted making changes, for the explicit, profit-driven reason that such adjustments “had a clear engagement cost.”

The lawsuit alleges that Meta’s repeated public emphasis on the importance of child safety was never serious and was just “all theater.”

“Our data as it appears now is incorrect. . . . We share bad statistics externally. . . we vouch for these numbers,” an employee is quoted as saying in the document.
