MENLO PARK, California — A quasi-independent review board is recommending that Facebook parent company Meta overturn two decisions it made this fall to remove posts “that inform the world about human suffering on both sides” of the war between Israel and Hamas.
In both cases, Meta eventually restored the posts on its own, one featuring Palestinian victims and the other an Israeli hostage, though it added warning screens to both because of the violent content. Because the posts were already restored, the company is not required to take any further action in response to the board's ruling.
That said, the board also said it disagreed with Meta's decision to bar the posts in question from being recommended by Facebook and Instagram, “even in cases where it had determined posts were intended to raise awareness.” And it said Meta's use of automated tools to remove “potentially harmful” content increased the chance of removing “valuable posts” that not only raise awareness about the conflict but may also contain evidence of human rights violations. It urged the company to preserve such content.
The Oversight Board, established three years ago by Meta, issued its decisions on Tuesday in what it said were its first expedited rulings, taking 12 days instead of the usual 90.
In one case, the board said, Instagram removed a video showing what appears to be the aftermath of a strike on or near Al-Shifa Hospital in Gaza City. The post shows Palestinians, including children, injured or killed. Meta's automated systems removed the post for violating its rules against violent and graphic content. Although Meta eventually reversed its decision, the board said, the company placed a warning screen on the post and demoted it, meaning it was not recommended to users and fewer people saw it. The board said it disagreed with the decision to demote the video.
The other case involves a video posted on Facebook of an Israeli woman pleading with her captors not to kill her as she was being held hostage during the Hamas attacks on Israel on October 7.
Users appealed Meta's decisions to remove the posts, and the cases went to the Oversight Board. The board said that in the weeks after October 7, it saw a nearly threefold increase in the daily average of appeals from users related to the Middle East and North Africa region.
Meta said it welcomed the board's decision.
“Both expression and safety are important to us and the people who use our services. The board overturned Meta's original decision to take this content down but approved of the subsequent decision to restore it with a warning screen. Meta had previously reinstated this content, so no further action will be taken on it,” the company said. “There will be no further updates to this case, as the board did not make any recommendations as part of its decision.”
In a briefing on the cases, the board said Meta had confirmed it temporarily lowered the thresholds for automated tools to detect and remove potentially violating content.
“While this reduced the likelihood of harmful content, it also increased the likelihood of valuable, non-violating content being mistakenly removed from its platforms,” the Oversight Board said, adding that as of Dec. 11, Meta had not restored the thresholds to their pre-Oct. 7 levels.
Meta, then called Facebook, launched the Oversight Board in 2020 in response to criticism that it did not act quickly enough to remove misinformation, hate speech and influence campaigns from its platforms. The board has 22 members, a multinational group that includes lawyers, human rights experts and journalists.
The board's decisions, as in these two cases, are binding, but its broader policy findings are advisory, and Meta is not obligated to follow them.