September 23, 2022 – Facebook’s parent company Meta violated Palestinians’ freedom of expression and over-enforced content moderation policies on posts from Arabic-speaking users throughout the May 2021 crisis with Israel, according to a report released on Thursday.
The report, titled Human Rights Due Diligence of Meta’s Impacts in Israel and Palestine in May 2021, highlights the mistakes Meta made in moderating potentially harmful content on its platforms, such as Facebook and Instagram, during the two-week escalation. More than 200 Palestinians, including 66 children, were reportedly killed in the attacks.
Meta commissioned the report from the consulting firm Business for Social Responsibility (BSR) on the advice of its independent Oversight Board, to assess the impact of the company’s policies on Palestinians and Israelis. The report notes that, during the crisis, hate speech and incitement to violence on Meta’s platforms largely originated from users outside the region.
The assessment lays bare Meta’s lack of an effective moderation mechanism for non-English languages, a concern repeatedly raised by digital rights experts and critics of Big Tech. According to the report, Meta violated Palestinians’ freedom of speech by wrongly removing Arabic content that did not breach the company’s community standards. Proactive detection rates for potentially violating Arabic content were also higher than those for Hebrew, owing to the absence of a Hebrew “hostile speech” classifier.
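To make the mechanism concrete: in a system like the one BSR describes, content is flagged proactively only if an automated classifier exists for its language. The sketch below is a minimal, hypothetical Python illustration of that gap; the names (Post, CLASSIFIERS, proactive_review) and the threshold are invented for illustration, and this is not Meta’s actual system.

```python
# Hypothetical sketch of language-gated proactive detection.
# Not Meta's code: classifier names, thresholds, and structure are invented
# to illustrate why a missing Hebrew classifier lowers Hebrew detection rates.

from dataclasses import dataclass

@dataclass
class Post:
    text: str
    language: str  # e.g. "ar" (Arabic), "he" (Hebrew)

# Assumed: per-language models returning a hostility score in [0, 1].
# In May 2021 an Arabic classifier existed but a Hebrew one did not.
CLASSIFIERS = {
    "ar": lambda text: 0.9,  # stand-in for a trained Arabic model
    # "he": missing -- no Hebrew hostile speech classifier
}

FLAG_THRESHOLD = 0.8  # illustrative cutoff for automated action

def proactive_review(post: Post) -> str:
    classifier = CLASSIFIERS.get(post.language)
    if classifier is None:
        # No model for this language: the post is never flagged
        # proactively, regardless of its content.
        return "not scanned"
    score = classifier(post.text)
    return "flagged for review" if score >= FLAG_THRESHOLD else "allowed"

# An Arabic post is scored (and may be over-enforced); a Hebrew post
# with identical content is simply never scanned.
print(proactive_review(Post("...", "ar")))  # "flagged for review"
print(proactive_review(Post("...", "he")))  # "not scanned"
```

The asymmetry is structural: the Arabic pipeline can over-enforce, while equivalent Hebrew content is never evaluated at all.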
The “greater over-enforcement” of content in Arabic led to the removal of posts from Palestinians, which Meta said was the result of a “global technical glitch”. Several journalists reported that their WhatsApp accounts were blocked, which Meta again attributed to a technical error. In the weeks leading up to the violence, Meta had lost its “Hebrew-speaking FTEs and outsourced content moderators”.
Besides removing pro-Palestine content outright, the glitch reduced the visibility and engagement of posts supporting Palestine on Instagram, triggering a global backlash from human rights defenders, digital rights experts, and prominent celebrities who denounced Meta’s actions as discriminatory. Content moderation errors also included #AlAqsa (a reference to the Al-Aqsa Mosque) being added to Meta’s hashtag block list.
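The #AlAqsa error shows how blunt a hashtag block list can be: matching is typically an exact string lookup with no awareness of context, so a single bad entry suppresses every post using the tag. The snippet below is a hypothetical sketch of that behavior (BLOCKED_HASHTAGS, extract_hashtags, and is_suppressed are invented names), not Meta’s implementation.

```python
# Minimal, hypothetical sketch of exact-match hashtag blocking.
# Names and behavior are invented for illustration; this is not Meta's code.

import re

BLOCKED_HASHTAGS = {"#alaqsa"}  # one bad entry, as in the #AlAqsa error

def extract_hashtags(text: str) -> list[str]:
    # Grab #-prefixed tokens; real systems normalize far more aggressively.
    return [tag.lower() for tag in re.findall(r"#\w+", text)]

def is_suppressed(text: str) -> bool:
    # Exact-match lookup: context is never considered, so benign posts
    # about the Al-Aqsa Mosque are suppressed along with everything else.
    return any(tag in BLOCKED_HASHTAGS for tag in extract_hashtags(text))

print(is_suppressed("Praying at #AlAqsa today"))  # True -- over-enforcement
```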
While BSR did not identify intentional bias at Meta, the firm “did identify various instances of unintentional bias where Meta policy and practice, combined with broader external dynamics, does lead to different human rights impacts on Palestinian and Arabic speaking users”.
Meta, responding to BSR’s assessment, said it would work to improve its moderation of Hebrew content.
“BSR recommended that we continue work developing and deploying functioning machine learning classifiers in Hebrew,” said Miranda Sissons, Director of Human Rights at Meta. “We’ve committed to implementing this recommendation, and since May 2021 have launched a Hebrew ‘hostile speech’ classifier to help us proactively detect more violating Hebrew content. We believe this will significantly improve our capacity to handle situations like this, where we see major spikes in violating content.”