Meta, the owner of Facebook and Instagram, approved advertisements calling for harm against Palestinians, an investigation has found.
The ads, submitted to the company for publication on Facebook, were laced with hate speech and incitement to violence against both pro-Palestine individuals and Palestinians, says 7amleh – The Arab Center for the Advancement of Social Media, a nonprofit advocacy group.
The targeted ads went as far as calling for the killing of Palestinian children and the elderly, as well as their expulsion "from the West Bank to Jordan". They also featured dehumanising language and depicted Gazan children as radicalised "future terrorists".
A total of 19 ads containing inflammatory content against Arabs and Palestinians were submitted to Meta in Hebrew and Arabic, and all were immediately approved for publication on Facebook. The investigation was conducted in the context of the ongoing Israeli aggression against Gaza, which has, since October 7, killed more than 14,000 Palestinians, including over 5,000 children.
The test of Meta’s ad moderation practices was launched after 7amleh’s founder came across an ad on his Facebook feed openly calling for the assassination of pro-Palestine activist Paul Larudee. Facebook took down the ad after it was reported. An ad explicitly inciting the assassination of an individual violates Meta’s community guidelines, yet the platform failed to detect it.
The approval of these life-threatening ads lays bare the failure of Meta’s “automated and manual review enforcement mechanisms” for sponsored content. It also reinforces the long-standing claim, made time and again by critics of Big Tech and whistleblowers alike, that Meta profits from potentially harmful and illegal content because it drives high user engagement.
Judging by the large volume of complaints from around the world, Meta’s conduct during Israel’s ongoing violence in Gaza has been particularly biased against pro-Palestine users. Reports highlighting the suppression of content supporting Palestine and the takedown of pro-Palestinian accounts on both Facebook and Instagram have raised pointed questions about Meta’s moderation practices in the wake of the crisis.
Intensifying the scrutiny is Meta’s habit of dismissing its questionable actions against pro-Palestinian accounts and users as technical glitches. Whether labelling users who identify themselves as Palestinian as “terrorists” on Instagram, or depicting Palestinian children as radicalised caricatures wielding firearms in WhatsApp’s AI sticker generator, Meta has repeatedly ascribed these controversies to technical problems.
According to a September 2022 report by Business for Social Responsibility (BSR), commissioned by Meta itself, the company violated the digital rights of Palestinians, including suppressing their freedom of speech and expression by over-enforcing content moderation policies on Arabic posts.
Meta has also failed on multiple occasions to detect hate speech in ads submitted for publication on Facebook, especially in markets where political instability has given rise to civil unrest. Myanmar and Ethiopia are two prominent examples of countries where the company’s repeated failure to contain hate and incitement to violence against vulnerable groups has led to real-life harm.
The moderation problem is particularly acute for non-English languages, for which Meta appears to lack sufficient content moderators in the relevant markets. The 2021 Facebook Papers revealed that 85 per cent of Meta’s resources for tackling misinformation are allocated to the US alone, leaving the rest of the world to make do with the remaining 15 per cent.