Digital Rights Monitor

MMfD expresses concern over Facebook’s continuing failure to detect hate speech in ads

by DRM
August 1, 2022

Photo: DRM Archives

August 1, 2022, Islamabad – Media Matters for Democracy (MMfD) expresses grave concern over Facebook’s continuing failure to detect hate speech in advertisements submitted to the platform for publication. This failure not only speaks volumes about the negligence of Facebook’s parent company, Meta, towards non-English-speaking markets, but also exacerbates the situation in countries grappling with political volatility and ethnic tensions, laying bare the fact that Big Tech continues to ignore the implications of its mounting influence in these regions.

We find Meta’s indifference, inadequate moderation mechanisms, and failure to dedicate the resources needed to regulate content in a timely and effective manner extremely detrimental to vulnerable groups around the world that are already at risk of violence. A recent investigation by an international rights group into the tech giant’s ability to filter out advertisements containing harmful content is a glaring example of how Meta continues to ignore political and ethnic sensitivities in countries like Kenya, where it has repeatedly failed to curb violence promoted through its social-networking platforms.

Meta’s approval of ads laced with violent hate speech and words openly calling for ethnic cleansing only shows its continued disregard and inaction towards content that leads to real-world harm. Beyond inflammatory material in regional languages, Facebook’s failure to catch hate speech in ads even in English raises questions about what Meta calls its “super efficient AI models to detect hate speech”.

The fact that this is the third time this year that Facebook has failed a test of its content regulation is deeply alarming and must not be ignored. In March, Facebook failed a similar test involving hate speech against the Rohingya people in Myanmar, where the platform has been weaponised to target minorities. The advertisements, containing hateful and divisive content, went undetected by Facebook’s systems and were eventually approved for publication.

Later in June, Facebook’s inability to classify and reject life-threatening, dehumanising, and hateful content surfaced again when it approved similar advertisements targeting vulnerable ethnic groups in Ethiopia. Despite having been notified of its repeated failure to detect content violating its own policies, Facebook went on to approve more hateful advertisements, only proving that its moderation systems are not equipped for non-English languages. Violence resulting from hate speech perpetuated through Facebook has also been witnessed in Asian countries, including India, Sri Lanka, Bangladesh, and the Philippines.

Meta’s blatant disregard towards developing countries was publicly exposed in 2021 by one of its former employees, who accused the company of putting profits before public good. The cache of internal documents made public to support these claims also revealed how Meta was aware of the harm caused by its social networking products, including Instagram, and that the company was deliberately choosing not to act in order to gain wider exposure and greater profits. 

We demand that Meta acknowledge its damaging role in the politics of non-English-speaking countries prone to instability, and take special measures to tackle hate speech on its platforms, given its rapid expansion into foreign markets and lofty annual profits. For a social media giant with billions of users and every possible resource at its disposal, Meta can reasonably be expected to invest in mechanisms that detect and remove hate speech from its platforms before it results in real-world harm.

Tags: Facebook, hate speech, Meta

About Digital Rights Monitor

This website reports on digital rights and internet governance issues in Pakistan and collates related resources and publications. The site is a part of Media Matters for Democracy’s Report Digital Rights initiative that aims to improve reporting on digital rights issues through engagement with media outlets and journalists.

About Media Matters for Democracy

Media Matters for Democracy is a Pakistan-based not-for-profit working on independent journalism and media and digital rights advocacy. Founded by a group of journalists, MMfD promotes innovation in media and journalism through the use of technology, research, and advocacy on media and internet-related issues. MMfD works to ensure that expression and information rights and freedoms are protected in Pakistan.
