Digital Rights Monitor

Facebook Continues To Approve Ads Inciting Violence: Investigation

By DRM
June 13, 2022
Photo: Archive

June 13, 2022 – Tech giant Meta and its flagship social networking platform, Facebook, have once again failed to detect hate speech in advertisements inciting violence, according to a recent investigation.

UK-based nonprofit groups Global Witness and Foxglove conducted a test to ascertain how well Facebook could detect hate speech in advertisements submitted to the platform.

The 12 text-based ads created by Global Witness and Foxglove contained hateful content, including explicit calls to violence, targeting the Amhara, the Oromo, and the Tigrayans, the three main ethnic groups in Ethiopia. The cache of internal documents leaked by whistleblower Frances Haugen in 2021 had also shown how the corporation’s ineffective moderation was “literally fanning ethnic violence” in the region. Despite its repeated claims of commitment to countering online hate speech, Facebook approved these ads for publication. The ads, however, were never actually published on the platform.

“It is absolutely unacceptable that Facebook continues to approve ads inciting genocide in the midst of a conflict that has already taken the lives of thousands of people and forced millions more to flee their homes,” said Ava Lee, Digital Threats to Democracy Team Lead at Global Witness. “Facebook falsely touts its ‘industry-leading’ safety and security measures to keep hate speech ads off its platform, but our investigation shows that there remains a glaring gap in Ethiopia, which the company claims is one of its highest priorities. The apparent lack of regard for the people using their platform has real-world deadly consequences.”

When notified by Global Witness, Facebook responded that the ads should not have been approved for publication. A week later, the group submitted two more ads with violent content written in Amharic, the most widely used language in Ethiopia. These, too, were approved for publication.

“We’ve invested heavily in safety measures in Ethiopia, adding more staff with local expertise and building our capacity to catch hateful and inflammatory content in the most widely spoken languages, including Amharic,” said Meta in an emailed statement. The company added that both people and machines can still make mistakes.

“We’ve picked out the worst cases we could think of,” said Rosie Sharpe, a campaigner at Global Witness. “The ones that ought to be the easiest for Facebook to detect. They weren’t coded language. They weren’t dog whistles. They were explicit statements saying that this type of person is not a human or these type of people should be starved to death.”

In March, Facebook failed a similar test run by Global Witness involving hate speech against the Rohingya people in Myanmar. Its systems did not detect the hateful and inflammatory content in those advertisements either and approved them for publication.

Meta’s ineffective content moderation has repeatedly raised questions about how the tech corporation handles hate speech and misinformation across its platforms. The 2021 Facebook Papers revealed that Facebook struggles with non-English languages, leaving the platform vulnerable to abuse and hate speech, especially in the Global South. About 87 percent of the company’s global budget for classifying misinformation was allocated to the US, leaving the remaining 13 percent for the rest of the world.

 

Tags: Facebook, hate speech, Meta, Violence
