June 25, 2022

Facebook Continues To Approve Ads Inciting Violence: Investigation

June 13, 2022 – Tech giant Meta and its flagship social networking platform Facebook have once again failed to detect hate speech in advertisements inciting violence, according to a recent investigation.

UK-based nonprofit groups Global Witness and Foxglove conducted the test to ascertain how well Facebook detects hate speech in advertisements submitted to the platform.

The 12 text-based ads created by Global Witness and Foxglove contained violent, dehumanizing hate speech targeting the Amhara, Oromo, and Tigrayan peoples – three of Ethiopia's main ethnic groups. Despite its repeated claims of commitment to countering online hate speech, Facebook approved all of these ads for publication. The ads, however, were never actually published on the platform. The cache of internal documents leaked by whistleblower Frances Haugen in 2021 had already shown how the corporation's ineffective moderation was "literally fanning ethnic violence" in the region.

“It is absolutely unacceptable that Facebook continues to approve ads inciting genocide in the midst of a conflict that has already taken the lives of thousands of people and forced millions more to flee their homes,” said Ava Lee, Digital Threats to Democracy Team Lead at Global Witness. “Facebook falsely touts its ‘industry-leading’ safety and security measures to keep hate speech ads off its platform but our investigation shows that there remains a glaring gap in Ethiopia, which the company claims is one of its highest priorities. The apparent lack of regard for the people using their platform has real-world deadly consequences.”

When notified by Global Witness, Facebook responded that the ads should not have been approved for publication. A week later, the group submitted two more ads containing violent content written in Amharic, Ethiopia's most widely used language, and Facebook approved them again.

“We’ve invested heavily in safety measures in Ethiopia, adding more staff with local expertise and building our capacity to catch hateful and inflammatory content in the most widely spoken languages, including Amharic,” said Meta in an emailed statement. The company added, however, that both people and machines can still make mistakes.

“We’ve picked out the worst cases we could think of,” said Rosie Sharpe, a campaigner at Global Witness. “The ones that ought to be the easiest for Facebook to detect. They weren’t coded language. They weren’t dog whistles. They were explicit statements saying that this type of person is not a human or these type of people should be starved to death.”

In March, Facebook failed a similar test run by Global Witness involving hate speech against the Rohingya people in Myanmar: its systems again failed to detect the hateful and inflammatory content and approved the ads for publication.

Meta’s ineffective content moderation has repeatedly raised questions about how the tech corporation handles hate speech and misinformation across its platforms. The 2021 Facebook Papers revealed that Facebook struggles with non-English languages, leaving the platform vulnerable to abuse and hate speech, especially in the Global South. About 87 percent of the company’s global budget for classifying misinformation was allocated to the US, leaving just 13 percent for the rest of the world.

 
