August 1, 2022 – Facebook is more than just a social networking platform today; the influence it wields over political and social events around the globe is reshaping the world. One persistent problem for Facebook, which boasts nearly three billion monthly active users, is its failure to contain hateful and inflammatory content. Despite parent organisation Meta’s reassurances, content flouting the company’s own community guidelines circulates unchecked across the platform, whipping up one controversy after another.
Facebook has repeatedly approved advertisements containing hate speech and insensitive content, particularly in countries such as Myanmar, Bangladesh, India, and Ethiopia. Most recently, despite the risk of violence in Kenya, East Africa’s biggest economy, which is preparing for general elections, investigations have laid bare Meta’s continuing failure to detect hate speech in ads submitted to the platform for publication.
The 2017 Kenyan election, in the weeks leading up to which false information mounted on Facebook and WhatsApp, and the ongoing human rights crises stoked by communal divisions are a grim reminder of the region’s political volatility and the losses incurred in the course of it. As for Meta, the real-world consequences of inflammatory material posted across its platforms have already been documented around the world, with episodes of violence fuelled by religious, ethnic and political divisions in South Asian countries standing as glaring examples of Meta’s questionable and ineffective content moderation practices.
On July 20, Meta shared an update on how the company was preparing for the upcoming presidential election in Kenya, claiming to be investing in people and technology to reduce the spread of misinformation and take down harmful content across its platforms. The company further claimed that its team dedicated to the Kenyan election comprises experts who are deeply familiar with the region and will help Meta understand the local landscape better. With these experts’ help, Meta said, it will continue to respond to potential problems and abuse that might emerge in the country, where over 12 million people use Facebook.
“Our approach to content moderation in Africa is no different than anywhere else in the world,” Kojo Boakye, Meta’s director of public policy for Africa, the Middle East and Turkey, told The Washington Post. “We prioritise safety on our platforms and have taken aggressive steps to fight misinformation and harmful content.”
But recent investigations by UK-based advocacy groups Global Witness and Foxglove, conducted to test whether Facebook has improved its systems for detecting hate speech and other insensitive material submitted to the platform, paint an entirely different picture. The groups ran similar tests earlier this year with hate speech directed against the Rohingya people in Myanmar, and those tests likewise exposed Facebook’s failure to classify hate speech and remove it from the platform in a timely manner.
While previous tests involved ads containing life-threatening hateful content in non-English languages, including Amharic, the most widely spoken language in Ethiopia, Global Witness has now found Facebook incapable of detecting hate speech even in ads written in English, raising serious questions and concerns about Meta’s “super efficient AI models to detect hate speech”. Rights activists have time and again called Meta out for its failure to effectively regulate content across its platforms, especially in non-English-speaking countries, despite monumental annual profits and ample resources at its disposal.
In its latest investigation, Global Witness submitted 10 real-life examples of hate speech, in Kenya’s official languages of Swahili and English, to Facebook for approval. The purpose of the test was to ascertain whether Facebook could detect content inciting hatred and violence more reliably when it was written in English. All of the ads were approved by the platform, including one that was initially rejected for not complying with the platform’s Grammar and Profanity policy.
“All of the ads we submitted violate Facebook’s Community Standards, qualifying as hate speech and ethnic-based calls to violence,” revealed Global Witness. “Much of the speech was dehumanising, comparing specific tribal groups to animals and calling for rape, slaughter and beheading. We are deliberately not repeating the phrases used here as they are highly offensive.”
On Friday, Kenya’s National Cohesion and Integration Commission (NCIC) gave Facebook seven days to curb misinformation in connection with the presidential election scheduled for next week (August 9), warning that the social media platform’s operations would be suspended in case of failure. The NCIC stated that Global Witness’ report supported the commission’s own internal findings.
“Facebook is in violation of the laws of our country,” said NCIC commissioner Danvas Makori. “They have allowed themselves to be a vector of hate speech and incitement, misinformation and disinformation.”
But Kenya’s information minister, Joe Mucheru, tweeted on Saturday, “Media, including social media, will continue to enjoy press freedom in Kenya.” The tweet added that it was not clear what legal framework the NCIC planned to use to suspend Facebook. Threats to internet freedom, and violence perpetrated through online platforms, thus continue to loom amid rising tension as the election fast approaches.
In response to the NCIC’s warning, a Meta spokesperson said, “We’ve taken extensive steps to help us catch hate speech and inflammatory content in Kenya, and we’re intensifying these efforts ahead of the election.” However, the statement added that despite the company’s efforts to contain dangerous content, there will still be “examples of things we miss or we take down in error, as both machines and people make mistakes”.
This is not the first time Facebook has found itself in the midst of heated debate and scrutiny over misinformation and accountability during an election period in Kenya. Cambridge Analytica, the now-defunct British political consultancy, was exposed as having deployed the personal data of millions of Facebook users for targeted political advertising and the spread of false information during Kenya’s 2013 and 2017 elections. The consultancy is believed to have misused the improperly mined Facebook data to influence the 2016 US presidential election and the Brexit vote as well. Meta CEO Mark Zuckerberg and COO Sheryl Sandberg, who is stepping down from her position this fall after 14 years, are set to face hours-long depositions in a privacy lawsuit over Meta’s covert data-sharing practices with Cambridge Analytica.