July 15, 2022 – Meta Platforms, Inc., the parent organisation of social media platforms Facebook, Instagram and WhatsApp, has released its first annual report detailing the global human rights impacts of its products and policies, along with the company's efforts to tackle growing online challenges such as misinformation and hate speech across its platforms.
The report, released on Thursday, covers 2020 and 2021 and details Meta's overall approach to managing human rights risk, according to an official statement.
“The report includes insights and actions from our human rights due diligence on products, countries and responses to emerging crises,” states Meta. “It also discusses our work protecting the privacy of journalists and human rights defenders, increasing youth safety on Instagram, fighting exploitation across our apps and protecting free and fair elections around the world.”
Foley Hoag, the law firm commissioned to conduct the human rights impact assessment of India, notes in its summary the potential for "salient human rights risks", including "advocacy of hatred that incites hostility, discrimination, or violence" involving Meta's platforms. The assessment, however, did not investigate the "accusations of bias in content moderation".
Meta's controversial and discriminatory approaches to content moderation have repeatedly raised questions about how the tech corporation handles hate speech and misinformation across its social media platforms. The 2021 Facebook Papers revealed that Facebook struggles to moderate content in non-English languages, leaving the platform vulnerable to abuse and hate speech, especially in the Global South. About 87 percent of the company's global budget for classifying misinformation was allocated to the US, leaving just 13 percent for the rest of the world.
Meta is frequently called out for its failure to contain the spread of inflammatory content, particularly hate speech and misinformation, which has caused serious harm in countries such as India (Meta's largest market by number of users), Myanmar and Ethiopia. Recently, UK-based nonprofit groups Global Witness and Foxglove tested how well Facebook could detect hate speech in advertisements submitted to the platform; the company once again failed, approving ads that incited violence. Earlier this year, Facebook failed a similar test run by Global Witness involving hate speech against the Rohingya people in Myanmar, with its systems approving hateful and inflammatory advertisements for publication.
Meta's summary of the India assessment, however, has been called an attempt to "whitewash" the law firm's findings by Ratik Asokan, a representative of India Civil Watch International who participated in the assessment.
“It’s as clear evidence as you can get that they’re very uncomfortable with the information that’s in that report,” said Asokan. “At least show the courage to release the executive summary so we can see what the independent law firm has said.”
Similarly, Human Rights Watch researcher Deborah Brown called the summary "selective" and remarked that it "brings us no closer" to understanding Meta's role in the spread of hate speech in India, or the commitments the company will make to address the problem.
In addition to being criticised for placing profit over the public good, Meta has faced accusations of narrowing the scope of its human rights impact assessment and delaying its completion. The corporation has also been called out for failing to act against violence fuelled by ethnic, political and religious intolerance across its platforms, particularly in India and Myanmar.