Meta, the owner of leading social media platforms Facebook and Instagram, contributed significantly to ethnic violence in Ethiopia, according to a new report by Amnesty International.
The report, titled “A Death Sentence for my Father: Meta’s Contribution to Human Rights Abuses in Northern Ethiopia”, sheds light on human rights violations that Meta’s platforms may have triggered or exacerbated in the region, according to the organisation’s press release. It focuses on Meta’s algorithmic systems and business practices, which have repeatedly raised concerns against the backdrop of ethnic volatility in Ethiopia.
“Three years after its shocking failures in Myanmar, Meta has once again, through its content-shaping algorithms and data-driven business model, contributed to serious human rights abuses,” said Agnès Callamard, Secretary General of Amnesty International.
The report looks at Meta’s failures to contain dangerous and harmful material on its social media platforms, including hate speech and disinformation. The company, which has received repeated warnings from Ethiopian civil society and human rights advocates, failed to take adequate and effective measures to tackle harmful material targeting vulnerable groups, particularly the Tigrayan community.
Meta was notified about the dangers it posed to Tigrayans both before and after the November 2020 clashes, which saw thousands of Ethiopians fleeing the region and taking refuge in Sudan. The company’s failure, whether through negligence or unwillingness, to moderate critically harmful content ultimately compromised the physical safety of members of the Tigrayan community.
The concerns stem from the exploitation of Meta’s platforms to carry out coordinated, hate-based disinformation campaigns against targeted communities. Facebook, in particular, has reportedly been weaponised to spread misleading information and incite violence, owing to the lack of sufficient moderation resources in the region.
Amnesty International’s investigation reveals that Meta’s algorithms and business model, which prioritise user engagement, resulted in a disproportionate amplification and dissemination of “dehumanizing, factually inaccurate, and ethnically targeted” material against the Tigrayan community.
One specific case highlighted in the investigation concerns a university professor named Meareg Amare. In November 2021, Meareg was targeted on Facebook, where his personally identifiable information, including his name, photo, place of work, and home address, was openly published. This information was accompanied by false accusations that he supported the Tigray People’s Liberation Front (TPLF), which the Ethiopian government designated a terrorist organisation from 2021 to 2023.
A few days later, Amare was murdered by a group of men at his house. A $2 billion lawsuit was filed against Meta in late 2022 over the professor’s killing. The report’s title is drawn from the account Amare’s son gave to Amnesty researchers.
According to a civil society member from Ethiopia, Meta’s responses to reported incidents of hate speech and incitement to violence on its platforms are “extremely slow”. This highlights the company’s deficient approach to managing dangerous online content, which spreads quickly amid the ethnic volatility and political instability that have long plagued the region.
Amnesty International has called for “urgent and comprehensive reforms”, such as emergency measures to reduce the algorithmic amplification of inflammatory content in times of crisis.
“Furthermore, it emphasizes the need for states to regulate big tech companies in order to protect human rights and ensure accountability for any human rights violations caused or enabled by these companies, whether through product design choices or a failure to implement proper safeguards,” the statement adds.
Alongside the report, several investigations conducted by the rights groups Global Witness and Foxglove have laid bare Meta’s inability to detect hate speech and violent content in ads submitted to the platform. The ads, which contained violent hate speech against three main ethnic groups in Ethiopia, including Tigrayans, were approved for publication by Meta, intensifying concerns that the company has learnt little from its mistakes and continues to neglect its sensitive market in Ethiopia.