Saira’s* eyes are unable to look away from her phone’s screen. India’s ruling party, the Bharatiya Janata Party (BJP), has posted a photo on Instagram with a clear message, “No forgiveness to terror spreaders”, showing men who look stereotypically ‘Muslim’, all depicted around a noose. The men, with their beards, prayer caps and white kurta pajama, could be anyone she knows, Saira thinks: her father, her brothers, or the uncles who live in her neighbourhood back in Hyderabad, Telangana. Saira, a 34-year-old HR professional working in Mumbai, has seen content like this online before, but it has never been this blatant. She goes back to her chat with her friends; they say the post has been taken down on Twitter, but Instagram is not budging. The longer the post stays up, the more intense their anxieties become.
While this is a story of communal violence and minority rights in South Asia, it is also a story about social media and the moderation of political content on it. We often think of content moderation in its typical form, as Dr. Sarah T. Roberts, a researcher and associate professor at UCLA, defines it in her book “Behind the Screen”: “Content moderation is the organised practice of screening user-generated content on social media platforms.” It is usually conducted in two ways: through Artificial Intelligence (AI) or through manual checks performed by people.
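To make that two-layer process concrete, here is a minimal, purely illustrative sketch, not any platform’s actual system: an automated filter makes a first pass over posts, and anything it is unsure about is routed to a human review queue. The keyword list, thresholds, and all names below are invented for illustration, standing in for the machine-learned classifiers and policy rules platforms actually use.

```python
# Illustrative sketch only: a toy two-stage moderation pipeline.
# The keyword list stands in for an AI classifier; real platforms use
# machine-learned models and far more elaborate policy rules.

from dataclasses import dataclass, field
from typing import List

# Hypothetical placeholder terms; a real system would not work off a flat list.
BLOCKLIST = {"example_slur", "example_threat"}

@dataclass
class Post:
    post_id: int
    text: str

@dataclass
class ModerationQueue:
    removed: List[Post] = field(default_factory=list)
    human_review: List[Post] = field(default_factory=list)
    published: List[Post] = field(default_factory=list)

def automated_check(post: Post) -> str:
    """Return 'remove', 'review', or 'allow' based on a crude keyword match."""
    words = set(post.text.lower().split())
    hits = words & BLOCKLIST
    if len(hits) >= 2:   # treated as a confident violation: removed automatically
        return "remove"
    if len(hits) == 1:   # uncertain case: escalated to a human moderator
        return "review"
    return "allow"

def moderate(posts: List[Post]) -> ModerationQueue:
    """First layer: automated screening. Second layer: manual review of escalations."""
    queue = ModerationQueue()
    for post in posts:
        decision = automated_check(post)
        if decision == "remove":
            queue.removed.append(post)
        elif decision == "review":
            queue.human_review.append(post)
        else:
            queue.published.append(post)
    return queue

if __name__ == "__main__":
    sample = [Post(1, "a harmless update"),
              Post(2, "example_slur aimed at a community")]
    result = moderate(sample)
    print(len(result.published), "published,",
          len(result.human_review), "sent to human review")
```

Even in this toy version, the political questions the rest of this piece deals with are visible: someone has to decide what goes on the blocklist, where the thresholds sit, and who staffs the human review queue.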
From designing the method to be used, to adding the qualifiers that allow or disallow content, to implementing the qualifiers that make up content moderation policies, human intervention plays a significant role at every step of the way. So it is fair to assume that the political biases of the humans involved also shape how these policies are formulated and implemented. Content moderation is therefore itself political: it raises questions about what filters are used, who decides them, and who implements them. It also raises concerns about freedom of speech, and casts a shadow on claims that social media is a space for all. The discussion becomes infinitely more complicated and difficult to traverse when we extend content moderation to political content: posts made by politicians, political parties and governments. How can and should social media react to biases seen in mainstream political discourse? And how can and should social media correct for bad actors in politics, or for politics centred on egocentrism and the otherization and demonization of other communities?
The clearest example of this came during the presidency of Donald Trump in the United States. Social media companies, led by Twitter, leapt onto what the former President would tweet, usually in the late hours of the night. Fact-checkers, journalists, and civil society organisations in the USA sprang into action too, holding social media accountable for acting as a counterweight to whatever the then-President would say. Of course, not all social media companies were equal: Facebook (now Meta) was often more reactive than proactive, though it would be made to act under pressure from stakeholders in America. The breaking point was the insurrection of January 6th, 2021, when Trump supporters stormed the US Capitol Building, sending shockwaves through American politics and the country’s international image. The reaction, not only from within social media but from outside of it, was massive.
The election of Donald Trump, and of other ‘conservative’ leaders around the world, shone a bright light on the negative political power of social media: how disinformation, hate speech, and derogatory language, shared in echo chambers, reverberate through the internet, leading to scary changes in the world order. Platforms like Twitter took it upon themselves (in some capacity) to limit such effects, whereas platforms like Meta were taken to task by grand American Congressional hearings and harsh laws in the European Union. These interventions have led to resources and serious policy thinking being devoted to the moderation of political content.
This is not to say that content moderation did not exist before January 6th, 2021; rather, the process has been put through the wringer since then. Questions were asked about its efficiency and whether new systems and oversight needed to be brought in.
These questions are important, and given the power of social media, their scope (and urgency) needs to go beyond the USA and the Global North in general. We need to examine content moderation, especially the moderation of political content, in the rest of the world; for a start, this article will look at South Asia.
Content Moderation in South Asia
Home to over 1.5 billion people, the South Asian region represents one of the largest user bases for social media, spanning everything from TikTok and Facebook to WhatsApp. The need for content moderation for such a large user base is evident, not just for political reasons, but also for reasons of cultural and religious sensitivity.
So just how many resources does a mega platform like Facebook devote to the region when it comes to content moderation? The answer is shocking. An unequal distribution was to be expected, but the scales skew massively towards the West. According to a leaked 2020 report obtained by the Wall Street Journal, 87 percent of Facebook’s moderation resources are dedicated to the American market, which represents less than 10 percent of the platform’s users. The remaining 13 percent is split across the rest of the world. Considering the strict laws around data, content, and hate speech in other Global North countries, it can be assumed that South Asia gets very little of that 13 percent.
South Asia, in and of itself, represents a quagmire for any kind of content moderation, and the reason boils down to diversity: in language, dialect, the preference for Roman-script Hindi or Urdu, religion, caste, and so on. Operating in this region demands a more hands-on approach; given the numbers, however, that does not seem to be happening anytime soon.
Shmyla Khan, Director of Policy and Research at Digital Rights Foundation (DRF), shared her thoughts about content moderation in general. “Platforms entered markets in the Global South with some degree of arrogance. Their belief that they would be welcomed with open arms led them to not do their due diligence on the market, to understand how it operates and what its sensitivities are.” She further elaborated that when it became apparent that content moderation was needed, it was already too late. Ever since then, social media platforms have only been playing a game of catch up.
To add further complexity, there is the immense politicisation of social media platforms. From the perspective of content moderation, the politics occurring online is extremely complicated, given that parties in this region not only represent ideological standpoints but also stand on the fault lines of language, religion, and caste. This makes moderation, and the control of content, a dangerous game.
Speaking to DRM, Prasanth Sugathan, Legal Director at the Software Freedom Law Centre (SFLC, India), said, “Social media platforms are used in a big way by political parties in this region. Platforms are used to manipulate political opinion and to drown the voices of their opponents.” He added that, in his view, ineffective scrutiny of social media power has led platforms to take an extremely relaxed approach to the matter.
Prateek Waghre, Director of Policy at the Internet Freedom Foundation (IFF) in India, highlighted the economics of the matter. His argument was that platforms earn more from the Global North than they do from the rest of the world; as corporations, they are bound by the laws of economics and business, so the money and resources follow the revenue. He further commented that “social media is an adversarial space. People are fighting for attention, and if someone is set on sending out a message, they will find ways around the safety mechanisms and get online.”
These comments raise questions about the responsibility and onus of making content moderation more effective, and consequently online spaces safer: who does it truly lie with? One answer that should always remain relevant is that platforms themselves are accountable for what happens on their sites and apps and for what is allowed; but outside of the platforms, there are other actors and stakeholders, and it is important to discuss their role.
Governments and Content Moderation
Sugathan’s comments shine a light on the role of government in this equation. While his comments have more to do with policy directives and law, governments in South Asia are taking a role in content moderation: sending requests for content takedowns, as indicated by the quarterly transparency reports of online platforms. Social media platforms have long allowed users to flag posts they find problematic, and governments have become one of these ‘users’. State requests for content takedowns have become a big part of transparency reports, and these incidents show how governments are using these facilities superficially, without moving towards a real or concrete understanding of content moderation and their role in it.
India and Pakistan made headlines for topping Twitter’s list of government requests. They were joined by Turkey and Russia as some of the most active governments reaching out to the platform for content takedowns. Twitter elaborated that some of these requests related to matters of law (where hate speech was involved), but requests were also made against journalists who were sharing news about the government in power. Another major platform that releases transparency reports about government requests is Google. According to the report’s website, Google receives content takedown requests from governments around the world, after which an internal team studies the request, keeping in mind legal and/or country-specific sensitivities, and decides whether or not to act on it. The latest information on the website referred to the period of June 2021 to December 2021. Here is how India and Pakistan fared.
Pakistan made a total of 376 takedown requests to Google in the six-month period, of which 33 percent related to ‘Religious Offence’, 17 percent to ‘National Security’, 17 percent to ‘Obscenity’, and 13 percent to ‘Hate Speech’. Each request is highlighted in full for each country. One such example for Pakistan was, “We received 121 requests from the Pakistan Telecommunication Authority to remove 1737 YouTube URLs for alleged hate speech or sectarian content. The content primarily related to religious arguments between scholars and derogatory remarks against members of different religious groups/sects. Outcome: We restricted access to 1132 URLs from YouTube in Pakistan, 86 URLs had already been removed, 3 of the URLs were malformed, we requested more information for 227 URLs and we did not remove the remaining 289 URLs.”
India, on the other hand, made a total of 2,311 requests in the same period, of which 51 percent related to ‘Impersonation’, 17 percent to ‘Defamation’, 7 percent to ‘Obscenity’, and 2 percent to ‘Hate Speech’. An example of ‘Impersonation’ was listed on the website as: “We received a request from Ministry of Electronics and Information Technology (MEITY), India, on behalf of an individual to remove 12 URLs from Google Search containing commercial pornography where the user’s phone number appeared in video titles. The request was submitted for alleged violation of the “impersonation” offence under Rule 3(2)(b) of the Indian Intermediary Rules, 2021.”
The data shows that governments are forwarding requests relating to the privacy of individuals, or using local laws to justify the takedown of supposedly offensive content. It also indicates that governments are employing their own justifications for content that falls under terms like “defamation” and “obscenity”. The data presented by Twitter shows that points of view other than the government’s can get snubbed; furthermore, both India and Pakistan have employed vague terms like ‘decency’ and ‘immorality’ to justify bans and takedowns of TikTok-based content (leading to outright bans of the app in Pakistan on four occasions). So, left to its own devices, the government cannot be relied upon to protect its population from harmful political content.
Mishandling of Content Moderation
The failure on these two ends, the platform and the government, leads us to the question of what repercussions such mishandling of content moderation will have on the societies it is enacted upon. Prateek Waghre of IFF says, “There is no real precedent which we can look towards. The repercussions are still being lived and still being researched. Currently Facebook says that hate speech on its platform is at 0.05 percent, which is a staggeringly underestimated value.” Facebook’s role in inciting genocide against Muslim communities in India, and its executives in the country refusing to apply content moderation policies to hateful posts owing to the company’s political interests with the Indian government, are indicative of the extent of hate speech that exists on the platform and of the impact human biases leave on communities on the ground.
Waghre was also quick to point out that researchers and policymakers currently may not have the right tools to study this phenomenon, and that the outcomes of such research might lead us to over-index against hate speech, pushing us towards another extreme we do not want: over-policing social media and what is said on it.
Shmyla Khan of DRF also added, “The myth of social media is that anyone can say what they want on there, that they have a platform. The truth is anything but that.” According to Khan, social media has become a space where misogyny, hate speech, and sectarianism live and thrive, and it is as if social media platforms like having them there, since such topics garner users and engagement. Khan also says, “While it is true that Facebook and Twitter did not invent misogyny and hate speech, they are responsible for the fact that their platforms have helped in the propagation of such ideas and hate.” She goes on to reference how social media has had real-life implications for journalists, activists and politicians who have found themselves in the firing line of vicious online smear campaigns. The same applies to minority groups who, as Khan points out, ironically only have social media platforms to voice their concerns and advocate for their rights. Sadly, the online spheres that once gave them the ability to speak are the very same ones that place massive targets on their backs.
An incident in Ethiopia from 2020 highlights just how scary social media, disinformation and targeted campaigns can be. Hachalu Hundessa was an Ethiopian singer who belonged to the Oromo ethnic group. He was assassinated in June that year after a disinformation campaign against him became popular on Facebook; the content was incendiary and carried a heavy call for violence. The singer’s assassination led to massive riots in the country, marked by mob and ethnic violence. While this example may seem extreme, one must not underestimate the power of disinformation. In Pakistan, cases of blasphemy have been filed on the basis of completely false information. Oftentimes it turns out that the person or people who started the rumour had malicious intent to begin with, and used the power of social media, and of hate speech, to their benefit. One example of this impact is the murder of Mashal Khan, a student at Abdul Wali Khan University in Mardan, who was lynched in April 2017 by students at his university over posts made by a fake account operating under his name.
So the question we are now left with is, what can we do? How do we move forward?
Social media is a powerful force and platform. Political content resides on these platforms, and it is followed and watched by millions, if not billions. Who does the responsibility fall to? The platforms themselves? The governments? Or the people?
In the USA, Congress has set up grand exhibitions of accountability, hauling in the likes of Mark Zuckerberg, Jack Dorsey, Tim Cook and Sundar Pichai for questioning. It remains to be seen how, and if, such questioning and show of state power will bring platforms like Facebook into line, but for now it seems more like a media event, an opportunity for those in ‘power’ to grill people from these platforms.
Prateek Waghre laughed about how it is as if these Congresspeople are using these ‘trial’-like proceedings for point-scoring with a younger, more aware generation. They are, ironically enough, uploading clips of themselves in these hearings to the very platforms they are holding accountable. If you step back and think about the performance being staged, it seems almost dystopian.
Prasanth Sugathan was also quick to look past the American approach of holding social media companies accountable through Congress-style trials. He said, “Although these platforms have been called for hearings in India by the parliamentary committees and committees of a state legislature, these have not been very effective in regulating the platforms or bringing out a meaningful change.” Given that this has already been tried in India, it is yet another strike against an imperfect solution to an even more imperfect platform.
The bottom line is that social media platforms should bear the onus of content moderation, and also bear the responsibility that comes with hosting such a large and powerful space. As Khan put it, “These platforms are beholden to capitalism, and also beholden to their structural racism. They will also put resources into use for the Global North.”
All of the experts who spoke to DRM for this piece were in agreement that, as individual users in the Global South, there may be very little we can do alone; collectively, however, we must hold the platforms accountable, and hold political players accountable for how they use social media.
Currently, in South Asia, these discussions happen by and within digital rights groups, which in and of itself is fine; however, for this discourse to grow and make a positive impact, that has to fundamentally change. The scope of the discussion around content moderation needs to grow out of the realm of digital rights and into that of human rights. Digital rights are, after all, human rights. This universality, this shared experience of being ‘othered’ by social media platforms, needs to be talked about on a much larger scale. The power we speak this truth to has the resources, wealth, and technology to solve the problem, yet it has shown that it does not have the will. It must be pressured into focusing on the Global South. Till then, users in the Global South, especially minorities, will always face a battle, or at the very least an uncertain terrain, when they use social media.
Social media and technology have revolutionised the world in the past two decades. Now, we must push for the change to make social media and technology equitable and equal for all.