Trolling, UAPA and journalists
In June 2021, 56 shanties housing around 270 Rohingya refugees were destroyed in a fire in the Kalindi Kunj area of South Delhi, near the Delhi-Uttar Pradesh border.
Noor*, a reporter for an online news platform, went to the Rohingya camp to report on the incident. There she learned that the camp stood on land belonging to the Uttar Pradesh government. Moreover, the Rohingya refugees told her that Uttar Pradesh (UP) government officials had visited the camp earlier that day and asked them to vacate the land or face the consequences. The camp caught fire the same day. Noor put out a tweet detailing what she had learned from her reporting.
Her Twitter feed was immediately flooded with comments from the right-wing Hindutva ecosystem. Trolls accused her of spreading misinformation to embarrass UP chief minister Yogi Adityanath and demanded she be arrested under the Unlawful Activities (Prevention) Act (UAPA), a law that has been misused to silence activists, journalists, and online critics of the present government, and under which bail is very difficult to obtain. She was also pilloried for her Muslim name. “They called me jihadi, presstitute,” says Noor. She adds that since the Shaheen Bagh movement led by Muslim women against the Citizenship Amendment Act (CAA), National Population Register (NPR) and National Register of Citizens (NRC), Muslim women have been targeted online in an organized manner.
“I have started tweeting less because there’s no way that I can tweet something against Hindutva ideology and not get trolled. Even my boss gets trolled for hiring a Muslim woman”, says Noor.
The experience is the same for persons from other minority communities in India, especially journalists. Rachel Chitra says she has been trolled repeatedly in recent years for having a Christian name. “Been called a ‘rice bag’ Christian, asked to lick the pope’s ass,” she says. The trolling, Chitra says, is almost always tied to anti-government reporting, or to what Hindutva trolls think is anti-government reporting, even when it is a plain-vanilla report of a central bank decision.
Explaining the situation, journalist and author Anna M M Vetticad says: “Among India’s religious minorities, Muslims top the Sangh Parivar’s pyramid of hate, and Christians come next. If you belong to either of these communities and you publicly express your political views, hordes will be unleashed on you to punish you for having an opinion in a country that they say belongs less to you than to Hindus.”
The extent of abuse is summed up by Vetticad: “Trolls will drag the Pope and Jesus into anything and everything I say in my articles or on social media. They work hard to perpetuate the long-running lie that Christianity came to India with the European colonisers, and then use this lie to paint Indian Christians as traitors. I am often called an agent of the Pope, my name is routinely twisted to Anna Vatican and Anna Vagina, and the casteist slur ‘rice bag Christian’ is common. I’ve been told I belong to an army built by Sonia Gandhi to convert India to Christianity; this was said to me by an ex-colleague on Facebook, not a stranger. I have been told my mother was raped by the British and that I’m a product of that rape. What kind of human being speaks to another like that?”
A state of denial
Concerned about the growing incidence of religious violence linked to its platform in India, Facebook deployed researchers to review the situation. The researchers found that platforms like Facebook and WhatsApp were being used to spread hatred against the Muslim community, blaming it for spreading Covid-19 and for ‘love jihad’, and equating its members to pigs. The company’s internal reports show it is aware that its largest market is flooded with inflammatory content, and users say Facebook is not protecting them.
It is mostly people from minority communities and marginalized castes who face this onslaught on Twitter and Facebook, says independent researcher and activist Aiman Khan.
“We have also seen similar things happening with groups such as Khalsa Aid and other Sikh groups amplifying the voices of the farmers’ movement. These accounts were taken down. Then, after much public uproar, Twitter restored their handles. Even then, they had to go through another level of scrutiny and received legal notices,” she says.
“I believe that Twitter is under immense pressure from the government. Some of the actions taken by the platform are because they were pressurized by the government. So, I’m unsure whether they are in a position to do something. Still, I believe they are responsible for dealing with this kind of violent content on their website,” she adds.
While people and organizations that highlight people’s movements and human rights violations online have had their profiles taken down, pages and profiles of people like Yati Narsinghanand Saraswati, a Hindu priest who has on several occasions tried to incite communal tension by spouting hate speech against minority communities, continue to exist on platforms like Facebook.
Divisive Facebook
In an interview with 60 Minutes, Facebook whistleblower Frances Haugen said the company incentivizes “angry, polarizing, divisive content”. She talked about systemic problems with the platform’s ranking algorithm that led to the amplification of “angry content” and divisiveness, evidence of which is in the company’s internal research.
Facebook changed its algorithm in 2018 to promote “meaningful social interactions”, prioritizing content that generates interaction among users. Haugen explained that this shift led to “engagement-based ranking,” in which the engagement a piece of content receives determines how widely it is distributed.
Studies conducted by Facebook have found that content producers and political parties know that “angry content” is more likely to receive engagement.
Facebook has been steadily raking in billions of dollars, crossing a billion dollars in revenue in India alone in 2020-21. These profits were driven by the surge in digital activity during the Covid lockdowns and the growth of platforms such as Facebook, Instagram, and WhatsApp over the past few years.
Meta, formerly known as Facebook, owns four of the most popular social media platforms worldwide – WhatsApp, Messenger, Facebook, and Instagram – and has more than 300 million users in India alone.
Legal and policy analyst Pranesh Prakash (co-founder of the Centre for Internet and Society), however, disagrees and says that South Asia, particularly India, is not a more profitable market: “The average revenue generated per user by the platform in Asia is a fraction of the revenue they make in the West”.
Facebook’s average revenue per user (ARPU) in the Asia Pacific region in the second quarter of 2022 was $4.54. Facebook’s ARPU amounted to $50.25 in the combined US and Canada market.
Prakash also says that more polarization on the platform does not necessarily lead to more audience engagement, and that polarization is not an important focus of social media platforms. “When a social media platform is polarized and hate speech rises, people are less likely to engage (with the content).” He points to YouTube hiding public dislike counts, a feature that drove polarization.
Laws, ethics, and immunity
In 2004, the e-commerce portal Baazee.com was taken to court and its CEO arrested over an obscene video uploaded by a seller on the platform. To prevent situations in which an entity not responsible for a particular unlawful act is held accountable, the law needed to change. Section 79 of the Information Technology Act, 2000 was recast through an amendment passed in 2008 to provide this protection.
“This section provides a limited kind of protection, a safe harbour, a shield to intermediaries and protects them from being harassed. It protects them from being held liable for content that other people have made, or for unlawful acts that other people have done,” says Prakash in a podcast.
Prakash points out that the amendment also provides exceptions to this shield. One exception is that the intermediary has to follow due diligence as laid down by the government.
In 2021, the Indian government introduced new rules to regulate the digital space, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. These are much stricter and broader in scope than the earlier rules. They seek to provide a grievance redressal mechanism for users of social media platforms, messaging applications, streaming services, and digital news publishers.
The government wants to regulate other kinds of online content too, including content published directly rather than through intermediaries, says Prakash. The new rules target online news publications even when they do not act as intermediaries, as well as publishers of online curated content. Per Prakash: “So, it’s mostly to do with intermediaries but not exclusively, is how I would put it, and mostly to do with rules they need to follow to avoid being held liable for third-party unlawful activities”.
The law requires internet intermediaries to earn legal immunity by discharging specific duties and responsibilities. These include setting up a grievance redressal mechanism, a more accountable take-down system, special expedited procedures for revenge porn cases, the appointment of India-based compliance officers, traceability requirements for certain specific purposes, deployment of automated filtering software and identification of physical addresses for service of legal notices.
If an internet intermediary declines to abide by these new responsibilities, it loses the immunity offered under Section 79 and becomes legally liable for the acts of third parties on its platform.
The rules thus cover not just internet service providers and social media intermediaries like Google and Facebook but also digital news publishers and streaming services, explains Prakash.
Government as moderator
“Now the government has a say in actual content moderation decisions,” says Prateek Waghre, policy director at the Internet Freedom Foundation.
He says that if a platform takes down an account under its community guidelines on hate speech or harmful speech, the account owner can appeal to the grievance appellate committee, which can overturn the platform’s decision.
“This is speculation, per se, but it does seem like protection against, or some sort of check against, platforms taking down accounts that were more amenable to the government,” he adds.
According to Waghre, “There’s a lot of content and, unfortunately, a lot of hate speech, fear speech, and dangerous speech on these platforms. Now, they [social media companies] can do a certain amount… They can invest more in local resources and build that knowledge to understand how people evade their monitoring mechanisms”. However, he points out that there are limitations, as it is a complex problem to solve: “As the platforms start taking action against this type of speech, the people engaging in this type of speech also evolve their methods.”
Then there’s the larger question of how much we can expect platforms to do about what is essentially a societal problem. “They can certainly do more, but we have to be realistic about what we expect them to do and the amount of the problem we expect them to solve.”
According to Waghre, big companies can invest in more technical skills to build better hate speech classifiers. Another thing they can do, he says, is to be more transparent and to enable more outside research.
“We don’t have broad research in the Indian context. Whatever research we have exists more in the US and European context. We’re trying to extrapolate what that means in India,” he says. So, he says, the platforms can enable universities and academia to do more research. “In India, there doesn’t seem to be a lot of appetite for this type of research. It’s there in small pockets, but it’s not as comprehensive or doesn’t have the same kind of institutional funding that research on white supremacism, for example, in the US, seems to have.”
Court battles
This year, Twitter sued the Indian government in the Karnataka High Court, seeking to overturn some government orders to remove content from the platform, in a legal challenge alleging abuse of power by officials. Twitter said in its filing that it had received removal orders that did not fully comply with the procedural requirements of India’s IT Act, without specifying which ones. The IT ministry had threatened legal action against Twitter if it did not comply with some of the orders; Twitter complied so as not to lose the liability exemptions available to it as a host of content.
In recent years, the Indian government has pressured Twitter to remove content from accounts critical of the government, including those that support an independent state for Sikhs, provide information about anti-government protests, or criticize the Modi government’s handling of the Covid-19 pandemic. The company has also been investigated by the police and has faced a backlash in India for blocking the accounts of politicians and other influential individuals, citing violations of its policies.
WhatsApp also sued the Indian government, in the Delhi High Court in 2021, over the new internet rules, which, according to the company, will “severely undermine” its users’ privacy. The rules give the Indian government greater power to monitor online activity, including on encrypted apps such as WhatsApp and Signal.
Under the new rules, WhatsApp would be required to trace the ‘first originator’ of messages on its service, which the company says cannot be done without breaking encryption, effectively allowing the government to log and trace messages sent through the app. The government could then identify and take action against the sender of any content ruled “unlawful”.
Prakash says: “In the Facebook case involving WhatsApp and end-to-end encryption, not having end-to-end encryption makes it easier for the government to tackle things like hate speech and rumour-mongering. So, [encryption is] one of the reasons that WhatsApp gives for its inability to curb, say, violent speech on WhatsApp.”
According to Prakash, “These issues have arisen in the context of rape videos being circulated on WhatsApp, rumours about child abductors coming from out of town, and people being lynched because of that. So, in these situations, WhatsApp has said that it cannot do anything because it can’t see what is going on its platform. And it cannot answer the police when it is asked who started this rumour or started circulating the rape video? Who started spreading it? WhatsApp is unable to give an answer. And so, the government has required WhatsApp to provide ‘traceability’ for things like hate speech, rumour-mongering that leads to violence, and so on. And WhatsApp is fighting that in court, saying it cannot do so without compromising end-to-end encryption. Whether you take the side of the government or the side of WhatsApp on this depends on your position on hate speech and who’s responsible for hate speech, and how to curb hate speech.”
*Name has been changed to protect privacy
This report is part of DRM’s exclusive journalism series exploring Big Tech’s failure to contain hate speech and lack of corporate accountability across Asia.