Digital Rights Monitor

Comment

Selective moderation

By Asad Baig
May 12, 2025

NOTE: This article was originally published in The News International on May 11, 2025.

My readers have likely noticed that my articles often begin with a real-life story: something striking yet relatable; a human moment to ease into the messier truths that usually follow. The incident that prompted this piece is too grotesque to repeat in full. Not just because of its sheer brutality, but because amplifying that kind of hate, even critically, risks further harm. In times of conflict, the line between exposure and escalation becomes dangerously thin. When incitement to violence is thriving, repetition can sometimes serve the very forces it aims to condemn.

It was a video that surfaced on my Facebook feed, a segment from an Indian news channel, though that designation increasingly blurs the line between journalism and spectacle. The guest, a war-monger disguised as a political commentator, made a chilling call to action: a direct incitement to violence against Pakistan’s leadership, framed not as opinion but as encouragement. The language was blunt and the intent unmistakable. It was not couched in metaphor or political critique. It was a televised invitation to bloodshed, aired without hesitation and apparently without consequence.

Even in times of active conflict, international law draws clear boundaries. The Geneva Conventions, ratified by nearly every country, serve as the foundational framework for the conduct of war, establishing the bare minimum of humanity that must be upheld even amid hostilities. These safeguards are meant to prevent war from descending into unchecked brutality. Yet, those standards appear to carry little weight in Meta’s content moderation practices, particularly when calls for violence are wrapped in nationalism, broadcast without consequence and propelled by Facebook’s own algorithmic incentives. Meta, for its part, remains unmoved.

Naturally, I reported the video. A broadcast openly inciting one of the most extreme forms of violence I have encountered on live television—especially at a time when tensions are sky-high between two nuclear nations—seemed like a textbook case of prohibited content. But the response was the usual hollow refrain: the post “doesn’t violate our Community Standards”, a disembodied shrug from a platform that no longer even pretends to draw a line.

Social media companies frequently claim to be facilitators of open dialogue. In practice, however, they act as amplifiers of outrage, not by coincidence but by design. Their algorithms are structured to prioritise engagement, and in regions like South Asia, that often means prioritising content that stirs anger, confirms bias and deepens polarisation.

The recent scaling back of moderation by major platforms is often framed as a resource issue, or as part of a broader commitment to “free speech.” But these explanations overlook the more unsettling reality: the decision to step away from responsibility in high-risk regions is neither accidental nor neutral. It reflects a calculated unwillingness to interfere with the very dynamics that keep these platforms commercially successful.

The incentives are straightforward. Content that provokes tends to perform better: it keeps users scrolling, reacting and sharing. This drives ad revenue, which in turn sustains the platform’s business model. In this setup, disinformation is not just a byproduct of poor enforcement. It is part of a system that quietly rewards the most damaging material while obscuring accountability behind vague content policies and appeals to free expression.

What this produces is an information environment where extreme narratives are surfaced more readily than verified facts and where certain perspectives are routinely amplified while others are suppressed, whether through takedowns, blocks or algorithmic neglect. In moments of geopolitical tension, the effect is particularly stark. It is not simply that platforms are failing to manage harmful content; they are helping determine which stories are heard and which are not.

This becomes especially problematic when these dynamics align with state interests. In the aftermath of the Pahalgam attack, Indian authorities blocked access to several Pakistani news outlets, cutting off a significant source of alternative perspectives for Indian audiences. At the same time, inflammatory and often misleading content targeting Pakistan, such as the video described at the beginning of this article, continued to circulate on Indian social media, often without moderation or intervention. The result was a lopsided flow of information, with platforms playing an active role in amplifying one narrative while erasing another.

These choices matter. They determine how conflicts are understood by the public and how neighbouring countries perceive one another. In South Asia, where tensions are routinely heightened by miscommunication and historical grievances, the consequences of digital asymmetry are far from abstract.

For platforms that present themselves as politically neutral, such selective application of rules raises difficult questions. When moderation is inconsistent and dangerous content is allowed to spread unimpeded while legitimate journalism is blocked, claims of neutrality become harder to accept. Platforms may not be producing the content, but by allowing it to flourish and by profiting from its reach, they are far from impartial.

Moving forward, any serious attempt to address the problem will require more than policy tweaks. It will require platforms to re-engage with the regions they have steadily deprioritised; to invest in moderation structures that reflect linguistic and political realities; and to work transparently with independent fact-checkers across borders. Most importantly, it will require an acknowledgement that neutrality, in the absence of accountability, is no longer a sustainable position.

