Thursday, September 25, 2025
Digital Rights Monitor


Opinion: Facebook needs to do more to protect women on its platforms

By Salwa Rana
July 15, 2020

The internet has emerged as an important space for the modern feminist movement. Some say that the shift to the fourth wave of feminism can largely be attributed to the growing use of the internet in the past decade with social media platforms being used as tools by feminist collectives to mobilise and take action against oppression.

Just as the early feminist movements of the 20th century fought hard to reclaim public spaces, women in the digital age face similar challenges in securing their presence online. The misogyny women have faced over the years has translated from offline spaces onto the internet, where it is far easier to identify and target certain sections of society.

During the peak of the #MeToo movement in 2017, Facebook faced an enormous challenge in countering gender-based violence on its platform. The social media site, with a user base of over two billion people, was expected to strike a balance between curbing hate speech and protecting the right to freedom of expression, while also taking into account the cultural and regional contexts that shape women’s experiences online.

“The magnitude of abuse that women face online cannot compare to a single “dehumanising” comment that a man may come across every now and then on this platform.”

Unfortunately, the platform failed to provide adequate protection for women by following an “applies to all” philosophy when devising a new hate speech policy. Under this approach, all groups are offered the same level of protection, without taking into account the power dynamics that exist both online and offline and that put women at a disadvantage. For instance, there is an ongoing debate over why “men are trash” is classified as tier-one hate speech in Facebook’s policies, with critics arguing that the phrase does not incite hatred or violence. Facebook has justified its position by claiming that dehumanising comments that generalise any group will be deemed hate speech; therefore, “women are trash” will be taken just as seriously as “men are trash”.

[As an experiment for this article, “women are trash” was posted on both Facebook and Instagram, and was instantly taken down on both platforms.]

Facebook cannot absolve itself of responsibility after creating tone-deaf guidelines on hate speech behind closed doors in a conference room in California. The magnitude of abuse that women face online cannot compare to a single “dehumanising” comment that a man may come across every now and then on the platform. Women face harassment, sexual or otherwise, along with death and rape threats on a daily basis, much of it in the comments sections of posts, open for everyone to read. They are also more likely to have their private information leaked, putting their offline security, and ultimately their lives, at risk. The current state of content moderation on Facebook does not provide adequate protection from this abuse.

In the Pakistani context, for instance, online hate speech peaks around International Women’s Day, when women gear up for the annual Aurat March rally to demand the enforcement and protection of fundamental rights. The violence on the internet does not come in the form of regular hate speech, but in coordinated, planned attacks that attempt to “raid” a female user’s profile with hundreds of rape and death threats and insults, often with cultural connotations, most of them in local and regional languages. These comments hardly ever get taken down by Facebook unless they are reported multiple times, a process that takes days. By then, the damage has already been done.

Facebook has claimed that its content moderators speak over 50 languages and that it can hire translators if needed. Despite facing no real constraints on increasing capacity, hate speech posted in local or regional languages and directed at women barely gets any attention, even though it is a massive problem that users and observers continue to highlight on the platform.

“Even the expectation of a social justice approach in Facebook’s content moderation seems like a distant dream.”

It is widely known that Facebook’s existence is deeply rooted in misogyny. The platform started as a site for rating women on their looks, using their photos without their consent, and has evolved into what it is today. It therefore comes as no surprise that protecting women from online violence is not a top priority there. In fact, Facebook has on a number of occasions been found to allow extremely misogynistic advertisements and hypersexualised images of women on its platforms, and has allowed and promoted racism, yet it places content warnings on awareness-raising photos of underweight women and breast cancer survivors.

These biases are now as transparent as ever and require immediate attention from the social media giant. Facebook needs to adopt a social justice approach when revisiting its community guidelines on hate speech and online violence, taking into account the various power imbalances that exist and offering more protection to vulnerable groups in society. There is also a pressing need for greater transparency about moderators and moderation strategies. But given Facebook’s recent record of promoting racist content, its role in enabling violence against Rohingya Muslims in Myanmar, and its part in disrupting democratic processes, even the expectation of a social justice approach in its content moderation seems like a distant dream.

The internet has given people the chance to voice their opinion, stand up to oppression, and to hold the powerful accountable. People should be able to express anger or contempt against their oppressors without being censored under the pretence of hate speech regulation, while real hate speech and incitement to violence goes unnoticed when directed at the less powerful. Social media platforms must do more to de-normalise online misogyny and hatred.

Tags: Aurat March, Content Moderation, Facebook, Hate Speech, Pakistan

About Digital Rights Monitor

This website reports on digital rights and internet governance issues in Pakistan and collates related resources and publications. The site is a part of Media Matters for Democracy’s Report Digital Rights initiative that aims to improve reporting on digital rights issues through engagement with media outlets and journalists.

About Media Matters for Democracy

Media Matters for Democracy is a Pakistan based not-for-profit geared towards independent journalism and media and digital rights advocacy. Founded by a group of journalists, MMfD works for innovation in media and journalism through the use of technology, research, and advocacy on media and internet related issues. MMfD works to ensure that expression and information rights and freedoms are protected in Pakistan.
