Digital Rights Monitor

Facebook Abandons its Facial Recognition Technology; Data To Be Deleted

By Mishaal Ashraf
November 5, 2021

November 5, 2021 – Mark Zuckerberg has been introducing substantial changes at his company amid a wave of negative publicity after whistleblower Frances Haugen shared tens of thousands of internal documents detailing the now-renamed Meta’s failure to address human rights abuses on its family of apps. Soon after announcing the rebranding of Facebook Inc. to Meta, the company, worth over US$900 billion, said on November 2 that it will shut down the Face Recognition technology it introduced on its social media platform almost ten years ago.

According to a blog post by Jerome Pesenti, Meta’s Vice President of Artificial Intelligence, the company is limiting the use of its facial recognition system on Facebook going forward. As a result, the post states, “people who have opted in to our Face Recognition setting will no longer be automatically recognised in photos and videos.” The facial recognition templates used to identify the more than a third of Facebook’s active users who had opted in will be completely deleted as well; an estimated more than a billion templates are to be removed as Meta shuts down the system.

The latest change will also affect the Automatic Alt Text (AAT) system, which generates image descriptions for visually impaired users on the platform. AAT currently identifies people in about 4 percent of images. Once the face recognition templates are deleted, AAT will still be able to say how many people are in a photo, but it will no longer recognise their faces or suggest who they are.
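
To make the behavioural change concrete, here is a minimal sketch in Python. The function names and data structures are hypothetical illustrations, not Meta’s actual AAT implementation; it only contrasts “count people and name opted-in users” with “count people only”.

    # Illustrative sketch only: names and data are hypothetical, not Meta's AAT code.
    from typing import List, Optional

    def detect_faces(photo: dict) -> List[dict]:
        # Placeholder face detector: just read pre-annotated faces from the input.
        return photo.get("faces", [])

    def match_template(face: dict, templates: dict) -> Optional[str]:
        # Placeholder lookup against stored face-recognition templates.
        return templates.get(face.get("template_id"))

    def alt_text_with_recognition(photo: dict, templates: dict) -> str:
        # Pre-change behaviour: detected faces are matched to opted-in users' templates.
        faces = detect_faces(photo)
        names = [n for n in (match_template(f, templates) for f in faces) if n]
        others = len(faces) - len(names)
        return f"May be an image of {', '.join(names)} and {others} others"

    def alt_text_without_recognition(photo: dict) -> str:
        # Post-change behaviour: AAT only reports how many people appear in the photo.
        return f"May be an image of {len(detect_faces(photo))} people"

    photo = {"faces": [{"template_id": "t1"}, {"template_id": "t2"}, {"template_id": None}]}
    templates = {"t1": "Alice", "t2": "Bilal"}
    print(alt_text_with_recognition(photo, templates))  # May be an image of Alice, Bilal and 1 others
    print(alt_text_without_recognition(photo))          # May be an image of 3 people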

However, the company will retain its widely criticised DeepFace software and has not ruled out using it in future products. DeepFace is a machine-learning facial recognition system developed by researchers at Facebook and trained on four million images that Facebook users uploaded to their profiles.
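
Systems of this kind generally work by mapping a face image to a numerical embedding and deciding whether two faces belong to the same person by comparing embeddings. The sketch below shows that general verification step; the embeddings and threshold are made up for illustration and are not DeepFace’s actual model or parameters.

    # Sketch of embedding-based face verification, the general technique behind
    # systems like DeepFace. Embeddings and threshold are toy values for illustration.
    import math

    def cosine_similarity(a: list, b: list) -> float:
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm

    def same_person(emb1: list, emb2: list, threshold: float = 0.8) -> bool:
        # In a real system the embeddings come from a deep network trained on
        # millions of labelled face images; here they are hard-coded toy vectors.
        return cosine_similarity(emb1, emb2) >= threshold

    enrolled = [0.12, 0.88, 0.47, 0.05]   # toy "template" stored at enrolment
    probe_a  = [0.10, 0.90, 0.45, 0.07]   # another photo of the same person
    probe_b  = [0.80, 0.10, 0.05, 0.60]   # a different person

    print(same_person(enrolled, probe_a))  # True  (similar embeddings)
    print(same_person(enrolled, probe_b))  # False (dissimilar embeddings)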

The facial recognition feature was introduced on Facebook in 2010 and expanded in 2015. It let users identify themselves, friends, and family in photos and videos and tag one another, automatically notified people when they appeared in photos or videos posted by others, and suggested who to tag in photos.

Facial recognition technology has served as a powerful tool for verifying people’s identities and preventing fraud; it is also how Facebook verified the owners of compromised accounts to help them regain access. However, the technology has drawn sustained criticism from rights and privacy groups around the world for being intrusive, amid growing concerns about the lack of privacy protection offered to Facebook users. Meta’s Face Recognition technology was challenged in a class-action lawsuit in the US state of Illinois in 2015 for scanning users’ faces, and the company settled the case in 2020 for $650 million. In addition, the Federal Trade Commission (FTC) imposed a $5 billion fine on then-Facebook Inc. for repeatedly violating users’ privacy and monetising their data through advertising.

The introduction and advancement of facial recognition technology has generated legitimate privacy concerns around the world. While companies and authorities have used the technology to identify customers and suspects, it has also resulted in people being wrongly identified and detained by police. Repeated failures and abuses of facial recognition, deployed in the name of security, have led to its use being banned or restricted in several US states, raising questions about its legitimacy and use in other countries as well.

In December 2018, Big Brother Watch, a civil liberties organisation, described citizens as “walking ID cards”, citing research that found police use of facial recognition tools identified the wrong person nine times out of ten. As public concern grows, governments and regulatory bodies face increasing pressure to introduce controls.
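
One way to see how such high error rates arise is the base-rate effect: when a system scans large crowds for a small watchlist, even a modest false-positive rate produces far more wrong alerts than correct ones. The rough calculation below is purely illustrative; all of its figures are assumptions, not Big Brother Watch’s or any police force’s actual statistics.

    # Purely illustrative base-rate arithmetic; every figure below is an assumption.
    faces_scanned       = 100_000   # people passing a camera during a deployment
    watchlist_in_crowd  = 10        # people in the crowd who really are on the watchlist
    true_positive_rate  = 0.90      # assumed chance a watchlisted face is correctly flagged
    false_positive_rate = 0.001     # assumed chance an innocent face is wrongly flagged

    true_alerts  = watchlist_in_crowd * true_positive_rate                      # ~9
    false_alerts = (faces_scanned - watchlist_in_crowd) * false_positive_rate   # ~100

    precision = true_alerts / (true_alerts + false_alerts)
    print(f"Share of alerts that are correct: {precision:.0%}")  # roughly 8%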

Steven Furnell, a senior member of the Institute of Electrical and Electronics Engineers (IEEE) and Professor of Information Security at Plymouth University, says that security and privacy are often treated interchangeably, which makes the subject difficult for legislators to tackle. “Security and privacy are often mentioned in the same breath and even treated as somehow synonymous, but this is a very good example of how they can actually work in contradiction,” he says.

He argues that facial recognition tools are deployed in an attempt to improve security, but at a direct cost to people’s privacy.

Meta’s announcement that it will discontinue Face Recognition on Facebook comes amid increased scrutiny from lawmakers, researchers, and journalists as they review the Facebook Papers, a set of tens of thousands of internal documents detailing Meta’s failure to deal with human rights abuses on its platforms.

Technology is not inherently harmful: it can transform infrastructure, but it also raises serious concerns. Facial recognition, likewise, has been both useful and damaging to Facebook users, and the debate over whether people should accept this kind of intrusion and surveillance remains open. Technology experts, regulatory bodies, and civil society groups continue to look for ways in which technology can benefit the public at minimal cost to their rights.

Tags: #PrivacyHumSabKe, DeepFace, Facebook, Facial Recognition, Meta, privacy, surveillance