Facebook Abandons its Facial Recognition Technology; Data To Be Deleted

November 5, 2021 – Mark Zuckerberg has been introducing substantial changes at his company amid a wave of negative PR, after whistleblower Frances Haugen shared tens of thousands of internal company documents detailing now-Meta’s disregard of human rights violations across its family of apps. Soon after the news of the rebranding of Facebook Inc. to Meta, the company, worth over US$900 billion, announced on November 2 that it will shut down the face recognition technology it incorporated into its social media platforms almost ten years ago.

According to a blog post by Jerome Pesenti, the VP of Artificial Intelligence at Meta, the company is limiting the future use of its facial recognition system on its social media platform Facebook. As a result, the post says, “people who have opted in to our Face Recognition setting will no longer be automatically recognised in photos and videos.” The facial recognition templates used to identify the more than a third of Facebook’s daily active users who opted in will be completely deleted as well. More than a billion facial recognition templates are estimated to be deleted as part of Meta’s decision to shut down the system.

The latest change will also affect the Automatic Alt Text (AAT) system, which generates image descriptions for visually impaired users on the platform. The AAT technology currently identifies people in about 4 percent of images. Once the face recognition system is deleted, AAT will still be able to say how many people are in a photo, but will no longer be able to recognise their faces or suggest who is in the photo.

However, the company will retain the widely criticised DeepFace software and does not guarantee that it will not be used in future products. DeepFace is a machine learning facial recognition system developed by researchers at Facebook, trained on four million images that Facebook users uploaded to their profiles.

The facial recognition feature was introduced on Facebook in 2010 and further developed in 2015. It let users identify themselves and their friends and family in photos and videos, allowing them to tag each other. It would also automatically notify people when they appeared in photos or videos posted by others, as well as suggest who to tag in photos.

Facial recognition technology has acted as a powerful tool to verify people’s identities and to prevent fraudulent activity. This is also how Facebook verified the owners of compromised accounts to help them regain access. However, the technology has drawn sustained criticism from rights and privacy groups around the world for being intrusive, amid growing concerns about the weak privacy protections offered to Facebook users. Meta’s face recognition technology was also challenged in a class-action lawsuit in the US state of Illinois in 2015 over the scanning of users’ faces; the company settled the case in 2020 by paying $650 million. In addition, the Federal Trade Commission (FTC) imposed a $5 billion fine on then-Facebook Inc. for repeatedly violating the privacy of US and Canadian users and monetising their data through advertising.

The introduction and advancement of facial recognition technology has generated legitimate privacy concerns around the world. While the technology has been used by companies and authorities to identify people and criminals respectively, it has also led to the wrongful identification and detention of people by the police. The repeated failure and abuse of facial recognition technology in the name of security has led to its use being banned or limited in various US states, raising questions about its legitimacy and use in other countries as well.

In December 2018, Big Brother Watch, a civil liberties organisation, described citizens as “walking ID cards”, citing research that found police use of facial recognition tools identified the wrong person nine out of ten times. As public concern grows, government and regulatory bodies face increasing pressure to introduce controls.

Steven Furnell, a senior member of the Institute of Electrical and Electronics Engineers (IEEE) and a Professor of Information Security at Plymouth University, says that security and privacy are often used interchangeably, which makes the subject difficult for legislators to tackle. “Security and privacy are often mentioned in the same breath and even treated as somehow synonymous, but this is a very good example of how they can actually work in contradiction,” he says.

He argues that facial recognition tools are deployed in an attempt to improve security, but at a direct cost to people’s privacy.

Meta’s announcement that it will discontinue use of its face recognition software on Facebook comes as the company faces increased scrutiny from lawmakers, researchers, and journalists reviewing the Facebook Papers – a set of tens of thousands of internal documents detailing Meta’s inadequate response to human rights abuses on its platforms.

Technology is not all bad. It has the potential to transform infrastructure, but it can also raise serious concerns. Facial recognition, likewise, is both helpful and damaging to Facebook users. The debate over whether people need this sort of invasion and surveillance remains open. Technology experts, regulatory bodies, and civil society groups continue to work towards solutions in which technology benefits the masses at minimal cost.

Mishaal is a Project Coordinator at Media Matters for Democracy. She is a Public Policy graduate with past experience as a content strategist and research writer. Her main areas of interest are political science, world history, and public policy.
