November 21, 2018

Trustworthiness indicator: Facebook insists no “centralized reputation score” to rate users

ISLAMABAD: Facebook does not have a “centralized reputation score”; rather, it has a process to identify people who indiscriminately flag news as false, regardless of whether it is actually false. This was stated by a Facebook spokesperson while talking to Digital Rights Monitor.

Facebook has been under immense criticism after Washington Post reported that the company was “rating the trustworthiness of its users on a scale from zero to 1”.

Quoting Facebook’s lead on misinformation, Tessa Lyons, the Post reported that these assessments were being used as part of the company’s fight against fake news.

The Post also wrote: “A user’s trustworthiness score isn’t meant to be an absolute indicator of a person’s credibility, Lyons said, nor is there a single unified reputation score that users are assigned. Rather, the score is one measurement among thousands of new behavioral clues that Facebook now takes into account as it seeks to understand risk. Facebook is also monitoring which users have a propensity to flag content published by others as problematic and which publishers are considered trustworthy by users.”

Meanwhile, calling the Washington Post headline “misleading”, the Facebook spokesperson said: “The idea that we have a centralized ‘reputation’ score for people that use Facebook is just plain wrong.”

Instead, according to the spokesperson, the company had developed a system to “protect against people indiscriminately flagging news as fake and attempting to game the system.”

Sharing details of the system, the Facebook spokesperson revealed that, as part of its fight against fake news, the company relied on machine learning to predict which articles fact checkers should review. Facebook also allowed people to report news stories to the company that they believed were false. However, this posed a challenge: some people would try to manipulate the system by reporting factually correct news content as false.

Thus, in order to counter this behavior and be more effective against misinformation, a system had been put in place. “If someone previously gave us feedback that an article was false and the article was confirmed false by a fact-checker, then we might weight that person’s future false-news feedback more than someone who indiscriminately provides false-news feedback on lots of articles, including ones that end up being rated as true,” revealed the spokesperson.
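The weighting logic the spokesperson describes could be sketched roughly as follows. This is a hypothetical illustration only; Facebook has not published its actual implementation, and every name, formula, and weight here is an assumption.

```python
# Hypothetical sketch of reporter-feedback weighting; NOT Facebook's actual code.
# The idea: a reporter whose past "false news" flags were confirmed by
# fact-checkers gets more weight on future flags; one who flags
# indiscriminately gets less.

def reporter_weight(confirmed_flags: int, total_flags: int) -> float:
    """Return a weight in (0, 1) for a reporter's future flags.

    Uses a smoothed accuracy estimate (Laplace smoothing) so that a brand-new
    reporter starts at a neutral 0.5 rather than at an extreme value.
    """
    if total_flags < 0 or confirmed_flags < 0 or confirmed_flags > total_flags:
        raise ValueError("invalid flag counts")
    return (confirmed_flags + 1) / (total_flags + 2)

def article_flag_score(reporter_weights: list[float]) -> float:
    """Aggregate the weighted flags on one article into a review-priority score."""
    return sum(reporter_weights)

# Example: a reporter with 9 of 10 flags confirmed by fact-checkers outweighs
# one who flagged 10 articles but was confirmed only once.
accurate = reporter_weight(9, 10)        # 10/12 ~ 0.83
indiscriminate = reporter_weight(1, 10)  # 2/12  ~ 0.17
```

An article flagged by a few historically accurate reporters would then rank higher for fact-checker review than one flagged many times by indiscriminate reporters, which matches the behavior the spokesperson describes.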

Digital Rights advocates call for more transparency around the indicator

Responding to this, digital rights advocates have called upon Facebook to be more transparent about its “trustworthiness” indicator.

Expressing concern at the reputation indicator on Twitter, the Association for Progressive Communications (APC) questioned the effectiveness of this indicator in the fight against disinformation and urged the company not to place burdens on users.

APC also quoted UN Special Rapporteur on Freedom of Opinion and Expression David Kaye in raising its questions.

Digital rights advocate Eduardo Carrilo from Paraguay called this indicator problematic. “Any decision-making that is based on algorithms is an opaque process, and ranking people based on their trustworthiness is problematic. Any effort to counter misinformation deserves acknowledgement, but automated processes from companies that are opaque are definitely not the answer.” He noted that, if not checked, such non-transparent processes could be extended to rank people in other scenarios.

Open internet advocate Kathleen Ndongmo from Cameroon also questioned Facebook’s credibility and called for more transparency around the indicator: “The criteria used to score people are not detailed, so one can’t really know exactly how this will work, or if it will work at all. I am not sure how much – given precedence – we can trust Facebook’s trustworthiness with its algorithms to actually weigh in on how trustworthy its users are. Facebook is known for not being as transparent as it should be with what stories it decides to show, therefore how do you rate people’s trust in flagging them as real or fake?”

Written by

Talal Raza is a Program Manager at Media Matters for Democracy. He has worked with renowned media organizations and NGOs including Geo News, The Nation, United States Institute of Peace and Privacy International.
