January 22, 2019

An Interview with UN Special Rapporteur David Kaye

The latest 2018 Freedom House report presents a dismal picture of internet freedom. Out of the 65 countries studied for the report, internet freedom deteriorated in 26, including the United States of America. The study also noted that more than a dozen countries were putting in place restrictive measures in the name of fighting “fake news”, increasing surveillance, and weakening digital safeguards to gain easier access to users’ data.

In the midst of this situation, Professor David Kaye is one of the prominent voices on the international front calling for governments and corporations to be held accountable for facilitating this global clampdown on internet freedom. A professor of law at the University of California, Irvine, Mr. Kaye has also been serving as the United Nations Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression since 2014. As part of his work as the UN Special Rapporteur, Mr. Kaye has undertaken fact-finding visits to around six countries and submitted multiple thematic reports to the UN Human Rights Council (UNHRC) and the UN General Assembly (UNGA), including reports on freedom of expression, states and the private sector in the digital age; the contemporary challenges to freedom of expression; and online content regulation. His upcoming report, for which he is accepting submissions until 15 February 2019, will be about how surveillance technologies are used to undermine human rights.

Digital Rights Monitor (DRM) got a chance to chat with Mr. Kaye on the sidelines of a consultation organised by FORUM-ASIA in Thailand in December 2018. Professor Kaye spoke about the response from internet companies, the worst threats to online freedom of expression, and the role of journalists in highlighting content violations.

DRM: In Pakistan, some of the most common threats we are noticing regarding freedom of expression have moved online to platforms, such as Facebook and Twitter, and many of the freedom-of-expression grievances of civil society are also directed towards these companies. In your thematic report earlier this year, you made some recommendations for Internet companies regarding their community standards, transparency, and accountability. Could you please talk about how the companies have responded to the recommendations?

David Kaye (DK): The recommendations that we made fell into substantive baskets and process baskets. So, on the substantive side, we are recommending that the companies apply human rights standards – standards of freedom of expression we find in human rights law – to their content moderation rather than simply company rules. And, you know, I think the companies are starting to understand that. Jack Dorsey said on Twitter at one point that he got it, and he understood that that was the direction Twitter needed to go. There have been statements from senior people at Facebook suggesting they understand that, as well. But until they actually apply those standards in a formal way, get a commitment from the most senior people, and put it into their rules, we cannot be sure of their commitment. We are still waiting.

On the process side, I think it is mainly about transparency and how (the internet companies) share what they are doing in response to government requests. But not just government requests – (also) trolls (and) troll armies that try to shut you down, and things like that. Until they provide more transparency, (we will not have a guarantee). They are starting to do a little bit more, but they are still pretty opaque. (Indian-held) Kashmir is a very good example, where, you know, accounts are taken down or suspended and people just have no idea why. The companies need to be hearing more and more from more people. So they hear from me. They should hear from other activists internationally, locally, (and) nationally, and maybe they will be more responsive. I think the most important thing, in a way, is for journalists to cover these stories because the companies respond to bad press.

DRM: It’s interesting that you mentioned journalists. We did a study on the practice of self-censorship among Pakistani journalists in April, and we found that around 9 in 10 of the respondents had censored their personal or professional expression online because of diverse threats. Have you noticed a similar trend of self-censorship among journalists elsewhere in the world due to online threats and intimidation?

DK: Yes, I mean, it is a very difficult situation for journalists – in a way, more than for other actors. Because on the one hand, you need to report, or you want to report, these issues that you generally cover. But your reporting puts you in a position where the government would say, “Oh, you’re reporting on terrorism, you are a terrorist,” or, “You’re reporting on a demonstration, you must be a protester,” when you are really just trying to pursue your professional work. So you’re doubly squeezed, in a way. It’s a real problem, and I understand why journalists would feel they need to censor themselves in their work. And I imagine that goes to what you’re covering, who you are interviewing, what you say about them, and the anonymity that you have to give to them in order to protect them and to protect yourselves. That’s putting aside digital security issues, where you get pressure not just about what you are writing but about who you can contact.

DRM: In the context of the Pakistani general elections in July 2018, we noticed a lot of local content on Twitter and Facebook that can be categorised as disinformation and misinformation. Much of this was political propaganda, and often the messages tended to target or undermine dissent, opposing political views, and accurate reporting. Based on your work around the world, how do you think the online disinformation issue can be addressed?

DK: I do think everybody is struggling with how to deal with disinformation. Part of the problem is – and I am not speaking specifically to Pakistan, but generally – governments are real offenders when it comes to disinformation. You know, governments are putting out false information. I mean, in my own country, the United States, the worst perpetrator of false information is the President of the United States. So one thing is: how do we deal with government “fake news” – if you want to call it that, but propaganda is a better phrase for it? On that front, I think journalists need to be covering it. And again, that’s hard for people in societies where you face all sorts of threats. You need to do it; it’s time-consuming; it takes away from other reporting you might do; and when you do it, you can come under pressure from the government or other actors. So that’s one thing: journalists can be doing some of this.

The platforms, I think, can do things that are more technical, as long as they are not evaluating content. There are things they can do. They can’t just zap it and say, “This is fake news, it’s off the platform.” But they can do things like look at the Twitter account that’s tweeting all of this information: how long has it been in effect? Was it created three hours ago? Well then, maybe it should be restricted. You know, how many followers does this one have? Are those followers bots? So I think there are technical things where the companies can, more and more, treat this as spam and decrease the number of accounts that are bots – which is also tricky, because there are good bots and bad bots.

So there are things that everybody needs to be doing. Civil society has a role to play in this, too. I mean, if organisations are not doing the work themselves, they should be working with fact-checking organisations and others to try to identify which are the bad actors out there.

DRM: You have mentioned previously that there is a “general backsliding” globally when it comes to rights, including the right to freedom of expression. What do you think were the biggest threats to freedom of expression online in 2018?

DK: I mean, there is such variation around the world. The worst ones, I think, are when individuals are criminalised for their posts. I mean, that’s the worst thing: that somebody, you know, posts or shares or likes something, and the next moment the security services are at their door taking them to prison, and they suffer torture or prosecution and so forth. That leads to all sorts of other threats, too. Alongside government, I would also put a kind of mob rule, particularly against women – women journalists and women activists. And we have seen in Pakistan and in India, for example, situations where online attacks against women in particular move offline and people are killed. So those are the worst kinds of online threats. But I think there are other kinds of threats, too. Like we have seen (with) Twitter in Kashmir. So just the lack of information when an account is taken down or a tweet is removed, and you don’t know why or how that happened. Did the government do that? Did the company do it of its own accord? That kind of opacity, I think, generates a lot of mistrust – with the platforms, with government, (and) with one another. It’s poisonous.

DRM: And do you see any prospect of these threats being reduced in 2019?

DK: Well, it’s hard because, you know, in Europe and the United States – countries that in the past have been very much advocates for civil society in the face of authoritarianism online – they are very much caught up in their own problems right now. So we don’t see them as active. That’s one problem.

But I do think journalists understand these problems more. And as we have seen a lot more reporting about privacy violations, my hope is that journalists will also start to see that more attention should be paid to content violations as well. And that they will see censorship is very much connected to violations of privacy – and even where it is not connected, it is just as serious a question of trust as privacy violations are.

DRM: You have been vocal about the need for the community standards of internet companies and social media platforms to be rooted in human rights law. At the same time, there is an ongoing debate about the need for cyber norms – standards of appropriate behaviour online. How are these different from human rights law, and do you think we might need a new Universal Declaration of Human Rights for the internet?

DK: I don’t think we need a new set of norms or laws. The idea that there could be more encouragement of better civic culture online – that’s not a problem. In fact, I would welcome it, as long as it is about promoting tolerance and respect for other views and things like that. That has to start with governments, you know, because governments need to also present a model of respect for dissenting views. As long as it does not lead to civil complaints against people because they were rude, or criminalisation because you said something mean, I think that’s fine. But you need to be careful about that line between encouraging respect and mandating respect.

DRM: Your 2019 report to the General Assembly is going to be about the way surveillance technologies are used to undermine human rights, and you were able to speak with human rights defenders from South Asian and Southeast Asian countries on this issue. How useful was their feedback?

DK: Yes, so it was really valuable to talk to activists, defenders, and civil society organisations from this region, because the thing that I have learned from them is just how pervasive surveillance is. It is not just online (but also) offline, and those things connect with one another very much, and it’s useful for me to see the environment as a whole.

DRM: Thank you so much for speaking with DRM, and all the best for your work in the future.


Editor’s note: The questions and answers have been edited for brevity and clarity.

Written by

Waqas is a journalism instructor and former journalist. He has previously worked as a news reporter at The Express Tribune, the Scripps Howard News Service, and the Columbia Missourian. He has taught journalism and communication courses as an assistant professor at the NUST department of mass communication. Waqas completed his MA from the Missouri School of Journalism and was trained in data journalism at the National Institute of Computer-Assisted Reporting (NICAR). His research interests include digital journalism practices, the political economy of the news media, and concerns related to the rights of access to information and freedom of expression. At Media Matters for Democracy, Waqas leads a data and investigative journalism project that aims to produce public interest journalism content, mainly using Right to Information (RTI) as an information collection tool. As part of this initiative, Media Matters for Democracy is also demonstrating the potential of RTI as a tool for journalists. The initiative is primarily aimed at creating a demand for data journalism among news consumers and inspiring mainstream journalists to incorporate data elements into their stories and to access public data through RTI.
