In October 2021, Mark Zuckerberg, the CEO of Facebook Inc., rebranded the company as Meta to reflect the broader set of products and services it has been building. Most importantly, though, the decision was meant to support Facebook’s plan to enter the virtual reality industry and build the metaverse. The metaverse, as explained by Satya Nadella, the Chairman and CEO of Microsoft Corporation, is a way “to embed computing into the real world, and real world into computing, bringing real presence into any digital space.”
Zuckerberg presented it with multiple demonstrations in his Connect 2021 keynote, where he explained in depth what the metaverse will look like and called it “the next frontier.” He emphasised that it was always his dream to “feel present with the people we care about,” adding, “Isn’t that the ultimate promise of technology?”
Facebook’s vision of the metaverse is deeply connected to its older vision of connecting people through technology, and with the constant evolution of digital tech, this vision is now being realised in three dimensions. The metaverse, which depends on the concept and technology of virtual reality, places digital avatars of people in an immersive digital world and enables them to experience it as if they were part of it – alongside the avatars of other people. Facebook’s VR platform Horizon Worlds, developed under Oculus, the company’s virtual reality division, is in its early stages with very limited details available to the public, and enables people to join a virtual “social experience.”
However, while the visuals and the idea seem futuristic, many concerns have emerged about the safety of individuals on the 3D platform. For one, Meta has failed to win the support of many of its critics and users when it comes to privacy violations and content moderation on the social media spaces the company currently owns. It has been summoned by lawmakers in the United States, the United Kingdom and Europe over complaints about its failure to comply with local laws and to provide adequate protection to users on its platforms. From its role in disrupting democratic and electoral processes, to inciting genocide against communities and groups in countries like India and Myanmar, to its persistent failure to protect women and gender minorities from violence, and, more recently, the revelation of its harmful impact on the mental health of young users – the company has failed to fully address any of these concerns at a regulatory level or through its own policies.

With billions of monthly active users across its family of apps, Meta has grown too large to govern itself successfully, and its failure to moderate content and resolve users’ concerns has proven an ongoing challenge. This is especially true as a mere 15,000 or so overworked, underpaid and exhausted human content moderators try to moderate violent, gruesome and disturbing content of various kinds, with no support from the company to deal with the trauma they endure. The result is a failed content moderation policy that disregards not only the wellbeing of the individuals implementing it, but also that of the users who are subjected to targeted violent content, sitting there to be accessed and viewed by billions on the platforms.
Without addressing and, more importantly, solving these problems that impact real people, the metaverse will only mirror and amplify these issues and give them another dimension of reality.
Online violence either grows out of offline abuse or leads to offline consequences whose severity cannot be determined until the harm has already happened. There are many reported incidents of extreme violence resulting from online stalking: offline stalking, kidnapping, human trafficking, sexual violence and murder. Gender based violence on Facebook and Instagram has been rampant for years, with the company well aware of these occurrences all this time; its failure to intervene and to stop serving as a tool for this violence indicates that it is not ready to enter virtual reality.
Nina Jane Patel, a psychotherapist and co-founder of Kabuni – a platform building a safe metaverse for children – reported being sexually assaulted by three male avatars in Facebook’s Horizon Venues within seconds of joining the game’s starting point, the lobby. She recalls, “A group of male avatars surrounded me and started to grope my avatar while taking selfies,” adding, “I tried to move away but they followed me, laughing and shouting. They were relentless.” Nina had to take off the Oculus VR headset, which players wear to join the game, for the assault to stop. She says that as she tried to leave, the avatars assaulting her yelled, “Don’t pretend you didn’t love it; this is why you came here.”
“My physiological and psychological response was as though it happened in reality,” she says.
The experience is not unique to Nina; in fact, many users sharing reviews on Oculus’ official website raise similar concerns. One user wrote on March 28, “Every time I go on here minding my own business trying to watch a show, there’s always someone harassing me or throwing random insults at me.” Another wrote, “Spent 5 minutes in there and immediately was bombarded with conversations that I wanted zero part of. What is it with gaming culture and the need to be loud and offensive for no reason.” Someone else wrote, “Grown man asking a young boy for a blow job. Absolutely disgusting!”
These incidents are not new to the online world; rather, they mirror the social networking experience as it currently stands. Online sexual harassment already shapes this experience for everyone and has serious real-life ramifications. In a November 2020 blog post, the Web Foundation writes, “For many women in public life, threats of violence online can lead to them fearing for their physical safety offline and ultimately compromise their ability to do their jobs.” As a result, women isolate themselves and withdraw from public and social spaces out of fear for their safety.
In the metaverse, these threats and attacks feel real, as if the person were being assaulted and attacked in real life.
In a 2019 study, titled “Harassment in Social Virtual Reality: Challenges for Platform Governance”, it was noted that, “In immersive virtual reality (VR) environments, experiences of harassment can be exacerbated by features such as synchronous voice chat, heightened feelings of presence and embodiment, and avatar movements that can feel like violations of personal space (such as simulated touching or grabbing).”
Is Pakistan Ready for Metaverse?
Facebook-owned platforms have long been at the centre of this online gender based violence around the world, and particularly in Pakistan. A recent report by the Digital Rights Foundation (DRF) reveals that its flagship project, the Cyber Harassment Helpline, has received 11,681 complaints in its five years of existence, out of which 56 percent came from women; WhatsApp and Facebook were reported to be the most common platforms where these incidents of violence occurred.
The prevalence of online harassment and gender based violence has impacted the way women access the internet in Pakistan. Hyra Basit, the project manager of the Cyber Harassment Helpline at DRF, tells the Digital Rights Monitor, “There is of course the incredible negative impact that it has on women’s mental health because they fear for their safety at the same time as they’re fighting off victim blaming and gaslighting.”
Online gender based violence takes many forms on the Pakistani internet, ranging from sexual innuendo and targeted slut shaming, to memes made to harass the victim, blackmail, and non-consensual use of intimate images, to rape threats and death threats. Women who are vocal on the internet have faced many organised campaigns of this kind, especially women’s rights activists involved in or attending the annual Aurat March. For example, Leena Ghani, an organiser of Aurat March Lahore, has repeatedly been the target of such campaigns on Facebook. Men shared her photos and made videos with captions and music targeting her character and attacking her personally. Leena has been vocal about this abuse, but she says, “That was very difficult to come to terms with.”
Saman Jafri, a former member of National Assembly, and a human rights activist from Karachi, has been a target of violent social media attacks. She says, “I have been trolled based on the way I speak, and body shaming has been a common experience as well. Instead of criticising my political, logical or social beliefs and opinions, the aspects of my personality and my gender have been the target.” She adds, “Not only personal attacks, but I have also received comments like ‘you should be raped’, from extremists and also from supporters of mainstream political parties.”
She says that she saw this online harassment translate into offline harassment as well. “When I was a member of the national assembly, I was facilitating matters of the constituencies during the census in 2016. During that time, because I was the only woman MNA from my party on the ground, the vile comments that I was receiving on the internet were also being hurled at me offline. Someone also tried to physically push me.” Saman adds that these incidents were repeated in 2018 when she was submitting her national assembly nomination papers in the city court on the direction of her then-party leaders. “A group of young boys followed me and threatened physical violence while I was submitting the papers.” Saman says that she was lucky that she was not subjected to anything severe, but shares that “online hate has reached a point that it has become offline hate against me multiple times.”
Hyra says, “It is very easy for the boundaries between offline and online harassment to blur.” She adds, “When women are made to feel unsafe and unwelcome in online spaces, one of their first instincts or response, as evidenced at the Helpline, is to drastically reduce their online presence. Because they don’t see viable solutions or protective measures and much greater consequences to the harassment they face, they retreat from these spaces and therefore face significant loss in educational opportunities, social activities, business and career growth channels, and overall connectivity with the wider world.” She adds that the consequences of online harassment seep into the real world, “where women’s physical safety and movement is greatly impacted and restricted either themselves or by their families.”
Saman believes that there are no strategies that can effectively counter this violence. “The most I can do is block these accounts, but then they come from other accounts and repeat the abuse.” A July 2019 study by Media Matters for Democracy (MMfD), titled “The Internet As We See It”, finds that women believe the harassment they face online has serious real-life consequences, and that, as a result, they actively change their behaviour by self-censoring on the internet. Another study from October 2019, titled “Hostile Bytes”, finds that 8 out of 10 respondents had started to self-censor as a strategy to counter further violence after facing online harassment, and that online violence had affected the mental health of 9 out of 10 respondents.
Social media companies have done very little to curb this violence. Saman adds that reporting the violent and abusive posts has rarely resulted in favourable outcomes. Salwa Rana, a digital rights lawyer from Islamabad, says, “Facebook, for example, is unable to moderate or take down this content because it lacks effective policies required to do that.” She adds, “For starters, they don’t have enough content moderators who speak and understand Urdu, and automated content moderation is not designed to address content in local languages.” Salwa also believes that context needs to be understood in such instances. “The nature of online harassment can vary significantly for every country. What constitutes harassment in Pakistan may not be considered as such in the US where these policies are drafted. Moderating online gender based violence requires that those moderating the content understand local context as well as the post’s context,” she says.
Online harassment is generally a mirroring of offline behaviours by those who occupy both spaces. Someone who does not respect boundaries in the offline, real world will continue to believe that boundaries do not exist in cyberspace either. It was recently widely reported that Pakistani men have been harassing women in Turkey by recording them in public spaces and sharing the videos on the internet, instilling fear in women who occupy public spaces in the country. Salwa says, “These men are not some otherworldly creatures, they are very much part of our society. They have always thought that it’s ok to harass women in Pakistan, and have probably done so all their lives, so when they go to other countries, they project the same behaviour.” Salwa attributes the persistence of this predatory behaviour to the perpetuation of patriarchal and misogynistic ideals within homes and communities: “The rot is much deeper than we think. Generations over generations have normalised this behaviour, and now it is just part of life. Nobody asks men to change, but everyone demands women to lock themselves in their homes if they don’t want to be harassed.”
She strongly believes that such men are not suited to occupy a digital space like metaverse that is meant to imitate the real world in how it looks and feels. “Just keep these men away from it. There is enough trauma that [women] have to deal with, a hint of reality to digital violence is not something we are ready for.”
In 2017, Naila Rind, a university student from Hyderabad, died by suicide in her hostel room after being subjected to blackmail and online harassment for three months. She had no help from her university, did not trust law enforcement authorities, and felt that her family would not support her either. It was later found that the man blackmailing her possessed photos and videos of multiple other women, recorded without their knowledge. Self-blame and distrust of the legal system among victims and survivors of gender based violence arise from the experiences of many women who received no support from their families or from a law enforcement system known to victim-blame the women who seek its help. So women routinely find themselves at the mercy of social media companies that very often fail to provide the necessary assistance.
While Saman Jafri believes that the government and social media companies are not doing enough to support victims and survivors of online gender based violence, Hyra thinks some efforts have been made but that there is room for much more progress. She says, “Social media companies need to reevaluate the priority they give to Pakistan and other South Asian countries and the efforts they place in recontextualizing their policies according to our cultures, for example.”
Even then, the onus of staying safe from abuse and other negative experiences on these social media platforms is forced onto victims and survivors through policies disconnected from on-ground realities. The expectation that a woman, or anyone at risk of being targeted, must keep her privacy settings up to the mark, adopt strategies to counter the gender based violence she is subjected to, and report and constantly follow up on her complaints relieves the perpetrators of any responsibility for changed behaviour or accountability.
This same expectation is mirrored in Facebook Horizon, the company’s metaverse-focused social space. It requires the person being targeted or attacked to activate a safety feature called “Safe Zone”, which moves them to an isolated space within the virtual reality game and gives them the option to mute, block and report other people’s avatars. Horizon’s security features also enable users to submit footage of the past few minutes of an interaction when they choose to report what they experienced. The Oculus headset, which is the tool used to join Horizon, records and internally stores this footage when the person enters the VR space, but, according to Horizon’s safety video, it is only shared with the safety teams when the user submits a report. Those found to be violating Horizon’s conduct policies “may be banned” from the platform.
As mentioned earlier, this not only places the responsibility for seeking safety on the person being targeted, but also raises a larger conversation about gaps in content moderation. Considering that the platform has been launched and will be used globally, including in Pakistan, does Horizon have enough content moderators and ‘safety specialists’ to moderate the visual and audio-visual content of millions (or billions) of avatars, often talking over each other? Another important question is whether these content moderators will have the capacity to understand the local context and local languages of the region or country the content is being reported from.
The idea of the metaverse – connecting with people at a personal level rather than through just names and a photo – is futuristic and moves technology towards further advancement. But the fact remains: users, companies and governments are not ready. In countries like Pakistan, where structural issues such as the patriarchal hold over women and gender minorities, prevalent gender based violence, lack of accountability, and the absence of an effective legal system persist, online platforms with the ability to make abuse feel real may not be the right fit just yet.
As Hyra rightly points out, “As long as the overall culture of Pakistan can be described as misogynist, violent, and generally dangerous for women and gender minorities, it is difficult to imagine a digital realm where GBV won’t be a significant and alarming problem.”