This article was originally written for the Youth Impact Network
Introduction
The internet has revolutionised communications infrastructure around the world. Bringing distant and diverse communities closer together and bridging territorial boundaries, it has brought about a profound and lasting change in channels of both individual interaction and mass communication. Besides enabling swift and convenient modes of communication that conventional media can hardly match in today’s digital world, the internet, with the continuing rise in the accessibility and convenience of social media platforms, has opened doors to lucrative avenues for individuals and entities alike, providing them with a range of enterprising options and empowering them across the economic spectrum.
While the role of technology in transforming the communications and economic spheres cannot be overstated, the challenges associated with its accessibility, convenience, and creativity continue to pose significant risks, many of which require comprehensive and nuanced approaches to management and containment. Among these threats, information disorders top the list. Misinformation and disinformation are two of the terms most widely associated with the risks arising from the weaponisation of the internet, particularly social media platforms, which are now a virtual home, and by extension a direct reflection of physical presence, for billions of people around the globe.
Over the past decade, and especially since the Covid-19 pandemic, challenges stemming from mis- and disinformation have amplified significantly across the digital sphere, with radically adverse impacts in various physical arenas, including the social, political, and economic spheres of several countries. Mis- and disinformation have a symbiotic relationship with hate speech, which, in itself, remains one of the most complex challenges to tackle online.
While this article focuses on disinformation and deconstructs the elements that embolden its propagators, it also provides a condensed outlook on how disinformation breeds other forms of virtual threats that spill over into real-world events.
Defining disinformation
Although misinformation and disinformation are frequently documented in close proximity, these terms cannot be used interchangeably and have distinct defining characteristics. The United Nations (UN) distinguishes the two as accidental and intentional, respectively. Take this definition, for instance.
“While misinformation refers to the accidental spread of inaccurate information, disinformation is not only inaccurate, but intends to deceive and is spread in order to do serious harm.”
According to the UN, there is no universally accepted definition of disinformation. However, as mentioned earlier, understanding its key characteristics can help in identifying misleading or false content or recognising a targeted attack through textual, visual, or auditory forms of information. As an informed individual, you may find that such content leaves you questioning its accuracy and authenticity. Disinformation can be propagated by “state or non-state actors”. It can adversely impact human rights in various ways and escalate tensions in times of emergency and conflict.
The European Union (EU), on the other hand, defines disinformation as “false or misleading content that is spread with an intention to deceive or secure economic or political gain, and which may cause public harm”.
Over the years, disinformation has been deployed abundantly during elections and other intense crises, including armed conflicts both between and within countries. Disinformation is not a novel threat that emerged only with the rise of the internet. It has been prevalent in society since ancient times, deployed in various forms through whatever lines of communication were available at a given point in history. What the internet has done, however, is amplify it and provide more convenient and accessible channels for the dissemination of misleading content.
Disinformation and hate speech
Disinformation and hate speech have a symbiotic relationship. Hate speech and false information, spread intentionally, may lead to discrimination, stigma, and widespread violence. Throughout history, we have seen how hatred has disrupted social and political dialogue and caused serious harm, particularly during periods of heightened political activity. Technology has made spreading false information easier, allowing propagators of hate to target vulnerable groups on a broader scale.
Disinformation and hate speech impede the free exchange of ideas online and undermine the inherently inclusive nature of the internet. Social media debates are often intense and divided, and hate speech and disinformation push certain groups out of the conversation, leaving them with a shrinking space to express themselves. Coordinated disinformation campaigns laced with hate speech often lead to violence offline.
Leading social media platforms such as Facebook, TikTok, YouTube, and Instagram have guidelines against hate speech in place. However, the lack of adequate oversight and the scant moderation resources allocated to markets outside their countries of origin allow disinformation and hate speech to flourish unchecked. There are many reasons for this insufficient moderation, chief among them the failure to acknowledge the linguistic, cultural, and social nuances specific to these tech giants’ foreign markets.
Pakistan is among the markets facing critical challenges related to disinformation and hate speech on digital platforms. Frequent coordinated disinformation campaigns, engineered on hateful content, continue to create a highly hostile environment against women (case in point: campaigns against the Aurat March), transgender persons, and members of marginalised and at-risk communities. The scarcity of moderation resources, coupled with the negligence of tech companies and their failure to acknowledge these challenges and implement effective digital policies, is a significant factor that allows disinformation and hate speech to flourish across social media platforms in the country.
Disinformation and AI
With the rise of generative artificial intelligence (AI), the threats posed by disinformation have reached an entirely new level, necessitating more robust approaches to contain and tackle them. Advancements in AI have accelerated the spread of disinformation to such an extent that the challenges it poses to sensitive democratic events, including elections, now demand a more concentrated regulatory mechanism.
According to a recent study, titled “Fake Image Factories”, popular generative AI tools such as ChatGPT Plus, Midjourney, Dream Studio, and Image Creator by Microsoft have made it more convenient for anti-democratic elements to propagate disinformation through images. The deployment of AI-generated material, including deepfakes, has already raised concerns in leading markets such as the US, where digitally altered media depicting politicians frequently surface in news reports.
In Pakistan, an alarming trend of weaponising deepfakes for political gain was documented during the recent general elections (held on February 8, 2024) by Facter, a newsroom-only digital investigation initiative launched by Media Matters for Democracy (MMfD). Social media posts monitored and documented by Facter mostly included deepfakes of electoral candidates announcing a boycott of the election. Images and posts supplemented with doctored audio were also circulated actively on social media to undermine election integrity and distort voter perceptions. However, Facter noted that a number of regular social media users promptly debunked these misleading items in the comments sections, pulling up original images and videos to show how they had been digitally altered using AI tools.
Conclusion
While swift flows of misleading information cannot be entirely prevented on the internet, disinformation can be detected and countered in various ways. Online tools such as Google’s reverse image search and InVID can help you debunk images and videos that have either been shared out of context or been digitally manipulated. As for deepfakes, several indicators can help determine whether an audio-visual piece has been manufactured using generative AI; signs include odd lip movements, out-of-sync audio, unrealistic skin texture or blurry patches, and missing or non-identical fingers and nails. You can also refer to digital investigation initiatives such as Soch Fact Check, AFP Fact Check (Pakistan), and Geo Fact-Check to keep abreast of major developments in cases focused on disinformation.
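For technically inclined readers, the kind of comparison that reverse image search tools perform can be illustrated with a simple perceptual "average hash", which flags whether two images are near-duplicates or have been noticeably altered. The sketch below is a minimal, illustrative assumption of how such a check works: it operates on tiny grayscale pixel grids supplied by hand, since decoding and resizing real image files would require a library such as Pillow, and real verification tools use far more sophisticated methods.

```python
# Illustrative "average hash" sketch: each pixel becomes a bit depending on
# whether it is brighter than the image's mean brightness. Two images whose
# hashes differ in only a few bits are likely near-duplicates; a larger
# distance suggests alteration. (Toy example; not a production detector.)

def average_hash(pixels):
    """Return a list of bits: 1 where a pixel exceeds the mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Count differing bits between two equal-length hashes."""
    return sum(a != b for a, b in zip(h1, h2))

# Toy 2x2 grayscale "images": identical except one brightened corner.
original = [[10, 200], [10, 200]]
altered = [[10, 200], [255, 200]]

distance = hamming_distance(average_hash(original), average_hash(altered))
print(distance)  # a small non-zero distance: similar but not identical
```

In practice, fact-checkers rely on services that index billions of images, but the underlying idea is the same: reduce an image to a compact fingerprint and compare fingerprints rather than raw pixels.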