In South Asia, Deepfakes Are Increasingly Used to Inflict Gendered Harm

By Saddia Mazhar
November 20, 2025

Pakistan Super League matches were under way when, on 8 May 2025, a key clash between Peshawar Zalmi and Karachi Kings was scheduled at the Rawalpindi Cricket Stadium. Just hours before the game, reports of a drone incident near the venue led the Pakistan Cricket Board (PCB) to postpone the match and later suspend fixtures amid heightened tensions. Within minutes, Indian media outlets began claiming that the stadium had been hit or damaged in the strike, circulating dramatic images of a “destroyed” arena. The pictures spread like wildfire across social media, fueling panic and outrage.

As the panic grew, fact-checking organisations including Alt News, BOOM, and NewsMobile stepped in and exposed the viral visuals as AI-generated deepfakes, confirming that no such destruction had taken place. The stadium still stood intact, but the damage to public trust was already done.

The disruption experienced during the Pakistan Super League match set the stage for a more unsettling turn. On 15 May 2025, during a Senate session, Foreign Minister Ishaq Dar held up what he claimed was a Daily Telegraph front page declaring the Pakistan Air Force the “undisputed king of the skies”. But Pakistani fact-checkers quickly flagged it as fake. 

The iVerify Pakistan team pointed to multiple red flags: typographical errors, awkward phrasing, and a layout inconsistent with the British newspaper’s design. Even India’s Press Information Bureau (PIB) confirmed that the front page was AI-generated and had never appeared in The Daily Telegraph. By sharing the image from the Senate floor, Dar inadvertently gave official weight to synthetic propaganda, shifting it from fringe misinformation to a political statement that carried institutional credibility.

“When misinformation comes from senior officials, it poses a unique challenge, as their words carry institutional legitimacy and spread faster through mainstream media,” said TCM Director News Wahid Ali. “At TCM, our responsibility is to apply the same standards of verification, regardless of the source. However, when it comes from official quarters, we tend to frame the correction with greater care, emphasising evidence without turning it into a political attack. The goal is not just to debunk but to protect public trust and ensure audiences understand the difference between verified information and fabricated content, no matter who is telling it.”

While Pakistan has seen some of the most dramatic examples, from fake drone strikes at cricket stadiums to fabricated newspaper front pages waved in Parliament, the problem is not confined to its borders. Across South Asia, countries are grappling with the same tidal wave of synthetic misinformation.

Two months ago, Dr Swarnim Wagle, the vice president of Nepal’s Rastriya Swatantra Party and a Member of the House of Representatives, filed a complaint with the Cyber Bureau of the Nepal Police after a deepfake video was made using his voice, linking him to a neighbouring country, India, and portraying him as anti-Nepal. 

The video drew a flood of negative comments. Such content creates widespread confusion among the general public, many of whom do not understand what deepfake technology entails.

“Nepal has a literacy rate of 76.3%, but many people who are not literate still have smartphones,” shared Bhasha Sharma, a senior journalist. “They often consider all information they see on social media platforms like Facebook and YouTube to be correct and therefore cannot distinguish between deepfakes and false information. As a result, AI-generated videos and false information are causing a lot of problems,” she remarked.

Similarly, manipulated videos that contain misinformation related to China and Nepal are also being produced and circulated in India, without users even realising that they are false.

A recent study titled “Media coverage of DeepFake disinformation: An analysis of three South-Asian countries”, published in the UNY Journal, examined 203 news articles from 16 media outlets in Bangladesh, India, and Pakistan. The findings showed a clear pattern: in Pakistani newspapers, over 50% of deepfake-related coverage frames synthetic media as a threat, often tied to politics, security, or morality, rather than discussing preventive strategies or even the entertainment uses of the technology.

The implication is stark. By casting deepfakes almost exclusively as a danger, the Pakistani media helps shape a public perception steeped in fear and helplessness. Similar trends exist in India and Bangladesh, where newsrooms often highlight the risks of deepfakes but offer little in terms of literacy, detection tools, or critical analysis. 

In other words, South Asia’s information ecosystem is primed for panic but not prepared for resilience, a vacuum that bad actors exploit with ease.

Disinformation campaigns against women 

Deepfakes pose a profound threat to human rights in South Asia, a threat that sits at the intersection of technology, societal norms, and institutional failures. Their impact manifests in violations of dignity and privacy, suppression of free expression, and exacerbation of slow and inequitable justice systems. A deeper examination reveals how these challenges disproportionately harm vulnerable groups, undermine democratic processes, and erode public trust, necessitating a human rights-centred approach to mitigation.

Deepnudes and other sexualised content disproportionately target women, stripping them of agency and exposing them to severe violations of dignity and privacy. These synthetic media exploit gendered vulnerabilities, weaponising cultural stigmas around honour and sexuality. Victims, often women and girls, face intense social pressure to remain silent to avoid further reputational harm or physical retaliation, including ‘honour-based’ violence.

“Deepfakes are not merely a technological challenge but a human rights crisis that intersects with gender, power, and justice,” Dr Haroon Baloch, senior program manager at Bytes for All, told DRM. “Sexualised deepfakes silence women by exploiting societal norms, while political deepfakes undermine democratic integrity by manipulating public perception. Weak institutional responses amplify these harms, leaving victims, especially women and minorities, exposed to reputational damage, psychological trauma, and physical danger. Framing deepfakes as human rights violations shifts the focus from technological misuse to the protection of individual dignity, equitable access to justice, and the preservation of democratic debate,” he remarked.

In 2021, Pakistani journalist Asma Shirazi became the target of a vicious smear campaign using deepfake content and vulgar thumbnails on YouTube and social media. 

“Even I sometimes mistake such AI-generated material as real, so how can the public tell the difference?” she said. Shirazi believes such attacks are part of organised harassment campaigns, often led by political groups. She notes that women are singled out because targeting them drives higher viewership. “They use women and attack women to increase their ratings,” she explained.

According to her, many women journalists have withdrawn from online spaces out of fear. She describes the powerful online presence of Pakistan Tehreek-e-Insaf as a “new establishment” that silences critical voices. For Shirazi, deepfakes and AI-driven disinformation now pose the greatest threat to independent journalism in Pakistan.

In early 2024, a fake, sexually explicit AI-generated video featuring Punjab Information Minister Azma Bukhari circulated widely across Pakistani social media. The deepfake, reportedly disseminated by PTI activist Falak Javed, triggered immediate public outrage. Bukhari promptly filed a formal complaint with Pakistan’s Federal Investigation Agency (FIA), at the time the agency responsible for investigating cybercrimes in the country, demanding an inquiry into the creation and spread of the doctored content. The accused is now in prison.

Another alarming episode involved an AI-manipulated video portraying Pakistan Muslim League-Nawaz (PML-N) leader Maryam Nawaz meeting UAE President Mohamed bin Zayed Al Nahyan during his official visit to Pakistan. The fabricated footage insinuated secret political dealings, sparking widespread speculation. 

In Pakistan, the Digital Rights Foundation (DRF) has repeatedly warned that deepfakes are becoming a dangerous weapon of harassment and disinformation. Their reports show that women are disproportionately targeted, with fake intimate videos circulating online to shame and silence them, while political figures have also been hit with AI-generated clips designed to spread distrust and confusion. 

In 2023 alone, DRF recorded 155 deepfake-related complaints out of the 1,371 it received, yet only a handful translated into FIRs or convictions, as most victims prioritise content removal over lengthy legal battles. According to the Digital Security Helpline’s 2024 Annual Report, at least 42 cases involving generative AI or deepfake images were reported, showing a steady increase in such cases.

“During the campaign period of the 2024 General Election in Pakistan, we saw some of the very first uses of genAI on social media in Pakistan, targeting journalists and politicians,” shared Hyra Basit, senior project manager at DRF. “This, in a way, started the normalisation and use of deepfakes and generative AI to target general citizens, women in particular. Furthermore, the accessibility of generative AI tools and the lack of content moderation also contribute to the surge.”

“The Digital Security Helpline primarily receives cases of technology-facilitated gender-based violence (TFGBV), which includes deepfakes and the use of genAI. These tools are used to generate non-consensual intimate images (NCII) for sextortion, blackmail, defamation, and the general intimidation of women and girls. However, at DRF, we also monitor online spaces and social media regularly, particularly during incidents of social and political interest, to conduct research on the prevalence of dis- and misinformation and how they intersect with generative AI, hate speech, and TFGBV. Our studies have shown a significant rise in the use of deepfakes during these incidents and in the narratives that subsequently unfold online,” she added.

Similar instances have been reported in Nepal too. A few months ago, Samikshya Adhikari, a singer from Nepal, filed a complaint with the country’s Cyber Bureau, alleging that a deepfake video was being used to assassinate her character. Adhikari stated, “I have filed a complaint with the Cyber Bureau against the person who assassinated my character. Please do not engage in illegal activities that affect the life of any girl for the sake of cheap popularity.” She also said that such videos, which affect her personal and family life, also harm society and the nation.

“In Nepal, deepfake incidents are frequently reported to the Cyber Bureau of the police,” journalist Sharma remarked. “Many fake social media accounts are being created in the names of prominent women in society, and these accounts are used to harass them. There have been incidents of people attempting to assassinate their character by creating various types of fake photos and videos. While some victims, who are educated and aware, are able to file a police complaint, others remain victims in silence,” she added.

Rise in Deepfakes across the Globe

Over the years, there has been an increase in the use of deepfakes to spread propaganda and target politicians across the globe. 

In Ukraine, deepfakes have already been weaponised as tools of propaganda and character assassination, from the infamous fake video of President Volodymyr Zelensky urging citizens to surrender, to manipulated clips targeting his wife Olena Zelenska, including a viral video falsely claiming she had purchased a luxury car in Paris. 

Similar tactics have appeared in France, where President Emmanuel Macron became a target of fabricated nightclub “dancing” videos, a fake L’Hémicycle magazine cover depicting him as a plucked rooster, and even a deepfake that mimicked the voice of a FRANCE 24 journalist in a fabricated news broadcast.

These incidents highlight how synthetic media is increasingly used to undermine trust, spread disinformation, and destabilise democracies. 

In Indonesia, children, teenagers and adults alike are becoming very familiar with using AI tools. 

“When it comes to deepfakes and disinformation, I believe the situation is becoming dangerous,” said Mohammad Jafar Bua, former field producer at CNN Indonesia. During the mass demonstrations in Jakarta from late August to early September, various AI-generated fake videos, hate speech, and disinformation were deliberately spread by certain groups, and people found it too easy to believe the disinformation without filtering it. One video, for example, showed Indonesian soldiers slapping police officers for repressing demonstrators. In reality, the footage was fake and created by AI, but many citizens had already believed it and reshared the video, Bua explained.

“AI is like two sides of the same coin: good and bad. It can be good when used to ease the work of writing reports, creating media illustrations, and other productive tasks. But it becomes harmful when it is exploited to produce deepfakes, fabricated videos, or baseless propaganda,” he added. 

Role of Big Tech 

According to Media Matters for Democracy’s 2023 report, over 75% of viral misinformation in Pakistan now originates from social media platforms such as Facebook, WhatsApp, X (formerly Twitter), and TikTok. These platforms have become the primary conduits for disinformation, with millions of users spreading viral rumours at an alarming rate.

Fact-checkers are struggling to keep pace, even with the help of AI-powered tools like InVID, Deepware Scanner, and reverse image search engines. However, these tools are proving inadequate in the face of rapidly evolving deepfake technology.

“The biggest risk from deepfakes in Pakistani journalism is the loss of public trust. They can spread false narratives, manipulate politics, and mislead audiences, especially during elections, posing a serious threat to credible journalism and democracy,” said Fayyaz Hussain, senior sub-editor at Geo Fact Check. “To ensure accuracy, we combine AI tools with human oversight. AI helps spot patterns and possible manipulation, but we always cross-check with trusted sources, experts, and field reporters, especially for regional content. This balanced approach avoids overreliance on automation,” he remarked.

“In May 2025, Pakistanis witnessed fake military footage and fabricated statements spreading swiftly, sometimes repeated by established major media outlets or pushed by partisan pages,” Baloch remarked. “Even after they were debunked, the first wave stuck, shaping public opinion and fueling sharper diplomatic talk. In times of military conflict, this kind of emotional misinformation raises the chance of mistakes and leaves less room for careful reporting, which is when reliable information matters most. To reduce the risk, tools like rapid verification by mainstream media, cross-border fact-checking, and platform action are crucial,” he recommended.

Legal Reforms 

Pakistan’s primary cybercrime law, the Prevention of Electronic Crimes Act (PECA), and its subsequent amendments were not drafted with AI-generated harms in mind; the law is being stretched to cover deepfakes but lacks clear definitions, thresholds, and tailored remedies (forensics, burden of proof, platform takedown standards).

Recent amendments show how the legal framework is shifting in practice. The federal government has amended Schedule 1 of the Anti-Money Laundering Act 2010 (AMLA) to bring a wide range of cyber-offences under its ambit, formally granting the newly-established National Cyber Crime Investigation Agency (NCCIA) both investigative and prosecutorial powers. The changes mean that earnings from disinformation campaigns can now be treated as financial crime, a notable shift since the notification expressly lists “spreading false or misleading information on social media” among the included offences.

At the same time, the PECA Amendment Act 2025 introduces Section 26A, which criminalises anyone who “intentionally disseminates, exhibits or transmits false or fake information likely to cause public fear, panic, disorder or unrest,” yet offers no definition of what constitutes “false or fake” information or the threshold of “public fear or disorder”.

Legal analysts warn that this open-ended phrasing leaves the law open to broad or arbitrary interpretation, particularly when deployed against political critics or journalists. 

“Keeping in view recent cases of not only Pakistani citizens but also politicians becoming the victims of malicious synthetic media, Pakistan urgently needs a mix of legislation, capacity building, and platform accountability measures to stop deepfakes from harming citizens and democratic processes,” Basit remarked. “Our existing laws were not drafted keeping in mind generative-AI risks and need updating,” she shared.

“The approach must focus on citizen education and platform accountability, instead of only focusing on criminalisation,” Baloch said. “The current framing of the laws exposes the fact that the state’s priority is to silence critical dissent under the guise of addressing the deepfake issue. In Pakistan, PECA and its 2025 amendments, along with a proposed social media authority, expand criminal penalties for fake news but sidestep synthetic media; they seriously lack the scope to address this issue. India’s IT Rules and related court actions show intent to regulate but remain contested, leaving platforms uncertain. Bangladesh has moved faster on new criminal provisions, yet its Cyber Security Act still carries vague offences and lacks strong safeguards. None of these frameworks embed technical standards like C2PA attribution that guide us on digital authenticity. Clear due-process pathways for cross-border removal, along with targeted updates on provenance, reporting, forensic capacity, and rights safeguards, are other essentials that these legal frameworks must carry,” Baloch added.

“Pakistan can take cues from international precedents: China requires both visible and technical watermarks on all AI-generated content, the US has issued an executive order mandating provenance and labelling standards, and the G7’s Hiroshima Process International Guiding Principles for advanced AI systems oblige providers to disclose when content is artificially generated or manipulated,” Basit recommended. “Aside from legislation, Pakistan also needs to build its technical capacity to address deepfake misuse adequately. It should strengthen PKCERT’s role by equipping it to issue deepfake advisories, detection tools, and response playbooks,” she remarked.

Pakistan should also mirror international moves to make platforms more accountable by pushing them to implement effective notice-and-takedown processes for harmful synthetic media, publish transparency reports, and provide accessible appeal channels for citizens and trusted partners, the activist added.

“In South Asia, a multi-layered approach adapted to local realities is needed: major platforms should adopt interoperable attribution and labelling systems, such as watermarking and metadata, with localised takedown workflows,” Baloch said.

Tech giants like OpenAI, Google, Meta, and Microsoft should introduce deepfake detection systems, such as tools to detect videos generated with Google VEO-3, Gemini, Sora, and AI Alive, alongside rapid verification hubs, and should link fact-checkers and newsrooms across borders with shared tools that work in local languages, Baloch suggested, adding that investigators, prosecutors, and judges should be trained in multimedia forensics so remedies come faster without overreach.

NGOs and women’s rights groups should run confidential reporting, legal aid, and psychosocial support for victims of gendered abuse, and public campaigns should build media literacy so people learn to check sources, look for provenance data, and wait for verification before sharing. These measures should be paired with safeguards like due process, narrow definitions, and oversight to prevent misuse. South Asia also needs localised adaptations of global protection mechanisms, for instance regulator-led adoption of attribution and watermarking and platform labelling aligned with UNESCO guidelines, Baloch added.

