Pakistan’s National AI Policy 2025, approved by the Federal Cabinet in July 2025, represents a comprehensive strategy to harness the power of artificial intelligence (AI). The policy outlines six strategic pillars aimed at transforming Pakistan into a knowledge-based economy by integrating AI into key sectors such as healthcare, education, agriculture, and finance. Central to the policy is the establishment of an AI Regulatory Directorate, tasked with ensuring ethical AI practices, data protection, and algorithmic transparency. A key feature of the policy is the development of a national AI infrastructure, with plans to allocate 2,000 MW of electricity to power AI data centers and blockchain technologies.
Amnesty International has expressed concerns regarding Pakistan’s National AI Policy 2025, particularly around the potential risks of privacy violations and data misuse. The policy’s emphasis on AI-driven surveillance and data collection raises alarms about the lack of robust safeguards to protect citizens’ fundamental rights. Amnesty has highlighted that the rapid adoption of AI technologies in sectors like law enforcement and public health could lead to increased government overreach and discriminatory practices, especially without stringent oversight and accountability mechanisms. The organisation calls for the inclusion of stronger data protection laws and human rights frameworks to ensure that AI implementation does not infringe on privacy or lead to abuses, urging the government to prioritize the protection of individual freedoms in the digital age.
Hajira Maryam, Amnesty International’s Lead Advisor on Artificial Intelligence and Strategic Communications for the Technology and Human Rights Programme, works on strategic communications, research and advocacy focusing on the human rights implications of technologies such as artificial intelligence, targeted and mass surveillance, and those enabled by tech corporations. Her work is dedicated to researching and raising awareness about how government-deployed technologies can deepen inequalities and discrimination against marginalised groups.
We had a discussion with her regarding Pakistan’s AI policy and Amnesty International’s evaluation of it.
The draft AI Act aims to regulate AI technologies. How does Amnesty International assess its effectiveness in safeguarding human rights, particularly concerning privacy, freedom of expression, and non-discrimination?
Amnesty International calls on the Pakistani authorities to ensure that AI regulation in Pakistan is effective in terms of following human rights standards. The authorities must ban AI-based practices which are incompatible with human rights, including systems used for public facial recognition, social scoring, predictive policing, biometric categorisation, emotion recognition, risk assessment and profiling tools that violate people’s rights, including those of migrants, refugees and asylum seekers. These recommendations are based on Amnesty International’s extensive research, which also calls for a ban on the development, production, sale, use, and export of remote biometric surveillance technologies by all public and private actors which lead to mass and discriminatory surveillance.
The draft proposes the establishment of a National Artificial Intelligence Commission. What are your views on its structure and independence? Do you believe it has sufficient oversight and accountability measures to prevent misuse?
While the Commission and the Federal Government are granted vast enforcement powers, the public accountability and oversight measures proposed by the draft AI Act are not sufficient. In addition to the welcome proposal of conducting annual audits of the activities of the Commission by the Auditor General of Pakistan and making them available to Parliament and the general public, Amnesty International recommends strengthening the proposed oversight measures by:
- Strengthening Parliament’s role in overseeing the implementation of the AI Act, including through the establishment of a relevant parliamentary committee with the ability to monitor and evaluate the enforcement of the Act by the Commission and seek formal clarifications on questions relevant for effective oversight from the Commission and the Federal Government;
- Requiring a mechanism of parliamentary vetting and approval of members of the Commission proposed by the Federal Government;
- Guaranteeing the role of the Parliament in the development and implementation of policy directives and other implementation measures under the Act;
- Ensuring that the above processes are transparent and accessible to the general public.
Amnesty emphasises the importance of involving impacted communities in the development and implementation of AI regulations. How can Pakistan ensure meaningful public consultation and participation in AI governance?
Pakistani authorities can do this by consulting civil society organisations, activists and impacted communities that face or have experienced marginalisation. These include communities that are surveilled, communities that have faced digital exclusion from access to fundamental rights and services, and various marginalised ethnic groups. The authorities should make it obligatory for deployers of AI technologies to provide meaningful information to individuals impacted by AI-assisted decision-making. If any data collection takes place, the authorities should collect data only for those specific purposes and nothing beyond. Authorities must also grant public interest organisations the right to support impacted people seeking remedy and to lodge cases on their own initiative.
The draft AI Act includes provisions for complaints mechanisms. What improvements would you suggest to ensure these mechanisms are accessible, effective, and free from barriers that could deter marginalised groups from seeking redress?
Charging fees for processing complaints can create substantial financial barriers that prevent the people and communities most at risk of marginalisation from accessing this avenue of redress and remedy. To empower people and communities and offer meaningful safeguards against violations of their rights and freedoms, Amnesty International recommends that the Act:
- Affirms the right to information and meaningful explanation of AI-supported decision-making for impacted people, including about the use and functioning of AI in the system and its effect on the final decision;
- Establishes effective, accessible and cost-free judicial and non-judicial pathways to remedy for rights violations of people and communities;
- Grants public interest organisations the right to support impacted people seeking remedy and to lodge cases on their own initiative.
The Digital Nation Act has been associated with increased digital surveillance. How does Amnesty International view the balance between national security and individual privacy rights under this legislation?
We recommend that the Act be revisited and undergo several critical amendments, with a primary focus on safeguarding human rights through stronger protections against privacy violations, surveillance, and discrimination. The revised Digital Nation Act should reaffirm Pakistan’s obligation to uphold the rights to privacy, equality, and non-discrimination as guaranteed under international human rights law. It should also ensure that individuals retain access to alternative, non-digital means of obtaining essential services, acknowledging that digital technologies can exacerbate existing inequalities and create new barriers for marginalized communities.
Moreover, the Act should require relevant authorities to conduct comprehensive data protection and human rights impact assessments, as well as evaluations of the broader societal implications, prior to the implementation of any digital ID systems. Finally, the Act should incorporate core data protection principles directly into its legal framework, including lawfulness, fairness, transparency, purpose limitation, data minimisation, accuracy, storage limitation, integrity and confidentiality, and accountability.
Pakistan should adopt a comprehensive privacy and data protection law aligned with international human rights standards and with effective and independent public accountability mechanisms, to protect people from unchecked and non-consensual collection and processing of their data. Existing legal instruments should additionally be brought in line with international human rights standards to protect people from state surveillance. It is imperative that the Pakistani authorities ensure that human rights underpin any proposed or enacted legislation.
Amnesty has called for a more inclusive approach in digital policy-making. How can Pakistan foster a collaborative environment where civil society organisations actively contribute to shaping digital laws and policies?
For the AI Act and the Digital Nation Act to protect and promote the human rights of everyone, especially communities most at risk from the adverse impacts of digital harms, including AI-based harms, Amnesty International emphasises the need to develop and implement these laws with representatives of those communities, as well as human rights and digital rights advocates and organisations. Consequently, Pakistani authorities should improve the draft AI Act and reopen the Digital Nation Act for amendments through a meaningful public consultation before implementing these Acts. Proactive measures should be taken to involve community and civil society organisations, including by allocating the resources needed to engage in lengthy and resource-intensive policymaking processes, as well as in implementation down the line.
What is Pakistan’s role in international AI governance? How can Pakistan align its national policies with global human rights frameworks?
Pakistan is a country in the Global Majority where technology is constantly used to exacerbate ongoing inequalities, further deepening human rights harms. Even looking at tech issues that persist beyond the development or deployment of AI, Amnesty International recently conducted a groundbreaking investigation which exposed how companies across the West and China are exporting technologies that fuel online censorship and surveillance through telecommunication networks. Given this track record of technologies being used to inflict harm, it is essential that any technology used or deployed by the authorities follows core human rights standards.
When it comes to artificial intelligence, the most significant documented human rights challenges are discriminatory outcomes and unlawful mass surveillance. These harms are often built into AI systems because of their dependence on large, data-driven models that reflect existing biases. Such issues disproportionately affect marginalised communities, where AI technologies are frequently tested and deployed.
Yet, AI’s risks extend further, encompassing opaque decision-making, automated judgments, and insufficient human oversight.
Addressing these challenges requires two key approaches. First, AI must be deployed with careful attention to context, particularly to who defines the problems it is meant to solve. Second, AI should be examined within its broader social, environmental, and political dimensions, as it often amplifies existing inequalities.
As discussed throughout, Pakistani authorities must involve key stakeholders, from civil society organisations to impacted communities, to understand and implement human rights-centric AI regulation.
At a broader level, companies developing AI products must carry out thorough human rights due diligence to identify and address potential harms that may arise at any stage of the supply chain or product lifecycle. All providers and deployers of AI systems should proactively disclose sufficient information to allow for assessment of their systems’ human rights impacts and to uphold meaningful transparency. Transparency remains a cornerstone of accountability and effective redress.
For too long, companies have largely been permitted to self-regulate, often through issuing “ethics policies” and similar initiatives that lack real enforceability. To prevent human rights violations linked to AI technologies, it is essential to establish binding and enforceable regulations grounded in human rights principles, with strong provisions ensuring corporate transparency and accountability.
Pakistan is one of the countries most impacted by the climate crisis. How do you see the use of AI in relation to climate change? Can AI help Pakistan mitigate climate catastrophes such as floods, droughts and heatwaves?
I can’t comment much on the scientific aspect of how AI can help us predict or mitigate climate catastrophes. But what I can say is that recently, across the media, I have observed a great deal of narrative and discussion around construction efforts to lure foreign investment to build AI and Bitcoin data centres, presented as helping Pakistan reach its true potential in tech innovation.
Such framing has a lot of flaws, requires genuine soul searching by the authorities, and shows complete disregard for human rights. It is questionable whether the authorities have conducted the much-needed due diligence and human rights impact assessments. Time and again it has been raised internationally that the construction of data centres can lead to disastrous consequences for the environment and the climate. Such expansions are always sold under the guise of job creation and of expanding efforts towards Pakistan’s digital transformation. However, little is known about how this “transformation” will be achieved while keeping human rights at the heart of what the authorities aim to pursue. These techno-solutionist narratives are extremely dangerous for governance, people, and their rights.