This opinion piece was originally published on Dawn News.
IT’S earnings season. Once again, the markets are moving to the rhythm set by the ‘Magnificent Seven’ dominating the Nasdaq. Their numbers are in, and the mood on Wall Street is electric. Much of that excitement, let’s be honest, is about just one thing: AI.
What started as a buzzword a few years ago has turned into a full-blown economic engine. Investors are no longer just buying the hype; they are buying the results. Nvidia has posted a staggering year-over-year revenue increase of nearly 70 per cent, underscoring the explosive demand for its AI chips. Meta is investing heavily in AI for its core ad functionality, boosting engagement, margins and investor confidence through more efficient monetisation. Microsoft, Amazon and Google are all re-architecting their business models around AI, and the market is rewarding them with soaring valuations.
The logic is simple: the future has arrived early, and investors have already priced in the gains, undermining the ‘bubble theory’, which holds that AI-based company valuations are inflated without real substance and could burst, much like they did in the dotcom crash of the early 2000s.
But the earnings of the ‘Mag 7’ tell another story. They signal that AI has moved beyond being the next big thing to being the thing. And we are only beginning to tap its full potential.
With the US gradually easing chip export restrictions to China, a new phase of AI acceleration is taking shape. Automation is giving way to autonomy, and cars, drones and infrastructure systems are beginning to operate as intelligent agents, capable of learning, adapting and coordinating with other machines in real time.
The next AI wave will reinvent entire sectors. Generative design in manufacturing. AI-discovered drugs. Language models embedded in judicial, health and financial systems. Smart cities built not around traffic lights but predictive analytics. If this feels like science fiction, you haven’t been paying attention. We are moving fast, past the age of data, deep into the age of autonomous decisions.
Now here is the uncomfortable part: are we, as a country, ready for any of this? The answer is complicated.
On one hand, the newly released National AI Policy signals intent: it outlines large-scale commitments, from training a million professionals in AI and related technologies to integrating AI into key areas of governance, healthcare, education and agriculture. The policy envisions the use of AI to streamline civic services, enhance public sector efficiency and enable data-driven decision-making at scale. It also proposes the establishment of oversight bodies to ensure ethical deployment, data privacy and algorithmic accountability: an attempt to build a governance framework around a rapidly evolving technology.
In essence, the document offers a blueprint for a future where AI moves beyond being an abstract innovation reserved for elites or those who can afford it, and becomes a foundational part of national infrastructure and a key driver of state capacity.
On the other hand, several critical gaps risk undermining the policy’s promise. First, while the vision is bold, the execution framework is vague. Ambitious targets, like training a million people or deploying national-scale civic AI projects, lack operational detail, funding clarity and realistic timelines. Without institutional capacity-building, these goals may remain rhetorical.
Second, the policy assumes that ministries, boards and provincial departments will be able to digitise, standardise and share data rapidly. However, it does not fully address existing challenges related to fragmented systems, limited interoperability and bureaucratic hurdles that may impede effective implementation.
Third, it doesn’t take into account geopolitical constraints. With ongoing chip export controls and rising global competition over compute, Pakistan’s lack of sovereign AI infrastructure, whether in silicon, data or foundational models, poses a major strategic vulnerability. Without plans to build resilience, the country risks dependence without capability.
And perhaps the most important question is this: if a policy repeatedly invokes ethics and responsible use, but does not clarify in concrete, actionable terms how AI systems will safeguard fundamental rights such as privacy, freedom of expression and protection from discrimination, can such governance truly protect the most marginalised? The question is sharpest where regulation operates in legally ambiguous digital spaces, and where explicit safeguards against algorithmic bias, data misuse and non-transparent decision-making remain absent or vague.
And if not, should it not be anchored much more firmly and explicitly in constitutional and human rights principles? Food for thought.