Saeeda Usmani
The most consequential battles of the 21st century will not be fought with drones, hypersonic missiles, or cyberattacks. They will be waged inside the human mind. As artificial intelligence evolves, it is becoming a weapon not merely of physical destruction but of psychological domination: a tool to hack emotions, exploit cognitive biases, and rewrite the narratives that shape human behaviour. This shift marks a new age of global power struggle, in which controlling perceptions confers more advantage than controlling territory.
Traditional warfare seeks to break an enemy’s body; AI-driven psychological warfare seeks to break their sense of reality. Unlike the crude, obvious propaganda of the 20th century, with its simplistic messages, AI enables hyper-personalised manipulation of the human mindset. It harvests vast amounts of data to identify individual fears, desires, and ideological leanings, then generates narratives crafted to feel intimately true. A teenager in Pakistan might see a post framing gender activism, specifically feminism, as a Western plot, while a European might find on her “for you” page a video warning that immigrants will erase her culture and take her job. Their worldviews are fragmented by algorithms they can neither see nor comprehend. “Seeing is believing” takes priority over researching the truth.
This represents a profound shift in global power dynamics. For millennia, humans have struggled to dominate land, resources, or institutions. Now the battlefield is consciousness itself, a realm at once familiar and unknowable. States and corporations armed with AI can bypass traditional institutions, such as governments, schools, and the media, and instil ideas directly into the minds of the young.
The danger lies not only in what people are made to believe but in how they come to believe it. AI can exploit the brain’s vulnerabilities, triggering emotional responses almost instantly. When fear, anger, tribalism, or ethnic division overtakes logic, societies fragment into little bubbles of “truth”, and the shared reality underpinning democracy, cooperation, and human rights is crushed into rubble.
This erosion of shared reality has geopolitical consequences. Imagine a future in which AI systems, trained on decades of cultural data, simulate entire populations to test propaganda strategies. In a Matrix-like scenario, suppose a foreign power deploys AI-generated influencers, bearing an uncanny resemblance to real humans, to destabilise a rival nation: issuing statements that inflame existing ethnic tensions, undermining trust in elections, promoting unrealistic standards, redefining needs, or simply steering the teenage population toward illicit activities. Unlike nuclear weapons, these tools leave no radioactive trace; there is no trial and no repercussion.
The result is a world where conflict is constant and borders and traditional defences are meaningless. Every individual is a potential combatant in a war they do not know exists, a threat forever lingering over their heads.
Yet the gravest threat may be existential. Humans are a storytelling species: shared narratives are the evolutionary adaptation that binds communities together, and AI can weaponise that adaptation. By exploiting the stories that unite communities, religions, national myths, shared histories, even popular consensus, it could tear the fabric of human civilisation. A society that cannot agree on basic facts cannot tackle climate change, pandemics, or inequality. It becomes a collection of paranoid factions, each convinced the others are deluded.
To survive this paradigm, humanity must confront two urgent questions. First, how do we regulate AI’s role in shaping perceptions without infringing the right to free speech? Outright bans on algorithms or data collection are impractical and would forfeit much of what AI can offer civilisation, but transparency could help resolve this dilemma, for instance through laws requiring AI-generated content to carry digital “watermarks”.
Second, and more critically, we must redefine human agency in the age of cognitive warfare.
Education systems must prioritise critical thinking over memorisation. Media literacy programmes and grassroots movements to rebuild communal trust must not be treated as optional; they are defences against mental colonisation. Just as vaccines train the immune system to recognise pathogens, embedding these habits can strengthen our cognitive defences against manipulation. Pakistan, as a developing state, urgently needs policy reforms that treat this as a national emergency.
Ultimately, the challenge is not technological but philosophical. The crucial question is this: what makes a thought authentically “human” when AI can already mimic creativity, empathy, and intuition? The day is not far when it will be impossible to distinguish human from machine. If we outsource the curation of reality to machines, we may become a species that no longer understands its own emotions. The future of conflict will not be won by those with the smartest algorithms, but by those who remember that the human mind is not a machine to be programmed, but a universe to be understood.
The writer is a Researcher, China-Pakistan Study Centre, Institute of Strategic Studies Islamabad