We live in a world where information spreads at lightning speed, but the truth is increasingly fragile. Artificial intelligence, once seen as a force for good, has been weaponized to undermine trust and sow discord. The digital sphere, initially envisioned as a global utopia of shared knowledge, has become a battleground. Here, deepfakes and AI-generated fabrications don’t just bend reality; they reshape it, pixel by pixel, to manipulate emotions and fracture societies.
This is not a hypothetical future.
On March 17, 2025, a video titled “Parachinar Massacre Exposed” exploded on TikTok, shared by the account Parachinar News. The clip showed a seemingly distraught soldier admitting to state-sponsored violence. Within hours, it garnered millions of views, triggering outrage and protests. However, anomalies soon emerged: the soldier’s beard flickered, his tears lacked physical consistency, and his voice had an unnatural cadence. Forensic analysis confirmed the video was an AI fabrication; the soldier was a digital construct.
The creators of the video employed a range of sophisticated tactics. They used deepfake synthesis to clone facial expressions and a voice from unrelated footage, paired with expertly crafted dialogue designed to exploit existing grievances. In a further layer of deception, real footage of an injured constable was interwoven with the fabricated material to enhance the video’s credibility.
When visual and auditory evidence can no longer be trusted, the very foundation of society crumbles.
The fallout was swift and severe as conspiracy theories spread rapidly. Even after the debunking, many citizens continued to believe the video contained “some truth.” This incident highlighted AI’s unique danger: it doesn’t just distort facts; it replaces reality itself.
The credibility of institutions, the media, and even eyewitness accounts is undermined. As one social media user put it, “If a crying soldier can be fake, what can we believe?”
AI-driven disinformation thrives on division. The fabricated soldier video exploited existing ethnic tensions in Pakistan, with groups like the Pashtun Tahafuz Movement (PTM) and Baloch separatist movements amplifying it to fuel anti-state sentiment. The result was a nation grappling with doubt, not just about a single video, but about its own shared history.
Hostile actors now possess a potent, low-cost weapon: AI. By flooding social media with synthetic content, they can destabilize democracies, influence elections, and incite violence, all without deploying military force.
Groups such as the PTM and Baloch separatists are using AI to reshape historical narratives, generating emotionally charged content (fake mass graves, manipulated speeches) to radicalize supporters and attract international attention. Their tactics follow a pattern: content is tailored to exploit platform algorithms, which prioritize inflammatory clips based on engagement; debunked material is reposted under new accounts to evade detection; and real grievances are combined with fabricated evidence to legitimize extremism.
Governments must invest in AI-detection systems, such as digital watermarks for synthetic media. Social media companies should be mandated to deploy deepfake detectors and label AI-generated content. A media literacy curriculum is essential to teach students to identify inconsistencies, such as unnatural shadows and mismatched audio. Public awareness campaigns, leveraging influencers and targeted advertising, can extend that education to the broader public.
Moreover, the creation of malicious deepfakes should be criminalized, with political ads mandated to disclose the use of AI. International cooperation through treaties is crucial to track and counter disinformation networks. Tech companies, governments, and civil society must collaborate, establishing AI ethics charters (voluntary agreements aimed at limiting the misuse of generative AI tools) and forming fact-checking coalitions that unite journalists and platforms.
Democracy depends on a shared understanding of reality. If AI can manufacture “truth,” then consent becomes manipulation, and accountability becomes impossible.
The writer is a freelance columnist.