In this age of hyperrealism, we must defend truth not only with facts, but also with institutions, vigilance and, above all, imagination: the imagination to picture a democracy where authenticity still matters
By Viiveck Verma
In the long battle between truth and falsehood, the latest and most formidable weapon is no longer misinformation but hyperrealism. Deepfakes, AI-generated images, videos, and audio that mimic real people with uncanny precision, are fast becoming the frontline threat to democratic integrity. They don’t just bend the truth. They manufacture it, pixel by pixel.
Unlike traditional forms of political propaganda, which rely on half-truths and selective editing, deepfakes fabricate entire realities. A political leader declaring war, a judge accepting bribes, a journalist confessing to bias: none of it need actually happen. All it takes is a few hours of training an AI model on publicly available content, and the result can be indistinguishable from reality. In a hyperconnected world where perception often trumps fact, that is a dangerous prospect.
Believe Nothing
The danger is not hypothetical. In the 2024 election cycle, India, one of the most populous democracies and among the world’s most active social media markets, already saw political deepfakes in circulation. A clip of a prominent leader appearing to make inflammatory comments circulated widely before being debunked. It was convincing enough to go viral, damaging enough to dominate news cycles. The fact that it was fabricated barely mattered by the time the truth caught up.
In a society where WhatsApp forwards often replace newspapers, the viral often beats the verified. But the problem with deepfakes is not merely one of detection; it is one of erosion. Erosion of trust, erosion of credibility, erosion of the very idea that truth is knowable. Once the public becomes aware that anything can be faked, the result is a kind of epistemic nihilism. People stop believing not only in what’s false but in what’s real. This is the true crisis: not that we will believe lies, but that we will believe nothing at all.
Legal Apparatus
This is not a problem democracies are equipped to handle easily. Regulation is slow, deliberative, and territorial. Deepfakes are fast, global, and decentralised. A video created in one country can swing an election in another before fact-checkers have even had their morning coffee. Laws around digital impersonation remain patchy.
In India, Section 66D of the Information Technology Act addresses impersonation using electronic means, but it wasn’t crafted with AI-generated avatars in mind. The upcoming Digital India Act may offer sharper tools, but the legal apparatus still lags far behind the technology it seeks to govern. Some argue for technical countermeasures: digital watermarks, blockchain authentication, and AI-detection tools.
These are important but not foolproof. Detection models often struggle to keep up with newer-generation deepfakes, which are rapidly improving in resolution, motion fidelity, and audio matching. Even if a fake is detected, the platform it was shared on may be too sluggish — or too indifferent — to act decisively.
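For technically minded readers, the authentication idea is worth making concrete. Provenance schemes do not try to spot a fake after the fact; they let a publisher cryptographically bind a tag to a piece of media at the source, so that any later alteration is detectable. What follows is a minimal sketch in Python, assuming a hypothetical shared secret between publisher and verifier; real provenance standards such as C2PA instead embed public-key signatures in file metadata.

```python
import hashlib
import hmac

# Hypothetical shared secret between a publisher and a verifier.
# Real provenance schemes (e.g. C2PA) use public-key signatures
# carried in file metadata rather than a shared secret.
SECRET_KEY = b"publisher-signing-key"

def sign_media(media_bytes: bytes) -> str:
    """Produce an authentication tag bound to this exact media content."""
    return hmac.new(SECRET_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, tag: str) -> bool:
    """Check that the media has not been altered since it was signed."""
    expected = sign_media(media_bytes)
    return hmac.compare_digest(expected, tag)

# Example: any edit to the media, however small, invalidates the tag.
original = b"frame data of the original video"
tag = sign_media(original)
print(verify_media(original, tag))                 # True
print(verify_media(original + b" tampered", tag))  # False
```

The design choice is instructive: rather than asking “is this fake?”, a moving target that detection models chase endlessly, verification asks “has this been altered since a trusted source signed it?”, a question cryptography can answer reliably.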
Others call for platform responsibility, urging tech giants to label AI-generated content, limit its virality, or provide contextual overlays. But these solutions run into complex ethical territory. Who decides what is labelled real or fake? Should Meta or X be the arbiters of political reality? In a world of contested narratives and deep political polarisation, such moves risk being seen as censorship rather than protection.
Still others look to media literacy as a long-term antidote: equip citizens with the tools to question what they see and hear, and build a culture of critical consumption. In principle, this is the most democratic solution. In practice, it is a race against time and attention spans. In India, where digital literacy remains uneven, especially in rural areas, the challenge is formidable.
Philosophical Dilemma
At the core of this debate is a philosophical dilemma: what happens to democracy when its foundational assumption, that people can make informed choices, no longer holds? The idea of the informed citizen rests on the availability of verifiable information. If technology can fabricate voices, faces, speeches, and events, and if those fabrications spread faster than corrections, then democracy begins to function in a hall of mirrors. What we’re confronting is not just a technological disruption but a crisis of perception.
In previous eras, propaganda distorted the truth. In this one, deepfakes obliterate the boundary between real and artificial. And yet, the solution cannot be to halt technological progress. AI has legitimate and even exciting applications in art, education, and accessibility. The question is not how to stop the technology, but how to reassert human agency in the face of it.
This requires a coalition of responses: legal, technical, educational, and ethical. Governments must draft sharper laws, and do so with urgency, not after a democratic disaster. Tech companies must be pressured, not politely asked, to take responsibility for the tools they unleash.
Educators and civil society must push for new literacy programmes that teach people to read digital content with a forensic eye. And finally, we as citizens must confront our own complicity: our willingness to share the sensational before the verified, to trust what flatters our biases rather than what challenges them.
Deepfakes are not the death of truth, but they are its most cunning adversary yet. In this age of hyperrealism, we must defend truth not only with facts, but also with institutions, vigilance and, above all, imagination: the imagination to picture a democracy where authenticity still matters, even when everything can be faked.
(The author is founder and CEO, Upsurge Global, co-founder, Global Carbon Warriors and Adjunct Professor, EThames College)