The Indian government has proposed amendments to the IT Rules, 2021, to address the growing threat of AI-generated deepfakes and synthetic media. The draft mandates clear labelling, traceability, and metadata embedding for such content, with stricter obligations for large social media platforms.
New Delhi: The government on Wednesday proposed changes to IT rules, mandating the clear labelling of AI-generated content and increasing the accountability of large platforms like Facebook and YouTube for verifying and flagging synthetic information to curb user harm from deepfakes and misinformation.
The IT Ministry noted that deepfake audio, videos and synthetic media going viral on social platforms have demonstrated the potential of generative AI to create “convincing falsehoods”, where such content can be “weaponised” to spread misinformation, damage reputations, manipulate or influence elections, or commit financial fraud.
The proposed amendments to IT rules provide a clear legal basis for labelling, traceability, and accountability related to synthetically generated information.
Apart from clearly defining synthetically generated information, the draft amendment mandates labelling, visibility, and metadata embedding for synthetically generated or modified information to distinguish such content from authentic media. Comments from stakeholders on the draft have been sought by November 6, 2025.
The stricter rules would increase the accountability of significant social media intermediaries (those with 50 lakh or more registered users) in verifying and flagging synthetic information through reasonable and appropriate technical measures.
The draft rules require platforms to label AI-generated content with prominent markers and identifiers covering at least 10 per cent of the visual display, or the initial 10 per cent of an audio clip's duration.
The draft also requires significant social media platforms to obtain a user declaration on whether uploaded information is synthetically generated, to deploy reasonable and proportionate technical measures to verify such declarations, and to ensure that AI-generated information is clearly labelled or accompanied by a notice indicating as much.
The draft rules further prohibit intermediaries from modifying, suppressing, or removing such labels or identifiers.
“In Parliament as well as many forums, there have been demands that something be done about deepfakes, which are harming society…people using some prominent person’s image, which then affects their personal lives, and privacy…Steps we have taken aim to ensure that users get to know whether something is synthetic or real. It is important that users know what they are seeing,” IT Minister Ashwini Vaishnaw said, adding that mandatory labelling and visibility will enable clear distinctions between synthetic and authentic content.
Once the rules are finalised, any compliance failure could mean loss of the safe harbour protection enjoyed by large platforms.
With the increasing availability of generative AI tools and the resulting proliferation of synthetically generated information (deepfakes), the potential for misuse of such technologies to cause user harm, spread misinformation, manipulate elections, or impersonate individuals has grown significantly, the IT Ministry said.
Accordingly, the IT Ministry has prepared draft amendments to the IT Rules, 2021, with an aim to strengthen due diligence obligations for intermediaries, particularly significant social media intermediaries (SSMIs), as well as for platforms that enable the creation or modification of synthetically generated content.
The draft introduces a new clause defining synthetically generated content as information that is artificially or algorithmically created, generated, modified or altered using a computer resource in a manner that appears reasonably authentic or true.
A note by the IT Ministry said that globally, and in India, policymakers are increasingly concerned about fabricated or synthetic images, videos, and audio clips (deepfakes) that are indistinguishable from real content and are being blatantly used to produce non-consensual intimate or obscene imagery, mislead the public with fabricated political or news content, and commit fraud or impersonation for financial gain.
The latest move assumes significance as India is among the top markets for global social media platforms, such as Facebook, WhatsApp and others.
A senior Meta official said last year that India has become the largest market for Meta AI usage. In August this year, OpenAI CEO Sam Altman said that India, currently the company's second-largest market, could soon become its largest globally.
Asked whether the amended rules would also apply to content generated on OpenAI's Sora or Gemini, sources said that while many videos are generated but never circulated, the obligation is triggered when a video is posted for dissemination. In such cases, the onus would fall on the intermediaries displaying the media to the public and on the users hosting the media on the platforms.
On the treatment of AI content on messaging platforms like WhatsApp, sources said that once such content is brought to a platform's notice, it will have to take steps to prevent its virality.
India has witnessed an alarming rise in AI-generated deepfakes, prompting court interventions. Recent viral cases include misleading ads depicting a fake arrest of Sadhguru, which the Delhi High Court ordered US digital giant Google to remove.
Earlier this month, Aishwarya Rai Bachchan and Abhishek Bachchan filed a lawsuit against YouTube and Google seeking Rs 4 crore in damages over alleged AI deepfake videos.