Editorial: Baby steps to fight global AI menace
The Indian government's proposed draft rules on AI-generated content for social media platforms are a welcome step towards addressing the growing misuse of synthetically generated information, including deepfakes
Artificial Intelligence (AI) is a double-edged sword: it is transforming our lives in myriad positive ways at a pace unprecedented in human history, but its spin-off technologies also have the potential to sow chaos and tear societies apart. Rogue applications of AI, deepfakes for instance, have become a major global concern. As nations around the world grapple with the enormity of the misuse of generative AI tools, India has taken the first step towards tackling the menace. The Ministry of Electronics and Information Technology has proposed draft rules that call for mandatory labelling of AI-generated content on social media platforms. This is a welcome move to check the growing misuse of synthetically generated information, including deepfakes. As per the draft rules, for visual content, the identifier or label should cover “at least 10% of total surface area”, while for audio content, it should cover the “initial 10% of its duration”. Social media platforms will now be required to ask users to declare whether the content they upload is “synthetically generated information”. The platforms will also have to deploy reasonable and proportionate technical measures to verify such declarations themselves, taking a more proactive approach to the problem. This effectively puts the onus on the platforms, and justifiably so. The initial reaction to the draft rules has been positive, with experts describing the move as a timely step against rampant misinformation, illegal commercialisation, and the misuse of algorithmic creativity. However, merely labelling AI-generated content is not enough.
Concerted efforts must be made to improve AI literacy and devise a regulatory system that takes a user-based approach. There is also a need for strong collaboration between state agencies and private companies. The government’s move signals recognition of a rapidly evolving digital landscape in which the distinction between the real and the artificial has grown dangerously thin. With concerns about misuse, accountability, and transparency at the forefront, the proposed policy aims to empower users to discern authentic content while balancing innovation with regulation. Both creators and hosting platforms, such as YouTube, Instagram, and X, are equally responsible for ensuring proper labelling and may lose safe harbour immunity if they fail to comply. Deepfake cases in India have surged by 550% since 2019, with losses projected at Rs 70,000 crore in 2024, underlining the growing economic and security threat posed by synthetic media. These tools enable the easy creation of politically motivated fake videos, such as doctored speeches of leaders. During the 2024 general elections, fake videos of politicians were disseminated across social media platforms, eroding public trust and posing risks to democratic integrity. Such misinformation campaigns are often designed to polarise voters or discredit opponents. As the World Economic Forum’s Global Cybersecurity Outlook 2025 emphasises, the deepfake threat represents a critical test of our ability to maintain trust in an AI-powered world.