There is no consensus among nations on the right approach to taming the genie unleashed by Artificial Intelligence (AI). The flip side of the technology, especially its potential for misuse and for spreading misinformation, is already a cause for global concern. However, there is no magic wand to make dangerous spin-offs of the technology, such as deepfakes, disappear instantly. Morphing tools can be used to commit crimes, harm reputations, influence polls, and undermine trust in democratic institutions. Every nation must work out its own solutions, keeping its needs and ecosystem in view, while avoiding overreach by state authorities and the invasion of privacy. The Ministry of Electronics and Information Technology has now released the AI Governance Guidelines, which prioritise a people-centric approach so that the technology can catalyse inclusive growth. The guidelines were drafted by a high-level committee chaired by Prof Balaraman Ravindran of IIT-Madras. They are intended to guide policymakers, researchers, and industry, and to build better national and international cooperation for safe, responsible, and inclusive AI adoption. The key takeaway is that the government is not in a hurry to introduce prohibitive regulation at this stage. Instead, it wants innovation to guide the industry, laying stress on accelerating AI development. The new framework seeks to promote innovation with guardrails, without throttling AI adoption. Given global trends, this is the right and balanced approach.
The guidelines envisage the establishment of an AI Governance Group, supported by a Technology and Policy Expert Committee and an AI Safety Institute. The Group is meant to be a small, permanent, and effective inter-agency body responsible for overall policy development and coordination on AI governance, an oversight framework aimed at ensuring accountability and reducing risk. India's approach contrasts with that of the European Union, which has adopted a binding AI Act that categorises systems by risk level. The US, on the other hand, has largely left the rules to market forces. India's framework, by comparison, seeks a middle path, promoting AI as a driver of inclusion and competitiveness while relying on adaptive governance rather than rigid regulation. This is a distinctive approach. Countries like India, where AI tools have transformative potential, must take a balanced approach that ensures the free flow of ideas and innovation while effectively addressing risks at the application level. The committee examined India's needs and ecosystem and built the framework from the ground up. Its current assessment is that many of the risks emerging from AI can be addressed through existing laws: the use of deepfakes to impersonate individuals can be dealt with under provisions of the Information Technology Act and the Bharatiya Nyaya Sanhita, while the use of personal data without user consent to train AI models is governed by the Digital Personal Data Protection Act.