India’s call for a global framework for ethical artificial intelligence (AI) tools is a timely and welcome initiative, as the time is ripe for a wider international debate on regulating emerging sectors that have a profound impact on societies. With New Delhi gearing up to host the G20 summit a few days from now, Prime Minister Narendra Modi has pitched for an international consensus on how to formulate regulations. This came close on the heels of technology giant Microsoft proposing a five-point blueprint for governing AI in India, with policy ideas that include deploying safety frameworks, developing legal frameworks, promoting transparency and forming public-private partnerships. The company also emphasises the significance of international cooperation and multilateral frameworks for the effective governance of AI, a formulation in sync with India’s thinking.

AI, along with machine learning, is on the cusp of transforming our lives at a rate never seen before. While it has gradually become ingrained in the daily lives of millions through tools like ChatGPT and Google Bard, the rapid development of AI has also stoked apprehensions, prompting many experts and industry bodies to call for regulation to ensure the responsible use and deployment of AI technologies. In July, the Telecom Regulatory Authority of India (TRAI), in a consultation paper, proposed a domestic statutory authority to regulate AI through the lens of a “risk-based framework”, while also calling for collaboration with international agencies and the governments of other countries to form a global agency for the responsible use of AI.
India is expected to present before the upcoming G20 summit a strong case for a global agency with regulatory oversight of ethical AI use cases. Back home, the Centre is looking to draw a clear distinction between different types of online intermediaries, including AI-based platforms, and to issue specific regulations for each of these intermediaries in the Digital India Bill. TRAI’s recommendation on forming an international body for responsible AI is broadly in line with the approach enunciated by Sam Altman, the co-founder and chief executive of OpenAI (the company behind ChatGPT), who has called for an international regulatory body for AI.

The concerns being flagged over AI use fall under three broad heads: privacy, systemic bias and the violation of intellectual property rights. The global response to these concerns has varied. The European Union has taken a tougher stance, proposing a new AI Act that classifies artificial intelligence applications by use case, broadly on the basis of their degree of invasiveness and risk, while the UK has adopted a light-touch approach that aims to foster innovation. The US approach falls somewhere in between, with Washington setting the stage for an AI regulation rulebook by kicking off public consultations earlier this year on how to regulate artificial intelligence tools.