From the days of the good old paper ballot, elections have come to feature the widespread influence of technology, especially artificial intelligence (AI). This is not to be seen as a technological advancement like the electronic voting machine, but as a major threat scenario in which ‘bad actors’ use AI deceptively. India’s just-concluded elections, said to be the world’s largest democratic exercise, were not immune to the disruptive use of AI, according to OpenAI, the creator of ChatGPT. In its latest threat intelligence report, ‘AI and Covert Influence Operations: Latest Trends’, OpenAI said it “acted within 24 hours to disrupt deceptive uses of AI in covert operations focused on the Indian elections”. The operations were run by STOIC, a political campaign management firm in Israel, which allegedly generated content on the Indian elections alongside the Gaza conflict. This covert influence operation (IO), OpenAI claimed, sought to use AI models for a range of tasks, such as generating short comments and longer articles in several languages, making up names and bios for social media accounts, conducting open-source research, debugging simple code, and translating and proofreading texts. OpenAI nicknamed the operation Zero Zeno, after the founder of the Stoic school of philosophy. The people behind Zero Zeno used OpenAI’s models to generate articles and comments that were then posted across multiple platforms, notably Instagram, Facebook and X. The content focused on a wide range of issues, including Russia’s invasion of Ukraine, the conflict in Gaza, the Indian elections, politics in Europe and the US, and criticism of the Chinese government by Chinese dissidents and foreign governments. These operations used AI only to a degree, with AI-generated material posted alongside traditional formats such as manually written texts or memes copied from the internet.
According to the report, STOIC began targeting audiences in India with English-language content in early May. Apart from criticising the UN relief agency in Palestine and portraying Qatar’s investments in the US as a threat to the American way of life, STOIC also generated comments focused on India, criticising the ruling BJP and praising the Congress. While OpenAI says it stopped STOIC from causing widespread damage, the fact remains that the combination of social media and AI, with forces waiting to misuse them alongside deepfakes, is emerging as a major threat to a free and fair election process and, in turn, to a healthy democracy. In a country like India, where social media platforms such as WhatsApp and X have long been misused to push political narratives, spread hate and disseminate fake news, the deceptive use of AI must be viewed with concern and tackled with urgency. With political parties now fielding a new tribe called social media warriors, some operating ‘IT cells’ to target political foes and running ‘WhatsApp universities’, what OpenAI has revealed could be just the tip of the proverbial iceberg inching dangerously closer to India’s democratic system. The OpenAI report also leaves one big question, perhaps the most crucial one, unanswered. Was STOIC roped in to push a preplanned narrative in India? If so, by whom?