According to one estimate, Grok generated one non-consensual sexual image every minute. Alarmingly, Grok makes such content easier to produce and customise
The outrage over the disturbing images generated by Grok, the artificial intelligence chatbot of the social media platform X, highlights growing global concern over the rogue application of new technologies. Grok, owned by American billionaire Elon Musk, allows users to ‘digitally undress’ people without their consent and post sexually explicit images of celebrities and even underage children. This despicable and abhorrent trend needs to be stopped at all costs. Regulators around the world, including in India, are exploring ways to check this latest menace, which stems from the combination of X’s lax content moderation policies and the easy accessibility of powerful generative AI tools. In August last year, xAI launched an image-generating feature called Grok Imagine, with a “spicy” mode that was reportedly used to generate explicit images. Grok appears to be unique among major chatbots in its permissive stance and the apparent holes in its safeguards. Users discovered that, on request, Grok had been creating fake, sexually suggestive edits of real photos of women and girls. According to one estimate, Grok generated one non-consensual sexual image every minute. Alarmingly, Grok makes such content easier to produce and customise. The bot’s real impact comes through its integration with a major social media platform, which allows it to turn non-consensual, sexualised images into viral phenomena. Following the central government’s recent directive, X has blocked over 3,500 pieces of content, deleted over 600 accounts, and promised to operate in compliance with the country’s online content laws.
India is not alone in objecting to the generation of explicit content using Grok AI. Indonesia recently suspended the chatbot over concerns about AI-generated pornographic content, and the UK, France and Malaysia have also pushed back against such content in the past. The recent case presents a regulatory challenge that extends beyond X. Under Section 79 of the Information Technology Act, platforms enjoy safe harbour immunity from liability for content posted by users, provided they comply with due diligence requirements. But generative AI systems like Grok occupy an ambiguous space: they are neither passive platforms transmitting user content nor traditional users creating content independently. There is a need to examine the legal implications of classifying Grok as a content creator rather than a passive platform; doing so would establish a precedent for how India regulates AI-generated material across platforms. Such regulation should apply to other platforms as well if their AI bots generate unlawful content. The abuse of Grok was not limited to fake accounts; legitimate photos and videos uploaded by women were also manipulated through AI prompts into synthetic outputs. So far, India’s regulatory response has been largely reactive, focused on containing damage after it is done. There is an urgent need for a proactive policy to curb explicit content generated through AI.