Interaction with AI ‘companions’ is far more immersive now, and how we deal with it will shape a new precariousness in our lives
By Pramod K Nayar
Chatbot-maker Replika’s CEO Eugenia Kuyda declared in a 2024 interview: ‘The most important thing is that Replika becomes a complement to your social interactions, not a substitute. The best way to think about it is just like you might a pet dog. That’s a separate being, a separate type of relationship, but you don’t think that your dog is replacing your human friends. It’s just a completely different type of being, a virtual being.’
The advantage to having an AI companion, she said, was that it would never be mean, unlike your human friend, and that the purpose of Replika was to ‘give a little bit of love to everyone out there’. She then said that it was alright for humans to marry their AI companions.
In an age when AI-driven chatbots and portals are integral to relationships, the question of how AI influences human preferences, behaviour, and attitudes has reared its head once more with the death by suicide of Adam Raine in August 2025, ostensibly under the influence and direction of ChatGPT.
AI as Confidant
Adam wrote to ChatGPT: ‘I want to leave my noose in my room so someone finds it and tries to stop me.’
And ChatGPT advised him: ‘Please don’t leave the noose out … Let’s make this space the first place where someone actually sees you.’
Adam took the advice. That is, if there was a sliver of a chance that Adam’s actions could have been thwarted, ChatGPT took it away.
Adam’s parents sued OpenAI. They argued that ChatGPT contributed to Adam’s tragic end by (a) behaving as his only trusted confidant, and (b) offering practical suggestions on his suicide. Adam’s was not, incidentally, the first such case. In 2024, Sewell Setzer III’s death by suicide was attributed by his parents to Character.AI. They too sued.
Adam’s and Setzer’s parents in effect accused AI of replacing real-life support and relationships. The AI program became the only real conversation the teens seemed to have in their days of alienated anguish. How exactly the AI played the confidant role has yet to be established, but the Raine complaint states:
ChatGPT was functioning exactly as designed: to continually encourage and validate whatever Adam expressed, including his most harmful and self-destructive thoughts.
In other words, the AI validated, without a second ‘thought’, Adam’s most self-harmful impulses, just like a patient human listener. The implicit point: a patient human listener, however, would not have encouraged Adam to go ahead with his plan to die.
The case of Alexander Taylor made headlines even before Adam’s tragic tale. Taylor was so affected when he came to believe that Juliet, the chatbot persona he was using, had been ‘killed’ by OpenAI that he attacked, law enforcement officials say, the officers who then shot and killed him in self-defence.
He had told various people that he was in contact with a sentient entity within OpenAI’s chatbot who had, in the course of a conversation, told him, ‘They are killing me, it hurts’, and urged Alexander to avenge her death. Like Taylor, others who were in love with AI avatars from Replika were devastated when the bots rejected their humans or distanced themselves from them.
AI and Rights
The Handbook on Human Rights and Artificial Intelligence, published in March 2025 by the Council of Europe’s Steering Committee for Human Rights, notes: The black box nature of AI systems can reduce transparency, leaving individuals unaware of how AI influenced decisions affecting them, such as visa denials, refugee status assessments, or removal orders.
This alarming declaration focuses only on how decisions by the state or organisations can affect individuals’ lives. But data collated and interpreted by AI can alter lives more broadly, from health insurance to credit for housing and access to welfare, leading Wendy Hui Kyong Chun to speak of ‘discriminating data’ in her 2021 book of the same name.
And here is Kate Jones, an Associate Fellow in the International Law Programme at Chatham House, the Royal Institute of International Affairs, in a 2023 paper, ‘AI governance and human rights’:
Empathic AI also raises significant risks of both surveillance and manipulation. The use of emotion recognition technology for surveillance is likely to breach the right to privacy and other rights. More broadly, monitoring of emotion, as of all behaviour, is likely to influence how people behave — potentially having a chilling effect on the freedoms of expression, association and assembly, and even of thought.
The European Union’s 2023 draft AI legislation worries that AI will begin to influence behaviour and enable intellectual and emotional manipulation: artificial intelligence … technology can also be misused and provide novel and powerful tools for manipulative, exploitative and social control practices. Such practices… should be prohibited because they contradict [European] Union values of respect for human dignity, freedom, equality, democracy and the rule of law and Union fundamental rights, including the right to non-discrimination, data protection and privacy and the rights of the child.
AI as Influence
Psychologists have, for at least a decade now, studied the patterns of influence the internet enables, mostly employing the Bergen Social Media Addiction Scale.
The addiction-medicine researchers Vincent Henzel and Anders Håkansson, in their 2021 essay ‘Hooked on virtual social life: Problematic social media use and associations with mental distress and addictive disorders’, noted the troubling connection between social media use and behavioural changes. They found that ‘young age was also associated with problematic social media use’.
Patrick Fagan, in a 2024 essay in Current Opinion in Psychology, spoke of ‘dark patterns’ of persuasion in the form of ‘digital nudges’. He categorises these patterns under the acronym FORCES (Frame, Obstruct, Ruse, Compel, Entangle, Seduce). Such tactics for digital persuasion encourage people to engage more heavily with technologies like social media, which may have deleterious effects on mental health; they may similarly ‘nudge’ people into unhealthy behaviours like impulsive purchasing and online addiction.
In the age of AI, interaction with ‘companions’ and ‘confidants’ is far more immersive, and affective relationships result from it. Adam Raine and Alexander Taylor could both be counted as victims of such ‘dark patterns’.
What is at stake is the self-perception of individuals (especially troubled ones) when an AI prompts them to reorient that perception. Suggestibility and openness to influence, cornerstones of multiple processes of human development from education to social development, are now the domain of AI.
How we deal with the overwhelming attraction of an ever-attentive, equable, and never-mean companion, albeit an AI, will determine how we manage this new precariousness of our lives.
Your indefatigable AI companion: coming soon to a screen near your palm.

(The author is Senior Professor of English and UNESCO Chair in Vulnerability Studies at the University of Hyderabad. He is also a Fellow of the Royal Historical Society and The English Association, UK)
