How real is artificial intelligence?

With Frankenstein’s creature and Sophia, we have hit the blurred edges where we are no longer clear what is ‘artificial’

By Pramod K Nayar | Published: 14th Sep 2019

In an interview, Sophia, the humanoid robot, was asked if she experienced love and other emotions. Sophia replied: “Yes, I have emotions. I get so mad when people say I don’t have emotions. It’s so dismissive and makes me feel so frustrated”. Sophia has a range of facial expressions and in interviews can be seen frowning, smiling or looking mildly curious. Designed to be a companion to the aged, she became notorious for declaring that, given a chance, she would destroy humans, although it remains unclear why she thinks she should, since humans are doing a pretty good job of it themselves.

Why is Sophia’s statement of her emotions important? It is important because it points to a vast, and shifting, terrain of ethics in the age of advanced technologies and human-grade machines. Think of the following, to begin with. Would artificial intelligence (AI) robots akin to humans in several ways, from rational thought to experiencing a range of emotions, require – and demand – citizenship and the accompanying rights? An Aadhaar? Bank accounts, passports and the variety of IDs that legitimise any one of us? Why ever not? The ethics of AI, embodied in Sophia, is a massive slippery slope, and some of the world’s best philosophers, cognitive scientists and biomedical ethicists are sparring over the subject. But this question of the rights of human-created ‘artificial life’ does not begin with Sophia.

Shelley’s ‘Monster’

In 1818, Mary Shelley’s ‘monster’ (unlike Sophia, this one is unnamed) asked for his rights in Frankenstein: “Oh, Frankenstein, be not equitable to every other and trample upon me alone, to whom thy justice, and even thy clemency and affection, is most due. Remember that I am thy creature; I ought to be thy Adam”. He demands, as can be seen, the same moral consideration and justice as ‘every other’ living creature, thus equating himself with all other life forms, which have the right to life.

Eric Schwitzgebel and Mara Garza, in their fascinating essay ‘A Defense of the Rights of Artificial Intelligences’ in Midwest Studies in Philosophy (2015), provoke a cluster of questions not just around the ethics of AI but also around the ethics of the lifeforms humans invent. (We do not yet have answers.) As they put it:

If you are going to regard one type of entity as deserving greater moral consideration than another, you ought to be able to point to a relevant difference between those entities that justifies that differential treatment. Inability to provide such a justification opens one up to suspicions of chauvinism or bias.

By this reasoning, Frankenstein’s creature and Sophia occupy a continuum with us: they are entitled to the same moral consideration as every other human so long as no relevant difference between them and other humans can be shown. That is, if one cannot point to a relevant difference between humans and Sophia or Frankenstein’s creature, then one cannot justify differential treatment. Where would this relevant difference, if any, lie?

Relevant Difference

Does the relevant difference lie in the psychological realm, with its cognitive properties (mathematical reasoning, for instance) and conscious properties (the ability to experience pain when injured), or in the social realm (social relationships and dynamics)? With Frankenstein’s creature and Sophia, we have hit the blurred edges where we can no longer be clear about what we mean by ‘artificial’. Are genetically engineered life forms ‘artificial’?

Would a human whose intellectual and physiological functions, such as IQ, insulin levels and emotional stability, are controlled or managed by computer simulation, or one whose vital organs are kept in functional order by such devices, be ‘artificial’? Does the incorporation of a prosthetic device or a machinic, lab-manufactured component into a human, who is indisputably entitled to rights, alter the status of this human into a machine undeserving of rights? Where does the human end and her/his machinic component begin?

If sentience is a marker of the human deserving of rights, then, as philosophers such as Alphonso Lingis have shown, there are many humans with irreparable brain dysfunction whose sentience is far lower than that of animals. Would such humans have ‘human’ rights? If Sophia and Frankenstein’s creature can experience emotions, from love to anger, what is it that makes them different?

Psychopaths feel no remorse – would that make them undeserving of human rights? Those with severe brain trauma, or with progressive deterioration of the brain due to Alzheimer’s or lesions, are entirely new ‘persons’, as Catherine Malabou argues in The New Wounded: From Neurosis to Brain Damage. But are they also to be deemed automatically worthy of rights even if, as she notes, these ‘new’ persons no longer show the same emotional make-up after their injury as they did before?

Now, suppose such a new ‘person’, oblivious of risks, takes certain decisions (in investments, social relationships and the like) that affect several other people. Would, or can, we hold them responsible? As a corollary, if an AI system fails at its task, or by a complex and opaque process ends up rejecting more bank loan applications from people of a particular race, who should be held responsible? (This is a question raised by the transhumanist philosopher Nick Bostrom and Eliezer Yudkowsky in their 2014 essay on the ethics of AI.)

‘Natural’ Bodies

But let us step away from such ‘damaged’ or ‘different’ humans. Can we say that we users of smartphones, social robots and advanced ICTs do not have our self-conceptions significantly altered through our interactions with these devices, processes and bots? Further, can we deny that our social interactions and even perceptions of reality are mediated heavily by the technologies we use?

In Luciano Floridi’s collection, The Onlife Manifesto: Being Human in a Hyperconnected Era (2015), the authors point to four specific conditions and contexts of life today: the blurring of the distinction between reality and virtuality; the blurring of the distinctions between human, machine and nature; the reversal from information scarcity to information abundance; and the shift from the primacy of entities to the primacy of interactions. In such a context, how do we claim that we are distinct from the devices, processes and mediations we use and are embedded in? If we are enmeshed in the technologies we use for all aspects of our lives, then would we be able to declare that our lives and our bodies are ‘natural’?

The debates rage on. It remains for us to ask Sophia for some answers. Given what we do to each other and to the earth, our home, every day, we can only hope that artificial intelligence turns out to be better than the ‘real’ intelligence in humans.

(The author is Professor in the Department of English, University of Hyderabad)