When humans worry that machines pose a risk to the integrity of human lives, we ignore the fact that there is a risk in humans living longer too
By Pramod K Nayar
As Artificial Beings become popular — and acceptable — one question that commentators raise is: are we humans then responsible for the beings we create, even if these are robots? The question was first raised by the creature in Mary Shelley’s Frankenstein, who called upon his creator, the scientist Victor Frankenstein, to acknowledge that he, the ‘monster’, was a creation of the human and, therefore, was entitled to be treated as human progeny. The debate Shelley initiated has not been settled.
Being, becoming
The distinction we make between humans and nonhumans rests partly on the assumption that the technological creature is distinct from us. But humans are also becoming something else, evolving as our ancestors never did. From the moment humans adopted the first tools, humanity has co-evolved with technology.
Human biology changed over long evolutionary periods so that it could better survive. As the cognitive philosopher Andrew Clark observes: “We humans have always been adept at dovetailing our minds and skills to the shape of our current tools and aids. But when those tools and aids start dovetailing back — when our technologies actively, automatically, and continually tailor themselves to us just as we do to them — then the line between tool and user becomes flimsy indeed.”
In other words, we became the tools we have employed. Our interactions with technology, whether simple items like glasses for better vision or prostheses, shape us as much as we shape them to our needs. This modification of our biology, our physiology, worries detractors who fear that we are ceding control to machines. But does this modification turn the organic into the mechanical or the digital, or hasn’t the organic always evolved with the nonhuman other? Human finitude has always sought improvement in life through the use of tools: to grasp, write and run, faster and better. The AI-operated prosthesis or the robot is one more instantiation of this process.
But it is not just enhancement that demands technology. When humans worry that machines pose a risk to the integrity of human lives, we ignore the fact that there is a risk in humans living longer too. A decade ago, the UK House of Lords report titled ‘Ready for Ageing?’ opened with the blunt statement: “The UK population is ageing rapidly, but we have concluded that the Government and our society are woefully underprepared. Longer lives can be a great benefit, but there has been a collective failure to address the implications and without urgent action this great boon could turn into a series of miserable crises.”
The ‘miserable crises’ are the lack of medical services, the problems of assisted living, the rising numbers of what gerontologists call the ‘deeply forgetful’, among others.
Consequently, more technological assists are being employed to ensure the old can lead nominally dignified lives. Humans are thus becoming something else too, as more and more prostheses and implants are employed to make older lives liveable. Some of these appurtenances will eventually be robots that make human lives worth living.
Care for AIs?
Should we then extend as much care towards Artificial Beings as towards humans? Scholars working in critical plant studies and critical animal studies note that care relations are embedded in social contexts, which determine the nature of the relations.
After an experiment in which a group of people were entrusted with taking care of plants over a period of time, the computer scientists Patricia Ciobanu and Oskar Juhlin identified three ways in which the social context determined the care relation: the plant could be treated as a human utility, as a human proxy, or as human. When the plant was deemed a utility, the participants in the experiment paid ‘attention to energy production in general’. ‘When the participants perceived the plant as a proxy for a human, the caring intention [was] directed towards someone else’. When the participants saw the plant as having human features, ‘this could also be seen as a form of empathy, an attempt to understand these plant others, or a projection of one’s beliefs and values’.
What the experiment indicated was that the nature and quality of care hinged on the perception and acceptance of what the plant is. We could extend this argument to posthuman lifeforms to say that the projection of the Artificial Being, the perceptions about the being, and the public discourse that presents the being in particular ways, will determine whether we see the being as a utility, as a proxy or as human. This is where both fervent endorsements of the Artificial Being as human and dismissals of it as mere machine will determine attitude and policy.
Imagining Artificial Beings
Literary texts such as Ian McEwan’s Machines Like Us and Kazuo Ishiguro’s Never Let Me Go and Klara and the Sun are influential in how they shape public imagination around such beings. These texts, like popular films that instil the fear of the machine (whether Transcendence or A.I. or Lucy), determine our perceptions so that we learn to care for them or stay frightened of them. In such texts, the Artificial Beings are created to resemble humans, complete with sentience and argumentative logic.
The texts say: if the Artificial Being is produced by humans so as to mimic human feelings, cater to human needs and serve as mirrors to ourselves, as Ishiguro’s Klara clearly is, then rather than see them as Artificial Beings, perhaps we need to see them as ‘allo-human persons’. The Cultural Studies scholar Aleksandra Łukaszewicz Alcaraz defines these as: “persons … acting in continuity with human persons, performing (to a certain degree of accuracy) as human persons … allo-human person … embraces different hybrid beings created in natural-cultural processes, that are building up communication, morality, ethics, and society, and who are like — but not identical to — humans.”
In McEwan’s novel, when Miranda calls the Artificial Being, Adam, a ‘machine’, Charlie responds: “‘Listen,’ I said. ‘If he looks and sounds and behaves like a person, then as far as I’m concerned, that’s what he is.’”
Adam expects to be cared for, like a human, by Charlie. The problem is, as Alan Turing, a fictional recreation of the scientist, puts it to Charlie: we created these beings.
“I think the A[dams]-and-E[ves] were ill equipped to understand human decision-making, the way our principles are warped in the force field of our emotions, our peculiar biases, our self-delusion and all the other well-charted defects of our cognition. Soon, these Adams and Eves were in despair. They couldn’t understand us, because we couldn’t understand ourselves. Their learning programs couldn’t accommodate us. If we didn’t know our own minds, how could we design theirs and expect them to be happy alongside us?”
Turing is gesturing at human limitations and prejudices that produce Artificial Beings, who will then be subject to the biases of their creator.
Justice and care for the nonhuman demand not technological solutions but an alteration in our perceptions. As the critic Mads Rosendahl Thomsen puts it: “it is worth questioning how perfect human morality is and how our aesthetic sensibility can be improved. Posthumanist fiction suggests that this will not necessarily be by technological means but rather through a change in mindset and behavior toward others”.
It is within the pages of such texts, which rehumanise the Artificial Being, that we discover the need to rethink how we treat the Other, whether the religious minority, the plant, the animal or the sentient robot.