What goes on inside the black box where data is processed to arrive at a decision is always shrouded in mystery
By GHP Raju
Hyderabad: Scientific inventions such as the wheel, fire, the printing press, the steam engine, penicillin and the telephone have transformed our civilisation in incredible ways. Similarly, in the last two decades, the internet and digital media have ushered in extraordinary changes in almost all areas of our lives.
Artificial intelligence (AI) has now emerged as the most potent digital decision-making device with unresolved ethical dilemmas. AI uses copious amounts of data generated through the internet and digital media to take decisions based on machine learning. AI is the buzzword in government and private entrepreneurship endeavours. Some of the key areas where AI is widely employed include natural language processing (NLP), e-commerce, manufacturing and robotics, customer service, banking and finance, and healthcare. Unresolved ethical issues in these fields have been staring policymakers in the face.
AI Intervention
Three critical reasons have necessitated the adoption of AI in decision-making: enormous amounts of complex digital data, extraordinary digital computational capabilities, and the need for quick decision-making through machine learning. Though AI is being widely adopted and decision-making is becoming more mechanised, some of the core ethical issues associated with this process are seldom discussed or addressed by the regulators, government or even the judiciary.
For example, AI is extensively used in customer service and e-commerce. AI algorithms analyse customer preferences, purchase history and behaviour to provide personalised product recommendations. This may appear to be a huge advantage to the customer, but it often comes at the cost of personal privacy. Personal data such as mobile numbers, e-mail IDs, personal preferences and location is quietly collected, stored and used by various service providers to their own advantage and to maximise profits. Personal data privacy is rarely discussed in any quarter, let alone addressed through corrective interventions.
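The mechanics described above can be sketched in a few lines. The following is a minimal, hypothetical illustration of content-based recommendation, in which a retailer ranks unpurchased products by their overlap with a customer's purchase history. The customer record, catalogue and tags are all invented for illustration and do not reflect any real provider's system.

```python
# A minimal, hypothetical sketch of personalised recommendation:
# the retailer scores products by how much their tags overlap with
# a customer's purchase history. All names and data are illustrative.

from collections import Counter

# Personal data a provider typically holds on each customer.
customer = {
    "mobile": "+91-XXXXXXXXXX",  # identifiers collected quietly
    "purchase_history": ["novel", "cookbook", "novel", "atlas"],
}

catalogue = {
    "novel":     {"fiction", "paperback"},
    "cookbook":  {"food", "hardcover"},
    "thriller":  {"fiction", "paperback"},
    "atlas":     {"maps", "hardcover"},
    "biography": {"nonfiction", "paperback"},
}

def recommend(customer, catalogue, top_n=2):
    """Score unpurchased items by tag overlap with past purchases."""
    owned = set(customer["purchase_history"])
    profile = Counter()
    for item in customer["purchase_history"]:
        profile.update(catalogue[item])          # build a taste profile
    scores = {
        item: sum(profile[tag] for tag in tags)  # overlap with profile
        for item, tags in catalogue.items()
        if item not in owned
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend(customer, catalogue))
```

Note that even this toy version takes the customer's identity and complete purchase history as input; a production system would hold far more.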
Similarly, a patient’s medical history is private data collected and stored by hospitals. The patient is seldom informed of the nature of the data collected by the hospital, how long it intends to store that data and in what form, and whether adequate safeguards are in place to prevent data theft. The patient and/or his family members have the right to know about the data security measures taken by the hospital. But the lack of public awareness emboldens hospitals to avoid being accountable to patients for their health information.
Black Box Dilemma
Another serious concern about AI-enabled decision-making is how a decision is arrived at. What goes on inside the black box where data is processed to arrive at a decision is always shrouded in mystery. In some American states, AI-based predictive policing software is deployed in criminal profiling to predict an individual's crime potential. The software produced results stating that the propensity to commit crime is high among African-Americans. This is a preposterous, racially biased inference arrived at using AI.
If such software were adopted in our society, where social biases based on caste, religion and region are prevalent, the results would be as predictably biased as in the US. The problem is not in AI's decision-making capabilities but in the lack of explanation of AI's decision-making process inside the black box. Since the decision-making happens inside AI's black box, who should be held accountable for the harmful consequences of such biased decisions is an ethical dilemma confronting policymakers dealing with predictive policing software.
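The bias problem described above can be illustrated with a deliberately simple sketch. The "model" below merely learns the fraction of past arrests per group from synthetic, skewed records; the groups and numbers are invented. The point is only that a score emerging from the black box reflects historical policing patterns, not actual propensity to commit crime, and carries no explanation with it.

```python
# A minimal sketch of how a "black box" risk score inherits bias from
# its training data. Groups and records are entirely synthetic.

from collections import Counter

# Synthetic historical arrest records, skewed by over-policing of group "B".
arrest_records = ["A"] * 20 + ["B"] * 80

def train_risk_model(records):
    """'Learn' a risk score: the fraction of past arrests per group."""
    counts = Counter(records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

risk_score = train_risk_model(arrest_records)

# The model outputs a bare number with no explanation attached:
# group "B" looks four times riskier purely because it was policed
# more heavily in the training data.
print(risk_score)  # {'A': 0.2, 'B': 0.8}
```

The score looks objective, but nothing in the output reveals that it simply reproduces the skew in the records it was trained on.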
AI and Robot Cops
Care robots are developed to attend to the medical needs of the elderly. Some of them have turned rogue and either harmed or killed patients (by administering incorrect doses of medicine). Who should be held accountable for the fatal consequences of the decisions taken by them? Similarly, consider the fondly remembered Hollywood movie ‘RoboCop’, in which a half-man, half-robot starts enforcing the law but soon turns rogue and harms innocent citizens (albeit owing to malfunctioning internal circuits). While efficiency in the prevention and detection of crime has increased manifold owing to Robo Cops, the ethical dilemma of accountability remains unresolved.
AI and Robo Judges
Can we introduce Robo Judges in our criminal justice system to improve justice delivery? What if we adopt Robo Judges on an experimental basis where judicial decisions/orders are arrived at based on the evidence adduced by the prosecution, arguments by the defence counsel, case laws and other legal provisions? Robo Judges will work round the clock (subject to load shedding), reduce case pendency and dispose of the cases at breakneck speed, all the while strictly adhering to the procedure established by law.
However, one thing they would be incapable of is weighing the ethical elements of the case at hand. Robo Judges, like Robo Cops, cannot feel empathy or sympathy, or consider the humane elements of judicial matters. This is a huge limitation of AI-enabled decision-making machines.
AI and the Unemployed
Another ethical dilemma before policymakers across the world is the choice between ‘efficiency’ and ‘employment’. Robo Cops are highly efficient in crime prevention and detection; one Robo Cop may equal ten human cops in efficiency. But when educated youth need jobs, what should policymakers choose? If AI-enabled machines start replacing humans in jobs, it will result in extraordinary social unrest and increased crime, and spell doom for the country.
Similarly, if Care robots replace human nurses in providing care to the sick, the ethical, social and economic consequences become unacceptable to the people. They will surely rebel against such policy decisions. Emphasising this perspective, Sundar Pichai, CEO of Google, said, “We must address the ethical and moral implications of AI, as it will have a significant impact on society.”
A Necessary Evil?
Efficiency, not only in administrative decision-making but also in the delivery of good governance to the people, is undoubtedly the need of the hour. With the adoption of digital technology and AI-enabled decision-making machines, governments and private business entities will be able to serve people and customers better. AI-enabled tools are extensively used by the government in Direct Benefit Transfer (DBT) of funds and the PDS, the National Digital Health Mission, crop yield prediction, pest management, soil health assessment, weather forecasting, etc. Policymakers have effectively taken care of the ethical dilemmas while adopting AI in these areas.
Private business entities such as mobile service providers, shopping malls, private hospitals and digital platforms are extensively using AI-enabled machines and collecting enormous amounts of customers' and patients' personal data without any commitment to data security and privacy, or any safeguards against data theft, misuse and other ethical lapses.
Serious ethical concerns have been expressed by scientists regarding AI-enabled decision-making machines. Stephen Hawking, the famous astrophysicist, said, “The development of full artificial intelligence could spell the end of the human race.” And Elon Musk, the techno-industrialist, emphatically declared that “AI is a fundamental risk to the existence of human civilisation.” Policymakers in India must consider the ethical dilemmas while using AI tools for efficiency, transparency and accountability in all policy formulations.