A pair of researchers concluded in an analysis that Artificial Intelligence engineers should enlist ideas from a broad range of social science disciplines in order to reduce the potential harm of their creations and to better serve society as a whole.
“There is mounting evidence that AI can exacerbate inequality, perpetuate discrimination, and inflict harm. We need to include the broadest possible notion of social science, one that includes disciplines that have developed methods for grappling with the vastness of the social world and that help us understand how and why AI harms emerge as part of a large, complex, and emergent techno-social system,” wrote Mona Sloane, a research fellow at New York University’s Institute for Public Knowledge, and Emanuel Moss, a doctoral candidate at the City University of New York.
The authors outline ways in which social science approaches, and their many qualitative methods, can broadly enhance the value of AI while also avoiding documented pitfalls. Studies have shown that search engines may discriminate against women of colour, while many analysts have raised questions about how self-driving cars will make socially acceptable decisions in crash situations (for example, avoiding humans rather than fire hydrants).
Sloane, also an adjunct faculty member at NYU’s Tandon School of Engineering, and Moss acknowledge that AI engineers are currently seeking to instil “value-alignment” (the idea that machines should act in accordance with human values) in their creations, but added that “it is exceptionally difficult to define and encode something as fluid and contextual as ‘human values’ into a machine”.
To address this shortcoming, the authors offer a blueprint for including the social sciences in AI through a series of recommendations. For instance, qualitative social research can help in understanding the categories through which we make sense of social life and which are being used in AI, according to the study, published in the journal Nature Machine Intelligence.
“For example, technologists are not trained to understand how racial categories in machine learning are reproduced as a social construct that has real-life effects on the organisation and stratification of society. But these questions are discussed in depth in the social sciences, which can help create the socio-historical backdrop against which the… history of ascribing categories like ‘race’ can be made explicit,” Sloane and Moss observed.
A qualitative data-collection approach can establish protocols to help diminish bias. “Data always reflects the biases and interests of those doing the collecting. Qualitative research is explicit about the data collection, whereas quantitative research practices in AI are not,” the authors noted.
Qualitative research typically requires researchers to reflect on how their interventions affect the world in which they make their observations. “A quantitative approach does not require the researcher or AI designer to locate themselves in the social world. Therefore, it does not require an assessment of who is included in vital AI design decisions, and who is not,” they wrote.