Comprehending Why Medical AI Is Perceived As A Concern Amongst The Public

Cyber Alliance Series: “Resistance to Medical Artificial Intelligence”


Earlier this month, the Hariri Institute for Computing had the privilege of hosting Chiara Longoni at Boston University's School of Law as part of its Cyber Alliance Series, where she spoke about her research on "Resistance to Medical Artificial Intelligence."

Artificial intelligence (AI) is revolutionizing healthcare: medical AI can perform with expert-level accuracy and deliver cost-effective healthcare at scale. IBM's Watson diagnoses heart disease better than cardiologists. Chatbots dispense medical advice for the United Kingdom's National Health Service in lieu of nurses. Smartphone apps detect skin cancer with expert accuracy, and algorithms identify eye diseases just as well as specialized physicians. Nevertheless, people resist replacing human healthcare providers with AI.

Across several studies involving real and hypothetical choices, and a host of medical domains spanning prevention, diagnosis, and treatment, we find that people are less likely to use AI providers than human providers. People are willing to pay more for a human than for an equally good AI provider, and they prefer human providers even when the humans are objectively worse. The reason for this robust resistance to medical AI is not the (erroneous) belief that AI provides inferior care. Nor is it that people think AI is more costly, less convenient, or less informative. Rather, the underlying mechanism is a belief we term "uniqueness neglect": a concern that AI will not be able to deal with a person's idiosyncratic characteristics and circumstances. People view themselves as unique and different from others, and this extends to their health. Because people view medical care delivered by AI as standardized, they believe AI is well suited for other people but inadequate to account for their own unique circumstances. It is the mismatch between these two beliefs — that a person is unique and that AI treats everyone the same way — that leads people to resist AI medical providers.

Bio: Chiara Longoni is an Assistant Professor of Marketing at Boston University’s Questrom School of Business. Chiara’s research explores the social impact of artificial intelligence, technology, and disruptive innovations, as well as a number of antecedents to consumer and societal well-being. Substantively, she specializes in issues related to medical decision making and sustainability in consumer and firm behavior. Chiara’s research has been published in top academic journals such as the Journal of Consumer Research, Journal of Marketing Research, and Journal of Experimental Social Psychology.

Prior to joining Boston University, Chiara completed a Ph.D. in marketing at New York University's Stern School of Business. She completed her B.S. and M.S. (summa cum laude) at Bocconi University, and her M.A. (Honors) in Psychology and M.Phil. in Marketing both at New York University. Prior to joining academia, Chiara worked in brand management for SC Johnson and Kraft Foods. Chiara is a PowerPilates-certified Pilates instructor.