As machines powered by artificial intelligence grow in technological capability, there is increasing theoretical and practical interest in artificial moral advisors (AMAs): AI systems explicitly designed to assist humans in making ethical decisions. Across four pre-registered studies (total N = 2604), we investigated how people perceive and trust artificial moral advisors compared to human advisors. Extending previous work on algorithmic aversion, we show that people exhibit a significant aversion to AMAs (vs. humans) giving moral advice, and that this aversion is particularly pronounced when advisors, human and AI alike, gave advice based on utilitarian principles. We find that participants expect AI to make utilitarian decisions, and that even when participants agreed with a decision made by an AMA, they still expected to disagree with an AMA more than with a human in the future. Our findings point to challenges in the adoption of artificial moral advisors, particularly those that draw on and endorse utilitarian principles, however normatively justifiable such principles may be.
Keywords: Algorithm aversion; Artificial intelligence; Person perception; Utilitarianism.