Differing perspectives on artificial intelligence in mental healthcare among patients: a cross-sectional survey study

Front Digit Health. 2024 Nov 29;6:1410758. doi: 10.3389/fdgth.2024.1410758. eCollection 2024.

Abstract

Introduction: Artificial intelligence (AI) is being developed for mental healthcare, but patients' perspectives on its use are unknown. This study examined differences in attitudes toward the use of AI in mental healthcare by history of mental illness, current mental health status, demographic characteristics, and social determinants of health.

Methods: We conducted a cross-sectional survey of an online sample of 500 adults, asking about general perspectives, comfort with AI, specific concerns, explainability and transparency, responsibility and trust, and the importance of relevant bioethical constructs.

Results: Multiple vulnerable subgroups perceive potential harms related to the use of AI in mental healthcare, place importance on upholding bioethical constructs, and would blame or reduce trust in multiple parties, including mental healthcare professionals, if harm or conflicting assessments resulted from AI.

Discussion: Future research examining strategies for ethical AI implementation and supporting clinician AI literacy is critical for optimal patient and clinician interactions with AI in mental healthcare.

Keywords: artificial intelligence; bioethics aspects; machine learning; mental health; patient engagement.

Grants and funding

The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. R41MH124581 (NIH/NIMH); R41MH124581-02S1 (NIH/NIMH); R00MD015781 (NIH/NIMHD); R00NR019124 (NIH/NINR).