Effective implementation of artificial intelligence in behavioral healthcare delivery depends on overcoming challenges that are especially pronounced in this domain. Self- and social stigma contribute to under-reporting of symptoms, and under-coding compounds the problem of case ascertainment. Health disparities contribute to algorithmic bias. The lack of reliable biological and clinical markers hinders model development, and limited model explainability undermines user trust. In this perspective, we describe these challenges and offer design and implementation recommendations for overcoming them in intelligent systems for behavioral and mental health.
Keywords: artificial intelligence; behavioral health; ethics; health disparities; algorithms; mental health; precision medicine; predictive modeling.