Large language models for the mental health community: framework for translating code to care

Lancet Digit Health. 2025 Jan 7:S2589-7500(24)00255-3. doi: 10.1016/S2589-7500(24)00255-3. Online ahead of print.

Abstract

Large language models (LLMs) offer promising applications in mental health care to address gaps in treatment and research. By leveraging clinical notes and transcripts as data, LLMs could improve the diagnosis, monitoring, prevention, and treatment of mental health conditions. However, several challenges persist, including technical costs, literacy gaps, risk of bias, and inequalities in data representation. In this Viewpoint, we propose a sociocultural-technical approach to address these challenges. We highlight five key areas for development: (1) building a global clinical repository to support LLM training and testing, (2) designing ethical usage settings, (3) refining diagnostic categories, (4) integrating cultural considerations during development and deployment, and (5) promoting digital inclusivity to ensure equitable access. We emphasise the need for representative datasets, interpretable clinical decision support systems, and new roles such as digital navigators. Only through collaborative efforts across all stakeholders, unified by a sociocultural-technical framework, can we clinically deploy LLMs while ensuring equitable access and mitigating risks.

Publication types

  • Review