Ethical Application of Generative Artificial Intelligence in Medicine

Arthroscopy. 2024 Dec 15:S0749-8063(24)01048-X. doi: 10.1016/j.arthro.2024.12.011. Online ahead of print.

Abstract

Generative artificial intelligence (AI) may revolutionize health care, providing solutions that range from enhancing diagnostic accuracy to personalizing treatment plans. However, its rapid and largely unregulated integration into medicine raises ethical concerns related to data integrity, patient safety, and appropriate oversight. One of the primary ethical challenges lies in generative AI's potential to produce misleading or fabricated information, posing risks of misdiagnosis or inappropriate treatment recommendations and underscoring the necessity for robust physician oversight. Transparency also remains a critical concern, as the closed-source nature of many large language models prevents both patients and health care providers from understanding the reasoning behind AI-generated outputs, potentially eroding trust. The lack of regulatory approval for AI as a medical device, combined with concerns around the security of patient-derived data and AI-generated synthetic data, further complicates its safe integration into clinical workflows. Furthermore, synthetic datasets generated by AI, although valuable for augmenting research in areas with scarce data, complicate questions of data ownership, patient consent, and scientific validity. In addition, generative AI's ability to streamline administrative tasks risks depersonalizing care, further distancing providers from patients. These challenges compound deeper issues plaguing the health care system, including the emphasis on volume and speed over value and expertise. The use of generative AI in medicine enables the mass scaling of synthetic information, necessitating careful adoption to protect patient care and medical advancement. Given these considerations, generative AI applications warrant regulatory and critical scrutiny.
Key starting points include establishing strict standards for data security and transparency, implementing oversight akin to institutional review boards to govern data usage, and developing interdisciplinary guidelines that involve developers, clinicians, and ethicists. By addressing these concerns, we can better align generative AI adoption with the core foundations of humanistic health care, preserving patient safety, autonomy, and trust while harnessing AI's transformative potential. LEVEL OF EVIDENCE: Level V: expert opinion.