Methodology for Safe and Secure AI in Diabetes Management

J Diabetes Sci Technol. 2024 Dec 26:19322968241304434. doi: 10.1177/19322968241304434. Online ahead of print.

Abstract

The use of artificial intelligence (AI) in diabetes management is emerging as a promising way to improve the monitoring and personalization of therapies. However, integrating such technologies into the clinical setting poses significant challenges related to safety, security, and compliance in handling sensitive patient data, as well as potential direct consequences for patient health. This article provides guidance for developers and researchers on identifying and addressing these safety, security, and compliance challenges in AI systems for diabetes management. We emphasize the role of explainable AI (xAI) systems as the foundational strategy for ensuring security and compliance and for fostering the user trust and informed clinical decision-making that are paramount in diabetes care solutions. The article examines both the technical and regulatory dimensions essential for developing explainable applications in this field. Technically, we demonstrate how understanding the lifecycle phases of AI systems aids in constructing xAI frameworks while addressing security concerns and implementing risk mitigation strategies at each stage. From a regulatory perspective, we analyze key Governance, Risk, and Compliance (GRC) standards established by entities such as the Food and Drug Administration (FDA), which provide specific guidelines for ensuring safety, efficacy, and ethical integrity in AI-enabled diabetes care applications. By addressing these interconnected aspects, this article aims to deliver actionable insights and methodologies for developing trustworthy, compliant AI-enabled diabetes care solutions that enhance patient engagement and improve clinical outcomes.

Keywords: AI in health care; artificial intelligence; cybersecurity; data; explainable AI; health information.