Introduction: Optimal basal insulin titration is vital for people with type 2 diabetes to effectively reduce the risk of complications. However, a sizeable proportion of people (30-50%) remain in suboptimal glycemic control six months after initiating basal insulin. This indicates a clear need for novel titration methods that account for individual patient variability in real-world settings.
Objective: This study aims to investigate the use of real-world data and explainable machine learning in modeling fasting glucose responses to basal insulin adjustments, focusing on identifying factors influencing fasting glucose variability.
Methods: A three-step explanatory approach was used to develop models based on multiple linear regression, forward feature selection, and three-fold cross-validation. The models were built progressively: a baseline model incorporating fasting blood glucose and insulin dose adjustments; iterative models that, in turn, added biometric data, social factors, and biochemistry data; and, lastly, a comprehensive model without constraints on the feature pool.
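For illustration only, a minimal sketch of such a pipeline (multiple linear regression with greedy forward feature selection, evaluated by three-fold cross-validated RMSE) could look as follows in Python with scikit-learn. The column names, data file, and candidate features are hypothetical placeholders and do not correspond to the study's actual variables or implementation.

```python
# Sketch: forward feature selection over a linear model, scored by 3-fold CV RMSE.
# All feature/column names below are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

def cv_rmse(X, y, n_splits=3):
    """Average RMSE of a multiple linear regression over k-fold cross-validation."""
    cv = KFold(n_splits=n_splits, shuffle=True, random_state=0)
    scores = cross_val_score(LinearRegression(), X, y,
                             scoring="neg_root_mean_squared_error", cv=cv)
    return -scores.mean()

def forward_selection(df, candidates, target, base_features):
    """Greedily add the candidate feature that lowers CV RMSE, until no improvement."""
    selected = list(base_features)
    remaining = list(candidates)
    best_rmse = cv_rmse(df[selected], df[target])
    improved = True
    while improved and remaining:
        improved = False
        rmse_by_feature = {f: cv_rmse(df[selected + [f]], df[target])
                           for f in remaining}
        best_feature = min(rmse_by_feature, key=rmse_by_feature.get)
        if rmse_by_feature[best_feature] < best_rmse:
            best_rmse = rmse_by_feature[best_feature]
            selected.append(best_feature)
            remaining.remove(best_feature)
            improved = True
    return selected, best_rmse

# Hypothetical usage: baseline model (fasting glucose + dose adjustment), then an
# iterative model that may add biometric features.
# df = pd.read_csv("titration_data.csv")
# base = ["fasting_glucose_baseline", "insulin_dose_adjustment"]
# biometric = ["bmi", "age", "weight"]
# features, rmse = forward_selection(df, biometric, "fasting_glucose_response", base)
```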
Results: The baseline model yielded an average root mean squared error (RMSE) of 1.52 [95% CI: 1.33-1.71]. The iterative models resulted in an average RMSE of 1.49 [95% CI: 1.35-1.62] (biometric data), 1.47 [95% CI: 1.36-1.58] (social factors), and 1.52 [95% CI: 1.34-1.70] (biochemistry data). The comprehensive model yielded an average RMSE of 1.44 [95% CI: 1.41-1.48].
Conclusion: Developing explainable machine learning models for basal insulin titration using real-world data is feasible. However, model performance is influenced by the data's ability to capture everyday behavior, underscoring the need to incorporate more detailed behavioral and social data in future titration models.
Keywords: Basal insulin dose adjustment; Basal insulin titration; Explainable machine learning; Fasting blood glucose response; Type 2 diabetes.