This narrative review examines the role of clinical prediction models in supporting informed decision-making in critical care, emphasizing their two forms: traditional scores and artificial intelligence (AI)-based models. Acknowledging that both types can embed biases, the authors underscore the importance of critical appraisal for building trust in these models. They outline recommendations, illustrated with critical care examples, for managing risk of bias in AI models, and advocate enhanced interdisciplinary training for clinicians, who are encouraged to explore various resources (books, journals, news Web sites, and social media) and events (Datathons) to deepen their understanding of risk of bias.
Keywords: AI; Artificial intelligence; Bias; Machine learning; Prediction models.