Background: Clinical machine learning (ML) technologies can be biased, and their use could exacerbate health disparities. The extent to which bias is present, the groups that most frequently experience bias, and the mechanisms through which bias is introduced into clinical ML applications are not well described. The objective of this study was to examine instances of bias in clinical ML models, identifying the sociodemographic subgroups (using the PROGRESS-Plus framework) that experienced bias and the reported mechanisms of bias introduction.
Methods: We searched MEDLINE, EMBASE, PsycINFO, and Web of Science for all studies that evaluated bias on sociodemographic factors within ML algorithms created for the purpose of facilitating clinical care. The scoping review was conducted according to the JBI guide and reported using the PRISMA extension for scoping reviews.
Results: We identified 6448 articles, of which 760 reported on a clinical ML model and 91 (12.0%) completed a bias evaluation and met all inclusion criteria. Most studies evaluated a single sociodemographic factor (n=56, 61.5%). The most frequently evaluated sociodemographic factor was race (n=59, 64.8%), followed by sex/gender (n=41, 45.1%) and age (n=24, 26.4%), with one study (1.1%) evaluating intersectional factors. Of all studies, 74.7% (n=68) reported that bias was present, 18.7% (n=17) reported that bias was not present, and 6.6% (n=6) did not state whether bias was present. Among studies in which bias was present, 87% reported bias against groups experiencing socioeconomic disadvantage.
Conclusion: Most ML algorithms that were evaluated for bias demonstrated bias with respect to sociodemographic factors. Furthermore, most bias evaluations concentrated on race, sex/gender, and age, while other sociodemographic factors and their intersections were infrequently assessed. Given the potential health equity implications, bias assessments should be completed for all clinical ML models.