Objectives: Educators need tools for the assessment of clinical reasoning that reflect the ambiguity of real-world practice and measure learners' ability to determine diagnostic likelihood. In this study, the authors describe the use of the Brier score to assess and provide feedback on the quality of probabilistic diagnostic reasoning.
Methods: The authors describe a novel assessment format called Diagnostic Forecasting (DxF), in which participants read a brief clinical case, assign a probability to each item on a differential diagnosis, order tests, and select a final diagnosis. DxF was piloted in a cohort of senior medical students. DxF evaluated students' answers with Brier scores, which compare probabilistic forecasts with case outcomes. The validity of Brier scores in DxF was assessed by comparison with students' subsequent decision-making within the DxF game environment, as well as with external criteria including medical knowledge tests and performance on clinical rotations.
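For readers unfamiliar with the metric: the Brier score is the mean squared difference between forecast probabilities and observed binary outcomes, so lower scores indicate better-calibrated forecasts. The minimal Python sketch below illustrates the standard calculation for a single case; the variable names and one-hot outcome encoding are illustrative assumptions, not the authors' DxF implementation.

def brier_score(forecasts, outcomes):
    # Mean squared difference between forecast probabilities and
    # binary outcomes (1 = confirmed diagnosis, 0 = ruled out).
    # 0.0 is a perfect forecast; higher values are worse.
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Example: a three-item differential where the second diagnosis
# proved correct at case resolution.
probs = [0.2, 0.7, 0.1]   # learner's forecast for each diagnosis
truth = [0, 1, 0]         # case outcome, one-hot encoded
print(brier_score(probs, truth))  # ~0.047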
Results: Brier scores were significantly correlated with diagnostic accuracy (95% CI -4.4 to -0.44) and with mean scores on National Board of Medical Examiners (NBME) shelf exams (95% CI -474.6 to -225.1). Brier scores did not correlate with clerkship grades or with performance on a structured clinical skills exam. Reliability, as measured by within-student correlation, was low.
Conclusions: Brier scoring showed evidence of validity as a measure of medical knowledge and a predictor of clinical decision-making. Further work must evaluate the ability of Brier scores to predict clinical and workplace-based outcomes and develop reliable approaches to measuring probabilistic reasoning.
Keywords: assessment; diagnostic reasoning; probabilistic reasoning; uncertainty.