Background: Machine learning-driven clinical decision support tools (ML-CDST) are on the verge of being integrated into clinical settings, including Otolaryngology-Head & Neck Surgery. In this study, we investigated whether such tools may influence otolaryngologists' diagnostic judgement.
Methods: Otolaryngologists were recruited virtually across the United States for this experiment on human-AI interaction. Participants were shown 12 different video-stroboscopic exams from patients with previously diagnosed laryngopharyngeal reflux or vocal fold paresis and asked to determine the presence of disease. They were then exposed to a random diagnosis purportedly produced by an ML-CDST and given the opportunity to revise their diagnosis. The ML-CDST output was presented with no explanation, a general explanation, or a specific explanation of its logic. The impact of the ML-CDST on diagnostic judgement was assessed with McNemar's test.
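As a minimal illustrative sketch (not the authors' analysis code), McNemar's test evaluates whether the proportion of discordant paired outcomes (diagnosis changed vs. unchanged after seeing the ML-CDST output) differs from chance; the 2x2 counts and the use of the statsmodels library below are assumptions for illustration only:

# Hypothetical sketch of McNemar's test on paired diagnostic judgements,
# comparing each clinician's diagnosis before vs. after ML-CDST exposure.
# The counts are invented for illustration; they are not study data.
from statsmodels.stats.contingency_tables import mcnemar

# 2x2 table of paired outcomes:
#                        after: disease   after: no disease
# before: disease              a                 b
# before: no disease           c                 d
# Only the discordant cells (b and c) drive the test statistic.
table = [[30, 12],
         [ 4, 20]]

result = mcnemar(table, exact=True)  # exact binomial test on discordant pairs
print(f"statistic = {result.statistic}, p-value = {result.pvalue:.3f}")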
Results: Forty-five participants were recruited. When participants reported less confidence (268 observations), they were significantly (p = 0.001) more likely to change their diagnostic judgement after exposure to the ML-CDST output than when they reported more confidence (238 observations). Participants were also more likely to change their diagnostic judgement when presented with a specific explanation of the ML-CDST's logic (p = 0.048).
Conclusions: Our study suggests that otolaryngologists are susceptible to accepting ML-CDST diagnostic recommendations, especially when they are less confident. Otolaryngologists' trust in ML-CDST output increases when it is accompanied by a specific explanation of its logic.
Level of evidence: 2. Laryngoscope, 134:2799-2804, 2024.
Keywords: artificial intelligence; laryngology; machine learning.
© 2024 The American Laryngological, Rhinological and Otological Society, Inc.