Aim: This study aims to compare the rate of modification of trainee preliminary reports and trainee and consultant satisfaction with the feedback process before and after implementation of an automated report comparison tool.
Materials and methods: An automated report comparison tool utilising natural language processing, presenting the trainee's preliminary report beside the final consultant report with changes highlighted, was evaluated in a prospective interventional study. Modification rates and character counts of co-authored computed tomography (CT) reports were recorded over two 6-month periods, before and after tool implementation, and compared using Student's t-test. Trainees and consultants were surveyed before and after the interventional period regarding time spent on feedback and satisfaction with the feedback process.
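The abstract does not describe the tool at the code level; the following is a minimal, illustrative sketch, assuming Python and the standard-library difflib module, of how a preliminary report can be compared against the final consultant report, with changed spans marked and the number of modified characters counted. The function name compare_reports and the bracket-style highlighting are hypothetical and are not the authors' implementation.

```python
# Illustrative sketch only; not the authors' tool.
# Assumes Python 3 and the standard-library difflib module.
import difflib


def compare_reports(preliminary: str, final: str) -> dict:
    """Mark up consultant changes and count modified characters."""
    matcher = difflib.SequenceMatcher(a=preliminary, b=final)
    highlighted = []   # final report with changed spans bracketed
    changed_chars = 0  # characters inserted, deleted or replaced by the consultant
    for op, a1, a2, b1, b2 in matcher.get_opcodes():
        if op == "equal":
            highlighted.append(final[b1:b2])
        else:  # "replace", "insert" or "delete"
            changed_chars += max(a2 - a1, b2 - b1)
            highlighted.append(f"[{final[b1:b2]}]")
    return {
        "modified": changed_chars > 0,
        "changed_chars": changed_chars,
        "percent_of_preliminary": 100 * changed_chars / max(len(preliminary), 1),
        "highlighted_final": "".join(highlighted),
    }


prelim = "No acute intracranial haemorrhage. No mass effect."
final = "No acute intracranial haemorrhage or mass effect. Chronic small-vessel ischaemic change."
print(compare_reports(prelim, final))
```

Aggregating the per-report "modified" flag and character counts in this way would yield the modification rates and character-count changes reported below.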
Results: In total, 3851 (81.7%) of 4715 reports were modified in the baseline preimplementation phase, compared with 5215 (69.6%) of 7489 reports during the postimplementation phase (p < .001). The average character count change was 132 preimplementation, corresponding to 9.0% of the original preliminary report, compared with 91 characters (7.1%) postimplementation (p < .001). This statistically significant difference generally held regardless of the level of trainee experience. Prospective data collected in the preimplementation period revealed that for more than two-thirds of after-hours shifts, trainees spent fewer than 5 minutes receiving feedback on their after-hours work. At the conclusion of the implementation phase, 92.3% of trainees and 70% of consultants agreed that the report comparison tool improved feedback.
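As a quick arithmetic check of the headline figures (not the study's own analysis, which the authors report as Student's t-test), the quoted counts reproduce the quoted percentages, and a conventional two-proportion comparison on the 2x2 table of modified versus unmodified reports also gives p well below .001. The snippet below assumes scipy is available.

```python
# Arithmetic check of the quoted percentages; not the study's own analysis
# (the study reports Student's t-test). Requires scipy.
from scipy.stats import chi2_contingency

pre_modified, pre_total = 3851, 4715     # preimplementation phase
post_modified, post_total = 5215, 7489   # postimplementation phase

print(f"pre:  {100 * pre_modified / pre_total:.1f}% modified")    # ~81.7%
print(f"post: {100 * post_modified / post_total:.1f}% modified")  # ~69.6%

# 2x2 table: modified vs unmodified, before vs after implementation
table = [
    [pre_modified, pre_total - pre_modified],
    [post_modified, post_total - post_modified],
]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square = {chi2:.1f}, p = {p:.2e}")  # p << .001
```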
Conclusion: Following the implementation of an automated report comparison tool, there was a reduction in trainee report modification rates and a subjective improvement in trainee feedback. This adjunct to existing feedback mechanisms presents a relatively simple intervention to facilitate efficient case review and feedback.