To support better collaboration among radiologists, interpretation workload should be evaluated in a way that accounts for differences in the difficulty of interpreting each case. However, evaluating difficulty objectively is challenging. This study proposes a multimodal classifier that combines structured and textual data to predict interpretation difficulty from order information and patient data, without using images. The classifier achieved a specificity of 0.9 and an accuracy of 0.7.
Keywords: Deep learning; classification; diagnosis; difficulty; multimodal.
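
As a rough illustration of the kind of model the abstract describes, the following is a minimal sketch, assuming a PyTorch implementation: a binary classifier that fuses a branch for structured order/patient features with a branch for a pre-vectorized text representation. The feature names, dimensions, and fusion-by-concatenation design are illustrative assumptions, not the paper's reported architecture.

```python
# Minimal sketch (assumption, not the paper's implementation): a multimodal
# binary classifier that fuses structured order/patient features with a
# vectorized text input. Dimensions and fusion design are illustrative.
import torch
import torch.nn as nn


class DifficultyClassifier(nn.Module):
    def __init__(self, n_structured: int, n_text: int, hidden: int = 64):
        super().__init__()
        # Branch for structured inputs (e.g., modality, body part, patient age).
        self.structured = nn.Sequential(nn.Linear(n_structured, hidden), nn.ReLU())
        # Branch for a pre-vectorized text input (e.g., TF-IDF of the order text).
        self.text = nn.Sequential(nn.Linear(n_text, hidden), nn.ReLU())
        # Fusion head: concatenate both branches, predict difficult vs. not difficult.
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x_structured: torch.Tensor, x_text: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.structured(x_structured), self.text(x_text)], dim=1)
        return self.head(fused)  # raw logit; apply sigmoid for a probability


if __name__ == "__main__":
    model = DifficultyClassifier(n_structured=10, n_text=300)
    logits = model(torch.randn(4, 10), torch.randn(4, 300))
    probs = torch.sigmoid(logits)  # predicted probability that a case is difficult
    print(probs.shape)  # torch.Size([4, 1])
```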