BACKGROUND. Although radiology reports are commonly used for lung cancer staging, this task can be challenging given radiologists' variable reporting styles as well as reports' potentially ambiguous and/or incomplete staging-related information.

OBJECTIVE. The purpose of this study was to compare the performance of ChatGPT large language models (LLMs) and human readers of varying experience in lung cancer staging using chest CT and FDG PET/CT free-text reports.

METHODS. This retrospective study included 700 patients (mean age, 73.8 ± 29.5 [SD] years; 509 men, 191 women) from four institutions in Korea who underwent chest CT or FDG PET/CT for initial staging of non-small cell lung cancer from January 2020 to December 2023. Reports were in a free-text format and were written either exclusively in English or in mixed English and Korean. Two thoracic radiologists in consensus determined the overall stage group (IA, IB, IIA, IIB, IIIA, IIIB, IIIC, IVA, or IVB) for each report using the 8th-edition AJCC Cancer Staging Manual to establish the reference standard. Three ChatGPT models (GPT-4o, GPT-4, GPT-3.5) determined an overall stage group for each report using a script-based application programming interface, zero-shot learning, and a prompt incorporating a staging system summary. The code for this web application was made publicly available through a GitHub repository (https://github.com/elmidion/GPT_Information_Extractor). Six human readers (two fellowship-trained radiologists with less experience than the radiologists who determined the reference standard, two fellows, and two residents) also independently determined overall stage groups. GPT-4o's overall accuracy for determining the correct stage among the nine groups was compared with that of the other LLMs and human readers using McNemar tests.

RESULTS. GPT-4o had an overall staging accuracy of 74.1%, significantly better than the accuracy of GPT-4 (70.1%, p = .02), GPT-3.5 (57.4%, p < .001), and resident 2 (65.7%, p < .001); significantly worse than the accuracy of fellowship-trained radiologist 1 (82.3%, p < .001) and fellowship-trained radiologist 2 (85.4%, p < .001); and not significantly different from the accuracy of fellow 1 (77.7%, p = .09), fellow 2 (75.6%, p = .53), and resident 1 (72.3%, p = .42).

CONCLUSION. The best-performing model, GPT-4o, showed no significant difference in staging accuracy versus fellows but showed significantly worse performance versus fellowship-trained radiologists. The findings do not support use of LLMs for lung cancer staging in place of expert health care professionals.

CLINICAL IMPACT. The findings indicate the importance of domain expertise for performing complex specialized tasks such as cancer staging.
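The METHODS describe zero-shot staging of each free-text report through a script-based application programming interface with a prompt that incorporates a staging system summary. The following is a minimal illustrative sketch of that approach using the OpenAI Python SDK; it is not the authors' published code (which is available in the GitHub repository cited above), and the prompt wording, the STAGING_SUMMARY placeholder, and the function name are assumptions made for illustration.

```python
# Minimal illustrative sketch of zero-shot stage-group extraction via the OpenAI API.
# NOT the authors' published code; prompt text and helper names are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

STAGE_GROUPS = ["IA", "IB", "IIA", "IIB", "IIIA", "IIIB", "IIIC", "IVA", "IVB"]

# Placeholder for a condensed 8th-edition AJCC overall stage group summary.
STAGING_SUMMARY = "<summary of 8th-edition AJCC lung cancer overall stage groups>"

def stage_report(report_text: str, model: str = "gpt-4o") -> str:
    """Ask the model for a single overall stage group for one free-text report."""
    prompt = (
        "You are assisting with non-small cell lung cancer staging.\n"
        f"Staging system summary:\n{STAGING_SUMMARY}\n\n"
        "Based only on the following radiology report, answer with exactly one "
        f"overall stage group from this list: {', '.join(STAGE_GROUPS)}.\n\n"
        f"Report:\n{report_text}"
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output is preferable for evaluation
    )
    answer = response.choices[0].message.content.strip()
    # Keep only a recognized stage group; otherwise flag the output for review.
    return answer if answer in STAGE_GROUPS else f"UNPARSED: {answer}"
```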
Keywords: free-text report; generative pretrained transformer; large language model; lung cancer; natural language processing.
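The accuracy comparisons reported in RESULTS were made with McNemar tests on paired per-report outcomes (correct vs incorrect stage group) for GPT-4o and each comparator. The sketch below shows such a paired comparison with statsmodels; the arrays are placeholder data, not study data, and the variable names are assumptions.

```python
# Illustrative sketch of a McNemar test on paired per-report correctness,
# as used to compare GPT-4o's staging accuracy with that of another reader.
# The arrays below are placeholders, not study data.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# 1 = stage group correct, 0 = incorrect, aligned per report (assumed example data).
gpt4o_correct = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 1])
reader_correct = np.array([1, 0, 0, 1, 1, 1, 0, 0, 1, 1])

# 2x2 table of paired agreement/disagreement between the two raters.
table = np.array([
    [np.sum((gpt4o_correct == 1) & (reader_correct == 1)),
     np.sum((gpt4o_correct == 1) & (reader_correct == 0))],
    [np.sum((gpt4o_correct == 0) & (reader_correct == 1)),
     np.sum((gpt4o_correct == 0) & (reader_correct == 0))],
])

result = mcnemar(table, exact=True)  # exact binomial test on the discordant pairs
print(f"McNemar statistic = {result.statistic:.0f}, p = {result.pvalue:.3f}")
```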