Performance Evaluation of GPT-4o and o1-Preview Using the Certification Examination for the Japanese 'Operations Chief of Radiography With X-rays'

Cureus. 2024 Nov 22;16(11):e74262. doi: 10.7759/cureus.74262. eCollection 2024 Nov.

Abstract

Purpose The purpose of this study was to assess the ability of large language models (LLMs) to comprehend the safety management, protection methods, and proper handling of X-rays under the relevant laws and regulations. We evaluated the performance of GPT-4o (OpenAI, San Francisco, CA, USA) and o1-preview (OpenAI) using questions from the 'Operations Chief of Radiography With X-rays' certification examination in Japan.

Methods GPT-4o and o1-preview answered questions from this Japanese certification examination for 'Operations Chief of Radiography With X-rays'. Four sets of exams published from April 2023 to October 2024 were used. The accuracy of each model was evaluated across four subjects: knowledge about the control of X-rays, relevant laws and regulations, knowledge about the measurement of X-rays, and knowledge about the effects of X-rays on organisms. The results of the two models were compared with graphical questions excluded, because o1-preview cannot interpret images.

Results The overall accuracy rates of GPT-4o and o1-preview ranged from 57.5% to 70.0% and from 71.1% to 86.5%, respectively. GPT-4o achieved passing accuracy rates in all subjects except relevant laws and regulations. In contrast, o1-preview met the passing criteria across all four exam sets, even with graphical questions excluded from scoring. The accuracy of o1-preview was significantly higher than that of GPT-4o for all questions (p = 0.03) and for relevant laws and regulations (p = 0.03). No significant differences in accuracy were found in the other subjects.

Conclusions In the Japanese 'Operations Chief of Radiography With X-rays' certification examination, GPT-4o performed competently in all subjects except relevant laws and regulations, while o1-preview performed well across all subjects. With graphical questions excluded from scoring, o1-preview outperformed GPT-4o on all questions and on relevant laws and regulations.

Keywords: artificial intelligence (ai); gpt-4o; large language model; o1-preview; x-ray safety management and protection.