Background and objective: In the transformative era of artificial intelligence (AI), its integration into many fields, especially healthcare, has shown promise. The objective of this study was to analyze the performance of different versions of ChatGPT, a publicly accessible large language model (LLM), on recent European Board of Urology (EBU) In-Service Assessment questions.
Design and setting: We presented the multiple-choice questions from the official EBU test books to ChatGPT-3.5 and ChatGPT-4 for the following exams: exam 1 (2017-2018), exam 2 (2019-2020), and exam 3 (2021-2022). An exam was considered passed with ≥60% correct answers.
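As an illustration of this setup, the sketch below shows how such multiple-choice questions could be posed programmatically and scored against the 60% pass threshold. It is a minimal sketch only: the use of the OpenAI Python client, the model name, the prompt wording, and the sample question are assumptions for illustration, not the study's actual protocol, which is not described in this abstract.

```python
# Minimal sketch (not the authors' actual protocol) of posing MCQs to
# ChatGPT via the OpenAI API and scoring against a 60% pass mark.
# Question text, answer key, and prompt wording are illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

questions = [
    # (question with options A-E, correct letter) -- placeholder item
    ("Which imaging modality is first-line for suspected renal colic? "
     "A) MRI B) Non-contrast CT C) IVU D) Ultrasound E) Plain film", "B"),
]

def ask(model: str, question: str) -> str:
    """Send one MCQ and return the single answer letter the model gives."""
    resp = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system",
             "content": "Answer with the single letter of the correct option."},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content.strip()[0].upper()

def pass_rate(model: str) -> float:
    """Fraction of questions answered correctly by the given model."""
    correct = sum(ask(model, q) == key for q, key in questions)
    return correct / len(questions)

rate = pass_rate("gpt-4")
print(f"correct: {rate:.1%} -> {'PASS' if rate >= 0.60 else 'FAIL'}")
```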
Results: ChatGPT-4 provided significantly more correct answers than the prior version 3.5 in all exams (exam 1: ChatGPT-3.5 64.3% vs. ChatGPT-4 81.6%; exam 2: 64.5% vs. 80.5%; exam 3: 56% vs. 77%; p < 0.001 for each). Exam 3 was the only exam ChatGPT-3.5 did not pass. Across the different subtopics, the percentages of correct answers provided by ChatGPT-3.5 did not differ significantly. For ChatGPT-4, the percentage of correct answers in exam 3 was significantly lower in the subtopics Incontinence (exam 1: 81.6% vs. exam 3: 53.6%; p = 0.026) and Transplantation (exam 1: 77.8% vs. exam 3: 0%; p = 0.020).
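For context on how such proportion comparisons can be made, the following sketch applies a chi-square test to the exam 1 percentages. The abstract does not name the statistical test the authors used, and the number of questions per exam is not given here, so both the test choice and the exam size n are assumptions for illustration.

```python
# Minimal sketch of comparing two proportions of correct answers.
# The abstract does not state the test used; a chi-square test on a
# 2x2 contingency table is one standard choice. The exam size n is
# an assumed placeholder, not a figure from the study.
from scipy.stats import chi2_contingency

n = 200  # assumed number of questions per exam, for illustration only
p35, p4 = 0.643, 0.816  # exam 1 percentages reported in the abstract

table = [
    [round(p35 * n), n - round(p35 * n)],  # ChatGPT-3.5: correct, incorrect
    [round(p4 * n),  n - round(p4 * n)],   # ChatGPT-4:   correct, incorrect
]
chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p_value:.4f}")
```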
Conclusion: Our findings indicate that ChatGPT, especially ChatGPT-4, has the general ability to answer complex medical questions and might pass Fellow of the European Board of Urology (FEBU) exams. Nevertheless, human validation of LLM answers remains indispensable, especially for healthcare issues.
Keywords: Artificial intelligence; ChatGPT; EBU exams; Fellow of the European Board of Urology; Large language models.