Exploring ChatGPT's potential in ECG interpretation and outcome prediction in emergency department

Am J Emerg Med. 2024 Nov 14;88:7-11. doi: 10.1016/j.ajem.2024.11.023. Online ahead of print.

Abstract

Background: Approximately 20% of emergency department (ED) visits involve cardiovascular symptoms. While electrocardiograms (ECGs) are crucial for diagnosing serious conditions, interpretation accuracy varies among emergency physicians. Artificial intelligence (AI) tools such as ChatGPT could assist in ECG interpretation and enhance diagnostic precision.

Methods: This single-center, retrospective observational study, conducted at the ED of Merano Hospital, assessed ChatGPT's agreement with cardiologists in interpreting ECGs. The primary outcome was the level of agreement between ChatGPT's and the cardiologists' ECG interpretations. Secondary outcomes included ChatGPT's ability to identify patients at risk for Major Adverse Cardiac Events (MACE).
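The abstract does not detail the statistical procedure; as an illustration only (hypothetical labels, not the study's data), pairwise agreement between two raters' categorical labels for an ECG segment is commonly quantified with Cohen's kappa:

```python
# Illustration only: hypothetical T-wave labels, not the study's data.
# Cohen's kappa measures agreement between two raters beyond chance.
from sklearn.metrics import cohen_kappa_score

cardiologist = ["normal", "inverted", "normal", "normal", "flattened"]
chatgpt      = ["normal", "normal",   "normal", "inverted", "flattened"]

kappa = cohen_kappa_score(cardiologist, chatgpt)
print(f"Cohen's kappa: {kappa:.3f}")  # 1 = perfect agreement, 0 = chance level
```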

Results: Among the 128 patients enrolled, ChatGPT's interpretations showed good agreement with the cardiologists' on most ECG segments, except for the T wave (kappa = 0.048) and the ST segment (kappa = 0.267). Significant discrepancies arose in the assessment of critical cases, as ChatGPT classified more patients as at risk for MACE than the physicians did.
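The reported MACE discrepancy can be examined by cross-tabulating the two raters' binary risk flags; a minimal sketch, again with hypothetical flags rather than the study's data, might use McNemar's test to check whether one rater systematically flags more patients:

```python
# Illustration only: hypothetical MACE-risk flags, not the study's data.
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

chatgpt_flag   = np.array([1, 1, 1, 0, 1, 0, 1, 0])  # 1 = at risk for MACE
physician_flag = np.array([1, 0, 0, 0, 1, 0, 0, 0])

# 2x2 table: rows = physician flag (0/1), columns = ChatGPT flag (0/1)
table = np.array([
    [np.sum((physician_flag == 0) & (chatgpt_flag == 0)),
     np.sum((physician_flag == 0) & (chatgpt_flag == 1))],
    [np.sum((physician_flag == 1) & (chatgpt_flag == 0)),
     np.sum((physician_flag == 1) & (chatgpt_flag == 1))],
])
print(table)
print(mcnemar(table, exact=True))  # exact binomial test for small samples
```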

Conclusions: ChatGPT demonstrates moderate accuracy in ECG interpretation, but its current limitations, especially in assessing critical cases, restrict its clinical utility in ED settings. Future research and technological advances could improve AI's reliability, potentially positioning it as a valuable support tool for emergency physicians.

Keywords: Artificial intelligence; Clinical decision support systems; Electrocardiography; Emergency service; Major adverse cardiac events.