Acute aortic dissection (AD) is a critical condition with high mortality that is frequently misdiagnosed because its symptoms overlap with those of other conditions. This study explores the diagnostic utility of ChatGPT 4.0, a large language model developed by OpenAI, in identifying acute AD from patient presentations and general physical examination findings documented in published case reports. A systematic search of the PubMed database was conducted using the search term "acute aortic dissection," with filters applied for articles published within the past year and categorized as case reports. The search yielded 163 results, from which ten case reports were randomly selected. The primary symptoms and physical examination details from each case were entered into ChatGPT 4.0, which was prompted to generate three differential diagnoses and one main provisional diagnosis for each presentation. Across the ten cases, patient ages ranged from 29 to 82 years, the gender distribution was equal (five males, five females), and hypertension was the most prevalent baseline comorbidity. ChatGPT 4.0 included acute AD among its top three differential diagnoses in all ten cases and identified it as the provisional diagnosis in five. In conclusion, although ChatGPT 4.0 shows potential in suggesting acute AD as a differential diagnosis from clinical data, its role should be considered supportive rather than definitive. Based on our findings, it could serve as an early, cost-effective, and rapid screening tool, helping physicians adopt a "think aorta" approach.
Keywords: Aortic Dissection; Artificial Intelligence; Cardiology; ChatGPT; Diagnostics; Large Language Models.