Objective: To develop and validate an artificial intelligence (AI) model for diagnosing coronary artery disease based on facial photos. Methods: This was a cross-sectional study. Patients scheduled to undergo coronary angiography (CAG) at Beijing Anzhen Hospital, Capital Medical University, and Beijing Daxing Hospital between August 2022 and November 2023 were consecutively enrolled. Before CAG, facial photos were collected from four angles: frontal view, left and right 60° profiles, and top of the head. The photo dataset was randomly divided into training and validation sets (70%) and a test set (30%). The model was built on Masked Autoencoder (MAE) and Vision Transformer (ViT) architectures. The model backbone was first pre-trained on 2 million facial photos from the publicly available VGGFace dataset, then fine-tuned on the training and validation sets, and finally evaluated on the test set. In addition, a ResNet architecture was applied to the same dataset, and its outputs were compared with those of the MAE- and ViT-based models. In the test set, the area under the receiver operating characteristic curve (AUC) of the AI model was calculated using CAG results as the gold standard. Results: A total of 5 974 participants aged 61 (54, 67) years were included, 4 179 (70.0%) of whom were male, contributing 84 964 facial photos in total. There were 79 140 facial photos in the training and validation sets, which included 3 822 patients with coronary artery disease; the test set comprised 5 824 facial photos and included 239 patients with coronary artery disease. The AUCs of the MAE and ViT models initialized with pre-trained weights were 0.841 and 0.824, respectively. The AUC of the ResNet model initialized with random weights was 0.810, and that of the ResNet model initialized with pre-trained weights was 0.816.
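The evaluation step described above, computing the AUC of a binary classifier against the CAG result as the gold standard, can be sketched as follows. This is a minimal illustration only: the labels and scores below are synthetic, and the study's actual photo-level models, aggregation, and data are not reproduced here.

```python
# Illustrative sketch of AUC evaluation against a gold standard.
# All data here are simulated; this is NOT the study's pipeline.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# y_true: hypothetical gold-standard labels
# (1 = coronary artery disease confirmed by CAG, 0 = no disease)
y_true = rng.integers(0, 2, size=200)

# model_scores: hypothetical predicted probabilities from a classifier;
# we simulate a model that is informative but imperfect by adding noise
model_scores = np.clip(y_true * 0.4 + rng.normal(0.3, 0.2, size=200), 0.0, 1.0)

# AUC = area under the receiver operating characteristic curve;
# 0.5 is chance-level discrimination, 1.0 is perfect discrimination
auc = roc_auc_score(y_true, model_scores)
print(f"AUC = {auc:.3f}")
```

Because the simulated scores are correlated with the labels, the printed AUC lands well above the 0.5 chance level, in the same spirit as the 0.81 to 0.84 values reported for the study's models.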
Conclusion: The AI model based on facial photos shows good diagnostic performance for coronary artery disease and holds promise for further application in early diagnosis.