Introduction: Diabetic retinopathy (DR) has long been recognized as a common complication of diabetes, making accurate automated grading of its severity essential. Color fundus photographs play a crucial role in the grading of DR. With the advancement of artificial intelligence technologies, numerous researchers have conducted studies on DR grading based on deep features and radiomic features extracted from color fundus photographs.
Method: We combine deep features and radiomic features to design a feature fusion algorithm. First, we utilize convolutional neural networks to extract deep features from color fundus photographs and employ radiomic methodologies to extract radiomic features. Subsequently, we design a label relaxation-based collaborative learning algorithm for feature fusion.
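The label-relaxation idea can be illustrated with a minimal sketch. Note the assumptions: fusion is done by simple concatenation (the paper's collaborative learning scheme is more elaborate), and the relaxation follows the ε-dragging formulation used in discriminative least-squares regression, where one-hot targets are stretched apart by a non-negative slack matrix so that training samples become more separable in the label space. All function names are illustrative, not the authors' implementation.

```python
import numpy as np

def label_relaxation_fit(X_deep, X_rad, y, n_classes, lam=0.1, n_iter=20):
    """Fit a linear classifier on fused features with relaxed labels.

    Illustrative sketch: concatenation fusion + epsilon-dragging relaxation.
    """
    n = X_deep.shape[0]
    # fuse deep and radiomic features; append a bias column
    X = np.hstack([X_deep, X_rad, np.ones((n, 1))])
    Y = np.eye(n_classes)[y]                # one-hot labels
    B = np.where(Y == 1, 1.0, -1.0)        # dragging directions per entry
    M = np.zeros_like(Y)                   # non-negative slack (relaxation)
    for _ in range(n_iter):
        T = Y + B * M                      # relaxed targets
        # ridge-regularized least squares for the projection W
        W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ T)
        R = X @ W - Y
        M = np.maximum(B * R, 0)           # enlarge margins where possible
    return W

def predict(W, X_deep, X_rad):
    X = np.hstack([X_deep, X_rad, np.ones((X_deep.shape[0], 1))])
    return np.argmax(X @ W, axis=1)
```

Because the slack `M` is constrained to be non-negative and aligned with `B`, each update can only push a sample's target further toward its own class, which is what makes relaxed labels more discriminative than fixed one-hot targets.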
Results: We validate the effectiveness of the proposed method on two fundus image datasets: the DR1 Dataset and the MESSIDOR Dataset. The proposed method achieved an AUC of 96.86% on the DR1 Dataset and 96.34% on the MESSIDOR Dataset, outperforming state-of-the-art methods. Moreover, the gap between the training AUC and the testing AUC widens substantially when manifold regularization is removed.
Conclusion: Label relaxation can enhance the distinguishability of training samples in the label space, thereby improving the model's classification accuracy. Additionally, graph constraints based on manifold learning methods can mitigate overfitting caused by label relaxation.
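The graph constraint mentioned above can be sketched as a graph-Laplacian penalty added to a ridge regression objective: predictions for samples that are neighbors on a k-NN graph are encouraged to agree, which counteracts the overfitting that aggressive label relaxation can induce. This is a generic manifold-regularization sketch under that assumption, not the paper's exact formulation; the function names are illustrative.

```python
import numpy as np

def knn_laplacian(X, k=5):
    """Unnormalized graph Laplacian of a symmetric k-NN adjacency graph."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
    A = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]  # skip self (distance 0)
        A[i, nbrs] = 1.0
    A = np.maximum(A, A.T)                 # symmetrize
    return np.diag(A.sum(1)) - A           # L = D - A

def manifold_ridge(X, T, lam=0.1, gamma=0.1):
    """Least squares to targets T with ridge and Laplacian smoothness.

    Minimizes ||XW - T||^2 + lam ||W||^2 + gamma tr(W^T X^T L X W).
    """
    L = knn_laplacian(X)
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d) + gamma * X.T @ L @ X,
                           X.T @ T)
```

The closed-form solution shows why the constraint regularizes: the extra term `gamma * X.T @ L @ X` penalizes predictions that vary sharply between neighboring samples, keeping the relaxed targets from being fit too freely.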
Keywords: collaborative learning; diabetic retinopathy grading; high-level deep features; label relaxation; radiomic features.
Copyright © 2025 Zhang, Sheng, Su and Duan.