Interpretable COVID-19 chest X-ray detection based on handcrafted feature analysis and sequential neural network

Comput Biol Med. 2025 Jan 22:186:109659. doi: 10.1016/j.compbiomed.2025.109659. Online ahead of print.

Abstract

Deep learning methods have significantly improved medical image analysis, particularly the detection of COVID-19 in chest X-rays. Nonetheless, these methods frequently exhibit drawbacks such as limited interpretability, high computational cost, and the need for large training datasets. To tackle these issues, we introduce two novel algorithms: the Dynamic Co-Occurrence Grey Level Matrix (DC-GLM) and the Contextual Adaptation Multiscale Gabor Network (CAMSGNeT). DC-GLM adapts to textures of diverse sizes and orientations and effectively captures key radiographic indicators of COVID-19 such as ground-glass opacity, consolidation, and fibrosis. By emphasizing coarse texture patterns, it captures significant structural alterations in chest X-ray texture; by encoding the spatial correlations among pixel intensities, it improves diagnostic precision and facilitates the detection of both prominent and subtle irregularities. To complement this coarse feature extraction, we introduce CAMSGNeT, which emphasizes fine features via Contextual Adaptive Diffusion. In contrast to conventional multiscale Gabor filtering, CAMSGNeT modifies the diffusion process according to both gradients and local texture complexity: the diffusion coefficient incorporates the gradient and the local variance, so that intricately textured areas preserve fine detail while smooth regions are diffused to suppress noise. This adaptive scheme preserves diagnostically important fine structures such as air bronchograms and crazy-paving patterns while improving edge identification and texture characterization. Finally, a simple, optimized sequential neural network classifies these refined features, yielding enhanced accuracy, and a feature importance analysis improves the model's interpretability by revealing how individual features contribute to its decisions. Our method outperforms numerous state-of-the-art models, achieving 98.27% and 100% accuracy on two datasets, and provides a more interpretable, precise, and resource-efficient solution for COVID-19 detection.
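The abstract does not spell out DC-GLM's adaptation rule, so the following is only a minimal sketch of what an adaptive grey-level co-occurrence descriptor could look like, built on scikit-image's graycomatrix/graycoprops. The gradient-energy threshold and the distance sets are hypothetical stand-ins for the paper's dynamic selection.

```python
# Sketch of a DC-GLM-style descriptor (assumption-based; the paper's
# exact dynamic adaptation rule is not given in the abstract).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def dcglm_features(gray, levels=32):
    """Co-occurrence features with offsets chosen from local texture.

    `gray` is a 2-D uint8 chest X-ray, quantized to `levels` grey levels.
    """
    img = (gray.astype(np.float64) / 256.0 * levels).astype(np.uint8)
    # Hypothetical "dynamic" rule: smooth, coarse-textured images (low
    # gradient energy) get larger co-occurrence distances so that coarse
    # structural patterns dominate the statistics.
    gy, gx = np.gradient(gray.astype(np.float64))
    grad_energy = np.mean(np.hypot(gx, gy))
    distances = [1, 2] if grad_energy > 10 else [2, 4, 8]
    angles = [0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]
    glcm = graycomatrix(img, distances=distances, angles=angles,
                        levels=levels, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])
```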
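The Contextual Adaptive Diffusion step is described as a diffusion coefficient driven by both the gradient and the local variance. A natural reading is a Perona-Malik-style scheme whose conductivity is additionally damped in high-variance (texture-rich) neighborhoods; the sketch below implements that reading with NumPy/SciPy, with the window size, kappa, and step size as illustrative choices rather than the paper's values.

```python
# Sketch of contextual adaptive diffusion (assumption-based): a
# Perona-Malik-style update whose conductivity is scaled down where the
# local variance is high, so fine detail in busy regions survives while
# smooth regions are denoised.
import numpy as np
from scipy.ndimage import uniform_filter

def contextual_adaptive_diffusion(img, n_iter=20, kappa=30.0, dt=0.2, win=5):
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # 4-neighbor finite differences (wrap-around borders, fine for a sketch)
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        # Local variance as the texture-complexity measure
        mean = uniform_filter(u, win)
        var = np.maximum(uniform_filter(u * u, win) - mean ** 2, 0.0)
        ctx = 1.0 / (1.0 + var / (var.mean() + 1e-8))  # small in busy regions

        def c(d):  # gradient-driven conductivity, contextually scaled
            return np.exp(-(d / kappa) ** 2) * ctx

        u += dt * (c(dn) * dn + c(ds) * ds + c(de) * de + c(dw) * dw)
    return u
```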
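For the multiscale Gabor stage, a conventional filter bank over a few frequencies and orientations is the standard construction; the sketch below uses skimage.filters.gabor and summarizes each response by its mean magnitude. The frequency set and orientation count are assumptions, not the paper's configuration.

```python
# Sketch of a multiscale Gabor filter bank (assumption-based parameters).
import numpy as np
from skimage.filters import gabor

def multiscale_gabor_features(img, frequencies=(0.05, 0.1, 0.2, 0.4),
                              n_orient=4):
    feats = []
    for f in frequencies:
        for k in range(n_orient):
            theta = k * np.pi / n_orient
            real, imag = gabor(img, frequency=f, theta=theta)
            # One scalar per (frequency, orientation) pair
            feats.append(np.mean(np.hypot(real, imag)))
    return np.asarray(feats)
```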
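The classifier is described only as a simple, optimized sequential neural network over the handcrafted feature vector. A minimal Keras sketch of such a model is shown below; the layer widths, dropout rate, and binary output head are illustrative assumptions.

```python
# Sketch of a compact sequential classifier over the handcrafted
# features (layer sizes are illustrative, not the paper's tuned values).
import tensorflow as tf

def build_classifier(n_features):
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # COVID vs. non-COVID
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```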
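The abstract does not name the feature-importance technique. Permutation importance is one standard, model-agnostic option for attributing decisions to individual handcrafted features, so the sketch below uses scikit-learn's implementation purely as an example; the surrogate classifier and repeat count are assumptions.

```python
# Sketch of a feature-importance analysis (assumption-based: permutation
# importance stands in for whatever method the paper actually uses).
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

def rank_features(X_train, y_train, X_val, y_val, names):
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)
    result = permutation_importance(clf, X_val, y_val,
                                    n_repeats=10, random_state=0)
    order = result.importances_mean.argsort()[::-1]
    return [(names[i], result.importances_mean[i]) for i in order]
```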

Keywords: COVID-19; Contextual adaptation multiscale Gabor network; Dynamic co-occurrence grey level matrix; Feature importance; Interpretability; Sequential neural network.