In medical image segmentation, effective upsampling is crucial for recovering the spatial information lost during downsampling. This challenge becomes more pronounced across diverse medical image modalities, which can significantly affect model performance. The plain skip connections widely used in most models often fall short of maintaining high segmentation accuracy across modalities, because essential spatial information is lost as features are transferred from the encoder to the decoder. Motivated by these limitations, this paper presents the Attention-Embedded Deep UNet (ATEDU-Net), an innovative framework designed and tested on diverse medical image modalities, including X-ray, breast ultrasound, and retinal fundus images. ATEDU-Net combines progressive context refinement modules (PCRM) and global context modules (GCM) within a U-shaped network structure built from Convgroup blocks, which facilitate the integration of convolutional operations. This design allows ATEDU-Net to capture rich spatial detail and crucial contextual information from input images: the GCM captures the global context needed for informed decision-making, while the PCRM sharpens feature attention, yielding precise and robust segmentation. Experiments and analysis demonstrate that ATEDU-Net promises to be a powerful tool for medical professionals, aiding the early detection of chest-related diseases, accurate localization of breast tumors, and early identification of eye diseases, and thereby contributing to the formulation of optimal therapeutic strategies that improve patient care and outcomes. The ability of ATEDU-Net to analyze medical images across modalities further underscores its suitability for the complex task of medical image analysis in diverse clinical scenarios.
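
As a concrete illustration of the decoder stage the abstract describes, the minimal PyTorch sketch below shows one plausible way to embed a GCM and a PCRM around a Convgroup block in a U-shaped decoder. The abstract does not specify module internals, so every design choice here (squeeze-and-excitation channel gating for the GCM, single-map spatial attention for the PCRM, a two-layer Convgroup, bilinear upsampling) is an assumption made for illustration, not the authors' published implementation.

    # Hypothetical sketch of the ATEDU-Net building blocks named in the abstract.
    # All module internals are assumptions; the paper may differ substantially.
    import torch
    import torch.nn as nn

    class ConvGroup(nn.Module):
        """Assumed Convgroup block: two 3x3 conv + BN + ReLU layers."""
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.block = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=1),
                nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
                nn.Conv2d(out_ch, out_ch, 3, padding=1),
                nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            )
        def forward(self, x):
            return self.block(x)

    class GCM(nn.Module):
        """Assumed global context module: channel gating from pooled global statistics."""
        def __init__(self, ch, r=8):
            super().__init__()
            self.gate = nn.Sequential(
                nn.AdaptiveAvgPool2d(1),              # global spatial summary
                nn.Conv2d(ch, ch // r, 1), nn.ReLU(inplace=True),
                nn.Conv2d(ch // r, ch, 1), nn.Sigmoid(),
            )
        def forward(self, x):
            return x * self.gate(x)                   # reweight channels by global context

    class PCRM(nn.Module):
        """Assumed progressive context refinement: spatial attention over skip features."""
        def __init__(self, ch):
            super().__init__()
            self.attn = nn.Sequential(nn.Conv2d(ch, 1, 7, padding=3), nn.Sigmoid())
        def forward(self, skip):
            return skip * self.attn(skip)             # suppress irrelevant spatial regions

    class UpStage(nn.Module):
        """One decoder stage: upsample, refine the skip with PCRM, fuse, apply GCM."""
        def __init__(self, in_ch, out_ch):
            super().__init__()
            self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
            self.pcrm = PCRM(in_ch)
            self.fuse = ConvGroup(in_ch * 2, out_ch)
            self.gcm = GCM(out_ch)
        def forward(self, x, skip):
            x = self.up(x)
            skip = self.pcrm(skip)
            return self.gcm(self.fuse(torch.cat([x, skip], dim=1)))

    if __name__ == "__main__":
        x = torch.randn(1, 64, 32, 32)       # coarse decoder features
        skip = torch.randn(1, 64, 64, 64)    # encoder skip features
        print(UpStage(64, 64)(x, skip).shape)  # torch.Size([1, 64, 64, 64])

The ordering shown (spatial attention on the skip path before fusion, global channel gating after fusion) mirrors the abstract's division of labor between PCRM and GCM, but the exact placement in ATEDU-Net is not stated in the abstract.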
Keywords: Attention-embedded deep UNet (ATEDU-Net); Convgroup; Global context modules (GCM); Medical image segmentation; Progressive context refinement modules (PCRM).