GLAC-Unet: Global-Local Active Contour Loss with an Efficient U-Shaped Architecture for Multiclass Medical Image Segmentation

J Imaging Inform Med. 2025 Jan 17. doi: 10.1007/s10278-025-01387-9. Online ahead of print.

Abstract

The field of medical image segmentation powered by deep learning has recently received substantial attention, with a significant focus on developing novel architectures and designing effective loss functions. Traditional loss functions, such as Dice loss and Cross-Entropy loss, predominantly rely on global metrics to compare predictions with labels. However, these global measures often struggle to address challenges such as occlusion and nonuniform intensity. To overcome these issues, in this study, we propose a novel loss function, termed Global-Local Active Contour (GLAC) loss, which integrates both global and local image features, reformulated within the Mumford-Shah framework and extended to multiclass segmentation. This approach enables the neural network to be trained end-to-end while simultaneously segmenting multiple classes. In addition, we enhance the U-Net architecture by incorporating Dense Layers, Convolutional Block Attention Modules, and DropBlock. These improvements enable the model to combine contextual information across layers more effectively, capture richer semantic details, and mitigate overfitting, resulting in more precise segmentation. We validate our proposed method, GLAC-Unet, which combines the GLAC loss with our modified U-shaped architecture, on three biomedical segmentation datasets spanning two- and three-dimensional modalities: dermoscopy, cardiac magnetic resonance imaging, and brain magnetic resonance imaging. Extensive experiments demonstrate the promising performance of our approach, achieving a Dice score (DSC) of 0.9125 on the ISIC-2018 dataset, 0.9260 on the Automated Cardiac Diagnosis Challenge (ACDC) 2017, and 0.927 on the Infant Brain MRI Segmentation Challenge 2019.
Furthermore, statistical significance testing with p-values consistently smaller than 0.05 on the ISIC-2018 and ACDC datasets confirms the superior performance of the proposed method compared to other state-of-the-art models. These results highlight the robustness and effectiveness of our multiclass segmentation technique, underscoring its potential for biomedical image analysis. Our code will be made available at https://github.com/minhnhattrinh312/Active-Contour-Loss-based-on-Global-and-Local-Intensity.
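To make the Mumford-Shah connection concrete, the sketch below implements only the global (region-fidelity) term of a Mumford-Shah-style loss for soft multiclass predictions. This is an illustrative NumPy sketch, not the authors' exact GLAC formulation: the function name `mumford_shah_region_loss` and its signature are assumptions, and the paper's loss additionally includes a local-intensity term and contour regularization.

```python
import numpy as np

def mumford_shah_region_loss(probs, image, eps=1e-8):
    """Global (region) term of a Mumford-Shah-style segmentation loss.

    probs : (K, H, W) softmax class probabilities (soft memberships).
    image : (H, W) intensity image.

    Illustrative sketch only; the GLAC loss in the paper also adds a
    local-intensity term and a contour-length regularizer.
    """
    loss = 0.0
    for k in range(probs.shape[0]):
        w = probs[k]
        # Per-class mean intensity c_k, weighted by soft membership.
        c_k = (w * image).sum() / (w.sum() + eps)
        # Region fidelity: penalize intensity deviation from c_k
        # inside each soft region.
        loss += ((image - c_k) ** 2 * w).sum()
    return loss / image.size
```

A prediction that cleanly separates homogeneous regions yields a near-zero region term, while an indifferent (uniform) prediction leaves large intensity deviations inside every class and is penalized accordingly; gradients flow through `probs`, so the term is usable for end-to-end training.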

Keywords: Convolutional neural network; Global-local active contour; Multiclass image segmentation; Mumford-Shah loss.