Optical Coherence Tomography (OCT) provides high-resolution images of the ocular fundus, enabling clinicians to analyze retinal health in detail and offering a solid basis for diagnosis and treatment. With the rapid development of deep learning, learning-based methods have become increasingly popular for fundus OCT image segmentation. However, these methods still face two main challenges. First, they are sensitive to weak edges and often fail to preserve them. Second, the high cost of annotating medical images leads to a shortage of labeled data, which causes overfitting during model training. To address these challenges, we propose the Multi-Task Attention Mechanism Network with Pruning (MTAMNP), which consists of a segmentation branch and a boundary regression branch. The boundary regression branch uses an adaptive weighted loss function derived from the Truncated Signed Distance Function (TSDF), improving the model's ability to preserve weak edge details. A Spatial Attention Based Dual-Branch Information Fusion Block connects the two branches so that they reinforce each other. Furthermore, we present a structured pruning method based on channel attention that reduces the parameter count and mitigates overfitting while maintaining segmentation accuracy. Our method outperforms state-of-the-art segmentation networks on two publicly available datasets, achieving Dice scores of 84.09% and 93.84% on the HCMS and Duke datasets, respectively.
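For illustration only: the abstract does not give the exact form of the TSDF-based adaptive weighted loss, so the minimal PyTorch sketch below shows one common way such a loss can be constructed. The helper `truncated_sdf`, the class `WeightedTSDFLoss`, the sign convention, and the linear boundary weighting are all assumptions for exposition, not the authors' formulation.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.ndimage import distance_transform_edt


def truncated_sdf(mask: np.ndarray, trunc: float = 10.0) -> np.ndarray:
    """Signed distance to the region boundary, clipped to [-trunc, trunc].

    Positive outside the region, negative inside (this sign convention is
    an assumption; the abstract does not specify one).
    """
    dist_outside = distance_transform_edt(mask == 0)
    dist_inside = distance_transform_edt(mask == 1)
    sdf = dist_outside - dist_inside
    return np.clip(sdf, -trunc, trunc)


class WeightedTSDFLoss(nn.Module):
    """Illustrative adaptive weighted regression loss on TSDF maps.

    Pixels near the boundary (small |TSDF|) receive larger weights, so
    weak edges contribute more to the loss. The exact weighting used by
    MTAMNP is not given in the abstract; this is only a plausible sketch.
    """

    def __init__(self, trunc: float = 10.0):
        super().__init__()
        self.trunc = trunc

    def forward(self, pred_sdf: torch.Tensor, target_sdf: torch.Tensor) -> torch.Tensor:
        # Weight decays linearly from 1 at the boundary to 0 at the truncation radius.
        weight = 1.0 - target_sdf.abs() / self.trunc
        return (weight * (pred_sdf - target_sdf) ** 2).mean()


# Hypothetical usage with a binary layer mask and a network's predicted distance map:
# target = torch.from_numpy(truncated_sdf(layer_mask)).float()
# loss = WeightedTSDFLoss(trunc=10.0)(pred_distance_map, target)
```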