Medical image segmentation provides important guidance for medical research and clinical diagnosis. In recent years, neural network-based methods have steadily improved segmentation accuracy and become the mainstream approach in medical image segmentation. However, the large parameter counts and heavy computation of prevailing methods make them difficult to deploy on mobile devices. In contrast, a lightweight model that retains high accuracy has great potential to be ported to low-resource hardware. To address these issues, this paper proposes a lightweight medical image segmentation method combining Transformer and Multi-Layer Perceptron (MLP), aiming to achieve accurate segmentation at lower computational cost. The method consists of a multi-scale branch aggregation module (MBA), a lightweight shift MLP module (LSM), and a feature information sharing module (FIS). These three modules are integrated into a U-shaped network. The MBA module learns image features accurately through multi-scale aggregation of global spatial features and local detail features. The LSM module introduces shift operations to capture associations between pixels at different locations in the image. The FIS module, placed in the skip connections, interactively fuses multi-stage feature maps for finer feature fusion. The method is validated on the ISIC 2018 and 2018 DSB datasets. Experimental results demonstrate that the method outperforms many state-of-the-art lightweight segmentation methods and achieves a balance between segmentation accuracy and computational cost.
Keywords: Light transformer; Light-weight model; Medical image segmentation; Shift-MLP.
Copyright © 2024. Published by Elsevier Ltd.
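To illustrate the general shift-MLP idea referenced by the LSM module, the sketch below shifts channel groups along the spatial axes before mixing channels with a per-pixel MLP, so that the MLP can capture associations between neighbouring pixel locations. This is a minimal PyTorch sketch of the generic technique only, not the authors' implementation; the class name, shift amounts, and layer widths are assumptions for demonstration.

```python
# Minimal sketch of a generic shift-MLP block (illustrative; not the paper's LSM code).
import torch
import torch.nn as nn


class ShiftMLPBlock(nn.Module):
    """Shift channel groups along a spatial axis, then mix channels with an MLP."""

    def __init__(self, channels: int, shift_size: int = 1, expansion: int = 2):
        super().__init__()
        self.shift_size = shift_size
        self.norm = nn.LayerNorm(channels)
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels * expansion),
            nn.GELU(),
            nn.Linear(channels * expansion, channels),
        )

    def spatial_shift(self, x: torch.Tensor, dim: int) -> torch.Tensor:
        # x: (B, C, H, W). Split channels into groups and roll each group by a
        # different offset so the following channel MLP sees neighbouring pixels.
        groups = torch.chunk(x, 2 * self.shift_size + 1, dim=1)
        shifts = range(-self.shift_size, self.shift_size + 1)
        shifted = [torch.roll(g, s, dims=dim) for g, s in zip(groups, shifts)]
        return torch.cat(shifted, dim=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Shift along height, then width, then apply a residual channel MLP per pixel.
        b, c, h, w = x.shape
        x = self.spatial_shift(x, dim=2)
        x = self.spatial_shift(x, dim=3)
        tokens = x.flatten(2).transpose(1, 2)          # (B, H*W, C)
        tokens = tokens + self.mlp(self.norm(tokens))  # residual channel mixing
        return tokens.transpose(1, 2).reshape(b, c, h, w)


if __name__ == "__main__":
    feat = torch.randn(1, 32, 64, 64)        # toy feature map
    print(ShiftMLPBlock(32)(feat).shape)     # torch.Size([1, 32, 64, 64])
```

Because the shifts are implemented as channel-group rolls followed by pointwise linear layers, a block of this form adds no convolutional parameters, which is consistent with the lightweight design goal described in the abstract.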