Automatic skin lesion segmentation is an efficient aid to the early diagnosis of skin cancer: it can reduce the missed-detection rate and enable timely treatment. However, large variations in the texture, size, shape, and position of lesions, together with obscure boundaries in dermoscopy images, make it extremely challenging to accurately locate and segment lesions. To address these challenges, we propose a novel framework named TG-Net, which exploits textual diagnostic information to guide the segmentation of dermoscopic images. Specifically, TG-Net adopts a dual-stream encoder-decoder architecture. The dual-stream encoder comprises Res2Net for extracting image features and our proposed text attention (TA) block for extracting textual features; through hierarchical guidance, the textual features are embedded into the image feature extraction process. Additionally, we devise a multi-level fusion (MLF) module that merges the higher-level features into a global feature map, which guides the subsequent steps. In the decoding stage, the local features and the global feature map are fed into three multi-scale reverse attention (MSRA) modules to produce the final segmentation result. Extensive experiments on three publicly available datasets, namely ISIC 2017, HAM10000, and PH2, show that TG-Net outperforms state-of-the-art methods, validating the reliability of our approach. Source code is available at https://github.com/ukeLin/TG-Net.
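To make the dual-stream design concrete, the following PyTorch sketch shows one plausible wiring of the pieces named above. It is an illustration only, not the released TG-Net: the plain convolutional stages stand in for Res2Net, and `SimpleTextAttention`, the 1x1 fusion convolution, and the refinement convolutions are hypothetical stand-ins for the TA, MLF, and MSRA modules; `text_dim=256` and the precomputed text embedding are likewise assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleTextAttention(nn.Module):
    """Hypothetical stand-in for the TA block: a text embedding gates
    the channels of an image feature map."""

    def __init__(self, text_dim, channels):
        super().__init__()
        self.proj = nn.Linear(text_dim, channels)

    def forward(self, img_feat, text_emb):
        gate = torch.sigmoid(self.proj(text_emb))            # (B, C)
        return img_feat * gate.unsqueeze(-1).unsqueeze(-1)   # (B, C, H, W)


class TGNetSketch(nn.Module):
    """Illustrative dual-stream encoder-decoder; not the authors' code."""

    def __init__(self, text_dim=256, channels=(64, 128, 256)):
        super().__init__()
        # Generic CNN stages standing in for the Res2Net backbone.
        stages, in_ch = [], 3
        for ch in channels:
            stages.append(nn.Sequential(
                nn.Conv2d(in_ch, ch, 3, stride=2, padding=1),
                nn.BatchNorm2d(ch), nn.ReLU(inplace=True)))
            in_ch = ch
        self.stages = nn.ModuleList(stages)
        # One text-attention block per stage: hierarchical guidance.
        self.ta = nn.ModuleList([SimpleTextAttention(text_dim, ch) for ch in channels])
        # "MLF" stand-in: fuse the two deepest levels into a global map.
        self.mlf = nn.Conv2d(sum(channels[1:]), 1, 1)
        # "MSRA" stand-ins: refine the prediction with each local feature.
        self.msra = nn.ModuleList([nn.Conv2d(ch + 1, 1, 3, padding=1) for ch in channels])

    def forward(self, image, text_emb):
        feats, x = [], image
        for stage, ta in zip(self.stages, self.ta):
            x = ta(stage(x), text_emb)   # embed text guidance at every level
            feats.append(x)
        # Global feature map from the higher-level features.
        h, w = feats[1].shape[-2:]
        fused = torch.cat([feats[1], F.interpolate(feats[2], (h, w))], dim=1)
        pred = self.mlf(fused)
        # Coarse-to-fine refinement guided by the global map.
        for feat, refine in zip(feats[::-1], list(self.msra)[::-1]):
            pred = F.interpolate(pred, feat.shape[-2:],
                                 mode="bilinear", align_corners=False)
            pred = pred + refine(torch.cat([feat, pred], dim=1))
        pred = F.interpolate(pred, image.shape[-2:],
                             mode="bilinear", align_corners=False)
        return torch.sigmoid(pred)
```

Calling `TGNetSketch()(torch.randn(2, 3, 224, 224), torch.randn(2, 256))` returns a (2, 1, 224, 224) probability map; in TG-Net itself, the stand-ins above are replaced by the Res2Net backbone and the TA, MLF, and MSRA modules described in the abstract.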
Keywords: Medical image analysis; Res2Net; Skin lesion segmentation; Text attention.