Objective: Breast cancer is one of the most commonly occurring cancers in women, and early detection and treatment lead to better outcomes for patients. Ultrasound (US) imaging plays a crucial role in the early detection of breast cancer, providing a cost-effective, convenient, and safe diagnostic approach. To date, much research has been conducted to facilitate reliable and effective early diagnosis of breast cancer through US image analysis. Recently, with the introduction of machine learning technologies such as deep learning (DL), automated lesion segmentation and classification for identifying malignant masses in breast US images have progressed, and computer-aided diagnosis (CAD) technology is being applied effectively in clinics. Herein, we propose a novel deep learning-based "segmentation + classification" model based on B- and SE-mode images.
Methods: For the segmentation task, we propose a Multi-Modal Fusion U-Net (MMF-U-Net), which segments lesions by mixing B- and SE-mode information through fusion blocks. After segmentation, the lesion area is cropped from the B- and SE-mode images using the predicted segmentation mask. The encoder part of the pre-trained MMF-U-Net model is then applied to the cropped B- and SE-mode breast US images to classify lesions as benign or malignant.
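The mask-based cropping step can be sketched as follows. This is a minimal illustration, not the authors' released code: the helper name `crop_lesion`, the bounding-box rule, and the `margin` parameter are assumptions, since the abstract does not specify how the crop is computed.

```python
import numpy as np

def crop_lesion(b_mode: np.ndarray, se_mode: np.ndarray,
                mask: np.ndarray, margin: int = 8):
    """Crop the lesion region from paired B- and SE-mode images using the
    bounding box of a binary segmentation mask.

    Hypothetical helper: the exact cropping rule and margin used in the
    paper are not stated in the abstract.
    """
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        # No lesion predicted: fall back to the full images.
        return b_mode, se_mode
    # Bounding box of the predicted mask, padded by `margin` pixels
    # and clipped to the image boundaries.
    y0 = max(int(ys.min()) - margin, 0)
    y1 = min(int(ys.max()) + margin + 1, mask.shape[0])
    x0 = max(int(xs.min()) - margin, 0)
    x1 = min(int(xs.max()) + margin + 1, mask.shape[1])
    return b_mode[y0:y1, x0:x1], se_mode[y0:y1, x0:x1]
```

The same crop is applied to both modalities so the classifier sees spatially aligned B- and SE-mode patches.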
Results: The experimental results using the proposed method showed good segmentation and classification scores. The Dice score, intersection over union (IoU), precision, and recall are 78.23%, 68.60%, 82.21%, and 80.58%, respectively, using the proposed MMF-U-Net on real-world clinical data. The classification accuracy is 98.46%.
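For reference, Dice and IoU are standard overlap metrics between a predicted and a ground-truth mask; a minimal NumPy version follows. This sketch uses the textbook definitions and is not necessarily the exact evaluation code used in the study.

```python
import numpy as np

def dice_iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    """Dice coefficient and IoU for binary segmentation masks.

    Dice = 2|P ∩ T| / (|P| + |T|),  IoU = |P ∩ T| / |P ∪ T|.
    `eps` guards against division by zero when both masks are empty.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2.0 * inter / (pred.sum() + target.sum() + eps)
    iou = inter / (union + eps)
    return float(dice), float(iou)
```

Note that Dice is always at least as large as IoU for the same pair of masks, consistent with the reported 78.23% Dice versus 68.60% IoU.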
Conclusion: Our results show that the proposed method effectively segments the breast lesion area and can reliably distinguish benign from malignant lesions.
Keywords: Breast cancer; Breast ultrasound images; Classification; Multi-modality; Segmentation; Transfer learning.
Copyright © 2024 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.