Breast cancer is one of the most common causes of death among women worldwide. Detecting cancerous tissue in histopathological images relies on complex features related to tissue structure and staining properties. Convolutional neural network (CNN) models such as ResNet50, Inception-V1, and VGG-16, while effective in many applications, struggle to capture the global patterns of cell layers and staining properties. Most previous approaches, such as stain normalization and instance-based vision transformers, either miss important features or do not process the whole image effectively. Therefore, a deep fusion-based vision transformer model (DFViT) that combines CNNs and transformers for improved feature extraction is proposed. DFViT captures local and global patterns more effectively by fusing RGB and stain-normalized images. Trained and tested on several datasets, including BreakHis, breast cancer histology (BACH), and UCSC cancer genomics (UC), the model achieves outstanding accuracy, F1 score, precision, and recall, setting a new milestone in histopathological image analysis for breast cancer diagnosis.
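To make the dual-input fusion idea concrete, the following is a minimal NumPy sketch, not the authors' DFViT architecture: the per-channel normalization, patch pooling, and single parameter-free attention pass are illustrative stand-ins for the paper's stain normalization, CNN branch, and transformer branch, and all function names and parameters here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def stain_normalize(img, target_mean=0.5, target_std=0.2):
    """Toy per-channel normalization: match each channel's mean/std
    to a target (a simplification of stain-normalization methods)."""
    mean = img.mean(axis=(0, 1), keepdims=True)
    std = img.std(axis=(0, 1), keepdims=True) + 1e-8
    return (img - mean) / std * target_std + target_mean

def local_features(img, patch=8):
    """Toy 'CNN branch': average-pool non-overlapping patches to
    mimic local texture features. Returns (num_patches, channels)."""
    h, w, c = img.shape
    img = img[: h // patch * patch, : w // patch * patch]
    grid = img.reshape(h // patch, patch, w // patch, patch, c)
    return grid.mean(axis=(1, 3)).reshape(-1, c)

def global_features(tokens):
    """Toy 'transformer branch': one self-attention pass over patch
    tokens (no learned weights; illustrates global mixing only)."""
    scores = tokens @ tokens.T / np.sqrt(tokens.shape[1])
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ tokens

# Dual inputs: the raw RGB image and its stain-normalized version.
img = rng.random((64, 64, 3))
norm_img = stain_normalize(img)

branch_feats = []
for branch_input in (img, norm_img):
    loc = local_features(branch_input)           # local patterns
    glob = global_features(loc)                  # global context
    branch_feats.append(np.concatenate([loc, glob], axis=1))

# Fuse both branches into one joint patch-level representation.
fused = np.concatenate(branch_feats, axis=1)
print(fused.shape)  # 64 patches, 3+3 features per branch, 2 branches
```

In the actual model the two branches would be learned CNN and transformer stacks; the sketch only shows the dataflow of fusing features from an RGB input and a stain-normalized input into a single representation.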
Keywords: artificial intelligence; breast cancer; classification; deep learning; histopathology images; machine learning.
© 2024 The Author(s). Healthcare Technology Letters published by John Wiley & Sons Ltd on behalf of The Institution of Engineering and Technology.