CellViT: Vision Transformers for precise cell segmentation and classification

Med Image Anal. 2024 May;94:103143. doi: 10.1016/j.media.2024.103143. Epub 2024 Mar 16.

Abstract

Nuclei detection and segmentation in hematoxylin and eosin-stained (H&E) tissue images are important clinical tasks and crucial for a wide range of applications. The task remains challenging, however, due to variability in nuclei staining and size, overlapping boundaries, and nuclei clustering. While convolutional neural networks have been used extensively for this task, we explore the potential of Transformer-based networks combined with large-scale pre-training in this domain. To this end, we introduce CellViT, a deep learning architecture based on Vision Transformers for automated instance segmentation of cell nuclei in digitized tissue samples. CellViT is trained and evaluated on the PanNuke dataset, one of the most challenging nuclei instance segmentation datasets, consisting of nearly 200,000 nuclei annotated into 5 clinically important classes across 19 tissue types. We demonstrate the superiority of large-scale in-domain and out-of-domain pre-trained Vision Transformers by leveraging the recently published Segment Anything Model and a ViT encoder pre-trained on 104 million histological image patches, achieving state-of-the-art nuclei detection and instance segmentation performance on the PanNuke dataset with a mean panoptic quality of 0.50 and an F1 detection score of 0.83. The code is publicly available at https://github.com/TIO-IKIM/CellViT.
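For reference, the mean panoptic quality reported above presumably follows the standard panoptic quality (PQ) metric of Kirillov et al. (2019), which factors into a segmentation-quality term and a detection-quality term; a minimal statement in LaTeX, assuming the usual convention that TP denotes predicted/ground-truth nucleus pairs matched at IoU > 0.5:

    % Panoptic quality: average IoU over matched pairs (SQ),
    % scaled by an F1-style detection term (DQ).
    \[
    \mathrm{PQ} =
    \underbrace{\frac{\sum_{(p,g) \in TP} \mathrm{IoU}(p,g)}{|TP|}}_{\text{SQ}}
    \times
    \underbrace{\frac{|TP|}{|TP| + \tfrac{1}{2}|FP| + \tfrac{1}{2}|FN|}}_{\text{DQ}}
    \]

The mean PQ on PanNuke is then obtained by averaging this quantity over the nuclei classes.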

Keywords: Cell segmentation; Deep learning; Digital pathology; Vision transformer.

MeSH terms

  • Cell Nucleus*
  • Eosine Yellowish-(YS)
  • Hematoxylin
  • Humans
  • Image Processing, Computer-Assisted
  • Neural Networks, Computer*
  • Staining and Labeling

Substances

  • Eosine Yellowish-(YS)
  • Hematoxylin