Evaluating the Segment Anything Model for Histopathological Tissue Segmentation
Published: 15 September 2023
Introduction: In April 2023, Meta introduced the Segment Anything Model (SAM), a machine learning model for image segmentation that offers zero-shot generalization without additional training data [1]. Since then, the model has been applied to medical imaging tasks such as skin cancer segmentation [2] and brain extraction from MRI images [3]. However, no work has addressed the tile-wise segmentation approach, which is the current state of the art for histopathological segmentation.
Methods: A histopathological dataset of 103 labeled Whole-Slide-Images (WSIs) of Hematoxylin and Eosin (HE)-stained Glioblastoma (GBM) slices from the Technical University of Munich was selected for the segmentation task. The labeling was conducted by domain experts (i.e., neuropathologists). GBMs pose an especially hard segmentation task due to their high heterogeneity and diffuse borders. The WSIs were sliced into tiles with a resolution of 1024x1024 pixels, which were then rescaled to 256x256 pixels to match the input of the reference model used for comparison.
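The tiling and rescaling step can be illustrated with a minimal Python sketch. The use of openslide-python and Pillow, the file path, and the grid layout are illustrative assumptions and are not taken from the original study.

```python
# Minimal tiling sketch (assumption: openslide-python and Pillow are used;
# the path and grid layout are illustrative, not from the original study).
import openslide
from PIL import Image

TILE_SIZE = 1024    # native tile resolution described in the abstract
TARGET_SIZE = 256   # rescaled resolution matching the reference model input

def tile_wsi(wsi_path: str):
    """Yield 256x256 RGB tiles cut from the highest-resolution level of a WSI."""
    slide = openslide.OpenSlide(wsi_path)
    width, height = slide.dimensions
    for y in range(0, height - TILE_SIZE + 1, TILE_SIZE):
        for x in range(0, width - TILE_SIZE + 1, TILE_SIZE):
            tile = slide.read_region((x, y), 0, (TILE_SIZE, TILE_SIZE)).convert("RGB")
            # Rescale to the input size of the reference model.
            yield (x, y), tile.resize((TARGET_SIZE, TARGET_SIZE))

# Example usage (hypothetical path):
# for (x, y), tile in tile_wsi("gbm_slide_001.svs"):
#     tile.save(f"tiles/tile_{x}_{y}.png")
```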
The selected tiles were then annotated using SAM's point prompts, without any additional training of the model. Three annotation strategies were evaluated: 1) manual annotation using 20 prompts, as well as automatic annotation derived from the original dataset labels using either a 2) grid-based strategy (prompts distributed evenly over the image in a grid) or a 3) border-based strategy (prompts placed along the border between tumor and tissue). The results were then compared with a U-Net specifically trained on the given dataset [4].
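A point-prompt run with SAM can be sketched as follows using Meta's segment-anything package [1]. The checkpoint path, grid spacing, and helper names are illustrative assumptions; the exact prompt placement used in the study is not reproduced here.

```python
# Point-prompt segmentation sketch with Meta's segment-anything package [1].
# Checkpoint path, grid spacing, and helper names are illustrative assumptions.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
predictor = SamPredictor(sam)

def grid_prompts(label_mask: np.ndarray, step: int = 64):
    """Grid-based strategy: sample prompts on a regular grid and mark each
    point as foreground (1) or background (0) from the reference annotation."""
    coords, labels = [], []
    for y in range(step // 2, label_mask.shape[0], step):
        for x in range(step // 2, label_mask.shape[1], step):
            coords.append([x, y])
            labels.append(int(label_mask[y, x] > 0))
    return np.array(coords), np.array(labels)

def segment_tile(tile_rgb: np.ndarray, label_mask: np.ndarray) -> np.ndarray:
    """Run SAM on one tile using grid-based point prompts; return the best mask."""
    predictor.set_image(tile_rgb)                  # HxWx3 uint8 RGB array
    point_coords, point_labels = grid_prompts(label_mask)
    masks, scores, _ = predictor.predict(
        point_coords=point_coords,
        point_labels=point_labels,
        multimask_output=True,
    )
    return masks[np.argmax(scores)]                # highest-scoring candidate mask
```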
Results: The automatic labeling approaches, as well as an additional bounding-box-based approach, did not produce any usable segmentation output. The manual labeling resulted in an Intersection-over-Union (IoU) score of approximately 40%, compared to an IoU of 84.5% for the U-Net approach.
Using other input resolutions (e.g., 1024x1024 pixels without rescaling) did not provide any benefit and resulted in IoU scores of approximately 30%.
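For reference, the reported scores correspond to the standard intersection-over-union of the binary prediction and ground-truth masks, which can be computed per tile as in the following sketch (variable names are illustrative).

```python
# Intersection-over-Union for binary masks (variable names are illustrative).
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """IoU = |pred AND target| / |pred OR target| for boolean masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(pred, target).sum() / union)
```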
Discussion: The results indicate that SAM is currently incapable of segmenting histopathological images out of the box. This finding is in accordance with the work of Deng et al., who tested SAM's histopathological segmentation capabilities on heavily rescaled, non-tiled WSIs of skin cancer, which is a relatively easy tumor to segment [2]. As with any other model, fine-tuning and additional training are required to make SAM suitable for histopathological image segmentation. This removes SAM's most significant advantage, zero-shot segmentation. Other staining methods (e.g., immunohistochemical staining) could yield different results, although this is unlikely, as indicated by [2].
Conclusion: While SAM shows competitive results for other medical segmentation tasks (e.g., nuclei segmentation [2] or MRI segmentation [3]), its histopathological tissue segmentation capabilities using a zero-shot approach are not comparable to an established and well-trained state-of-the-art model. While SAM's point annotations could assist domain experts in the initial dataset labeling, it is not suitable in its current form for the final segmentation task. In initial tests, it was able to detect half of the tumor area with only three annotations, which already dramatically reduces the time needed for labeling.
Future work must evaluate SAM's segmentation capabilities after fine-tuning in comparison with current state-of-the-art models.
The authors declare that they have no competing interests.
The authors declare that an ethics committee vote is not required.
References
1. Kirillov A, Mintun E, Ravi N, Mao H, Rolland C, Gustafson L, et al. Segment Anything [Preprint]. arXiv. 2023 Apr 5. arXiv:2304.02643. DOI: 10.48550/arXiv.2304.02643
2. Deng R, Cui C, Liu Q, Yao T, Remedios LW, Bao S, et al. Segment Anything Model (SAM) for Digital Pathology: Assess Zero-shot Segmentation on Whole Slide Imaging [Preprint]. arXiv. 2023 Apr 9. arXiv:2304.04155. DOI: 10.48550/arXiv.2304.04155
3. Mohapatra S, Gosai A, Schlaug G. SAM vs BET: A Comparative Study for Brain Extraction and Segmentation of Magnetic Resonance Images using Deep Learning [Preprint]. arXiv. 2023 Apr 19. arXiv:2304.04738v3. DOI: 10.48550/arXiv.2304.04738
4. Hieber D, Prokop G, Karthan M, Märkl B, Schobel J, Liesche-Starnecker F. Neural Network Assisted Pathology for Labeling Tumors in Whole-Slide-Images of Glioblastoma. In: 106. Jahrestagung der Deutschen Gesellschaft für Pathologie e.V. 2023.