PathoSAM implements interactive annotation and (automatic) instance and semantic segmentation for histopathology images. It is built on top of Segment Anything by Meta AI and our prior work Segment Anything for Microscopy. It specializes Segment Anything for nucleus segmentation in histopathology data. Its core components are:
- The publicly available `patho_sam` models for interactive data annotation, which were fine-tuned on openly available histopathology images.
- The `patho_sam` library, which provides training functionality based on Segment Anything for Microscopy and supports:
  - Application of Segment Anything to histopathology images, including whole-slide images, and fine-tuning on your data.
  - Semantic segmentation.
Based on these components, `patho_sam` enables fast interactive and automatic annotation for histopathology images; see Usage for details.
How to install the `patho_sam` Python library from source?
To create the environment and install `patho_sam` into it, follow these steps:
- Clone the repository: `git clone https://github.com/computational-cell-analytics/patho-sam`
- Enter it: `cd patho-sam`
- Create the environment with the necessary requirements: `conda env create -f environment.yaml`
- Activate the environment: `conda activate patho-sam`
- Install `patho_sam`: `pip install -e .`
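Once the installation finishes, you can sanity-check it from Python. A minimal sketch (the helper `is_installed` is hypothetical, not part of `patho_sam`):

```python
import importlib.util


def is_installed(pkg: str) -> bool:
    """Return True if `pkg` is importable in the current environment."""
    return importlib.util.find_spec(pkg) is not None


# After `pip install -e .` in the activated environment, this should
# report True for "patho_sam".
print(is_installed("patho_sam"))
```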
See the examples folder for more details.
- Download the example whole-slide image by running the following via terminal: `patho_sam.example_data` (see `patho_sam.example_data -h` for more details about the CLI).
- Run automatic segmentation on your own WSI or the example data by running the following via terminal: `patho_sam.automatic_segmentation -i /home/anwai/.cache/micro_sam/sample_data/whole-slide-histopathology-example-image.svs -o segmentation.tif`
NOTE 1: See `patho_sam.automatic_segmentation -h` for more details about the CLI.

NOTE 2: You can find your cache directory using: `python -c "from micro_sam.util import get_cache_directory; print(get_cache_directory())"`.
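The output `segmentation.tif` is an instance segmentation, i.e. a label image where 0 marks background and each nucleus carries a unique positive integer id. A quick way to count the segmented nuclei with numpy (a sketch; the toy array below stands in for the real output, which you could load e.g. with `imageio`):

```python
import numpy as np


def count_instances(segmentation: np.ndarray) -> int:
    """Count instances in a label image, where 0 marks background."""
    ids = np.unique(segmentation)
    return int((ids != 0).sum())


# Toy 2D label image standing in for a real segmentation result.
toy = np.array([
    [0, 1, 1],
    [0, 2, 0],
    [3, 3, 0],
])
print(count_instances(toy))  # → 3
```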
If you are using this repository in your research, please cite:
- our paper (now published in MIDL 2025).
- the Segment Anything for Microscopy publication.
- the original Segment Anything publication.
