
Commit e3bd691

Update doc and extend finetuned model examples
1 parent 70fec20 commit e3bd691

5 files changed: 80 additions, 22 deletions


doc/installation.md

Lines changed: 21 additions & 20 deletions
@@ -1,41 +1,43 @@
  # Installation

  We provide three different ways of installing `micro_sam`:
- - [From conda](#from-conda) is the recommended way if you want to use all functionality.
- - [From source](#from-source) for setting up a development environment to change and potentially contribute to our software.
+ - [From mamba](#from-mamba) is the recommended way if you want to use all functionality.
+ - [From source](#from-source) for setting up a development environment to use the latest version and be able to change and contribute to our software.
  - [From installer](#from-installer) to install without having to use conda. This mode of installation is still experimental! It only provides the annotation tools and does not enable model finetuning.

  Our software requires the following dependencies:
  - [PyTorch](https://pytorch.org/get-started/locally/)
  - [SegmentAnything](https://github.com/facebookresearch/segment-anything#installation)
  - [elf](https://github.com/constantinpape/elf)
+ - [torch_em](https://github.com/constantinpape/torch-em)
  - [napari](https://napari.org/stable/) (for the interactive annotation tools)
- - [torch_em](https://github.com/constantinpape/torch-em) (for the training functionality)

- ## From conda

- `micro_sam` is available as a conda package and can be installed via
- ```
- $ conda install -c conda-forge micro_sam
- ```
+ ## From mamba
+
+ [mamba](https://mamba.readthedocs.io/en/latest/) is a drop-in replacement for conda, but is much faster.
+ While the steps below may also work with `conda`, we highly recommend using `mamba`.
+ You can follow the instructions [here](https://mamba.readthedocs.io/en/latest/installation/mamba-installation.html) to install `mamba`.

- This command will not install the required dependencies for the annotation tools and for training / finetuning.
- To use the annotation functionality you also need to install `napari`:
+ `micro_sam` can then be installed in an existing environment via
  ```
- $ conda install -c conda-forge napari pyqt
+ $ mamba install -c conda-forge micro_sam
  ```
- And to use the training functionality `torch_em`:
+ or you can create a new environment (here called `micro-sam`) via
  ```
- $ conda install -c conda-forge torch_em
+ $ mamba create -c conda-forge -n micro-sam micro_sam
  ```

- In case the installation via conda takes too long consider using [mamba](https://mamba.readthedocs.io/en/latest/).
- Once you have it installed you can simply replace the `conda` commands with `mamba`.
+ You also need to install napari to use the annotation tool:
+ ```
+ $ mamba install -c conda-forge napari pyqt
+ ```
+ (We don't include napari in the default installation dependencies to keep the choice of rendering backend flexible.)


  ## From source

- To install `micro_sam` from source, we recommend to first set up a conda environment with the necessary requirements:
+ To install `micro_sam` from source, we recommend first setting up an environment with the necessary requirements:
  - [environment_gpu.yaml](https://github.com/computational-cell-analytics/micro-sam/blob/master/environment_gpu.yaml): sets up an environment with GPU support.
  - [environment_cpu.yaml](https://github.com/computational-cell-analytics/micro-sam/blob/master/environment_cpu.yaml): sets up an environment with CPU support.

@@ -52,11 +54,11 @@ $ cd micro_sam
  3. Create the GPU or CPU environment:

  ```
- $ conda env create -f <ENV_FILE>.yaml
+ $ mamba env create -f <ENV_FILE>.yaml
  ```
  4. Activate the environment:
  ```
- $ conda activate sam
+ $ mamba activate sam
  ```
  5. Install `micro_sam`:
  ```
@@ -65,7 +67,6 @@ $ pip install -e .

  **Troubleshooting:**

- - On some systems `conda` is extremely slow and cannot resolve the environment in the step `conda env create ...`. You can use `mamba` instead, which is a faster re-implementation of `conda`. It can resolve the environment in less than a minute on any system we tried. Check out [this link](https://mamba.readthedocs.io/en/latest/installation.html) for how to install `mamba`. Once you have installed it, run `mamba env create -f <ENV_FILE>.yaml` to create the env.
  - Installation on MAC with an M1 or M2 processor:
    - The pytorch installation from `environment_cpu.yaml` does not work with a MAC that has an M1 or M2 processor. Instead you need to:
      - Create a new environment: `mamba create -c conda-forge python pip -n sam`
@@ -89,7 +90,7 @@ We also provide installers for Linux and Windows:

  **The installers are still experimental and not fully tested.** Mac is not supported yet, but we are working on also providing an installer for it.

- If you encounter problems with them then please consider installing `micro_sam` via [conda](#from-conda) instead.
+ If you encounter problems with them then please consider installing `micro_sam` via [mamba](#from-mamba) instead.

  **Linux Installer:**

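Regardless of which installation route you take, a quick way to check that everything is wired up is to load a SAM model through `micro_sam` from Python. This is a minimal sketch, not part of the commit; `"vit_b"` is just an example model type and the first call will download the model weights.

```python
# Minimal post-installation check (sketch; run inside the environment created above).
from micro_sam.util import get_sam_model

predictor = get_sam_model(model_type="vit_b")  # downloads the default SAM weights on first use
print("Loaded SAM predictor:", type(predictor).__name__)
```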
doc/python_library.md

Lines changed: 5 additions & 0 deletions
@@ -26,4 +26,9 @@ TODO: provide link to the paper with results on how much data is needed

  The training logic is implemented in `micro_sam.training` and is based on [torch-em](https://github.com/constantinpape/torch-em). Please check out [examples/finetuning](https://github.com/computational-cell-analytics/micro-sam/tree/master/examples/finetuning) to see how you can finetune on your own data with it. The script `finetune_hela.py` contains an example for finetuning on a small custom microscopy dataset and `use_finetuned_model.py` shows how this model can then be used in the interactive annotation tools.

+ Since release v0.4.0 we also support training an additional decoder for automatic instance segmentation. This yields better results than the automatic mask generation of Segment Anything and is significantly faster.
+ You can enable training of the decoder by setting `train_instance_segmentation = True` [here](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/finetuning/finetune_hela.py#L165).
+ The script `instance_segmentation_with_finetuned_model.py` shows how to use it for automatic instance segmentation.
+ We will fully integrate this functionality with the annotation tool in the next release.
+
  More advanced examples for finetuned models, including quantitative and qualitative evaluation, can be found in [finetuning](https://github.com/computational-cell-analytics/micro-sam/tree/master/finetuning), which contains the code for training and evaluating our microscopy models.
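Condensed, the workflow from `instance_segmentation_with_finetuned_model.py` (added in this commit) looks as follows; the paths and the `vit_b` model type are the ones used in the example and need to be adapted to your own data:

```python
# Condensed sketch of the decoder-based automatic instance segmentation workflow,
# following the example script added in this commit.
import imageio.v3 as imageio
from micro_sam.instance_segmentation import (
    load_instance_segmentation_with_decoder_from_checkpoint, mask_data_to_segmentation
)
from micro_sam.util import precompute_image_embeddings

image = imageio.imread("./data/DIC-C2DH-HeLa.zip.unzip/DIC-C2DH-HeLa/01/t083.tif")
segmenter = load_instance_segmentation_with_decoder_from_checkpoint(
    "./checkpoints/sam_hela/best.pt", model_type="vit_b"
)
embeddings = precompute_image_embeddings(
    segmenter._predictor, image, "./embeddings/embeddings-finetuned.zarr"
)
segmenter.initialize(image, embeddings)  # compute the predictions for this image
masks = segmenter.generate(output_mode="binary_mask")  # see 'generate' for hyperparameters
segmentation = mask_data_to_segmentation(masks, with_background=True)
```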

examples/finetuning/README.md

Lines changed: 6 additions & 0 deletions
@@ -0,0 +1,6 @@
+ # Example for model finetuning
+
+ This folder contains example scripts that show how to finetune a SAM model on your own data and how the finetuned model can then be used:
+ - `finetune_hela.py`: Shows how to finetune the model on new data. Set `train_instance_segmentation` (line 165) to `True` in order to also train a decoder for automatic instance segmentation.
+ - `annotator_with_finetuned_model.py`: Use the finetuned model in the 2d annotator.
+ - `instance_segmentation_with_finetuned_model.py`: Use the finetuned model for automatic instance segmentation (only if you have trained with `train_instance_segmentation = True`).

examples/finetuning/use_finetuned_model.py renamed to examples/finetuning/annotator_with_finetuned_model.py

Lines changed: 2 additions & 2 deletions
@@ -4,7 +4,7 @@
  from micro_sam.sam_annotator import annotator_2d


- def run_annotator_with_custom_model():
+ def run_annotator_with_finetuned_model():
      """Run the 2d annotator with a custom (finetuned) model.

      Here, we use the model that is produced by `finetune_hela.py` and apply it
@@ -30,4 +30,4 @@ def run_annotator_with_custom_model():


  if __name__ == "__main__":
-     run_annotator_with_custom_model()
+     run_annotator_with_finetuned_model()
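For reference, the renamed script essentially loads the finetuned checkpoint and hands it to `annotator_2d`. The sketch below illustrates that usage; the keyword names (`model_type`, `checkpoint_path`) and paths are assumptions based on the docstring and the other examples and may differ between `micro_sam` versions, so check `annotator_with_finetuned_model.py` for the exact call.

```python
# Rough sketch of launching the 2d annotator with a finetuned model.
# Keyword names and paths are assumptions; see annotator_with_finetuned_model.py for the real call.
import imageio.v3 as imageio
from micro_sam.sam_annotator import annotator_2d

image = imageio.imread("./data/DIC-C2DH-HeLa.zip.unzip/DIC-C2DH-HeLa/01/t083.tif")
annotator_2d(
    image,
    embedding_path="./embeddings/embeddings-finetuned.zarr",
    model_type="vit_b",
    checkpoint_path="./checkpoints/sam_hela/best.pt",
)
```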
examples/finetuning/instance_segmentation_with_finetuned_model.py

Lines changed: 46 additions & 0 deletions
@@ -0,0 +1,46 @@
+ import napari
+ import imageio.v3 as imageio
+
+ from micro_sam.instance_segmentation import (
+     load_instance_segmentation_with_decoder_from_checkpoint, mask_data_to_segmentation
+ )
+ from micro_sam.util import precompute_image_embeddings
+
+
+ def run_instance_segmentation_with_finetuned_model():
+     """Run the instance segmentation with the finetuned model.
+
+     Here, we use the model that is produced by `finetune_hela.py` and apply it
+     for an image from the validation set.
+     This only works if `train_instance_segmentation` was set to True for the finetuning.
+     """
+     # take the last frame, which is part of the val set, so the model was not directly trained on it
+     image = imageio.imread("./data/DIC-C2DH-HeLa.zip.unzip/DIC-C2DH-HeLa/01/t083.tif")
+
+     # set the checkpoint and the path for caching the embeddings
+     checkpoint = "./checkpoints/sam_hela/best.pt"
+     embedding_path = "./embeddings/embeddings-finetuned.zarr"
+
+     model_type = "vit_b"  # We finetune a vit_b in the example script.
+     # Adapt this if you finetune a different model type, e.g. vit_h.
+
+     # Create the segmenter.
+     segmenter = load_instance_segmentation_with_decoder_from_checkpoint(checkpoint, model_type=model_type)
+     image_embeddings = precompute_image_embeddings(segmenter._predictor, image, embedding_path)
+
+     # Compute the segmentation for the current image:
+     # First initialize the segmenter.
+     segmenter.initialize(image, image_embeddings)
+     # Then compute the actual segmentation. You can set different hyperparameters here,
+     # see the function description of 'generate' for details.
+     masks = segmenter.generate(output_mode="binary_mask")
+     segmentation = mask_data_to_segmentation(masks, with_background=True)
+
+     viewer = napari.Viewer()
+     viewer.add_image(image)
+     viewer.add_labels(segmentation)
+     napari.run()
+
+
+ if __name__ == "__main__":
+     run_instance_segmentation_with_finetuned_model()
