Commit ab75348: Update micro_sam doc (parent: 4882c6c)

5 files changed: +181 -2 lines

doc/annotation_tools.md

Lines changed: 110 additions & 1 deletion
# Annotation Tools
`micro_sam` provides applications for fast interactive 2d segmentation, 3d segmentation and tracking.
See an example for interactive cell segmentation in phase-contrast microscopy (left), interactive segmentation
of mitochondria in volume EM (middle) and interactive tracking of cells (right).

<img src="https://github.com/computational-cell-analytics/micro-sam/assets/4263537/d5ee2080-ab08-4716-b4c4-c169b4ed29f5" width="256">
<img src="https://github.com/computational-cell-analytics/micro-sam/assets/4263537/dfca3d9b-dba5-440b-b0f9-72a0683ac410" width="256">
<img src="https://github.com/computational-cell-analytics/micro-sam/assets/4263537/aefbf99f-e73a-4125-bb49-2e6592367a64" width="256">

The annotation tools can be started from the `micro_sam` GUI, the command line or from python scripts. The `micro_sam` GUI can be started by
```
$ micro_sam.annotator
```

They are built with [napari](https://napari.org/stable/) to implement the viewer and user interaction.
If you are not familiar with napari yet, [start here](https://napari.org/stable/tutorials/fundamentals/quick_start.html).
The `micro_sam` applications are mainly based on [the point layer](https://napari.org/stable/howtos/layers/points.html), [the shape layer](https://napari.org/stable/howtos/layers/shapes.html) and [the label layer](https://napari.org/stable/howtos/layers/labels.html).
## Annotator 2D

The 2d annotator can be started by
- clicking `2d annotator` in the `micro_sam` GUI.
- running `$ micro_sam.annotator_2d` in the command line. Run `micro_sam.annotator_2d -h` for details.
- calling `micro_sam.sam_annotator.annotator_2d` in a python script, as sketched below. Check out [examples/sam_annotator_2d.py](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/sam_annotator_2d.py) for details.
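A minimal sketch for the python case is shown below; the image path and embedding location are placeholders, see the example script above for the authoritative usage:

```python
# Minimal sketch for starting the 2d annotator from python.
# "cells.tif" and the embedding path are placeholders.
import imageio.v3 as imageio
from micro_sam.sam_annotator import annotator_2d

image = imageio.imread("cells.tif")  # load the 2d image as a numpy array
# Passing embedding_path caches the image embeddings on disc,
# so they are not recomputed when the annotator is restarted.
annotator_2d(image, embedding_path="embeddings/cells.zarr")
```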
The user interface of the 2d annotator looks like this:

<img src="https://raw.githubusercontent.com/computational-cell-analytics/micro-sam/master/doc/images/2d-annotator-menu.png" width="768">
It contains the following elements:
1. The napari layers for the image, segmentations and prompts:
    - `box_prompts`: shape layer that is used to provide box prompts to SegmentAnything.
    - `prompts`: point layer that is used to provide point prompts to SegmentAnything. Positive prompts (green points) mark the object you want to segment, negative prompts (red points) mark the outside of the object.
    - `current_object`: label layer that contains the object you're currently segmenting.
    - `committed_objects`: label layer with the objects that have already been segmented.
    - `auto_segmentation`: label layer with the results from using SegmentAnything for automatic instance segmentation.
    - `raw`: image layer that shows the image data.
2. The prompt menu for changing the currently selected point from positive to negative and vice versa. This can also be done by pressing `t`.
3. The menu for automatic segmentation. Pressing `Segment All Objects` will run automatic segmentation. The results will be displayed in the `auto_segmentation` layer. Change the parameters `pred iou thresh` and `stability score thresh` to control how many objects are segmented.
4. The menu for interactive segmentation. Pressing `Segment Object` (or `s`) will run segmentation for the current prompts. The result is displayed in `current_object`.
5. The menu for committing the segmentation. When pressing `Commit` (or `c`) the result from the selected layer (either `current_object` or `auto_segmentation`) will be transferred from the respective layer to `committed_objects`.
6. The menu for clearing the current annotations. Pressing `Clear Annotations` (or `shift c`) will clear the current annotations and the current segmentation.
Note that point prompts and box prompts can be combined. When you're using point prompts you can only segment one object at a time. With box prompts you can segment several objects at once.

Check out [this video](https://youtu.be/DfWE_XRcqN8) for an example of how to use the interactive 2d annotator.

We also provide the `image series annotator`, which can be used for running the 2d annotator for several images in a folder. You can start it by clicking `Image series annotator` in the GUI, by running `micro_sam.image_series_annotator` in the command line, or from a [python script](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/sam_image_series_annotator.py).
## Annotator 3D

The 3d annotator can be started by
- clicking `3d annotator` in the `micro_sam` GUI.
- running `$ micro_sam.annotator_3d` in the command line. Run `micro_sam.annotator_3d -h` for details.
- calling `micro_sam.sam_annotator.annotator_3d` in a python script, as sketched below. Check out [examples/sam_annotator_3d.py](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/sam_annotator_3d.py) for details.
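A minimal python sketch, analogous to the 2d case; the volume path and embedding location are again placeholders:

```python
# Minimal sketch for starting the 3d annotator from python.
# "volume.tif" and the embedding path are placeholders.
import imageio.v3 as imageio
from micro_sam.sam_annotator import annotator_3d

volume = imageio.imread("volume.tif")  # load the volume as a numpy array (z, y, x)
# Caching the embeddings on disc is especially useful for volumes,
# where computing them on a CPU can take a while.
annotator_3d(volume, embedding_path="embeddings/volume.zarr")
```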
The user interface of the 3d annotator looks like this:

<img src="https://raw.githubusercontent.com/computational-cell-analytics/micro-sam/master/doc/images/3d-annotator-menu.png" width="768">
Most elements are the same as in [the 2d annotator](#annotator-2d):
1. The napari layers that contain the image, segmentation and prompts. Same as for [the 2d annotator](#annotator-2d) but without the `auto_segmentation` layer.
2. The prompt menu.
3. The menu for interactive segmentation.
4. The 3d segmentation menu. Pressing `Segment Volume` (or `v`) will extend the segmentation for the current object across the volume.
5. The menu for committing the segmentation.
6. The menu for clearing the current annotations.
Note that you can only segment one object at a time with the 3d annotator.

Check out [this video](https://youtu.be/5Jo_CtIefTM) for an overview of the interactive 3d segmentation functionality.
## Annotator Tracking

The tracking annotator can be started by
- clicking `Tracking annotator` in the `micro_sam` GUI.
- running `$ micro_sam.annotator_tracking` in the command line. Run `micro_sam.annotator_tracking -h` for details.
- calling `micro_sam.sam_annotator.annotator_tracking` in a python script, as sketched below. Check out [examples/sam_annotator_tracking.py](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/sam_annotator_tracking.py) for details.
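A minimal python sketch; the timeseries path and embedding location are placeholders:

```python
# Minimal sketch for starting the tracking annotator from python.
# "timeseries.tif" and the embedding path are placeholders.
import imageio.v3 as imageio
from micro_sam.sam_annotator import annotator_tracking

timeseries = imageio.imread("timeseries.tif")  # 2d timeseries as a numpy array (t, y, x)
annotator_tracking(timeseries, embedding_path="embeddings/timeseries.zarr")
```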
The user interface of the tracking annotator looks like this:

<img src="https://raw.githubusercontent.com/computational-cell-analytics/micro-sam/master/doc/images/tracking-annotator-menu.png" width="768">
Most elements are the same as in [the 2d annotator](#annotator-2d):
1. The napari layers that contain the image, segmentation and prompts. Same as for [the 2d annotator](#annotator-2d), but without the `auto_segmentation` layer; `current_tracks` and `committed_tracks` are the equivalents of `current_object` and `committed_objects`.
2. The prompt menu.
3. The menu with tracking settings: `track_state` is used to indicate that the object you are tracking is dividing in the current frame. `track_id` is used to select which of the tracks after division you are following.
4. The menu for interactive segmentation.
5. The tracking menu. Press `Track Object` (or `v`) to track the current object across time.
6. The menu for committing the current tracking result.
7. The menu for clearing the current annotations.
Note that the tracking annotator only supports 2d image data; volumetric data is not supported.

Check out [this video](https://youtu.be/PBPW0rDOn9w) for an overview of the interactive tracking functionality.
### Tips & Tricks

- You can use tiling for large images. (TODO: expand on this).
- The applications pre-compute the image embeddings produced by SegmentAnything and (optionally) store them on disc. If you are using a CPU this step can take a while for 3d data or timeseries (you will see a progress bar with a time estimate). If you have access to a GPU without graphical interface (e.g. via a local computer cluster or a cloud provider), you can also pre-compute the embeddings there and then copy them to your laptop / local machine to speed this up. You can use the command `micro_sam.precompute_embeddings` for this (it is installed with the rest of the applications). You can specify the location of the precomputed embeddings via the `embedding_path` argument; see also the python sketch after this list.
- Most other processing steps are very fast even on a CPU, so interactive annotation is possible. An exception is the automatic segmentation step (2d segmentation), which takes several minutes without a GPU (depending on the image size). For large volumes and timeseries, segmenting an object in 3d / tracking across time can take a couple of seconds with a CPU (it is very fast with a GPU).
- You can also try using a smaller version of the SegmentAnything model to speed up the computations. For this you can pass the `model_type` argument and either set it to `vit_l` or `vit_b` (default is `vit_h`). However, this may lead to worse results.
- You can save and load the results from the `committed_objects` / `committed_tracks` layer to correct segmentations you obtained from another tool (e.g. CellPose) or to save intermediate annotation results. The results can be saved via `File->Save Selected Layer(s) ...` in the napari menu (see the tutorial videos for details). They can be loaded again by specifying the corresponding location via the `segmentation_result` (2d and 3d segmentation) or `tracking_result` (tracking) argument.
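To illustrate the embedding workflow from the list above, the sketch below pre-computes the embeddings with the python library instead of the command line tool. It assumes the helpers `get_sam_model` and `precompute_image_embeddings` from `micro_sam.util`; all paths are placeholders:

```python
# Sketch: pre-compute image embeddings on a machine with a GPU and
# cache them on disc, so the embedding folder can be copied to a
# laptop / local machine afterwards. All paths are placeholders.
import imageio.v3 as imageio
from micro_sam import util

volume = imageio.imread("volume.tif")
predictor = util.get_sam_model(model_type="vit_h")  # smaller alternatives: "vit_l", "vit_b"
util.precompute_image_embeddings(predictor, volume, save_path="embeddings/volume.zarr")

# On the laptop, point the annotator at the copied embeddings, e.g.:
# annotator_3d(volume, embedding_path="embeddings/volume.zarr")
```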
### Known limitations

- Segment Anything does not work well for very small or fine-grained objects (e.g. filaments).
- For the automatic segmentation functionality we currently rely on the automatic mask generation provided by SegmentAnything. It is slow and often misses objects in microscopy images. For now, we only offer this functionality in the 2d segmentation app; we are working on improving it and extending it to 3d segmentation and tracking.
- Prompt bounding boxes do not provide the full functionality for tracking yet (they cannot be used for divisions or for starting new tracks). See also [this github issue](https://github.com/computational-cell-analytics/micro-sam/issues/23).

doc/images/2d-annotator-menu.png

11.2 KB

doc/installation.md

Lines changed: 55 additions & 0 deletions
# Installation
`micro_sam` requires the following dependencies:
- [PyTorch](https://pytorch.org/get-started/locally/)
- [SegmentAnything](https://github.com/facebookresearch/segment-anything#installation)
- [napari](https://napari.org/stable/)
- [elf](https://github.com/constantinpape/elf)

It is available as a conda package and can be installed via
```
$ conda install -c conda-forge micro_sam
```
## From source

To install `micro_sam` from source, we recommend first setting up a conda environment with the necessary requirements:
- [environment_gpu.yaml](https://github.com/computational-cell-analytics/micro-sam/blob/master/environment_gpu.yaml): sets up an environment with GPU support.
- [environment_cpu.yaml](https://github.com/computational-cell-analytics/micro-sam/blob/master/environment_cpu.yaml): sets up an environment with CPU support.

To create one of these environments and install `micro_sam` into it, follow these steps:
1. Clone the repository:
```
$ git clone https://github.com/computational-cell-analytics/micro-sam
```
2. Enter it:
```
$ cd micro-sam
```
3. Create the GPU or CPU environment:
```
$ conda env create -f <ENV_FILE>.yaml
```
4. Activate the environment:
```
$ conda activate sam
```
5. Install `micro_sam`:
```
$ pip install -e .
```
**Troubleshooting:**

- On some systems `conda` is extremely slow and cannot resolve the environment in the step `conda env create ...`. You can use `mamba` instead, which is a faster re-implementation of `conda`. It can resolve the environment in less than a minute on any system we tried. Check out [this link](https://mamba.readthedocs.io/en/latest/installation.html) for how to install `mamba`. Once you have installed it, run `mamba env create -f <ENV_FILE>.yaml` to create the env.
- Installation on a Mac with an M1 or M2 processor:
    - The pytorch installation from `environment_cpu.yaml` does not work with a Mac that has an M1 or M2 processor. Instead you need to:
        - Create a new environment: `mamba create -c conda-forge python pip -n sam`
        - Activate it via `mamba activate sam`
        - Follow the instructions for how to install pytorch for Mac via conda from [pytorch.org](https://pytorch.org/).
        - Install additional dependencies: `mamba install -c conda-forge napari python-elf tqdm`
        - Install SegmentAnything: `pip install git+https://github.com/facebookresearch/segment-anything.git`
        - Install `micro_sam` by running `pip install -e .` in the cloned `micro-sam` folder.
    - **Note:** we have seen many issues with the pytorch installation on Mac. If a wrong pytorch version is installed for you (which will cause pytorch errors once you run the application) please try again with a clean `mambaforge` installation. Please install the `OS X, arm64` version from [here](https://github.com/conda-forge/miniforge#mambaforge).
    - Some Macs require a specific installation order of packages. If the steps laid out above don't work for you please check out the procedure described [in this github issue](https://github.com/computational-cell-analytics/micro-sam/issues/77).

doc/python_library.md

Lines changed: 14 additions & 0 deletions
# How to use the Python Library

The python library can be imported via
```python
import micro_sam
```
It implements functionality for running Segment Anything on 2d and 3d data, additional instance segmentation functionality, and several other helper functions for using Segment Anything.
This functionality is used to implement the `micro_sam` annotation tools, but you can also use it as a standalone python library.
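For example, loading one of the SegmentAnything models through the library might look like the following sketch. The helper `micro_sam.util.get_sam_model` is assumed here; its exact signature may differ between versions:

```python
# Sketch: load a SegmentAnything model through the micro_sam library.
from micro_sam import util

# Downloads the model checkpoint if necessary and returns a predictor
# that the micro_sam segmentation functions build on.
predictor = util.get_sam_model(model_type="vit_b")
```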
## Finetuned models

We provide fine-tuned Segment Anything models for microscopy data. They are still in an experimental stage and we will upload more and better models soon, as well as the code for fine-tuning.
For using the current models, check out the [2d annotator example](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/sam_annotator_2d.py#L62) and set `use_finetuned_model` to `True`.

micro_sam/__init__.py

Lines changed: 2 additions & 1 deletion
```diff
@@ -1,7 +1,8 @@
 """
 .. include:: ../doc/start_page.md
-.. include:: ../doc/annotation_tools.md
 .. include:: ../doc/installation.md
+.. include:: ../doc/annotation_tools.md
+.. include:: ../doc/python_library.md
 """

 __all__ = [
```
