
Commit 1bb1e09

Merge pull request #96 from computational-cell-analytics/more-doc
Extend documentation
2 parents 846269f + 77f5eba commit 1bb1e09

11 files changed: +244 -163 lines


README.md

Lines changed: 26 additions & 158 deletions
@@ -1,14 +1,19 @@
[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.7919746.svg)](https://doi.org/10.5281/zenodo.7919746)
[![Conda](https://anaconda.org/conda-forge/micro_sam/badges/version.svg)](https://anaconda.org/conda-forge/micro_sam)

# SegmentAnything for Microscopy

Tools for segmentation and tracking in microscopy built on top of [SegmentAnything](https://segment-anything.com/).
Segment and track objects in microscopy images interactively with a few clicks!

We implement napari applications for:
- interactive 2d segmentation (Left: interactive cell segmentation)
- interactive 3d segmentation (Middle: interactive mitochondria segmentation in EM)
- interactive tracking of 2d image data (Right: interactive cell tracking)

<img src="https://github.com/computational-cell-analytics/micro-sam/assets/4263537/d04cb158-9f5b-4460-98cd-023c4f19cccd" width="256">
<img src="https://github.com/computational-cell-analytics/micro-sam/assets/4263537/dfca3d9b-dba5-440b-b0f9-72a0683ac410" width="256">
<img src="https://github.com/computational-cell-analytics/micro-sam/assets/4263537/aefbf99f-e73a-4125-bb49-2e6592367a64" width="256">

**Beta version**

@@ -18,162 +23,9 @@ We will soon provide a stand-alone application for running the `micro_sam` annot

If you run into any problems or have questions please open an issue or reach out via [image.sc](https://forum.image.sc/) using the tag `micro-sam` and tagging @constantinpape.

## Installation

We require these dependencies:
- [PyTorch](https://pytorch.org/get-started/locally/)
- [SegmentAnything](https://github.com/facebookresearch/segment-anything#installation)
- [napari](https://napari.org/stable/)
- [elf](https://github.com/constantinpape/elf)

We recommend using conda and provide two environment files with all necessary requirements:
- `environment_gpu.yaml`: sets up an environment with GPU support.
- `environment_cpu.yaml`: sets up an environment with CPU support.

To install via conda, first clone this repository:
```
git clone https://github.com/computational-cell-analytics/micro-sam
```
and enter it:
```
cd micro-sam
```

Then create either the GPU or CPU environment via
```
conda env create -f <ENV_FILE>.yaml
```
Then activate the environment via
```
conda activate sam
```
And install our napari applications and the `micro_sam` library via
```
pip install -e .
```

**Troubleshooting:**

- On some systems `conda` is extremely slow and cannot resolve the environment in the step `conda env create ...`. You can use `mamba` instead, which is a faster re-implementation of `conda`. It can resolve the environment in less than a minute on any system we tried. Check out [this link](https://mamba.readthedocs.io/en/latest/installation.html) for how to install `mamba`. Once you have installed it, run `mamba env create -f <ENV_FILE>.yaml` to create the env.
- Installation on a Mac with an M1 or M2 processor:
  - The pytorch installation from `environment_cpu.yaml` does not work with a Mac that has an M1 or M2 processor. Instead you need to:
    - Create a new environment: `mamba create -c conda-forge python pip -n sam`
    - Activate it via `mamba activate sam`
    - Follow the instructions for how to install pytorch for Mac via conda from [pytorch.org](https://pytorch.org/).
    - Install additional dependencies: `mamba install -c conda-forge napari python-elf tqdm`
    - Install SegmentAnything: `pip install git+https://github.com/facebookresearch/segment-anything.git`
    - Install `micro_sam` by running `pip install -e .` in this folder.
  - **Note:** we have seen many issues with the pytorch installation on Mac. If a wrong pytorch version is installed for you (which will cause pytorch errors once you run the application) please try again with a clean `mambaforge` installation. Please install the `OS X, arm64` version from [here](https://github.com/conda-forge/miniforge#mambaforge).

## Usage

After installing the `micro_sam` python package, the three interactive annotation tools can be started from the command line or from a python script (see details below).
They are built with napari to implement the viewer and user interaction. If you are not familiar with napari yet, [start here](https://napari.org/stable/tutorials/fundamentals/quick_start.html).
To use the apps, the functionality of [napari point layers](https://napari.org/stable/howtos/layers/points.html), [napari shape layers](https://napari.org/stable/howtos/layers/shapes.html) and [napari labels layers](https://napari.org/stable/howtos/layers/labels.html) is of particular importance.

**Note:** the screenshots and tutorials do not show how to use bounding boxes for prompts yet. You can use the `box_prompts` layer for them in all three tools, and they can be used as a replacement for, or in combination with, the point prompts.

### 2D Segmentation

The application for 2d segmentation can be started in two ways:
- Via the command line with the command `micro_sam.annotator_2d`. Run `micro_sam.annotator_2d -h` for details.
- From a python script with the function `micro_sam.sam_annotator.annotator_2d`. Check out [examples/sam_annotator_2d](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/sam_annotator_2d.py) for details, or see the sketch below.

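For instance, starting the 2d annotator from python could look like the following. This is a minimal sketch based on the linked example script: the image path and embedding path are placeholders, and the exact set of keyword arguments may differ between versions.

```python
import imageio.v3 as imageio
from micro_sam.sam_annotator import annotator_2d

# Load the image to annotate (placeholder path).
image = imageio.imread("path/to/image.tif")

# Start the interactive 2d annotator. `embedding_path` caches the image
# embeddings on disc so they are not recomputed on the next start.
annotator_2d(image, embedding_path="embeddings/image.zarr")
```
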
Below you can see the interface of the application for a cell segmentation example:

<img src="https://github.com/computational-cell-analytics/micro-sam/assets/4263537/041585a6-0b72-4e4b-8df3-42135f4334c5" width="768">

The most important parts of the user interface are:
1. The napari layers that contain the image, segmentations and prompts:
    - `prompts`: point layer that is used to provide prompts to SegmentAnything. Positive prompts (green points) mark the object you want to segment, negative prompts (red points) mark the outside of the object.
    - `current_object`: label layer that contains the object you're currently segmenting.
    - `committed_objects`: label layer with the objects that have already been segmented.
    - `auto_segmentation`: label layer with the results from using SegmentAnything for automatic instance segmentation.
    - `raw`: image layer that shows the image data.
2. The prompt menu for changing the currently selected point from positive to negative and vice versa. This can also be done by pressing `t`.
3. The menu for automatic segmentation. Pressing `Segment All Objects` will run automatic segmentation (this can take a few minutes if you are using a CPU). The results will be displayed in the `auto_segmentation` layer.
4. The menu for interactive segmentation. Pressing `Segment Object` (or `s`) will run segmentation for the current prompts. The result is displayed in `current_object`.
5. The menu for committing the segmentation. When pressing `Commit` (or `c`) the result from the selected layer (either `current_object` or `auto_segmentation`) will be transferred from the respective layer to `committed_objects`.

Check out [this video](https://youtu.be/DfWE_XRcqN8) for an overview of the interactive 2d segmentation functionality.

### 3D Segmentation

The application for 3d segmentation can be started as follows:
- Via the command line with the command `micro_sam.annotator_3d`. Run `micro_sam.annotator_3d -h` for details.
- From a python script with the function `micro_sam.sam_annotator.annotator_3d`. Check out [examples/sam_annotator_3d](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/sam_annotator_3d.py) for details, or see the sketch below.

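Analogous to the 2d case, a minimal python sketch could look as follows; the volume path is a placeholder and the keyword arguments mirror those of the linked example, which may differ between versions.

```python
import imageio.v3 as imageio
from micro_sam.sam_annotator import annotator_3d

# Load the volume to annotate (placeholder path).
volume = imageio.imread("path/to/volume.tif")

# Start the interactive 3d annotator; caching the embeddings is especially
# useful here, since computing them for a full volume is slow on a CPU.
annotator_3d(volume, embedding_path="embeddings/volume.zarr")
```
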
<img src="https://github.com/computational-cell-analytics/micro-sam/assets/4263537/0a6fb19e-7db5-4188-9371-3c238671f881" width="768">

The most important parts of the user interface are listed below. Most of these elements are the same as for [the 2d segmentation app](https://github.com/computational-cell-analytics/micro-sam#2d-segmentation).
1. The napari layers that contain the image, segmentation and prompts. Same as for [the 2d segmentation app](https://github.com/computational-cell-analytics/micro-sam#2d-segmentation) but without the `auto_segmentation` layer.
2. The prompt menu.
3. The menu for interactive segmentation.
4. The 3d segmentation menu. Pressing `Segment Volume` (or `v`) will extend the segmentation for the current object across the volume.
5. The menu for committing the segmentation.

Check out [this video](https://youtu.be/5Jo_CtIefTM) for an overview of the interactive 3d segmentation functionality.

### Tracking

The application for interactive tracking (of 2d data) can be started as follows:
- Via the command line with the command `micro_sam.annotator_tracking`. Run `micro_sam.annotator_tracking -h` for details.
- From a python script with the function `micro_sam.sam_annotator.annotator_tracking`. Check out [examples/sam_annotator_tracking](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/sam_annotator_tracking.py) for details, or see the sketch below.

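A minimal python sketch, again with placeholder paths and with keyword arguments taken from the linked example (they may differ between versions):

```python
import imageio.v3 as imageio
from micro_sam.sam_annotator import annotator_tracking

# Load the timeseries to annotate (placeholder path); the data is
# expected to have the time axis first.
timeseries = imageio.imread("path/to/timeseries.tif")

# Start the interactive tracking annotator.
annotator_tracking(timeseries, embedding_path="embeddings/timeseries.zarr")
```
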
<img src="https://github.com/computational-cell-analytics/micro-sam/assets/4263537/dfb80f17-a370-4cbc-aaeb-29de93444090" width="768">

The most important parts of the user interface are listed below. Most of these elements are the same as for [the 2d segmentation app](https://github.com/computational-cell-analytics/micro-sam#2d-segmentation).
1. The napari layers that contain the image, segmentation and prompts. Same as for [the 2d segmentation app](https://github.com/computational-cell-analytics/micro-sam#2d-segmentation) but without the `auto_segmentation` layer; `current_tracks` and `committed_tracks` are the equivalents of `current_object` and `committed_objects`.
2. The prompt menu.
3. The menu with tracking settings: `track_state` is used to indicate that the object you are tracking is dividing in the current frame. `track_id` is used to select which of the tracks after division you are following.
4. The menu for interactive segmentation.
5. The tracking menu. Press `Track Object` (or `v`) to track the current object across time.
6. The menu for committing the current tracking result.

Check out [this video](https://youtu.be/PBPW0rDOn9w) for an overview of the interactive tracking functionality.

### Tips & Tricks

- By default, the applications pre-compute the image embeddings produced by SegmentAnything and store them on disc. If you are using a CPU this step can take a while for 3d data or timeseries (you will see a progress bar with a time estimate). If you have access to a GPU without graphical interface (e.g. via a local computer cluster or a cloud provider), you can also pre-compute the embeddings there and then copy them to your laptop / local machine to speed this up. You can use the command `micro_sam.precompute_embeddings` for this (it is installed with the rest of the applications), or do it from python as shown in the sketch after this list. You can specify the location of the precomputed embeddings via the `embedding_path` argument.
- Most other processing steps are very fast even on a CPU, so interactive annotation is possible. An exception is the automatic segmentation step (2d segmentation), which takes several minutes without a GPU (depending on the image size). For large volumes and timeseries, segmenting an object in 3d / tracking across time can take a couple of seconds with a CPU (it is very fast with a GPU).
- You can also try using a smaller version of the SegmentAnything model to speed up the computations. For this you can pass the `model_type` argument and either set it to `vit_l` or `vit_b` (default is `vit_h`). However, this may lead to worse results.
- You can save and load the results from the `committed_objects` / `committed_tracks` layer to correct segmentations you obtained from another tool (e.g. CellPose) or to save intermediate annotation results. The results can be saved via `File->Save Selected Layer(s) ...` in the napari menu (see the tutorial videos for details). They can be loaded again by specifying the corresponding location via the `segmentation_result` (2d and 3d segmentation) or `tracking_result` (tracking) argument.

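The following sketch shows how pre-computing embeddings from python could look. It assumes the utility functions `get_sam_model` and `precompute_image_embeddings` from `micro_sam.util`; the exact signatures may differ between versions, and the paths are placeholders (run `micro_sam.precompute_embeddings -h` for the command line equivalent).

```python
import imageio.v3 as imageio
from micro_sam import util

# Load the raw data; for 3d data or a timeseries this would be a stack.
image = imageio.imread("path/to/image.tif")

# Initialize SegmentAnything; `model_type` can be "vit_h", "vit_l" or
# "vit_b" (see the tip on smaller models above).
predictor = util.get_sam_model(model_type="vit_h")

# Compute the embeddings and cache them at `save_path`, so the annotators
# can later load them via their `embedding_path` argument.
util.precompute_image_embeddings(predictor, image, save_path="embeddings/image.zarr")
```
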
### Known limitations

- SegmentAnything does not work well for very small or fine-grained objects (e.g. filaments).
- For the automatic segmentation functionality we currently rely on the automatic mask generation provided by SegmentAnything. It is slow and often misses objects in microscopy images. For now, we only offer this functionality in the 2d segmentation app; we are working on improving it and extending it to 3d segmentation and tracking.
- Prompt bounding boxes do not provide the full functionality for tracking yet (they cannot be used for divisions or for starting new tracks). See also https://github.com/computational-cell-analytics/micro-sam/issues/23.

### Using the micro_sam library

After installation, the `micro_sam` python library is available, which provides several utility functions for using SegmentAnything with napari. Check out [examples/image_series_annotator.py](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/image_series_annotator_app.py) for an example application for segmenting objects in an image series built with it. A small sketch of using the library directly is shown below.

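As an illustration of the library, the sketch below segments an object from point prompts without the napari UI. The module and function names (`micro_sam.prompt_based_segmentation.segment_from_points`) and their signatures are assumptions based on the library layout and may differ between versions.

```python
import numpy as np
import imageio.v3 as imageio
from micro_sam import util
from micro_sam.prompt_based_segmentation import segment_from_points  # assumed API

# Load an image and set up the model (placeholder path).
image = imageio.imread("path/to/image.tif")
predictor = util.get_sam_model(model_type="vit_h")
embeddings = util.precompute_image_embeddings(predictor, image)

# One positive point prompt (label 1) inside the object of interest;
# negative prompts would get label 0.
points = np.array([[128, 128]])
labels = np.array([1])

# Segment the object corresponding to the point prompts.
mask = segment_from_points(predictor, points, labels, image_embeddings=embeddings)
```
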
## Installation and Usage

TODO / links to doc

## Citation

@@ -194,6 +46,12 @@ Compared to these we support more applications (2d, 3d and tracking), and aim to

## Release Overview

**New in version 0.1.1**

- Fine-tuned Segment Anything models for microscopy (experimental)
- Simplified instance segmentation menu
- Menu for clearing annotations

**New in version 0.1.0**

- We support tiling in all annotators to enable processing large images.
@@ -211,3 +69,13 @@ Compared to these we support more applications (2d, 3d and tracking), and aim to
- We have added support for bounding box prompts, which provide better segmentation results than points in many cases.
- Interactive tracking now uses a better heuristic to propagate masks across time, leading to better automatic tracking results.
- And we have fixed several small bugs.

<!---
## Contributing
```
micro_sam <- library with utility functionality for using SAM for microscopy data
/sam_annotator <- the napari plugins for annotation
```
-->
