Commit 77f5eba

Update README and bump version
1 parent ab75348 commit 77f5eba

File tree

2 files changed: +26 −159 lines changed

README.md

Lines changed: 25 additions & 158 deletions
@@ -7,9 +7,13 @@ Tools for segmentation and tracking in microscopy built on top of [SegmentAnythi
 Segment and track objects in microscopy images interactively with a few clicks!
 
 We implement napari applications for:
-- interactive 2d segmentation
-- interactive 3d segmentation
-- interactive tracking of 2d image data
+- interactive 2d segmentation (Left: interactive cell segmentation)
+- interactive 3d segmentation (Middle: interactive mitochondria segmentation in EM)
+- interactive tracking of 2d image data (Right: interactive cell tracking)
+
+<img src="https://github.com/computational-cell-analytics/micro-sam/assets/4263537/d04cb158-9f5b-4460-98cd-023c4f19cccd" width="256">
+<img src="https://github.com/computational-cell-analytics/micro-sam/assets/4263537/dfca3d9b-dba5-440b-b0f9-72a0683ac410" width="256">
+<img src="https://github.com/computational-cell-analytics/micro-sam/assets/4263537/aefbf99f-e73a-4125-bb49-2e6592367a64" width="256">
 
 **Beta version**
 
@@ -19,162 +23,9 @@ We will soon provide a stand-alone application for running the `micro_sam` annot
 
 If you run into any problems or have questions, please open an issue or reach out via [image.sc](https://forum.image.sc/) using the tag `micro-sam` and tagging @constantinpape.
 
-![box-prompts](https://github.com/computational-cell-analytics/micro-sam/assets/4263537/d04cb158-9f5b-4460-98cd-023c4f19cccd)
-
-
-## Functionality overview
-
-We implement applications for fast interactive 2d and 3d segmentation as well as tracking.
-- Left: interactive 2d segmentation
-- Middle: interactive 3d segmentation
-- Right: interactive tracking
-
-<img src="https://github.com/computational-cell-analytics/micro-sam/assets/4263537/d5ee2080-ab08-4716-b4c4-c169b4ed29f5" width="256">
-<img src="https://github.com/computational-cell-analytics/micro-sam/assets/4263537/dfca3d9b-dba5-440b-b0f9-72a0683ac410" width="256">
-<img src="https://github.com/computational-cell-analytics/micro-sam/assets/4263537/aefbf99f-e73a-4125-bb49-2e6592367a64" width="256">
-
-## Installation
-
-We require these dependencies:
-- [PyTorch](https://pytorch.org/get-started/locally/)
-- [SegmentAnything](https://github.com/facebookresearch/segment-anything#installation)
-- [napari](https://napari.org/stable/)
-- [elf](https://github.com/constantinpape/elf)
-
-We recommend using conda and provide two environment files with all necessary requirements:
-- `environment_gpu.yaml`: sets up an environment with GPU support.
-- `environment_cpu.yaml`: sets up an environment with CPU support.
-
-To install via conda, first clone this repository:
-```
-git clone https://github.com/computational-cell-analytics/micro-sam
-```
-and enter it:
-```
-cd micro-sam
-```
-
-Then create either the GPU or CPU environment via
-
-```
-conda env create -f <ENV_FILE>.yaml
-```
-Then activate the environment via
-```
-conda activate sam
-```
-And install our napari applications and the `micro_sam` library via
-```
-pip install -e .
-```
-
-**Troubleshooting:**
-
-- On some systems `conda` is extremely slow and cannot resolve the environment in the step `conda env create ...`. You can use `mamba` instead, which is a faster re-implementation of `conda`; it can resolve the environment in less than a minute on any system we tried. Check out [this link](https://mamba.readthedocs.io/en/latest/installation.html) for how to install `mamba`. Once you have installed it, run `mamba env create -f <ENV_FILE>.yaml` to create the env.
-- Installation on a Mac with an M1 or M2 processor:
-  - The pytorch installation from `environment_cpu.yaml` does not work on a Mac with an M1 or M2 processor. Instead you need to:
-    - Create a new environment: `mamba create -c conda-forge python pip -n sam`
-    - Activate it via `mamba activate sam`
-    - Follow the instructions for installing pytorch on a Mac via conda from [pytorch.org](https://pytorch.org/).
-    - Install additional dependencies: `mamba install -c conda-forge napari python-elf tqdm`
-    - Install SegmentAnything: `pip install git+https://github.com/facebookresearch/segment-anything.git`
-    - Install `micro_sam` by running `pip install -e .` in this folder.
-  - **Note:** we have seen many issues with the pytorch installation on Macs. If a wrong pytorch version is installed for you (which will cause pytorch errors once you run the application), please try again with a clean `mambaforge` installation. Please install the `OS X, arm64` version from [here](https://github.com/conda-forge/miniforge#mambaforge).
-
-## Usage
-
-After installing the `micro_sam` python application, the three interactive annotation tools can be started from the command line or from a python script (see details below).
-They are built with napari to implement the viewer and user interaction. If you are not familiar with napari yet, [start here](https://napari.org/stable/tutorials/fundamentals/quick_start.html).
-To use the apps, the functionality of [napari point layers](https://napari.org/stable/howtos/layers/points.html), [napari shape layers](https://napari.org/stable/howtos/layers/shapes.html) and [napari labels layers](https://napari.org/stable/howtos/layers/labels.html) is of particular importance.
-
-**Note:** the screenshots and tutorials do not show how to use bounding boxes as prompts yet. You can use the `box_prompts` layer for them in all three tools, and they can be used as a replacement for, or in combination with, the point prompts.
-
-### 2D Segmentation
-
-The application for 2d segmentation can be started in two ways:
-- Via the command line with the command `micro_sam.annotator_2d`. Run `micro_sam.annotator_2d -h` for details.
-- From a python script with the function `micro_sam.sam_annotator.annotator_2d`. Check out [examples/sam_annotator_2d](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/sam_annotator_2d.py) for details; a minimal sketch also follows after this section.
-
-Below you can see the interface of the application for a cell segmentation example:
-
-<img src="https://github.com/computational-cell-analytics/micro-sam/assets/4263537/041585a6-0b72-4e4b-8df3-42135f4334c5" width="768">
-
-The most important parts of the user interface are:
-1. The napari layers that contain the image, segmentations and prompts:
-    - `prompts`: point layer that is used to provide prompts to SegmentAnything. Positive prompts (green points) mark the object you want to segment, negative prompts (red points) mark the outside of the object.
-    - `current_object`: label layer that contains the object you're currently segmenting.
-    - `committed_objects`: label layer with the objects that have already been segmented.
-    - `auto_segmentation`: label layer with the results from using SegmentAnything for automatic instance segmentation.
-    - `raw`: image layer that shows the image data.
-2. The prompt menu for changing the currently selected point from positive to negative and vice versa. This can also be done by pressing `t`.
-3. The menu for automatic segmentation. Pressing `Segment All Objects` will run automatic segmentation (this can take a few minutes if you are using a CPU). The results will be displayed in the `auto_segmentation` layer.
-4. The menu for interactive segmentation. Pressing `Segment Object` (or `s`) will run segmentation for the current prompts. The result is displayed in `current_object`.
-5. The menu for committing the segmentation. When pressing `Commit` (or `c`) the result from the selected layer (either `current_object` or `auto_segmentation`) will be transferred from the respective layer to `committed_objects`.
-
-Check out [this video](https://youtu.be/DfWE_XRcqN8) for an overview of the interactive 2d segmentation functionality.
-
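For reference, a minimal sketch of the scripting route described in the removed 2d section. The function name and the `embedding_path` / `model_type` arguments are the ones named in this README; loading via `imageio` and all file names are assumptions for illustration:

```python
import imageio.v3 as imageio

from micro_sam.sam_annotator import annotator_2d

# Load a 2d image (hypothetical file name).
image = imageio.imread("example_image.tif")

# embedding_path caches the SegmentAnything image embeddings on disk so they
# are only computed once; model_type selects the SAM model (vit_h is the
# default, vit_l / vit_b are smaller and faster but may give worse results).
annotator_2d(image, embedding_path="embeddings_2d.zarr", model_type="vit_h")
```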
-### 3D Segmentation
-
-The application for 3d segmentation can be started as follows:
-- Via the command line with the command `micro_sam.annotator_3d`. Run `micro_sam.annotator_3d -h` for details.
-- From a python script with the function `micro_sam.sam_annotator.annotator_3d`. Check out [examples/sam_annotator_3d](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/sam_annotator_3d.py) for details; see also the sketch after this section.
-
-<img src="https://github.com/computational-cell-analytics/micro-sam/assets/4263537/0a6fb19e-7db5-4188-9371-3c238671f881" width="768">
-
-The most important parts of the user interface are listed below. Most of these elements are the same as for [the 2d segmentation app](https://github.com/computational-cell-analytics/micro-sam#2d-segmentation).
-1. The napari layers that contain the image, segmentation and prompts. Same as for [the 2d segmentation app](https://github.com/computational-cell-analytics/micro-sam#2d-segmentation) but without the `auto_segmentation` layer.
-2. The prompt menu.
-3. The menu for interactive segmentation.
-4. The 3d segmentation menu. Pressing `Segment Volume` (or `v`) will extend the segmentation for the current object across the volume.
-5. The menu for committing the segmentation.
-
-Check out [this video](https://youtu.be/5Jo_CtIefTM) for an overview of the interactive 3d segmentation functionality.
-
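A corresponding sketch for the 3d annotator, under the same assumptions (hypothetical file names; `embedding_path` as described in the tips):

```python
import imageio.v3 as imageio

from micro_sam.sam_annotator import annotator_3d

# Load a 3d volume with axes (z, y, x) (hypothetical file name).
volume = imageio.imread("example_volume.tif")

# Pre-computing the embeddings for a volume can take a while on a CPU;
# caching them at embedding_path makes subsequent starts fast.
annotator_3d(volume, embedding_path="embeddings_3d.zarr")
```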
-### Tracking
-
-The application for interactive tracking (of 2d data) can be started as follows:
-- Via the command line with the command `micro_sam.annotator_tracking`. Run `micro_sam.annotator_tracking -h` for details.
-- From a python script with the function `micro_sam.sam_annotator.annotator_tracking`. Check out [examples/sam_annotator_tracking](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/sam_annotator_tracking.py) for details; see also the sketch after this section.
-
-<img src="https://github.com/computational-cell-analytics/micro-sam/assets/4263537/dfb80f17-a370-4cbc-aaeb-29de93444090" width="768">
-
-The most important parts of the user interface are listed below. Most of these elements are the same as for [the 2d segmentation app](https://github.com/computational-cell-analytics/micro-sam#2d-segmentation).
-1. The napari layers that contain the image, segmentation and prompts. Same as for [the 2d segmentation app](https://github.com/computational-cell-analytics/micro-sam#2d-segmentation) but without the `auto_segmentation` layer; `current_tracks` and `committed_tracks` are the equivalents of `current_object` and `committed_objects`.
-2. The prompt menu.
-3. The menu with tracking settings: `track_state` is used to indicate that the object you are tracking is dividing in the current frame. `track_id` is used to select which of the tracks after division you are following.
-4. The menu for interactive segmentation.
-5. The tracking menu. Press `Track Object` (or `v`) to track the current object across time.
-6. The menu for committing the current tracking result.
-
-Check out [this video](https://youtu.be/PBPW0rDOn9w) for an overview of the interactive tracking functionality.
-
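And a sketch for tracking, again with hypothetical file names and the documented `embedding_path` argument:

```python
import imageio.v3 as imageio

from micro_sam.sam_annotator import annotator_tracking

# Load a 2d timeseries with axes (t, y, x) (hypothetical file name).
timeseries = imageio.imread("example_timeseries.tif")

annotator_tracking(timeseries, embedding_path="embeddings_tracking.zarr")
```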
-### Tips & Tricks
-
-- By default, the applications pre-compute the image embeddings produced by SegmentAnything and store them on disk. If you are using a CPU this step can take a while for 3d data or timeseries (you will see a progress bar with a time estimate). If you have access to a GPU without graphical interface (e.g. via a local computer cluster or a cloud provider), you can also pre-compute the embeddings there and then copy them to your laptop / local machine to speed this up. You can use the command `micro_sam.precompute_embeddings` for this (it is installed with the rest of the applications). You can specify the location of the precomputed embeddings via the `embedding_path` argument.
-- Most other processing steps are very fast even on a CPU, so interactive annotation is possible. An exception is the automatic segmentation step (2d segmentation), which takes several minutes without a GPU (depending on the image size). For large volumes and timeseries, segmenting an object in 3d / tracking across time can take a couple of seconds with a CPU (it is very fast with a GPU).
-- You can also try using a smaller version of the SegmentAnything model to speed up the computations. For this you can pass the `model_type` argument and either set it to `vit_l` or `vit_b` (the default is `vit_h`). However, this may lead to worse results.
-- You can save and load the results from the `committed_objects` / `committed_tracks` layer to correct segmentations you obtained from another tool (e.g. CellPose) or to save intermediate annotation results. The results can be saved via `File->Save Selected Layer(s) ...` in the napari menu (see the tutorial videos for details). They can be loaded again by specifying the corresponding location via the `segmentation_result` (2d and 3d segmentation) or `tracking_result` (tracking) argument; a sketch follows below.
-
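The last tip as a sketch: re-open a previously saved segmentation for correction. This assumes `segmentation_result` accepts the file location, as the tip states; the file names are hypothetical:

```python
import imageio.v3 as imageio

from micro_sam.sam_annotator import annotator_2d

image = imageio.imread("example_image.tif")

# committed_objects.tif would be a segmentation saved earlier via
# File->Save Selected Layer(s) ..., or exported from another tool (e.g. CellPose).
annotator_2d(
    image,
    embedding_path="embeddings_2d.zarr",
    segmentation_result="committed_objects.tif",
)
```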
-### Known limitations
-
-- SegmentAnything does not work well for very small or fine-grained objects (e.g. filaments).
-- For the automatic segmentation functionality we currently rely on the automatic mask generation provided by SegmentAnything. It is slow and often misses objects in microscopy images. For now, we only offer this functionality in the 2d segmentation app; we are working on improving it and extending it to 3d segmentation and tracking.
-- Prompt bounding boxes do not provide the full functionality for tracking yet (they cannot be used for divisions or for starting new tracks). See also https://github.com/computational-cell-analytics/micro-sam/issues/23.
-
-### Using the micro_sam library
-
-After installation the `micro_sam` python library is available, which provides several utility functions for using SegmentAnything with napari. Check out [examples/image_series_annotator.py](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/image_series_annotator_app.py) for an example application for segmenting objects in an image series built with it; a sketch in that spirit follows below.
-
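In the spirit of the referenced example, a hypothetical sketch of annotating an image series by re-using the 2d annotator; the actual example app in the repository may be structured differently:

```python
import glob

import imageio.v3 as imageio

from micro_sam.sam_annotator import annotator_2d

# Annotate each image in a folder in turn (hypothetical folder layout).
for i, path in enumerate(sorted(glob.glob("images/*.tif"))):
    image = imageio.imread(path)
    # One embedding cache per image, so embeddings are computed only once each.
    annotator_2d(image, embedding_path=f"embeddings/image_{i}.zarr")
```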
-<!---
-## Contributing
-
-```
-micro_sam <- library with utility functionality for using SAM for microscopy data
-    /sam_annotator <- the napari plugins for annotation
-```
-TODO: related projects
--->
+## Installation and Usage
 
+TODO / links to doc
 
 ## Citation
 
@@ -195,6 +46,12 @@ Compared to these we support more applications (2d, 3d and tracking), and aim to
 
 ## Release Overview
 
+**New in version 0.1.1**
+
+- Fine-tuned Segment Anything models for microscopy (experimental)
+- Simplified instance segmentation menu
+- Menu for clearing annotations
+
 **New in version 0.1.0**
 
 - We support tiling in all annotators to enable processing large images.
@@ -212,3 +69,13 @@ Compared to these we support more applications (2d, 3d and tracking), and aim to
 - We have added support for bounding box prompts, which provide better segmentation results than points in many cases.
 - Interactive tracking now uses a better heuristic to propagate masks across time, leading to better automatic tracking results.
 - And we have fixed several small bugs.
+
+
+<!---
+## Contributing
+
+```
+micro_sam <- library with utility functionality for using SAM for microscopy data
+    /sam_annotator <- the napari plugins for annotation
+```
+-->

micro_sam/__version__.py

Lines changed: 1 addition & 1 deletion
@@ -1 +1 @@
-__version__ = "0.1.0"
+__version__ = "0.1.1"
