This is an advanced beta version. While many features are still under development, we aim to keep the user interface and python library stable.
Any feedback is welcome, but please be aware that the functionality is under active development and that some features may not be thoroughly tested yet.
We will soon provide a stand-alone application for running the `micro_sam` annotation tools, and plan to also release it as [napari plugin](https://napari.org/stable/plugins/index.html) in the future.
If you run into any problems or have questions regarding our tool, please open an issue on GitHub or reach out via [image.sc](https://forum.image.sc/) using the tag `micro-sam` and tagging @constantinpape.
You can then start the `micro_sam` tools by running `$ micro_sam.annotator` in the command line.
For an introduction to the napari based annotation tools check out [the video tutorials](https://www.youtube.com/watch?v=ket7bDUP9tI&list=PLwYZXQJ3f36GQPpKCrSbHjGiH39X4XjSO&pp=gAQBiAQB).
Please check out [the documentation](https://computational-cell-analytics.github.io/micro-sam/) for more details on the installation and usage of `micro_sam`.
## Citation
If you are using this repository in your research please cite
# Annotation Tools
The annotation tools can be started from the `micro_sam` GUI, the command line or from python scripts. From the command line, run:

```
$ micro_sam.annotator
```
They are built using [napari](https://napari.org/stable/) and [magicgui](https://pyapp-kit.github.io/magicgui/) to provide the viewer and user interface.
If you are not familiar with napari yet, [start here](https://napari.org/stable/tutorials/fundamentals/quick_start.html).
The `micro_sam` tools use [the point layer](https://napari.org/stable/howtos/layers/points.html), [shape layer](https://napari.org/stable/howtos/layers/shapes.html) and [label layer](https://napari.org/stable/howtos/layers/labels.html).
The annotation tools are explained in detail below. In addition to the documentation here we also provide [video tutorials](https://www.youtube.com/watch?v=ket7bDUP9tI&list=PLwYZXQJ3f36GQPpKCrSbHjGiH39X4XjSO).
## Starting via GUI
The annotation tools can be started from a central GUI, which is launched with the command `$ micro_sam.annotator` or via the executable [from an installer](#from-installer).
In the GUI you can select which of the four annotation tools you want to use.
After selecting one, a new window will open where you can choose the input file path and other optional parameters. Then click the top button to start the tool. **Note: If you do not start the annotation tool with a path to pre-computed embeddings, it can take several minutes for napari to open after pressing the button, because the embeddings are being computed.**
## Annotator 2D
Note that point prompts and box prompts can be combined. When you're using point prompts you can only segment one object at a time. With box prompts you can segment several objects at once.
Check out [this video](https://youtu.be/ket7bDUP9tI) for a tutorial on the 2d annotation tool.
We also provide the `image series annotator`, which can be used to run the 2d annotator for several images in a folder. You can start it by clicking `Image series annotator` in the GUI, by running `micro_sam.image_series_annotator` in the command line or from a [python script](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/image_series_annotator.py).
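For illustration, here is a minimal sketch of how this could look from python. The import path and signature of `image_series_annotator` are assumptions based on the linked example script and may differ from the actual API:

```python
# A hedged sketch (not the verbatim example script): annotate a folder of images.
# Check examples/image_series_annotator.py in the repository for the actual usage.
from glob import glob

import imageio.v3 as imageio
from micro_sam.sam_annotator import image_series_annotator

# Load all images that should be annotated one after the other.
images = [imageio.imread(path) for path in sorted(glob("data/images/*.tif"))]

# Open the 2d annotator for each image in turn and store the results in the
# (hypothetical) output folder.
image_series_annotator(images, output_folder="data/annotations")
```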
## Annotator 3D

Most elements are the same as in [the 2d annotator](#annotator-2d).
Note that you can only segment one object at a time with the 3d annotator.
Check out [this video](https://youtu.be/PEy9-rTCdS4) for a tutorial on the 3d annotation tool.
## Annotator Tracking
Most elements are the same as in [the 2d annotator](#annotator-2d).
Note that the tracking annotator only supports 2d image data; volumetric data is not supported.
Check out [this video](https://youtu.be/Xi5pRWMO6_w) for a tutorial on how to use the tracking annotation tool.
## Tips & Tricks
- You can activate tiling by passing the parameters `tile_shape`, which determines the size of the inner tile, and `halo`, which determines the size of the additional overlap between tiles.
- If you're using the command line functions you can pass them via the options `--tile_shape 1024 1024 --halo 128 128`
- Note that prediction with tiling only works when the embeddings are cached to file, so you must specify an `embedding_path` (`-e` in the CLI).
- You should choose the `halo` such that it is larger than half of the maximal radius of the objects you're segmenting.
- The applications pre-compute the image embeddings produced by SegmentAnything and (optionally) store them on disc. If you are using a CPU this step can take a while for 3d data or timeseries (you will see a progress bar with a time estimate). If you have access to a GPU without graphical interface (e.g. via a local computer cluster or a cloud provider), you can also pre-compute the embeddings there and then copy them to your laptop / local machine to speed this up. You can use the command `micro_sam.precompute_state` for this (it is installed with the rest of the applications). You can specify the location of the precomputed embeddings via the `embedding_path` argument. See the sketch after this list for how this can also be done from python.
- Most other processing steps are very fast even on a CPU, so interactive annotation is possible. An exception is the automatic segmentation step (2d segmentation), which takes several minutes without a GPU (depending on the image size). For large volumes and timeseries, segmenting an object in 3d / tracking across time can take a couple of seconds with a CPU (it is very fast with a GPU).
- You can also try using a smaller version of the SegmentAnything model to speed up the computations. For this you can pass the `model_type` argument and either set it to `vit_b` or to `vit_l` (default is `vit_h`). However, this may lead to worse results.
- You can save and load the results from the `committed_objects` / `committed_tracks` layer to correct segmentations you obtained from another tool (e.g. CellPose) or to save intermediate annotation results. The results can be saved via `File -> Save Selected Layer(s) ...` in the napari menu (see the tutorial videos for details). They can be loaded again by specifying the corresponding location via the `segmentation_result` (2d and 3d segmentation) or `tracking_result` (tracking) argument.
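To illustrate the points about tiling and pre-computed embeddings, the sketch below shows how tiled embeddings could be pre-computed and cached from python and then re-used when starting the 2d annotator. The imports, function names and parameters (`get_sam_model`, `precompute_image_embeddings`, `annotator_2d`, `save_path`) are assumptions based on the library's examples and may differ from the actual API:

```python
# A hedged sketch: pre-compute tiled image embeddings and re-use them in the
# 2d annotator. Names and signatures are assumptions; consult the micro_sam
# documentation and examples for the exact API.
import imageio.v3 as imageio
from micro_sam.sam_annotator import annotator_2d
from micro_sam.util import get_sam_model, precompute_image_embeddings

image = imageio.imread("data/large_image.tif")

# Pre-compute the embeddings with tiling and cache them to file. This is the
# step you would typically run on a machine with a GPU.
predictor = get_sam_model(model_type="vit_h")
precompute_image_embeddings(
    predictor, image,
    save_path="embeddings/large_image.zarr",  # the embedding_path / -e location
    tile_shape=(1024, 1024), halo=(128, 128),
)

# Later (e.g. on a laptop without GPU) start the annotator with the cached
# embeddings, so that napari opens without re-computing them.
annotator_2d(
    image, embedding_path="embeddings/large_image.zarr",
    tile_shape=(1024, 1024), halo=(128, 128),
)
```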
# Finetuned Models

We provide models that were finetuned on microscopy data using `micro_sam.training`. They are hosted on Zenodo. We currently offer the following models:
- `vit_h`: Default Segment Anything model with vit-h backbone.
- `vit_l`: Default Segment Anything model with vit-l backbone.
- `vit_b`: Default Segment Anything model with vit-b backbone.
- `vit_h_lm`: Finetuned Segment Anything model for cells and nuclei in light microscopy data with vit-h backbone.
- `vit_b_lm`: Finetuned Segment Anything model for cells and nuclei in light microscopy data with vit-b backbone.
- `vit_h_em`: Finetuned Segment Anything model for neurites and cells in electron microscopy data with vit-h backbone.
- `vit_b_em`: Finetuned Segment Anything model for neurites and cells in electron microscopy data with vit-b backbone.
See the two figures below for the improvements achieved by the finetuned models on LM and EM data.
To use a specific model in the python library you need to pass the corresponding name as value to the `model_type` parameter exposed by all relevant functions.
See for example the [2d annotator example](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/annotator_2d.py#L62) where `use_finetuned_model` can be set to `True` to use the `vit_h_lm` model.
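As a small illustration, assuming the `annotator_2d` function from `micro_sam.sam_annotator` accepts the `model_type` parameter as described above (the exact signature may differ), selecting a finetuned model from python could look like this:

```python
# A hedged sketch of selecting a finetuned model via the model_type parameter.
# The import path and signature of annotator_2d are assumptions based on the
# linked 2d annotator example; the value vit_h_lm is from the model list above.
import imageio.v3 as imageio
from micro_sam.sam_annotator import annotator_2d

image = imageio.imread("data/lm_image.tif")

# Use the model finetuned for cells and nuclei in light microscopy.
annotator_2d(image, embedding_path="embeddings/lm_image.zarr", model_type="vit_h_lm")
```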
## Which model should I choose?
As a rule of thumb:
- Use the `_lm` models for segmenting cells or nuclei in light microscopy.
- Use the `_em` models for segmenting cells or neurites in electron microscopy.
- Note that these models do not work well for segmenting mitochondria or other organelles, because they are biased towards segmenting the full cell / cellular compartment.
- For other cases use the default models.
See also the figures above for examples where the finetuned models work better than the vanilla models.
Currently the model `vit_h` is used by default.
We are working on releasing more fine-tuned models, in particular for mitochondria and other organelles in EM.