
Commit 171dbf8

Fix merge conflicts, add image series & segmentation datasets to napari plugin
2 parents dbbd4ca + 4e4df74


84 files changed: +5480 −301 lines


.gitignore

Lines changed: 1 addition & 0 deletions
```diff
@@ -3,3 +3,4 @@ __pycache__/
 *.pth
 *.tif
 examples/data/*
+*.out
```

README.md

Lines changed: 16 additions & 10 deletions
````diff
@@ -16,31 +16,26 @@ We implement napari applications for:
 <img src="https://github.com/computational-cell-analytics/micro-sam/assets/4263537/dfca3d9b-dba5-440b-b0f9-72a0683ac410" width="256">
 <img src="https://github.com/computational-cell-analytics/micro-sam/assets/4263537/aefbf99f-e73a-4125-bb49-2e6592367a64" width="256">
 
-**Beta version**
-
-This is an advanced beta version. While many features are still under development, we aim to keep the user interface and python library stable.
-Any feedback is welcome, but please be aware that the functionality is under active development and that some features may not be thoroughly tested yet.
-We will soon provide a stand-alone application for running the `micro_sam` annotation tools, and plan to also release it as [napari plugin](https://napari.org/stable/plugins/index.html) in the future.
-
-If you run into any problems or have questions please open an issue on Github or reach out via [image.sc](https://forum.image.sc/) using the tag `micro-sam` and tagging @constantinpape.
+If you run into any problems or have questions regarding our tool, please open an issue on Github or reach out via [image.sc](https://forum.image.sc/) using the tag `micro-sam` and tagging @constantinpape.
 
 
 ## Installation and Usage
 
 You can install `micro_sam` via conda:
 ```
-conda install -c conda-forge micro_sam
+conda install -c conda-forge micro_sam napari pyqt
 ```
 You can then start the `micro_sam` tools by running `$ micro_sam.annotator` in the command line.
 
+For an introduction to how to use the napari-based annotation tools, check out [the video tutorials](https://www.youtube.com/watch?v=ket7bDUP9tI&list=PLwYZXQJ3f36GQPpKCrSbHjGiH39X4XjSO&pp=gAQBiAQB).
 Please check out [the documentation](https://computational-cell-analytics.github.io/micro-sam/) for more details on the installation and usage of `micro_sam`.
 
 
 ## Citation
 
 If you are using this repository in your research please cite
-- [SegmentAnything](https://arxiv.org/abs/2304.02643)
-- and our repository on [zenodo](https://doi.org/10.5281/zenodo.7919746) (we are working on a publication)
+- our [preprint](https://doi.org/10.1101/2023.08.21.554208)
+- and the original [Segment Anything publication](https://arxiv.org/abs/2304.02643)
 
 
 ## Related Projects
@@ -56,6 +51,17 @@ Compared to these we support more applications (2d, 3d and tracking), and provid
 
 ## Release Overview
 
+**New in version 0.2.1 and 0.2.2**
+
+- Several bugfixes for the newly introduced functionality in 0.2.0.
+
+**New in version 0.2.0**
+
+- Functionality for training / finetuning and evaluation of Segment Anything Models
+- Full support for our finetuned segment anything models
+- Improvements of the automated instance segmentation functionality in the 2d annotator
+- And several other small improvements
+
 **New in version 0.1.1**
 
 - Fine-tuned segment anything models for microscopy (experimental)
````
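
The updated README starts the tools through the `micro_sam.annotator` command line entry point; the annotation tools can also be launched from Python, as the repository's example scripts do. A minimal sketch, assuming the `annotator_2d` entry point from those examples and a placeholder image path:

```python
import imageio.v3 as imageio

from micro_sam.sam_annotator import annotator_2d

# Placeholder path: any 2d image readable by imageio works here.
image = imageio.imread("my_image.tif")

# Opens the napari-based 2d annotation tool for the loaded image.
annotator_2d(image)
```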

deployment/construct.yaml

Lines changed: 3 additions & 5 deletions
```diff
@@ -8,8 +8,6 @@ header_image: ../doc/images/micro-sam-logo.png
 icon_image: ../doc/images/micro-sam-logo.png
 channels:
   - conda-forge
-welcome_text: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod
-  tempor incididunt ut labore et dolore magna aliqua.
-conclusion_text: Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris
-  nisi ut aliquip ex ea commodo consequat.
-initialize_by_default: false
+welcome_text: Install Segment Anything for Microscopy.
+conclusion_text: Segment Anything for Microscopy has been installed.
+initialize_by_default: false
```

doc/annotation_tools.md

Lines changed: 19 additions & 6 deletions
````diff
@@ -13,9 +13,22 @@ The annotation tools can be started from the `micro_sam` GUI, the command line o
 $ micro_sam.annotator
 ```
 
-They are built with [napari](https://napari.org/stable/) to implement the viewer and user interaction.
+They are built using [napari](https://napari.org/stable/) and [magicgui](https://pyapp-kit.github.io/magicgui/) to provide the viewer and user interface.
 If you are not familiar with napari yet, [start here](https://napari.org/stable/tutorials/fundamentals/quick_start.html).
-The `micro_sam` applications are mainly based on [the point layer](https://napari.org/stable/howtos/layers/points.html), [the shape layer](https://napari.org/stable/howtos/layers/shapes.html) and [the label layer](https://napari.org/stable/howtos/layers/labels.html).
+The `micro_sam` tools use [the point layer](https://napari.org/stable/howtos/layers/points.html), [shape layer](https://napari.org/stable/howtos/layers/shapes.html) and [label layer](https://napari.org/stable/howtos/layers/labels.html).
+
+The annotation tools are explained in detail below. In addition to the documentation here we also provide [video tutorials](https://www.youtube.com/watch?v=ket7bDUP9tI&list=PLwYZXQJ3f36GQPpKCrSbHjGiH39X4XjSO).
+
+
+## Starting via GUI
+
+The annotation tools can be started from a central GUI, which is launched with the command `$ micro_sam.annotator` or via the executable [from an installer](#from-installer).
+
+In the GUI you can select which of the four annotation tools you want to use:
+<img src="https://raw.githubusercontent.com/computational-cell-analytics/micro-sam/master/doc/images/micro-sam-gui.png">
+
+After selecting a tool, a new window will open where you can set the input file path and other optional parameters. Then click the top button to start the tool. **Note: if you do not start the annotation tool with a path to pre-computed embeddings, it can take several minutes for napari to open after pressing the button, because the embeddings are being computed.**
+
 
 ## Annotator 2D
 
@@ -44,7 +57,7 @@ It contains the following elements:
 
 Note that point prompts and box prompts can be combined. When you're using point prompts you can only segment one object at a time. With box prompts you can segment several objects at once.
 
-Check out [this video](https://youtu.be/DfWE_XRcqN8) for an example of how to use the interactive 2d annotator.
+Check out [this video](https://youtu.be/ket7bDUP9tI) for a tutorial on the 2d annotation tool.
 
 We also provide the `image series annotator`, which can be used for running the 2d annotator for several images in a folder. You can start it by clicking `Image series annotator` in the GUI, running `micro_sam.image_series_annotator` in the command line or from a [python script](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/image_series_annotator.py).
 
@@ -69,7 +82,7 @@ Most elements are the same as in [the 2d annotator](#annotator-2d):
 
 Note that you can only segment one object at a time with the 3d annotator.
 
-Check out [this video](https://youtu.be/5Jo_CtIefTM) for an overview of the interactive 3d segmentation functionality.
+Check out [this video](https://youtu.be/PEy9-rTCdS4) for a tutorial on the 3d annotation tool.
 
 ## Annotator Tracking
 
@@ -93,7 +106,7 @@ Most elements are the same as in [the 2d annotator](#annotator-2d):
 
 Note that the tracking annotator only supports 2d image data; volumetric data is not supported.
 
-Check out [this video](https://youtu.be/PBPW0rDOn9w) for an overview of the interactive tracking functionality.
+Check out [this video](https://youtu.be/Xi5pRWMO6_w) for a tutorial on how to use the tracking annotation tool.
 
 ## Tips & Tricks
 
@@ -105,7 +118,7 @@ You can activate tiling by passing the parameters `tile_shape`, which determines
 - If you're using the command line functions you can pass them via the options `--tile_shape 1024 1024 --halo 128 128`
 - Note that prediction with tiling only works when the embeddings are cached to file, so you must specify an `embedding_path` (`-e` in the CLI).
 - You should choose the `halo` such that it is larger than half of the maximal radius of the objects you're segmenting.
-- The applications pre-compute the image embeddings produced by SegmentAnything and (optionally) store them on disc. If you are using a CPU this step can take a while for 3d data or timeseries (you will see a progress bar with a time estimate). If you have access to a GPU without graphical interface (e.g. via a local computer cluster or a cloud provider), you can also pre-compute the embeddings there and then copy them to your laptop / local machine to speed this up. You can use the command `micro_sam.precompute_embeddings` for this (it is installed with the rest of the applications). You can specify the location of the precomputed embeddings via the `embedding_path` argument.
+- The applications pre-compute the image embeddings produced by SegmentAnything and (optionally) store them on disc. If you are using a CPU this step can take a while for 3d data or timeseries (you will see a progress bar with a time estimate). If you have access to a GPU without graphical interface (e.g. via a local computer cluster or a cloud provider), you can also pre-compute the embeddings there and then copy them to your laptop / local machine to speed this up. You can use the command `micro_sam.precompute_state` for this (it is installed with the rest of the applications). You can specify the location of the precomputed embeddings via the `embedding_path` argument.
 - Most other processing steps are very fast even on a CPU, so interactive annotation is possible. An exception is the automatic segmentation step (2d segmentation), which takes several minutes without a GPU (depending on the image size). For large volumes and timeseries, segmenting an object in 3d / tracking across time can take a couple of seconds with a CPU (it is very fast with a GPU).
 - You can also try using a smaller version of the SegmentAnything model to speed up the computations. For this you can pass the `model_type` argument and either set it to `vit_b` or to `vit_l` (default is `vit_h`). However, this may lead to worse results.
 - You can save and load the results from the `committed_objects` / `committed_tracks` layer to correct segmentations you obtained from another tool (e.g. CellPose) or to save intermediate annotation results. The results can be saved via `File -> Save Selected Layer(s) ...` in the napari menu (see the tutorial videos for details). They can be loaded again by specifying the corresponding location via the `segmentation_result` (2d and 3d segmentation) or `tracking_result` (tracking) argument.
````
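
The tiling and embedding-caching options from the Tips & Tricks section above can also be passed from a Python script instead of the CLI. A minimal sketch, assuming `annotator_2d` accepts the `tile_shape`, `halo`, `embedding_path` and `model_type` parameters named in the docs (the file paths are placeholders and the exact signature may differ between versions):

```python
import imageio.v3 as imageio

from micro_sam.sam_annotator import annotator_2d

# Placeholder path to a large 2d image.
image = imageio.imread("large_image.tif")

# Tiled prediction only works when embeddings are cached to file,
# so embedding_path must be set. Choose the halo larger than half
# the maximal radius of the objects you are segmenting.
annotator_2d(
    image,
    embedding_path="./embeddings/large_image.zarr",  # cache location for embeddings
    model_type="vit_b",        # smaller backbone to speed up CPU inference
    tile_shape=(1024, 1024),   # tile size for embedding computation
    halo=(128, 128),           # overlap between adjacent tiles
)
```

On a remote GPU machine the same embeddings can be pre-computed with `micro_sam.precompute_state` and copied over, so that the annotator starts without the expensive embedding step.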

doc/finetuned_models.md

Lines changed: 36 additions & 0 deletions
```diff
@@ -0,0 +1,36 @@
+# Finetuned models
+
+We provide models that were finetuned on microscopy data using `micro_sam.training`. They are hosted on zenodo. We currently offer the following models:
+- `vit_h`: Default Segment Anything model with vit-h backbone.
+- `vit_l`: Default Segment Anything model with vit-l backbone.
+- `vit_b`: Default Segment Anything model with vit-b backbone.
+- `vit_h_lm`: Finetuned Segment Anything model for cells and nuclei in light microscopy data with vit-h backbone.
+- `vit_b_lm`: Finetuned Segment Anything model for cells and nuclei in light microscopy data with vit-b backbone.
+- `vit_h_em`: Finetuned Segment Anything model for neurites and cells in electron microscopy data with vit-h backbone.
+- `vit_b_em`: Finetuned Segment Anything model for neurites and cells in electron microscopy data with vit-b backbone.
+
+The two figures below show the improvements achieved by the finetuned models for LM and EM data.
+
+<img src="https://raw.githubusercontent.com/computational-cell-analytics/micro-sam/master/doc/images/lm_comparison.png" width="768">
+
+<img src="https://raw.githubusercontent.com/computational-cell-analytics/micro-sam/master/doc/images/em_comparison.png" width="768">
+
+You can select which of the models is used in the annotation tools by selecting the corresponding name from the `Model Type` menu:
+
+<img src="https://raw.githubusercontent.com/computational-cell-analytics/micro-sam/master/doc/images/model-type-selector.png" width="256">
+
+To use a specific model in the python library you need to pass the corresponding name as the value of the `model_type` parameter exposed by all relevant functions.
+See for example the [2d annotator example](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/annotator_2d.py#L62), where `use_finetuned_model` can be set to `True` to use the `vit_h_lm` model.
+
+## Which model should I choose?
+
+As a rule of thumb:
+- Use the `_lm` models for segmenting cells or nuclei in light microscopy.
+- Use the `_em` models for segmenting cells or neurites in electron microscopy.
+- Note that these models do not work well for segmenting mitochondria or other organelles, because they are biased towards segmenting the full cell / cellular compartment.
+- For other cases use the default models.
+
+See also the figures above for examples where the finetuned models work better than the vanilla models.
+Currently the model `vit_h` is used by default.
+
+We are working on releasing more finetuned models, in particular for mitochondria and other organelles in EM.
```
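
To make the model selection concrete in the python library: a hedged sketch, assuming the `get_sam_model` helper in `micro_sam.util` (not mentioned in the document above, so treat the function name as an assumption); the model names are the ones listed in this new file:

```python
from micro_sam.util import get_sam_model

# Load the finetuned light microscopy model (vit-h backbone).
# Leaving model_type unset should fall back to the default "vit_h".
predictor = get_sam_model(model_type="vit_h_lm")
```

On first use the corresponding checkpoint should be downloaded from zenodo and cached locally, so subsequent calls start quickly.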

doc/images/em_comparison.png

3.16 MB (new image)

doc/images/lm_comparison.png

4.94 MB (new image)

doc/images/micro-sam-gui.png

15.7 KB (new image)

doc/images/model-type-selector.png

61.3 KB (new image)

doc/images/vanilla-v-finetuned.png

-988 KB (image removed; binary file not shown)
