
Commit 8c91a99: More doc updates and minor changes
Parent: 9e8b28c

File tree: 9 files changed, +70 / -45 lines


build_doc.py

Lines changed: 5 additions & 5 deletions
@@ -8,13 +8,15 @@

 def check_docs_completeness():
     """@private
-    All markdown and RST documentation files **MUST** be included in the module
+    All markdown and RST documentation files **SHOULD** be included in the module
     docstring at micro_sam/__init__.py
     """
     import micro_sam

-    markdown_doc_files = glob.glob("doc/**/*.md", recursive=True)
-    rst_doc_files = glob.glob("doc/**/*.rst", recursive=True)
+    # We don't search in subfolders anymore, to allow putting additional documentation
+    # (e.g. for bioimage.io models) that should not be included in the main documentation here.
+    markdown_doc_files = glob.glob("doc/*.md", recursive=True)
+    rst_doc_files = glob.glob("doc/*.rst", recursive=True)
     all_doc_files = markdown_doc_files + rst_doc_files
     missing_from_docs = [f for f in all_doc_files if os.path.basename(f) not in micro_sam.__doc__]
     if len(missing_from_docs) > 0:
@@ -42,5 +44,3 @@ def check_docs_completeness():
 cmd.append("micro_sam")

 run(cmd)
-
-# pdoc --docformat google --logo "https://raw.githubusercontent.com/computational-cell-analytics/micro-sam/master/doc/images/micro-sam-logo.png" micro_sam
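
A side note on the glob change above: `recursive=True` only has an effect when the pattern contains `**`, so `"doc/*.md"` now matches only files directly inside `doc/`. A minimal sketch of the difference:

```python
import glob

# Matches markdown files at any depth below doc/, e.g. a hypothetical
# doc/bioimageio/README.md as well as doc/annotation_tools.md.
all_md = glob.glob("doc/**/*.md", recursive=True)

# Matches only files directly inside doc/, so documentation placed in
# subfolders is excluded from the completeness check.
top_level_md = glob.glob("doc/*.md")
```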

doc/annotation_tools.md

Lines changed: 28 additions & 4 deletions
@@ -104,15 +104,39 @@ Check out [this video](TODO) for a tutorial for how to use the tracking annotation tool.

 ## Image Series Annotator

-<img src="https://raw.githubusercontent.com/computational-cell-analytics/micro-sam/master/doc/images/series-menu.png" width="1024">
+The image series annotation tool enables running the [2d annotator](#annotator-2d) or [3d annotator](#annotator-3d) for multiple images that are saved within a folder. This makes it convenient to annotate many images without having to close the tool. It can be started by
+- clicking `Image Series Annotator` in the plugin menu.
+- running `$ micro_sam.image_series_annotator` in the command line.
+- calling `micro_sam.sam_annotator.image_series_annotator` in a python script. Check out [examples/image_series_annotator.py](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/image_series_annotator.py) for details.

-We also provide the `image series annotator`, which can be used for running the 2d annotator for several images in a folder. You can start by clicking `Image series annotator` in the GUI, running `micro_sam.image_series_annotator` in the command line or from a [python script](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/image_series_annotator.py).
+When starting this tool via the plugin menu, the following interface opens:

+<img src="https://raw.githubusercontent.com/computational-cell-analytics/micro-sam/master/doc/images/series-menu.png" width="512">

-## Finetuning Tool
+You can select the folder where your image data is saved with `Input Folder`. The annotation results will be saved in `Output Folder`.
+You can specify a rule for loading only a subset of images via `pattern`, for example `*.tif` to only load tif images. Set `is_volumetric` if the data you want to annotate is 3d. The rest of the options are settings for the image embedding computation and are the same as in the embedding menu (see above).
+Once you click `Annotate Images`, the images from the folder you have specified will be loaded and the annotation tool will be started for them.

-<img src="https://raw.githubusercontent.com/computational-cell-analytics/micro-sam/master/doc/images/finetuning-menu.png" width="1024">
+This menu will not open if you start the image series annotator from the command line or via python. In this case the input folder and other settings are passed as parameters instead.

+Check out [this video](TODO) for a tutorial on how to use the image series annotator.
+
+
+## Finetuning UI
+
+We also provide a graphical tool for finetuning models on your own data. It can be started by clicking `Finetuning` in the plugin menu.
+
+**Note:** if you know a bit of python programming we recommend using a script for model finetuning instead. This will give you more options to configure the training. See [these instructions](training-your-own-model) for details.
+
+When starting this tool via the plugin menu, the following interface opens:
+
+<img src="https://raw.githubusercontent.com/computational-cell-analytics/micro-sam/master/doc/images/finetuning-menu.png" width="512">
+
+You can select the image data via `Path to images`. You can either load images from a folder or select a single file for training. By providing `Image data key` you can either provide a pattern for selecting files from a folder or an internal filepath for hdf5, zarr or similar file formats.
+
+You can select the label data via `Path to labels` and `Label data key`, following the same logic as for the image data. We expect label masks stored in the same size as the image data for training. You can for example use annotations created with one of the `micro_sam` annotation tools for this; they are stored in the correct format!
+
+The `Configuration` option allows you to choose the hardware configuration for training. We try to automatically select the correct setting for your system, but it can also be changed. Please refer to the tooltips for the other parameters.

 ## Tips & Tricks
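
Since the updated docs mention starting this tool from python, here is a minimal sketch of what such a script could look like. The parameter names are assumptions based on the UI options described above (input folder, output folder, pattern, model); see the linked examples/image_series_annotator.py for the actual signature.

```python
import glob
from micro_sam.sam_annotator import image_series_annotator

# Collect the images to annotate; the glob pattern mirrors the `pattern` UI option.
images = sorted(glob.glob("data/images/*.tif"))

# Hypothetical call: annotate the images one after the other and save the
# results to the output folder. Parameter names are assumptions.
image_series_annotator(images, output_folder="data/annotations", model_type="vit_b_lm")
```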
doc/contributing.md

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-# How to contribute
+# Contribution Guide

 * [Discuss your ideas](#discuss-your-ideas)
 * [Clone the repository](#clone-the-repository)

doc/development.md renamed to doc/deprecated/development.md

Lines changed: 2 additions & 0 deletions
@@ -1,3 +1,5 @@
+**This is outdated and we need to update it to correctly describe the current annotator design.**
+
 # For Developers

 This software consists of four different python (sub-)modules:

doc/finetuned_models.md

Lines changed: 22 additions & 21 deletions
@@ -1,52 +1,53 @@
 # Finetuned models

-In addition to the original Segment anything models, we provide models that finetuned on microscopy data using the functionality from `micro_sam.training`.
-The models are hosted on zenodo. We currently offer the following models:
+In addition to the original Segment Anything models, we provide models that are finetuned on microscopy data.
+The additional models are available in the [bioimage.io modelzoo](https://bioimage.io/#/) and are also hosted on zenodo.
+
+We currently offer the following models:
 - `vit_h`: Default Segment Anything model with vit-h backbone.
 - `vit_l`: Default Segment Anything model with vit-l backbone.
 - `vit_b`: Default Segment Anything model with vit-b backbone.
 - `vit_t`: Segment Anything model with vit-tiny backbone. From the [Mobile SAM publication](https://arxiv.org/abs/2306.14289).
-- `vit_b_lm`: Finetuned Segment Anything model for cells and nuclei in light microscopy data with vit-b backbone.
-- `vit_b_em_organelles`: Finetuned Segment Anything model for mitochodria and nuclei in electron microscopy data with vit-b backbone.
-- `vit_b_em_boundaries`: Finetuned Segment Anything model for neurites and cells in electron microscopy data with vit-b backbone.
+- `vit_l_lm`: Finetuned Segment Anything model for cells and nuclei in light microscopy data with vit-l backbone. ([zenodo](TODO), [bioimage.io](TODO))
+- `vit_b_lm`: Finetuned Segment Anything model for cells and nuclei in light microscopy data with vit-b backbone. ([zenodo](TODO), [bioimage.io](TODO))
+- `vit_t_lm`: Finetuned Segment Anything model for cells and nuclei in light microscopy data with vit-t backbone. ([zenodo](TODO), [bioimage.io](TODO))
+- `vit_l_em_organelles`: Finetuned Segment Anything model for mitochondria and nuclei in electron microscopy data with vit-l backbone. ([zenodo](TODO), [bioimage.io](TODO))
+- `vit_b_em_organelles`: Finetuned Segment Anything model for mitochondria and nuclei in electron microscopy data with vit-b backbone. ([zenodo](TODO), [bioimage.io](TODO))
+- `vit_t_em_organelles`: Finetuned Segment Anything model for mitochondria and nuclei in electron microscopy data with vit-t backbone. ([zenodo](TODO), [bioimage.io](TODO))

 See the two figures below of the improvements through the finetuned model for LM and EM data.

 <img src="https://raw.githubusercontent.com/computational-cell-analytics/micro-sam/master/doc/images/lm_comparison.png" width="768">

 <img src="https://raw.githubusercontent.com/computational-cell-analytics/micro-sam/master/doc/images/em_comparison.png" width="768">

-You can select which of the models is used in the annotation tools by selecting the corresponding name from the `Model Type` menu:
+You can select which model to use for annotation by selecting the corresponding name in the embedding menu:

 <img src="https://raw.githubusercontent.com/computational-cell-analytics/micro-sam/master/doc/images/model-type-selector.png" width="256">

 To use a specific model in the python library you need to pass the corresponding name as value to the `model_type` parameter exposed by all relevant functions.
-See for example the [2d annotator example](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/annotator_2d.py#L62) where `use_finetuned_model` can be set to `True` to use the `vit_b_lm` model.
-
-Note that we are still working on improving these models and may update them from time to time. All older models will stay available for download on zenodo, see [model sources](#model-sources) below
+See for example the [2d annotator example](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/annotator_2d.py#L62).


-## Which model should I choose?
+## Choosing a Model

 As a rule of thumb:
-- Use the `vit_b_lm` model for segmenting cells or nuclei in light microscopy.
-- Use the `vit_b_em_organelles` models for segmenting mitochondria, nuclei or other organelles in electron microscopy.
-- Use the `vit_b_em_boundaries` models for segmenting cells or neurites in electron microscopy.
+- Use the `vit_l_lm` or `vit_b_lm` model for segmenting cells or nuclei in light microscopy. The larger model (`vit_l_lm`) yields a bit better segmentation quality, especially for automatic segmentation, but needs more computational resources.
+- Use the `vit_l_em_organelles` or `vit_b_em_organelles` models for segmenting mitochondria, nuclei or other roundish organelles in electron microscopy.
 - For other use-cases use one of the default models.
+- The `vit_t_...` models run much faster than the other models, but yield inferior quality for many applications. It can still make sense to try them for your use-case if you're working on a laptop and want to annotate many images or volumetric data.

-See also the figures above for examples where the finetuned models work better than the vanilla models.
-Currently the model `vit_h` is used by default.
-
+See also the figures above for examples where the finetuned models work better than the default models.
 We are working on further improving these models and adding new models for other biomedical imaging domains.


-## Model Sources
+## Older Models

-Here is an overview of all finetuned models we have released to zenodo so far:
+Previous versions of our models are available on zenodo:
 - [vit_b_em_boundaries](https://zenodo.org/records/10524894): for segmenting compartments delineated by boundaries such as cells or neurites in EM.
 - [vit_b_em_organelles](https://zenodo.org/records/10524828): for segmenting mitochondria, nuclei or other organelles in EM.
 - [vit_b_lm](https://zenodo.org/records/10524791): for segmenting cells and nuclei in LM.
-- [vit_h_em](https://zenodo.org/records/8250291): this model is outdated.
-- [vit_h_lm](https://zenodo.org/records/8250299): this model is outdated.
+- [vit_h_em](https://zenodo.org/records/8250291): for general EM segmentation.
+- [vit_h_lm](https://zenodo.org/records/8250299): for general LM segmentation.

-Some of these models contain multiple versions.
+We do not recommend using these models since our new models improve upon them significantly. But we provide the links here in case they are needed to reproduce older segmentation workflows.
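
As the docs above note, a model is chosen by passing its name via the `model_type` parameter. A rough sketch of what this looks like in a script, assuming `micro_sam.util.get_sam_model` as the loader entry point (check the library documentation for the exact function in your installed version):

```python
from micro_sam.util import get_sam_model

# Load the finetuned light microscopy model; the name matches the list above.
predictor = get_sam_model(model_type="vit_b_lm")
```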

doc/images/model-type-selector.png

Binary file changed: -109 KB

doc/images/series-menu.png

Binary file changed: 498 Bytes

doc/python_library.md

Lines changed: 11 additions & 13 deletions
@@ -5,9 +5,9 @@ The python library can be imported via
 import micro_sam
 ```

-The library
-- implements function to apply Segment Anything to 2d and 3d data more conveniently in `micro_sam.prompt_based_segmentation`.
-- provides more and improved automatic instance segmentation functionality in `micro_sam.instance_segmentation`.
+This library extends the [Segment Anything library](https://github.com/facebookresearch/segment-anything) and
+- implements functions to apply Segment Anything to 2d and 3d data in `micro_sam.prompt_based_segmentation`.
+- provides improved automatic instance segmentation functionality in `micro_sam.instance_segmentation`.
 - implements training functionality that can be used for finetuning on your own data in `micro_sam.training`.
 - provides functionality for quantitative and qualitative evaluation of Segment Anything models in `micro_sam.evaluation`.
@@ -18,24 +18,22 @@ import micro_sam.instance_segmentation
 # etc.
 ```

-This functionality is used to implement the interactive annotation tools and can also be used as a standalone python library.
-Some preliminary examples for how to use the python library can be found [here](https://github.com/computational-cell-analytics/micro-sam/tree/master/examples/use_as_library). Check out the `Submodules` documentation for more details.
+This functionality is used to implement the interactive annotation tools in `micro_sam.sam_annotator` and can be used as a standalone python library.
+We provide jupyter notebooks that demonstrate how to use it [here](https://github.com/computational-cell-analytics/micro-sam/tree/master/notebooks). You can find the full library documentation by scrolling to the end of this page.

 ## Training your own model

 We reimplement the training logic described in the [Segment Anything publication](https://arxiv.org/abs/2304.02643) to enable finetuning on custom data.
-We use this functionality to provide the [finetuned microscopy models](#finetuned-models) and it can also be used to finetune models on your own data.
+We use this functionality to provide the [finetuned microscopy models](#finetuned-models) and it can also be used to train models on your own data.
 In fact the best results can be expected when finetuning on your own data, and we found that it does not require much annotated training data to get significant improvements in model performance.
-So a good strategy is to annotate a few images with one of the provided models using one of the interactive annotation tools and, if the annotation is not working as good as expected yet, finetune on the annotated data.
+So a good strategy is to annotate a few images with one of the provided models using our interactive annotation tools and, if the model is not working as well as required for your use-case, finetune on the annotated data.
 <!--
 TODO: provide link to the paper with results on how much data is needed
 -->

-The training logic is implemented in `micro_sam.training` and is based on [torch-em](https://github.com/constantinpape/torch-em). Please check out [examples/finetuning](https://github.com/computational-cell-analytics/micro-sam/tree/master/examples/finetuning) to see how you can finetune on your own data with it. The script `finetune_hela.py` contains an example for finetuning on a small custom microscopy dataset and `use_finetuned_model.py` shows how this model can then be used in the interactive annotation tools.
+The training logic is implemented in `micro_sam.training` and is based on [torch-em](https://github.com/constantinpape/torch-em). Check out [the finetuning notebook](https://github.com/computational-cell-analytics/micro-sam/blob/master/notebooks/micro-sam-finetuning.ipynb) to see how to use it.

-Since release v0.4.0 we also support training an additional decoder for automatic instance segmentation. This yields better results than the automatic mask generation of segment anything and is significantly faster.
-You can enable training of the decoder by setting `train_instance_segmentation = True` [here](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/finetuning/finetune_hela.py#L165).
-The script `instance_segmentation_with_finetuned_model.py` shows how to use it for automatic instance segmentation.
-We will fully integrate this functionality with the annotation tool in the next release.
+We also support training an additional decoder for automatic instance segmentation. This yields better results than the automatic mask generation of Segment Anything and is significantly faster.
+The notebook explains how to enable training it together with the rest of SAM and how to use it afterwards.

-More advanced examples, including quantitative and qualitative evaluation, of finetuned models can be found in [finetuning](https://github.com/computational-cell-analytics/micro-sam/tree/master/finetuning), which contains the code for training and evaluating our microscopy models.
+More advanced examples for finetuned models, including quantitative and qualitative evaluation, can be found in [finetuning](https://github.com/computational-cell-analytics/micro-sam/tree/master/finetuning), which contains the code for training and evaluating [our models](finetuned-models).
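
To make the training entry point concrete, here is a minimal finetuning sketch. The helper names (`default_sam_loader`, `train_sam`) and their parameters are assumptions about the `micro_sam.training` API; consult the linked notebook for the exact signatures in your version.

```python
import micro_sam.training as sam_training

# Assumed: images and label masks stored as tif files in these hypothetical folders.
train_loader = sam_training.default_sam_loader(
    raw_paths="data/train/images", raw_key="*.tif",
    label_paths="data/train/labels", label_key="*.tif",
    patch_shape=(512, 512), batch_size=1,
    with_segmentation_decoder=True,  # also train the extra instance segmentation decoder
)

sam_training.train_sam(
    name="sam_finetuned",     # checkpoint name
    model_type="vit_b",       # backbone to finetune
    train_loader=train_loader,
    val_loader=train_loader,  # use a proper validation split in practice
    n_epochs=10,
    with_segmentation_decoder=True,
)
```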

micro_sam/sam_annotator/image_series_annotator.py

Lines changed: 1 addition & 1 deletion
@@ -323,7 +323,7 @@ def _create_settings(self):

         self.pattern = "*"
         _, layout = self._add_string_param(
-            "", self.pattern, tooltip=get_tooltip("image_series_annotator", "pattern")
+            "pattern", self.pattern, tooltip=get_tooltip("image_series_annotator", "pattern")
         )
         setting_values.layout().addLayout(layout)
