
Commit 3feb39c

Merge pull request #152 from computational-cell-analytics/doc-updates
Documentation updates
2 parents c57d102 + ddbf448 commit 3feb39c

13 files changed: +54 −13 lines changed

doc/annotation_tools.md

Lines changed: 1 addition & 1 deletion

@@ -105,7 +105,7 @@ You can activate tiling by passing the parameters `tile_shape`, which determines
 - If you're using the command line functions you can pass them via the options `--tile_shape 1024 1024 --halo 128 128`
 - Note that prediction with tiling only works when the embeddings are cached to file, so you must specify an `embedding_path` (`-e` in the CLI).
 - You should choose the `halo` such that it is larger than half of the maximal radius of the objects you're segmenting.
-- The applications pre-compute the image embeddings produced by SegmentAnything and (optionally) store them on disc. If you are using a CPU, this step can take a while for 3d data or timeseries (you will see a progress bar with a time estimate). If you have access to a GPU without a graphical interface (e.g. via a local computer cluster or a cloud provider), you can also pre-compute the embeddings there and then copy them to your laptop / local machine to speed this up. You can use the command `micro_sam.precompute_embeddings` for this (it is installed with the rest of the applications). You can specify the location of the precomputed embeddings via the `embedding_path` argument.
+- The applications pre-compute the image embeddings produced by SegmentAnything and (optionally) store them on disc. If you are using a CPU, this step can take a while for 3d data or timeseries (you will see a progress bar with a time estimate). If you have access to a GPU without a graphical interface (e.g. via a local computer cluster or a cloud provider), you can also pre-compute the embeddings there and then copy them to your laptop / local machine to speed this up. You can use the command `micro_sam.precompute_state` for this (it is installed with the rest of the applications). You can specify the location of the precomputed embeddings via the `embedding_path` argument.
 - Most other processing steps are very fast even on a CPU, so interactive annotation is possible. An exception is the automatic segmentation step (2d segmentation), which takes several minutes without a GPU (depending on the image size). For large volumes and timeseries, segmenting an object in 3d / tracking across time can take a couple of minutes with a CPU (it is very fast with a GPU).
 - You can also try using a smaller version of the SegmentAnything model to speed up the computations. For this you can pass the `model_type` argument and either set it to `vit_b` or to `vit_l` (default is `vit_h`). However, this may lead to worse results.
 - You can save and load the results from the `committed_objects` / `committed_tracks` layer to correct segmentations you obtained from another tool (e.g. CellPose) or to save intermediate annotation results. The results can be saved via `File -> Save Selected Layer(s) ...` in the napari menu (see the tutorial videos for details). They can be loaded again by specifying the corresponding location via the `segmentation_result` (2d and 3d segmentation) or `tracking_result` (tracking) argument.
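
For orientation, the tiling options above can also be used from Python rather than the CLI. The sketch below assumes the helpers `micro_sam.util.get_sam_model` and `micro_sam.util.precompute_image_embeddings` with the parameter names shown; treat it as an illustration of the `tile_shape` / `halo` / `embedding_path` settings, not as the definitive API (the documented alternative is the `micro_sam.precompute_state` command).

```python
# Sketch: precompute tiled image embeddings so the annotation tools can reuse them.
# Function and parameter names are assumptions about micro_sam.util; check the
# submodule documentation, or use the `micro_sam.precompute_state` CLI instead.
import imageio.v3 as imageio
from micro_sam import util

image = imageio.imread("large_image.tif")            # a large 2d image or a 3d stack
predictor = util.get_sam_model(model_type="vit_b")   # smaller model -> faster embeddings

# Cache embeddings to file; choose the halo larger than half the maximal object radius.
embeddings = util.precompute_image_embeddings(
    predictor,
    image,
    save_path="embeddings.zarr",   # corresponds to the `embedding_path` / `-e` option
    tile_shape=(1024, 1024),
    halo=(128, 128),
)
```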

doc/installation.md

Lines changed: 3 additions & 3 deletions

@@ -30,9 +30,9 @@ Once you have it installed you can simply replace the `conda` commands with `mamba`
 ## From installer

 We also provide installers for Linux, Mac and Windows:
-- [Linux](https://owncloud.gwdg.de/index.php/s/HRp948SDkaWzCuV)
-- [Mac](https://owncloud.gwdg.de/index.php/s/HpGzlXrgJ8VDgnI)
-- [Windows](https://owncloud.gwdg.de/index.php/s/BVipOmDPR2TXmxk)
+- [Linux](https://owncloud.gwdg.de/index.php/s/Cw9RmA3BlyqKJeU)
+- [Mac](https://owncloud.gwdg.de/index.php/s/7YupGgACw9SHy2P)
+- [Windows](https://owncloud.gwdg.de/index.php/s/1iD1eIcMZvEyE6d)

 Note that these installers are still experimental and not yet fully tested.
 If you encounter problems with them then please consider installing `micro_sam` via [conda](#from-conda) instead.

doc/python_library.md

Lines changed: 12 additions & 2 deletions

@@ -12,8 +12,18 @@ The library
 - provides functionality for quantitative and qualitative evaluation of Segment Anything models in `micro_sam.evaluation`.

 This functionality is used to implement the interactive annotation tools and can also be used as a standalone python library.
-Check out the documentation under `Submodules` for more details.
+Some preliminary examples for how to use the python library can be found [here](https://github.com/computational-cell-analytics/micro-sam/tree/master/examples/use_as_library). Check out the `Submodules` documentation for more details.

 ## Training your own model

-TODO
+We reimplement the training logic described in the [Segment Anything publication](https://arxiv.org/abs/2304.02643) to enable finetuning on custom data.
+We use this functionality to provide the [finetuned microscopy models](#finetuned-models) and it can also be used to finetune models on your own data.
+In fact, the best results can be expected when finetuning on your own data, and we found that it does not require much annotated training data to get significant improvements in model performance.
+So a good strategy is to annotate a few images with one of the provided models using one of the interactive annotation tools and, if the annotation is not working as well as expected yet, finetune on the annotated data.
+<!--
+TODO: provide link to the paper with results on how much data is needed
+-->
+
+The training logic is implemented in `micro_sam.training` and is based on [torch-em](https://github.com/constantinpape/torch-em). Please check out [examples/finetuning](https://github.com/computational-cell-analytics/micro-sam/tree/master/examples/finetuning) to see how you can finetune on your own data with it. The script `finetune_hela.py` contains an example for finetuning on a small custom microscopy dataset and `use_finetuned_model.py` shows how this model can then be used in the interactive annotation tools.
+
+More advanced examples of finetuned models, including quantitative and qualitative evaluation, can be found in [finetuning](https://github.com/computational-cell-analytics/micro-sam/tree/master/finetuning), which contains the code for training and evaluating our microscopy models.
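
As a rough orientation for how the pieces referenced above fit together, a finetuning script could look roughly like the sketch below. The helper names (`get_trainable_sam_model`, `ConvertToSamInputs`, `SamTrainer`), the loader setup, and all parameter values are illustrative assumptions; `examples/finetuning/finetune_hela.py` is the authoritative reference.

```python
# Hypothetical finetuning sketch based on micro_sam.training and torch-em.
# Names, arguments and values are illustrative assumptions; see
# examples/finetuning/finetune_hela.py for a working script.
import torch
import torch_em
import micro_sam.training as sam_training

# torch-em loaders that yield (raw image, instance label) batches.
train_loader = torch_em.default_segmentation_loader(
    raw_paths="data/train.h5", raw_key="raw",
    label_paths="data/train.h5", label_key="labels",
    patch_shape=(1, 512, 512), batch_size=1,
)
val_loader = torch_em.default_segmentation_loader(
    raw_paths="data/val.h5", raw_key="raw",
    label_paths="data/val.h5", label_key="labels",
    patch_shape=(1, 512, 512), batch_size=1,
)

# Get a SAM model wrapped so that it can be trained.
model = sam_training.get_trainable_sam_model(model_type="vit_b", device="cuda")

trainer = sam_training.SamTrainer(
    name="sam_finetuned",
    model=model,
    train_loader=train_loader,
    val_loader=val_loader,
    optimizer=torch.optim.Adam(model.parameters(), lr=1e-5),
    device="cuda",
    convert_inputs=sam_training.ConvertToSamInputs(),  # turns batches into SAM-style prompts
    n_objects_per_batch=25,  # how many objects are sampled per image for prompting
)
trainer.fit(iterations=10_000)
```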

micro_sam/evaluation/__init__.py

Lines changed: 3 additions & 0 deletions

@@ -1,3 +1,6 @@
+"""Functionality for evaluating Segment Anything models on microscopy data.
+"""
+
 from .automatic_mask_generation import (
     run_amg_inference,
     run_amg_grid_search,

micro_sam/evaluation/automatic_mask_generation.py

Lines changed: 6 additions & 3 deletions

@@ -1,3 +1,6 @@
+"""Inference and evaluation for the automatic instance segmentation functionality.
+"""
+
 import os
 from glob import glob
 from pathlib import Path
@@ -190,9 +193,9 @@ def evaluate_amg_grid_search(result_dir: Union[str, os.PathLike], criterion: str
         criterion: The metric to use for determining the best parameters.

     Returns:
-        The best value for `pred_iou_thresh`.
-        The best value for ``stability_score_thresh.
-        The evaluation score for the best setting.
+        - The best value for `pred_iou_thresh`.
+        - The best value for `stability_score_thresh`.
+        - The evaluation score for the best setting.
     """

     # load all the grid search results
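
For context, the updated return documentation translates into a call like the sketch below; the result directory and the `criterion` value are illustrative assumptions, only the order of the three return values comes from the docstring above.

```python
# Sketch: select the best automatic mask generation (AMG) parameters from a
# finished grid search. "results/amg_grid_search" and criterion="mSA" are
# assumptions for illustration; the return order matches the docstring above.
from micro_sam.evaluation.automatic_mask_generation import evaluate_amg_grid_search

pred_iou_thresh, stability_score_thresh, best_score = evaluate_amg_grid_search(
    "results/amg_grid_search", criterion="mSA"
)
print("best pred_iou_thresh:        ", pred_iou_thresh)
print("best stability_score_thresh: ", stability_score_thresh)
print("score for this setting:      ", best_score)
```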

micro_sam/evaluation/evaluation.py

Lines changed: 4 additions & 0 deletions

@@ -1,3 +1,7 @@
+"""Evaluation functionality for segmentation predictions from `micro_sam.evaluation.automatic_mask_generation`
+and `micro_sam.evaluation.inference`.
+"""
+
 import os
 from pathlib import Path
 from typing import List, Optional, Union

micro_sam/evaluation/experiments.py

Lines changed: 6 additions & 2 deletions

@@ -1,7 +1,11 @@
+"""Predefined experiment settings for experiments with different prompt strategies.
+"""
+
 from typing import Dict, List, Optional

 # TODO fully define the dict type
-ExperimentSettings = List[Dict]
+ExperimentSetting = Dict
+ExperimentSettings = List[ExperimentSetting]
 """@private"""


@@ -63,7 +67,7 @@ def default_experiment_settings() -> ExperimentSettings:
     return experiment_settings


-def get_experiment_setting_name(setting: ExperimentSettings) -> str:
+def get_experiment_setting_name(setting: ExperimentSetting) -> str:
     """Get the name for the given experiment setting.

     Args:
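
The renamed types make the intended usage explicit: `default_experiment_settings()` returns a list of setting dicts, and `get_experiment_setting_name` now takes a single setting. A minimal usage sketch (the exact contents of each setting dict are not shown in this diff):

```python
# Iterate over the predefined prompt-strategy settings and derive their names.
# Both functions are defined in micro_sam/evaluation/experiments.py (see the diff above).
from micro_sam.evaluation.experiments import (
    default_experiment_settings,
    get_experiment_setting_name,
)

for setting in default_experiment_settings():   # each entry is one ExperimentSetting dict
    name = get_experiment_setting_name(setting)
    print(name, "->", setting)
```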

micro_sam/evaluation/inference.py

Lines changed: 3 additions & 0 deletions

@@ -1,3 +1,6 @@
+"""Inference with Segment Anything models and different prompt strategies.
+"""
+
 import os
 import pickle
 import warnings

micro_sam/evaluation/livecell.py

Lines changed: 5 additions & 0 deletions

@@ -1,3 +1,7 @@
+"""Inference and evaluation for the [LiveCELL dataset](https://www.nature.com/articles/s41592-021-01249-6) and
+the different cell lines contained in it.
+"""
+
 import argparse
 import json
 import os
@@ -147,6 +151,7 @@ def run_livecell_amg(
 ) -> None:
     """Run automatic mask generation grid-search and inference for livecell.

+    Args:
         checkpoint: The segment anything model checkpoint.
         input_folder: The folder with the livecell data.
         model_type: The type of the segment anything model.

micro_sam/evaluation/model_comparison.py

Lines changed: 3 additions & 0 deletions

@@ -1,3 +1,6 @@
+"""Functionality for qualitative comparison of Segment Anything models on microscopy data.
+"""
+
 import os
 from functools import partial
 from glob import glob
