
Commit 996ff1c

Merge pull request #104 from computational-cell-analytics/doc-fix
Doc fix
2 parents 0921581 + 96894fb commit 996ff1c


3 files changed, +6 -6 lines changed

README.md

Lines changed: 1 addition & 1 deletion
@@ -1,4 +1,4 @@
-[![DOC](https://pdoc.dev/logo.svg)](https://computational-cell-analytics.github.io/micro-sam/)
+[![DOC](https://shields.mitmproxy.org/badge/docs-pdoc.dev-brightgreen.svg)](https://computational-cell-analytics.github.io/micro-sam/)
 [![Conda](https://anaconda.org/conda-forge/micro_sam/badges/version.svg)](https://anaconda.org/conda-forge/micro_sam)
 [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.7919746.svg)](https://doi.org/10.5281/zenodo.7919746)

doc/annotation_tools.md

Lines changed: 4 additions & 4 deletions
@@ -95,22 +95,22 @@ Note that the tracking annotator only supports 2d image data, volumetric data is

 Check out [this video](https://youtu.be/PBPW0rDOn9w) for an overview of the interactive tracking functionality.

-### Tips & Tricks
+## Tips & Tricks

 - Segment Anything was trained with a fixed image size of 1024 x 1024 pixels. Inputs that do not match this size will be internally resized to match it. Hence, applying Segment Anything to a much larger image will often lead to inferior results, because it will be downsampled by a large factor and the objects in the image become too small.
 To address this issue we implement tiling: cutting up the input image into tiles of a fixed size (with a fixed overlap) and running Segment Anything for the individual tiles.
 You can activate tiling by passing the parameters `tile_shape`, which determines the size of the inner tile, and `halo`, which determines the size of the additional overlap.
 - If you're using the `micro_sam` GUI you can specify the values for the `halo` and `tile_shape` via the `Tile X`, `Tile Y`, `Halo X` and `Halo Y` fields.
-- If you're using a python script you can pass them as tuples, e.g. `tile_shape=(1024, 1024), halo=(128, 128)`.
+- If you're using a python script you can pass them as tuples, e.g. `tile_shape=(1024, 1024), halo=(128, 128)`. See also [the wholeslide_annotator example](https://github.com/computational-cell-analytics/micro-sam/blob/0921581e2964139194d235a87cb002d3f3667f45/examples/annotator_2d.py#L40).
 - If you're using the command line functions you can pass them via the options `--tile_shape 1024 1024 --halo 128 128`.
 - Note that prediction with tiling only works when the embeddings are cached to file, so you must specify an `embedding_path` (`-e` in the CLI).
 - You should choose the `halo` such that it is larger than half of the maximal radius of the objects you're segmenting.
 - The applications pre-compute the image embeddings produced by SegmentAnything and (optionally) store them on disc. If you are using a CPU this step can take a while for 3d data or timeseries (you will see a progress bar with a time estimate). If you have access to a GPU without graphical interface (e.g. via a local computer cluster or a cloud provider), you can also pre-compute the embeddings there and then copy them to your laptop / local machine to speed this up. You can use the command `micro_sam.precompute_embeddings` for this (it is installed with the rest of the applications). You can specify the location of the precomputed embeddings via the `embedding_path` argument.
 - Most other processing steps are very fast even on a CPU, so interactive annotation is possible. An exception is the automatic segmentation step (2d segmentation), which takes several minutes without a GPU (depending on the image size). For large volumes and timeseries, segmenting an object in 3d / tracking across time can take a couple of seconds with a CPU (it is very fast with a GPU).
-- You can also try using a smaller version of the SegmentAnything model to speed up the computations. For this you can pass the `model_type` argument and either set it to `vit_b` or `vit_l` as well (default is `vit_h`). However, this may lead to worse results.
+- You can also try using a smaller version of the SegmentAnything model to speed up the computations. For this you can pass the `model_type` argument and either set it to `vit_b` or to `vit_l` (default is `vit_h`). However, this may lead to worse results.
 - You can save and load the results from the `committed_objects` / `committed_tracks` layer to correct segmentations you obtained from another tool (e.g. CellPose) or to save intermediate annotation results. The results can be saved via `File -> Save Selected Layer(s) ...` in the napari menu (see the tutorial videos for details). They can be loaded again by specifying the corresponding location via the `segmentation_result` (2d and 3d segmentation) or `tracking_result` (tracking) argument.

-### Known limitations
+## Known limitations

 - Segment Anything does not work well for very small or fine-grained objects (e.g. filaments).
 - For the automatic segmentation functionality we currently rely on the automatic mask generation provided by SegmentAnything. It is slow and often misses objects in microscopy images. For now, we only offer this functionality in the 2d segmentation app; we are working on improving it and extending it to 3d segmentation and tracking.
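The tiling tips in the diff above can also be scripted. Below is a minimal sketch of tiled 2d annotation, modeled on the wholeslide_annotator example linked in the diff; the `annotator_2d` entry point and its `tile_shape` / `halo` / `embedding_path` keywords are assumed from that example, and the input path is hypothetical.

```python
# Minimal sketch of tiled 2d annotation (assumption: the annotator_2d
# signature follows the linked wholeslide example; the input path is
# hypothetical).
import imageio.v3 as imageio
from micro_sam.sam_annotator import annotator_2d

image = imageio.imread("wholeslide.tif")

annotator_2d(
    image,
    embedding_path="./embeddings/wholeslide.zarr",  # tiled prediction requires cached embeddings
    tile_shape=(1024, 1024),  # size of the inner tiles
    halo=(128, 128),          # additional overlap around each tile
)
```

Per the halo tip above, `halo=(128, 128)` is only appropriate when the largest objects have a radius below roughly 256 pixels; choose a larger halo for bigger objects.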

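The embedding pre-computation tip can likewise be done from Python instead of the `micro_sam.precompute_embeddings` command. A sketch, assuming the `get_sam_model` and `precompute_image_embeddings` helpers in `micro_sam.util` and hypothetical file paths:

```python
# Sketch: pre-compute embeddings on a GPU machine, then copy the cache to a
# local machine (helper names assumed from micro_sam.util; paths are
# hypothetical).
import imageio.v3 as imageio
from micro_sam import util

predictor = util.get_sam_model(model_type="vit_b")  # smaller, faster model; default is vit_h
volume = imageio.imread("timeseries.tif")

# Embeddings are cached to save_path; reuse them by passing the same path
# as the embedding_path (-e in the CLI) of the annotation tools.
util.precompute_image_embeddings(predictor, volume, save_path="./embeddings/timeseries.zarr")
```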
doc/start_page.md

Lines changed: 1 addition & 1 deletion
@@ -38,7 +38,7 @@ The `micro_sam` python library can be used via
 ```python
 import micro_sam
 ```
-It is explained in more details [here](#how-to-use-the-python-library).
+It is explained in more detail [here](#how-to-use-the-python-library).

 Our support for finetuned models is still experimental. We will soon release better finetuned models and host them on zenodo.
 For now, check out [the example script for the 2d annotator](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/sam_annotator_2d.py#L62) to see how the finetuned models can be used within `micro_sam`.
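As a rough illustration of that last pointer, loading a finetuned checkpoint might look like the sketch below; the `checkpoint_path` keyword and the weights file are assumptions here, so defer to the linked example script for the actual usage.

```python
# Hypothetical sketch of loading finetuned weights; checkpoint_path and the
# file name are assumptions, see the linked example script for real usage.
from micro_sam.util import get_sam_model

predictor = get_sam_model(
    model_type="vit_b",                           # must match the finetuned architecture
    checkpoint_path="./finetuned_sam_model.pth",  # hypothetical finetuned weights
)
```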
