Commit 12cc614

Update doc and readme
1 parent 1bb1e09 commit 12cc614

File tree

5 files changed: +27 / -6 lines changed

README.md

Lines changed: 13 additions & 4 deletions

@@ -1,5 +1,6 @@
-[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.7919746.svg)](https://doi.org/10.5281/zenodo.7919746)
+[![DOC](https://pdoc.dev/logo.svg)](https://computational-cell-analytics.github.io/micro-sam/)
 [![Conda](https://anaconda.org/conda-forge/micro_sam/badges/version.svg)](https://anaconda.org/conda-forge/micro_sam)
+[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.7919746.svg)](https://doi.org/10.5281/zenodo.7919746)

 # SegmentAnything for Microscopy

@@ -21,11 +22,19 @@ This is an advanced beta version. While many features are still under developmen
 Any feedback is welcome, but please be aware that the functionality is under active development and that some features may not be thoroughly tested yet.
 We will soon provide a stand-alone application for running the `micro_sam` annotation tools, and plan to also release it as a [napari plugin](https://napari.org/stable/plugins/index.html) in the future.

-If you run into any problems or have questions please open an issue or reach out via [image.sc](https://forum.image.sc/) using the tag `micro-sam` and tagging @constantinpape.
+If you run into any problems or have questions, please open an issue on GitHub or reach out via [image.sc](https://forum.image.sc/) using the tag `micro-sam` and tagging @constantinpape.

 ## Installation and Usage

-TODO / links to doc
+You can install `micro_sam` via conda:
+```
+conda install -c conda-forge micro_sam
+```
+You can then start the `micro_sam` tools by running `$ micro_sam.annotator` in the command line.
+
+Please check out [the documentation](https://computational-cell-analytics.github.io/micro-sam/) for more details on the installation and usage of `micro_sam`.

@@ -40,7 +49,7 @@ There are two other napari plugins built around Segment Anything:
 - https://github.com/MIC-DKFZ/napari-sam (2d and 3d support)
 - https://github.com/JoOkuma/napari-segment-anything (only 2d support)

-Compared to these we support more applications (2d, 3d and tracking), and aim to further extend and specialize SegmentAnything for microscopy data.
+Compared to these we support more applications (2d, 3d and tracking), and provide finetuned models for microscopy data.
 [WebKnossos](https://webknossos.org/) also offers integration of SegmentAnything for interactive segmentation.

doc/annotation_tools.md

Lines changed: 8 additions & 1 deletion

@@ -97,7 +97,14 @@ Check out [this video](https://youtu.be/PBPW0rDOn9w) for an overview of the inte

 ### Tips & Tricks

-- You can use tiling for large images. (TODO: expand on this).
+- Segment Anything was trained with a fixed input size of 1024 x 1024 pixels, and inputs that do not match this size are internally resized to it. Hence, applying Segment Anything to a much larger image often leads to inferior results, because the image is downsampled by a large factor and the objects in it become too small.
+To address this issue we implement tiling: the input image is cut into tiles of a fixed size (with a fixed overlap) and Segment Anything is run on the individual tiles.
+You can activate tiling by passing the parameters `tile_shape`, which determines the size of the inner tile, and `halo`, which determines the size of the additional overlap.
+    - If you're using the `micro_sam` GUI you can specify the values for `tile_shape` and `halo` via the `Tile X`, `Tile Y`, `Halo X` and `Halo Y` fields.
+    - If you're using a python script you can pass them as tuples, e.g. `tile_shape=(1024, 1024), halo=(128, 128)`.
+    - If you're using the command line functions you can pass them via the options `--tile_shape 1024 1024 --halo 128 128`.
+    - Note that prediction with tiling only works when the embeddings are cached to file, so you must specify an `embedding_path` (`-e` in the CLI).
+    - You should choose the `halo` such that it is larger than half of the maximal radius of the objects you're segmenting.
 - The applications pre-compute the image embeddings produced by SegmentAnything and (optionally) store them on disc. If you are using a CPU this step can take a while for 3d data or timeseries (you will see a progress bar with a time estimate). If you have access to a GPU without graphical interface (e.g. via a local computer cluster or a cloud provider), you can also pre-compute the embeddings there and then copy them to your laptop / local machine to speed this up. You can use the command `micro_sam.precompute_embeddings` for this (it is installed with the rest of the applications). You can specify the location of the precomputed embeddings via the `embedding_path` argument.
 - Most other processing steps are very fast even on a CPU, so interactive annotation is possible. An exception is the automatic segmentation step (2d segmentation), which takes several minutes without a GPU (depending on the image size). For large volumes and timeseries, segmenting an object in 3d / tracking across time can take a couple of seconds with a CPU (it is very fast with a GPU).
 - You can also try using a smaller version of the SegmentAnything model to speed up the computations. For this you can pass the `model_type` argument and either set it to `vit_l` or `vit_b` (default is `vit_h`). However, this may lead to worse results.
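The tile/halo geometry described in the tips above can be sketched in plain Python. This is an illustrative sketch only, not micro_sam's internal implementation: the function name `compute_tiles` is hypothetical, and only the `tile_shape` and `halo` parameter names come from the text.

```python
# Illustrative sketch of tiling with a halo (hypothetical helper, not micro_sam code).
# The image is covered by non-overlapping "inner" tiles of size `tile_shape`;
# each inner tile is grown by `halo` (clipped to the image bounds) to form the
# "outer" tile that the model actually sees, so predictions near tile borders
# have enough context.

def compute_tiles(image_shape, tile_shape, halo):
    """Return (inner, outer) slice pairs covering a 2d image."""
    tiles = []
    for y in range(0, image_shape[0], tile_shape[0]):
        for x in range(0, image_shape[1], tile_shape[1]):
            # inner tile: the region whose predictions are kept
            inner = (slice(y, min(y + tile_shape[0], image_shape[0])),
                     slice(x, min(x + tile_shape[1], image_shape[1])))
            # outer tile: inner tile grown by the halo, clipped to the image
            outer = (slice(max(y - halo[0], 0),
                           min(y + tile_shape[0] + halo[0], image_shape[0])),
                     slice(max(x - halo[1], 0),
                           min(x + tile_shape[1] + halo[1], image_shape[1])))
            tiles.append((inner, outer))
    return tiles

tiles = compute_tiles((2048, 3072), tile_shape=(1024, 1024), halo=(128, 128))
print(len(tiles))  # 2 x 3 = 6 tiles
```

The model runs on each outer (halo-padded) tile, and only the predictions inside the inner tile are kept; this is why the halo should exceed half the maximal object radius, so that objects crossing an inner-tile border are still fully contained in some outer tile.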

doc/images/vanilla-v-finetuned.png

988 KB

doc/python_library.md

Lines changed: 4 additions & 1 deletion

@@ -6,9 +6,12 @@ import micro_sam
 ```

 It implements functionality for running Segment Anything for 2d and 3d data, provides more instance segmentation functionality and several other helpful functions for using Segment Anything.
-This functionality is used to implement the `micro_sam` annotation tools, but you can also use it as a standalone python library.
+This functionality is used to implement the `micro_sam` annotation tools, but you can also use it as a standalone python library. Check out the documentation under `Submodules` for more details on the python library.

 ## Finetuned models

 We provide fine-tuned Segment Anything models for microscopy data. They are still in an experimental stage and we will upload more and better models soon, as well as the code for fine-tuning.
 For using the current models, check out the [2d annotator example](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/sam_annotator_2d.py#L62) and set `use_finetuned_model` to `True`.
+See the difference between the normal and fine-tuned Segment Anything ViT-h model on an image from [LiveCELL](https://sartorius-research.github.io/LIVECell/):
+
+<img src="https://raw.githubusercontent.com/computational-cell-analytics/micro-sam/master/doc/images/vanilla-v-finetuned.png" width="768">

doc/start_page.md

Lines changed: 2 additions & 0 deletions

@@ -17,6 +17,8 @@ On our roadmap for more functionality are:
 - Integration of the finetuned models with [bioimage.io](https://bioimage.io/#/)
 - Implementing a napari plugin for `micro_sam`.

+If you run into any problems or have questions, please open an issue on GitHub or reach out via [image.sc](https://forum.image.sc/) using the tag `micro-sam` and tagging @constantinpape.
+
 <!----
 Better instance segmentation, few-shot adaptation (using LoRA, QLoRA, etc.)
 ---->
