This is an advanced beta version. While many features are still under development, any feedback is welcome. Please be aware that the functionality is under active development and that some features may not be thoroughly tested yet.
We will soon provide a stand-alone application for running the `micro_sam` annotation tools, and plan to also release it as a [napari plugin](https://napari.org/stable/plugins/index.html) in the future.
If you run into any problems or have questions please open an issue on Github or reach out via [image.sc](https://forum.image.sc/) using the tag `micro-sam` and tagging @constantinpape.
## Installation and Usage
You can install `micro_sam` via conda:
```
conda install -c conda-forge micro_sam
```
You can then start the `micro_sam` tools by running `$ micro_sam.annotator` in the command line.
Please check out [the documentation](https://computational-cell-analytics.github.io/micro-sam/) for more details on the installation and usage of `micro_sam`.
## Citation
There are two other napari plugins built around Segment Anything:

- https://github.com/MIC-DKFZ/napari-sam (2d and 3d support)
## doc/annotation_tools.md
Check out [this video](https://youtu.be/PBPW0rDOn9w) for an overview of the interface.
### Tips & Tricks
- Segment Anything was trained with a fixed image size of 1024 x 1024 pixels. Inputs that do not match this size are internally resized to it. Hence, applying Segment Anything to a much larger image will often lead to inferior results, because the image is downsampled by a large factor and its objects become too small.
To address this we implement tiling: cutting the input image into tiles of a fixed size (with a fixed overlap) and running Segment Anything on the individual tiles.
You can activate tiling by passing the parameters `tile_shape`, which determines the size of the inner tile, and `halo`, which determines the size of the additional overlap.
- If you're using the `micro_sam` GUI you can specify the values for the `tile_shape` and `halo` via the `Tile X`, `Tile Y`, `Halo X` and `Halo Y` fields.
- If you're using a python script you can pass them as tuples, e.g. `tile_shape=(1024, 1024), halo=(128, 128)`.
- If you're using the command line functions you can pass them via the options `--tile_shape 1024 1024 --halo 128 128`.
- Note that prediction with tiling only works when the embeddings are cached to file, so you must specify an `embedding_path` (`-e` in the CLI).
- You should choose the `halo` such that it is larger than half of the maximal radius of the objects you're segmenting.
- The applications pre-compute the image embeddings produced by SegmentAnything and (optionally) store them on disc. If you are using a CPU this step can take a while for 3d data or timeseries (you will see a progress bar with a time estimate). If you have access to a GPU without graphical interface (e.g. via a local computer cluster or a cloud provider), you can also pre-compute the embeddings there and then copy them to your laptop / local machine to speed this up. You can use the command `micro_sam.precompute_embeddings` for this (it is installed with the rest of the applications). You can specify the location of the precomputed embeddings via the `embedding_path` argument.
- Most other processing steps are very fast even on a CPU, so interactive annotation is possible. An exception is the automatic segmentation step (2d segmentation), which takes several minutes without a GPU (depending on the image size). For large volumes and timeseries, segmenting an object in 3d / tracking across time can take a couple of seconds with a CPU (it is very fast with a GPU).
- You can also try using a smaller version of the SegmentAnything model to speed up the computations. For this you can pass the `model_type` argument and either set it to `vit_l` or `vit_b` (default is `vit_h`). However, this may lead to worse results.
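To make the tiling parameters concrete, here is a small, self-contained sketch of how a tile size of 1024 with a halo of 128 could partition one image axis. This illustrates the concept only, not `micro_sam`'s actual implementation; the function name `tile_bounds` is hypothetical:

```python
def tile_bounds(length, tile, halo):
    """For one image axis of size `length`, return (outer, inner) pixel
    ranges: the inner tiles of size `tile` cover the axis without overlap,
    and each outer tile extends the inner one by `halo` pixels on both
    sides (clipped at the image border)."""
    blocks = []
    for start in range(0, length, tile):
        inner = (start, min(start + tile, length))
        outer = (max(start - halo, 0), min(start + tile + halo, length))
        blocks.append((outer, inner))
    return blocks

# Example: a 2304 pixel wide axis with tile size 1024 and halo 128.
# Segment Anything would run on each outer tile, and the results would
# be stitched together on the non-overlapping inner tiles.
for outer, inner in tile_bounds(2304, 1024, 128):
    print("outer:", outer, "inner:", inner)
```

The overlap is what lets objects near a tile border be fully contained in at least one outer tile, which is why the halo should exceed half the maximal object radius.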
## doc/python_library.md
```
import micro_sam
```
It implements functionality for running Segment Anything for 2d and 3d data, provides more instance segmentation functionality and several other helpful functions for using Segment Anything.
This functionality is used to implement the `micro_sam` annotation tools, but you can also use it as a standalone python library. Check out the documentation under `Submodules` for more details on the python library.
## Finetuned models
We provide fine-tuned Segment Anything models for microscopy data. They are still in an experimental stage and we will upload more and better models soon, as well as the code for fine-tuning.
For using the current models, check out the [2d annotator example](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/sam_annotator_2d.py#L62) and set `use_finetuned_model` to `True`.
See the difference between the normal and fine-tuned Segment Anything ViT-h model on an image from [LiveCELL](https://sartorius-research.github.io/LIVECell/):
## doc/start_page.md
On our roadmap for more functionality are:
- Integration of the finetuned models with [bioimage.io](https://bioimage.io/#/)
- Implementing a napari plugin for `micro_sam`.
If you run into any problems or have questions please open an issue on Github or reach out via [image.sc](https://forum.image.sc/) using the tag `micro-sam` and tagging @constantinpape.