Commit 4b5f73d

Update Documentation
1 parent 3ce4824 commit 4b5f73d

4 files changed: +10, -10 lines changed

README.md

Lines changed: 1 addition & 1 deletion
@@ -57,7 +57,7 @@ Compared to these we support more applications (2d, 3d and tracking), and provid
 
 **New in version 0.1.1**
 
-- Fine-tuned segmenta anything models for microscopy (experimental)
+- Fine-tuned segment anything models for microscopy (experimental)
 - Simplified instance segmentation menu
 - Menu for clearing annotations

doc/annotation_tools.md

Lines changed: 4 additions & 4 deletions
@@ -85,7 +85,7 @@ The user interface of the tracking annotator looks like this:
 Most elements are the same as in [the 2d annotator](#annotator-2d):
 1. The napari layers that contain the image, segmentation and prompts. Same as for [the 2d segmentation app](#annotator-2d) but without the `auto_segmentation` layer, `current_tracks` and `committed_tracks` are the equivalent of `current_object` and `committed_objects`.
 2. The prompt menu.
-3. The menu with tracking settings: `track_state` is used to indicate that the object you are tracking is dividing in the current frame. `track_id` is used to select which of the tracks after divsion you are following.
+3. The menu with tracking settings: `track_state` is used to indicate that the object you are tracking is dividing in the current frame. `track_id` is used to select which of the tracks after division you are following.
 4. The menu for interactive segmentation.
 5. The tracking menu. Press `Track Object` (or `v`) to track the current object across time.
 6. The menu for committing the current tracking result.
@@ -107,11 +107,11 @@ You can activate tiling by passing the parameters `tile_shape`, which determines
 - You should choose the `halo` such that it is larger than half of the maximal radius of the objects your segmenting.
 - The applications pre-compute the image embeddings produced by SegmentAnything and (optionally) store them on disc. If you are using a CPU this step can take a while for 3d data or timeseries (you will see a progress bar with a time estimate). If you have access to a GPU without graphical interface (e.g. via a local computer cluster or a cloud provider), you can also pre-compute the embeddings there and then copy them to your laptop / local machine to speed this up. You can use the command `micro_sam.precompute_embeddings` for this (it is installed with the rest of the applications). You can specify the location of the precomputed embeddings via the `embedding_path` argument.
 - Most other processing steps are very fast even on a CPU, so interactive annotation is possible. An exception is the automatic segmentation step (2d segmentation), which takes several minutes without a GPU (depending on the image size). For large volumes and timeseries segmenting an object in 3d / tracking across time can take a couple settings with a CPU (it is very fast with a GPU).
-- You can also try using a smaller version of the SegmentAnything model to speed up the computations. For this you can pass the `model_type` argument and either set it to `vit_l` or `vit_b` (default is `vit_h`). However, this may lead to worse results.
-- You can save and load the results from the `committed_objects` / `committed_tracks` layer to correct segmentations you obtained from another tool (e.g. CellPose) or to save intermediate annotation results. The results can be saved via `File->Save Selected Layer(s) ...` in the napari menu (see the tutorial videos for details). They can be loaded again by specifying the corresponding location via the `segmentation_result` (2d and 3d segmentation) or `tracking_result` (tracking) argument.
+- You can also try using a smaller version of the SegmentAnything model to speed up the computations. For this you can pass the `model_type` argument and either set it to `vit_b` or `vit_l` as well (default is `vit_h`). However, this may lead to worse results.
+- You can save and load the results from the `committed_objects` / `committed_tracks` layer to correct segmentations you obtained from another tool (e.g. CellPose) or to save intermediate annotation results. The results can be saved via `File -> Save Selected Layer(s) ...` in the napari menu (see the tutorial videos for details). They can be loaded again by specifying the corresponding location via the `segmentation_result` (2d and 3d segmentation) or `tracking_result` (tracking) argument.
 
 ### Known limitations
 
-- Segment Anything does not work well for very small or fine-graind objects (e.g. filaments).
+- Segment Anything does not work well for very small or fine-grained objects (e.g. filaments).
 - For the automatic segmentation functionality we currently rely on the automatic mask generation provided by SegmentAnything. It is slow and often misses objects in microscopy images. For now, we only offer this functionality in the 2d segmentation app; we are working on improving it and extending it to 3d segmentation and tracking.
 - Prompt bounding boxes do not provide the full functionality for tracking yet (they cannot be used for divisions or for starting new tracks). See also [this github issue](https://github.com/computational-cell-analytics/micro-sam/issues/23).
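The `tile_shape` / `halo` behavior documented above can be illustrated with a small standalone sketch. This is not `micro_sam`'s implementation (and the function `tiles_with_halo` is hypothetical); it only shows the idea the docs describe: each tile is processed with an extra `halo` margin so that objects near tile borders are still seen whole, and results are written back to the core tile.

```python
# Illustrative sketch only: mimics the documented tile_shape/halo idea,
# not micro_sam's actual tiling code.

def tiles_with_halo(image_shape, tile_shape, halo):
    """Return (outer, inner) slice pairs for a 2d image.

    `inner` is the core tile the results belong to; `outer` is the same
    tile extended by the halo on each side (clipped to the image), which
    is the region actually passed to the model.
    """
    windows = []
    for y in range(0, image_shape[0], tile_shape[0]):
        for x in range(0, image_shape[1], tile_shape[1]):
            y1 = min(y + tile_shape[0], image_shape[0])
            x1 = min(x + tile_shape[1], image_shape[1])
            outer = (
                slice(max(y - halo[0], 0), min(y1 + halo[0], image_shape[0])),
                slice(max(x - halo[1], 0), min(x1 + halo[1], image_shape[1])),
            )
            inner = (slice(y, y1), slice(x, x1))
            windows.append((outer, inner))
    return windows

# A 256x256 image with 128x128 tiles and a halo of 32 yields 4 tiles;
# the first tile's outer window extends 32 pixels beyond its core.
windows = tiles_with_halo((256, 256), (128, 128), (32, 32))
print(len(windows))  # → 4
```

This also makes the guideline above concrete: if the halo is larger than half the maximal object radius, an object centered inside a core tile cannot extend past the outer window, so no tile sees only a fragment of it.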

doc/installation.md

Lines changed: 3 additions & 3 deletions
@@ -30,15 +30,15 @@ $ cd micro_sam
 3. Create the GPU or CPU environment:
 
 ```
-conda env create -f <ENV_FILE>.yaml
+$ conda env create -f <ENV_FILE>.yaml
 ```
 4. Activate the environment:
 ```
-conda activate sam
+$ conda activate sam
 ```
 5. Install `micro_sam`:
 ```
-pip install -e .
+$ pip install -e .
 ```
 
 **Troubleshooting:**

doc/start_page.md

Lines changed: 2 additions & 2 deletions
@@ -3,7 +3,7 @@
 Segment Anything for Microscopy implements automatic and interactive annotation for microscopy data. It is built on top of [Segment Anything](https://segment-anything.com/) by Meta AI and specializes it for microscopy and other bio-imaging data.
 Its core components are:
 - The `micro_sam` annotator tools for interactive data annotation with [napari](https://napari.org/stable/).
-- The `micro_sam` library to apply Segment Anything to 2 and 3d data or fine-tune it on your data.
+- The `micro_sam` library to apply Segment Anything to 2d and 3d data or fine-tune it on your data.
 - The `micro_sam` models that are fine-tuned on publicly available microscopy data.
 
 Our goal is to build fast and interactive annotation tools for microscopy data, like interactive cell segmentation from bounding boxes:
@@ -38,7 +38,7 @@ The `micro_sam` python library can be used via
 ```python
 import micro_sam
 ```
-It is explained in more detail [here](#how-to-use-the-python-library).
+It is explained in more details [here](#how-to-use-the-python-library).
 
 Our support for finetuned models is still experimental. We will soon release better finetuned models and host them on zenodo.
 For now, check out [the example script for the 2d annotator](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/sam_annotator_2d.py#L62) to see how the finetuned models can be used within `micro_sam`.
