
Commit ddbfcf7

Authored by constantinpape, anwai98, and lufre1
Sync changes from dev (#561)
Sync dev and master

---------

Co-authored-by: Constantin Pape <[email protected]>
Co-authored-by: Anwai Archit <[email protected]>

---------

Co-authored-by: Anwai Archit <[email protected]>
Co-authored-by: lufre1 <[email protected]>
1 parent 7463042 commit ddbfcf7

File tree: 3 files changed (+7 / -13 lines)


doc/annotation_tools.md

Lines changed: 3 additions & 8 deletions

@@ -29,12 +29,6 @@ In the GUI you can select which of the four annotation tools you want to use:
 
 And after selecting them a new window will open where you can select the input file path and other optional parameters. Then click the top button to start the tool. **Note: If you are not starting the annotation tool with a path to pre-computed embeddings then it can take several minutes to open napari after pressing the button because the embeddings are being computed.**
 
-**Changes in version 0.3:**
-
-We have made two changes in version 0.3 that are not reflected in the documentation below yet:
-- We now support prompts from box, ellipse and polygon annotations. To reflect this we have renamed the `box_prompts` layer to `prompts` and the `prompts` layer to `point_prompts`.
-- We support automatic segmentation in 3d! To use it, you can first run automated segmentation in the current slice via `Automatic Segmentation`, and then extend the segmentation of these objects to 3d by running `Segment All Slices` with `layer: auto segmentation`.
-
 ## Annotator 2D
 
 The 2d annotator can be started by
@@ -118,18 +112,19 @@ Check out [this video](https://youtu.be/Xi5pRWMO6_w) for a tutorial for how to u
 - Segment Anything was trained with a fixed image size of 1024 x 1024 pixels. Inputs that do not match this size will be internally resized to match it. Hence, applying Segment Anything to a much larger image will often lead to inferior results, because it will be downsampled by a large factor and the objects in the image become too small.
 To address this issue we implement tiling: cutting up the input image into tiles of a fixed size (with a fixed overlap) and running Segment Anything for the individual tiles.
 You can activate tiling by passing the parameters `tile_shape`, which determines the size of the inner tile, and `halo`, which determines the size of the additional overlap.
-- If you're using the `micro_sam` GUI you can specify the values for the `halo` and `tile_shape` via the `Tile X`, `Tile Y`, `Halo X` and `Halo Y`.
+- If you're using the `micro_sam` GUI you can specify the values for the `halo` and `tile_shape` via the `Tile X`, `Tile Y`, `Halo X` and `Halo Y` by clicking on `Embeddings Settings`.
 - If you're using a python script you can pass them as tuples, e.g. `tile_shape=(1024, 1024), halo=(128, 128)`. See also [the wholeslide_annotator example](https://github.com/computational-cell-analytics/micro-sam/blob/0921581e2964139194d235a87cb002d3f3667f45/examples/annotator_2d.py#L40).
 - If you're using the command line functions you can pass them via the options `--tile_shape 1024 1024 --halo 128 128`
 - Note that prediction with tiling only works when the embeddings are cached to file, so you must specify an `embedding_path` (`-e` in the CLI).
 - You should choose the `halo` such that it is larger than half of the maximal radius of the objects you're segmenting.
 - The applications pre-compute the image embeddings produced by SegmentAnything and (optionally) store them on disc. If you are using a CPU this step can take a while for 3d data or timeseries (you will see a progress bar with a time estimate). If you have access to a GPU without graphical interface (e.g. via a local computer cluster or a cloud provider), you can also pre-compute the embeddings there and then copy them to your laptop / local machine to speed this up. You can use the command `micro_sam.precompute_embeddings` for this (it is installed with the rest of the applications). You can specify the location of the precomputed embeddings via the `embedding_path` argument.
+- If you use the GUI to save or load embeddings, simply specify an `embeddings save path`. Existing embeddings are loaded from the specified path, or embeddings are computed and the path is used to save them.
 - Most other processing steps are very fast even on a CPU, so interactive annotation is possible. An exception is the automatic segmentation step (2d segmentation), which takes several minutes without a GPU (depending on the image size). For large volumes and timeseries segmenting an object in 3d / tracking across time can take a couple of seconds with a CPU (it is very fast with a GPU).
 - You can also try using a smaller version of the SegmentAnything model to speed up the computations. For this you can pass the `model_type` argument and either set it to `vit_b` or to `vit_l` (default is `vit_h`). However, this may lead to worse results.
 - You can save and load the results from the `committed_objects` / `committed_tracks` layer to correct segmentations you obtained from another tool (e.g. CellPose) or to save intermediate annotation results. The results can be saved via `File -> Save Selected Layer(s) ...` in the napari menu (see the tutorial videos for details). They can be loaded again by specifying the corresponding location via the `segmentation_result` (2d and 3d segmentation) or `tracking_result` (tracking) argument.
 
 ## Known limitations
 
 - Segment Anything does not work well for very small or fine-grained objects (e.g. filaments).
-- For the automatic segmentation functionality we currently rely on the automatic mask generation provided by SegmentAnything. It is slow and often misses objects in microscopy images. For now, we only offer this functionality in the 2d segmentation app; we are working on improving it and extending it to 3d segmentation and tracking.
+- For the automatic segmentation functionality we currently rely on the automatic mask generation provided by SegmentAnything. It is slow and often misses objects in microscopy images.
 - Prompt bounding boxes do not provide the full functionality for tracking yet (they cannot be used for divisions or for starting new tracks). See also [this github issue](https://github.com/computational-cell-analytics/micro-sam/issues/23).
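
The tiling and embedding options described in the documentation above can also be set from a script. Below is a minimal sketch (not part of this commit) of how the parameters might be passed, assuming the `annotator_2d` entry point accepts `embedding_path`, `tile_shape`, `halo` and `model_type` keyword arguments as the linked `wholeslide_annotator` example suggests; the file paths are placeholders.

```python
# Minimal sketch, not part of this commit: tiled annotation with cached embeddings.
# Assumes annotator_2d accepts these keyword arguments, as the linked
# wholeslide_annotator example suggests; the file paths are placeholders.
import imageio.v3 as imageio
from micro_sam.sam_annotator import annotator_2d

image = imageio.imread("wholeslide_image.tif")  # placeholder input image

annotator_2d(
    image,
    embedding_path="./embeddings.zarr",  # required for tiled prediction (embeddings are cached here)
    tile_shape=(1024, 1024),             # size of the inner tile
    halo=(128, 128),                     # overlap; pick it larger than half the maximal object radius
    model_type="vit_b",                  # smaller model to speed up computation on a CPU
)
```

On the command line the same options correspond to `--tile_shape 1024 1024 --halo 128 128` and `-e` for the embedding path, and embeddings can be pre-computed on another machine with `micro_sam.precompute_embeddings`, as noted in the documentation above.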

micro_sam/sam_annotator/_widgets.py

Lines changed: 3 additions & 4 deletions

@@ -977,7 +977,7 @@ def _create_settings_widget(self):
         )
         setting_values.layout().addWidget(widget)
 
-        settings = _make_collapsible(setting_values, title="Settings")
+        settings = _make_collapsible(setting_values, title="Embedding Settings")
         return settings
 
     def _validate_inputs(self):
@@ -1184,7 +1184,7 @@ def _create_settings(self):
         )
         setting_values.layout().addLayout(layout)
 
-        settings = _make_collapsible(setting_values, title="Settings")
+        settings = _make_collapsible(setting_values, title="Segmentation Settings")
         return settings
 
     def _run_tracking(self):
@@ -1504,8 +1504,7 @@ def _amg_settings(self):
 
     def _create_settings(self):
        setting_values = self._ais_settings() if self.with_decoder else self._amg_settings()
-        settings = _make_collapsible(setting_values, title="Settings")
-        settings.setToolTip(get_tooltip("segmentnd", "projection_dropdown"))
+        settings = _make_collapsible(setting_values, title="Automatic Segmentation Settings")
         return settings
 
     def _run_segmentation_2d(self, kwargs, i=None):
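
The edits in this file only change the `title` string passed to `_make_collapsible`, so the collapsible settings sections in the napari plugin read `Embedding Settings`, `Segmentation Settings` and `Automatic Segmentation Settings` instead of a generic `Settings` (the automatic-segmentation variant also removes a `setToolTip` call). For orientation, here is a hypothetical sketch of what such a helper could look like, assuming a `superqt.QCollapsible`-based implementation; the actual `_make_collapsible` in `_widgets.py` may differ.

```python
# Hypothetical sketch only; not the actual micro_sam implementation of
# _make_collapsible. It illustrates the pattern: wrap a settings widget in a
# collapsible section whose header shows the given title.
from qtpy.QtWidgets import QWidget
from superqt import QCollapsible


def _make_collapsible(widget: QWidget, title: str) -> QCollapsible:
    collapsible = QCollapsible(title)  # header text, e.g. "Embedding Settings"
    collapsible.addWidget(widget)      # content stays hidden until the section is expanded
    return collapsible
```

The same kind of title rename is applied to the image series annotator below.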

micro_sam/sam_annotator/image_series_annotator.py

Lines changed: 1 addition & 1 deletion

@@ -367,7 +367,7 @@ def _create_settings(self):
         )
         setting_values.layout().addLayout(layout)
 
-        settings = widgets._make_collapsible(setting_values, title="Settings")
+        settings = widgets._make_collapsible(setting_values, title="Embedding Settings")
         return settings
 
     def _validate_inputs(self):
