# Annotation Tools

`micro_sam` provides applications for fast interactive 2d segmentation, 3d segmentation and tracking.
See examples of interactive cell segmentation in phase-contrast microscopy (left), interactive segmentation
of mitochondria in volume EM (middle) and interactive tracking of cells (right).

<img src="https://github.com/computational-cell-analytics/micro-sam/assets/4263537/d5ee2080-ab08-4716-b4c4-c169b4ed29f5" width="256">
<img src="https://github.com/computational-cell-analytics/micro-sam/assets/4263537/dfca3d9b-dba5-440b-b0f9-72a0683ac410" width="256">
<img src="https://github.com/computational-cell-analytics/micro-sam/assets/4263537/aefbf99f-e73a-4125-bb49-2e6592367a64" width="256">

The annotation tools can be started from the `micro_sam` GUI, the command line or from python scripts. The `micro_sam` GUI can be started by
```
$ micro_sam.annotator
```

They are built with [napari](https://napari.org/stable/) to implement the viewer and user interaction.
If you are not familiar with napari yet, [start here](https://napari.org/stable/tutorials/fundamentals/quick_start.html).
The `micro_sam` applications are mainly based on [the point layer](https://napari.org/stable/howtos/layers/points.html), [the shape layer](https://napari.org/stable/howtos/layers/shapes.html) and [the label layer](https://napari.org/stable/howtos/layers/labels.html).

## Annotator 2D

The 2d annotator can be started by
- clicking `2d annotator` in the `micro_sam` GUI.
- running `$ micro_sam.annotator_2d` in the command line. Run `micro_sam.annotator_2d -h` for details.
- calling `micro_sam.sam_annotator.annotator_2d` in a python script. Check out [examples/sam_annotator_2d.py](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/sam_annotator_2d.py) for details, or see the sketch after this list.
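
A minimal sketch for the script option, following [examples/sam_annotator_2d.py](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/sam_annotator_2d.py); the image path is a placeholder and the keyword arguments are assumptions based on that script, so check it for the exact signature:
```python
import imageio.v3 as imageio
from micro_sam.sam_annotator import annotator_2d

# Load the 2d image that should be annotated (placeholder path).
image = imageio.imread("/path/to/image.tif")

# Start the 2d annotator. `embedding_path` caches the precomputed image
# embeddings and `model_type` selects the SegmentAnything model; both
# keyword arguments are assumptions based on the example script.
annotator_2d(image, embedding_path="./embeddings/image.zarr", model_type="vit_h")
```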

The user interface of the 2d annotator looks like this:

<img src="https://raw.githubusercontent.com/computational-cell-analytics/micro-sam/master/doc/images/2d-annotator-menu.png" width="768">

It contains the following elements:
1. The napari layers for the image, segmentations and prompts:
    - `box_prompts`: shape layer that is used to provide box prompts to SegmentAnything.
    - `prompts`: point layer that is used to provide point prompts to SegmentAnything. Positive prompts (green points) mark the object you want to segment, negative prompts (red points) mark the outside of the object.
    - `current_object`: label layer that contains the object you're currently segmenting.
    - `committed_objects`: label layer with the objects that have already been segmented.
    - `auto_segmentation`: label layer with the results from automatic instance segmentation with SegmentAnything.
    - `raw`: image layer that shows the image data.
2. The prompt menu for changing the currently selected point from positive to negative and vice versa. This can also be done by pressing `t`.
3. The menu for automatic segmentation. Pressing `Segment All Objects` will run automatic segmentation. The results will be displayed in the `auto_segmentation` layer. Change the parameters `pred iou thresh` and `stability score thresh` to control how many objects are segmented.
4. The menu for interactive segmentation. Pressing `Segment Object` (or `s`) will run segmentation for the current prompts. The result is displayed in the `current_object` layer.
5. The menu for committing the segmentation. When pressing `Commit` (or `c`) the result from the selected layer (either `current_object` or `auto_segmentation`) will be transferred from the respective layer to `committed_objects`.
6. The menu for clearing the current annotations. Pressing `Clear Annotations` (or `shift c`) will clear the current annotations and the current segmentation.

Note that point prompts and box prompts can be combined. When you're using point prompts you can only segment one object at a time. With box prompts you can segment several objects at once.

Check out [this video](https://youtu.be/DfWE_XRcqN8) for an example of how to use the interactive 2d annotator.

We also provide the `image series annotator`, which can be used for running the 2d annotator for several images in a folder. You can start it by clicking `Image series annotator` in the GUI, running `micro_sam.image_series_annotator` in the command line or from a [python script](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/sam_image_series_annotator.py) (see the sketch below).
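
A minimal sketch for the script option, assuming the function and argument names follow [the example script](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/sam_image_series_annotator.py); the folder paths are placeholders:
```python
from glob import glob

import imageio.v3 as imageio
from micro_sam.sam_annotator import image_series_annotator

# Collect and load the images to annotate (placeholder folder).
image_paths = sorted(glob("/path/to/images/*.tif"))
images = [imageio.imread(path) for path in image_paths]

# Annotate the images one after the other; the committed objects for each
# image are saved to the output folder. `output_folder` is an assumed
# argument name, check the example script for the exact signature.
image_series_annotator(images, output_folder="./annotations")
```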

## Annotator 3D

The 3d annotator can be started by
- clicking `3d annotator` in the `micro_sam` GUI.
- running `$ micro_sam.annotator_3d` in the command line. Run `micro_sam.annotator_3d -h` for details.
- calling `micro_sam.sam_annotator.annotator_3d` in a python script. Check out [examples/sam_annotator_3d.py](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/sam_annotator_3d.py) for details, or see the sketch after this list.
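
As for the 2d annotator, a minimal python sketch, with a placeholder path and keyword arguments that are assumptions based on the example script:
```python
import imageio.v3 as imageio
from micro_sam.sam_annotator import annotator_3d

# Load the volume that should be annotated (placeholder path).
volume = imageio.imread("/path/to/volume.tif")

# Start the 3d annotator. Caching the embeddings via `embedding_path` is
# especially useful here, since they are computed per slice; the keyword
# arguments are assumptions based on the example script.
annotator_3d(volume, embedding_path="./embeddings/volume.zarr", model_type="vit_h")
```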

The user interface of the 3d annotator looks like this:

<img src="https://raw.githubusercontent.com/computational-cell-analytics/micro-sam/master/doc/images/3d-annotator-menu.png" width="768">

Most elements are the same as in [the 2d annotator](#annotator-2d):
1. The napari layers that contain the image, segmentation and prompts. Same as for [the 2d annotator](#annotator-2d) but without the `auto_segmentation` layer.
2. The prompt menu.
3. The menu for interactive segmentation.
4. The 3d segmentation menu. Pressing `Segment Volume` (or `v`) will extend the segmentation for the current object across the volume.
5. The menu for committing the segmentation.
6. The menu for clearing the current annotations.

Note that you can only segment one object at a time with the 3d annotator.

Check out [this video](https://youtu.be/5Jo_CtIefTM) for an overview of the interactive 3d segmentation functionality.

## Annotator Tracking

The tracking annotator can be started by
- clicking `Tracking annotator` in the `micro_sam` GUI.
- running `$ micro_sam.annotator_tracking` in the command line. Run `micro_sam.annotator_tracking -h` for details.
- calling `micro_sam.sam_annotator.annotator_tracking` in a python script. Check out [examples/sam_annotator_tracking.py](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/sam_annotator_tracking.py) for details, or see the sketch after this list.
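
Again a minimal python sketch, with a placeholder path and keyword arguments that are assumptions based on the example script:
```python
import imageio.v3 as imageio
from micro_sam.sam_annotator import annotator_tracking

# Load the timeseries of 2d images to track (placeholder path).
timeseries = imageio.imread("/path/to/timeseries.tif")

# Start the tracking annotator; the keyword arguments are assumptions
# based on the example script.
annotator_tracking(timeseries, embedding_path="./embeddings/timeseries.zarr", model_type="vit_h")
```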

The user interface of the tracking annotator looks like this:

<img src="https://raw.githubusercontent.com/computational-cell-analytics/micro-sam/master/doc/images/tracking-annotator-menu.png" width="768">

Most elements are the same as in [the 2d annotator](#annotator-2d):
1. The napari layers that contain the image, segmentation and prompts. Same as for [the 2d annotator](#annotator-2d) but without the `auto_segmentation` layer; `current_tracks` and `committed_tracks` are the equivalents of `current_object` and `committed_objects`.
2. The prompt menu.
3. The menu with tracking settings: `track_state` is used to indicate that the object you are tracking is dividing in the current frame. `track_id` is used to select which of the tracks after division you are following.
4. The menu for interactive segmentation.
5. The tracking menu. Press `Track Object` (or `v`) to track the current object across time.
6. The menu for committing the current tracking result.
7. The menu for clearing the current annotations.

Note that the tracking annotator only supports 2d image data; volumetric data is not supported.

Check out [this video](https://youtu.be/PBPW0rDOn9w) for an overview of the interactive tracking functionality.

### Tips & Tricks

- You can use tiling for large images. (TODO: expand on this).
- The applications pre-compute the image embeddings produced by SegmentAnything and (optionally) store them on disc. If you are using a CPU, this step can take a while for 3d data or timeseries (you will see a progress bar with a time estimate). If you have access to a GPU without graphical interface (e.g. via a local computer cluster or a cloud provider), you can also pre-compute the embeddings there and then copy them to your laptop / local machine to speed this up. You can use the command `micro_sam.precompute_embeddings` for this (it is installed with the rest of the applications); see also the sketch after this list. You can specify the location of the precomputed embeddings via the `embedding_path` argument.
- Most other processing steps are very fast even on a CPU, so interactive annotation is possible. An exception is the automatic segmentation step (2d segmentation), which takes several minutes without a GPU (depending on the image size). For large volumes and timeseries, segmenting an object in 3d / tracking across time can take a couple of seconds with a CPU (it is very fast with a GPU).
- You can also try using a smaller version of the SegmentAnything model to speed up the computations. For this you can pass the `model_type` argument and either set it to `vit_l` or `vit_b` (default is `vit_h`). However, this may lead to worse results.
- You can save and load the results from the `committed_objects` / `committed_tracks` layer to correct segmentations you obtained from another tool (e.g. CellPose) or to save intermediate annotation results. The results can be saved via `File->Save Selected Layer(s) ...` in the napari menu (see the tutorial videos for details). They can be loaded again by specifying the corresponding location via the `segmentation_result` (2d and 3d segmentation) or `tracking_result` (tracking) argument.
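
If you want to pre-compute the embeddings from python instead of via the command, a minimal sketch is below. It assumes helper functions in `micro_sam.util` (`get_sam_model`, `precompute_image_embeddings`); treat the exact names and signatures as assumptions and check the package for details:
```python
import imageio.v3 as imageio
from micro_sam import util

# Load the data for which the embeddings should be pre-computed (placeholder path).
volume = imageio.imread("/path/to/volume.tif")

# Initialize the SegmentAnything model and cache the embeddings to `save_path`,
# e.g. on a GPU node; the cached file can then be copied to your local machine
# and passed to the annotator via the `embedding_path` argument.
# Function names and signatures are assumptions based on `micro_sam.util`.
predictor = util.get_sam_model(model_type="vit_h")
util.precompute_image_embeddings(predictor, volume, save_path="./embeddings/volume.zarr")
```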

### Known limitations

- Segment Anything does not work well for very small or fine-grained objects (e.g. filaments).
- For the automatic segmentation functionality we currently rely on the automatic mask generation provided by SegmentAnything. It is slow and often misses objects in microscopy images. For now, we only offer this functionality in the 2d segmentation app; we are working on improving it and extending it to 3d segmentation and tracking.
- Prompt bounding boxes do not provide the full functionality for tracking yet (they cannot be used for divisions or for starting new tracks). See also [this github issue](https://github.com/computational-cell-analytics/micro-sam/issues/23).