
Commit 2c2878c

Update documentation with latest features (#890)
1 parent 05fbe45 commit 2c2878c

6 files changed: +10 −4 lines changed


doc/annotation_tools.md

Lines changed: 4 additions & 0 deletions
@@ -146,3 +146,7 @@ You can select the image data via `Path to images`. You can either load images f
 You can select the label data via `Path to labels` and `Label data key`, following the same logic as for the image data. The label masks are expected to have the same size as the image data. You can, for example, use annotations created with one of the `micro_sam` annotation tools for this; they are stored in the correct format. See [the FAQ](#fine-tuning-questions) for more details on the expected label data.
 
 The `Configuration` option allows you to choose the hardware configuration for training. We try to automatically select the correct setting for your system, but it can also be changed. Details on the configurations can be found [here](#training-your-own-model).
+
+NOTE: We recommend fine-tuning Segment Anything models on your data by
+- running `$ micro_sam.train` in the command line, or
+- calling `micro_sam.training.train_sam` in a Python script. Check out [examples/finetuning/finetune_hela.py](https://github.com/computational-cell-analytics/micro-sam/blob/master/examples/finetuning/finetune_hela.py) or [notebooks/sam_finetuning.ipynb](https://github.com/computational-cell-analytics/micro-sam/blob/master/notebooks/sam_finetuning.ipynb) for details.
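For reference, a minimal sketch of the `micro_sam.training.train_sam` route recommended above could look as follows. This is an illustration under assumptions, not the authoritative recipe: the loader helper `default_sam_loader`, the keyword arguments, and all paths follow the finetuning notebook as far as we can tell and may differ between versions, so check the linked notebook for the definitive version.

```python
# Hedged sketch: function and parameter names follow the micro_sam.training
# API as documented in the finetuning notebook, but may differ between
# versions. Paths, glob keys, and patch_shape are placeholders.
import micro_sam.training as sam_training

# Assumption: default_sam_loader builds torch-em data loaders from
# image/label folders; raw_key / label_key are glob patterns for the files.
train_loader = sam_training.default_sam_loader(
    raw_paths="data/train/images", raw_key="*.tif",
    label_paths="data/train/labels", label_key="*.tif",
    patch_shape=(512, 512), batch_size=1,
    with_segmentation_decoder=True,  # also train the AIS decoder
)
val_loader = sam_training.default_sam_loader(
    raw_paths="data/val/images", raw_key="*.tif",
    label_paths="data/val/labels", label_key="*.tif",
    patch_shape=(512, 512), batch_size=1,
    with_segmentation_decoder=True,
)

# Fine-tune a ViT-B model; with_segmentation_decoder=True additionally
# trains the decoder for automatic instance segmentation (AIS).
sam_training.train_sam(
    name="sam_finetuned",    # hypothetical checkpoint name
    model_type="vit_b",      # base Segment Anything model to fine-tune
    train_loader=train_loader,
    val_loader=val_loader,
    n_epochs=10,             # assumed parameter name for the epoch count
    with_segmentation_decoder=True,
)
```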

doc/cli_tools.md

Lines changed: 2 additions & 0 deletions
@@ -8,6 +8,7 @@ The supported CLIs can be used by
 - Running `$ micro_sam.annotator_3d` for starting the 3d annotator.
 - Running `$ micro_sam.annotator_tracking` for starting the tracking annotator.
 - Running `$ micro_sam.image_series_annotator` for starting the image series annotator.
+- Running `$ micro_sam.train` for finetuning Segment Anything models on your data.
 - Running `$ micro_sam.automatic_segmentation` for automatic instance segmentation.
   - We support all post-processing parameters for automatic instance segmentation (for both AMG and AIS).
   - The automatic segmentation mode can be controlled by `--mode <MODE_NAME>`, where the available choices for `MODE_NAME` are `amg` and `ais`.
@@ -20,5 +21,6 @@ The supported CLIs can be used by
 ```
 - Remember to specify the automatic segmentation mode using `--mode <MODE_NAME>` when using additional post-processing parameters.
 - You can check details for supported parameters and their respective default values at `micro_sam/instance_segmentation.py`, under the `generate` method of the `AutomaticMaskGenerator` and `InstanceSegmentationWithDecoder` classes.
+- A good practice is to set `--ndim <NDIM>`, where `<NDIM>` corresponds to the number of dimensions of the input images.
 
 NOTE: For all CLIs above, you can find more details by adding the argument `-h` to the CLI script (e.g. `$ micro_sam.annotator_2d -h`).
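The automatic segmentation CLI has a Python counterpart in `micro_sam.automatic_segmentation`. A hedged sketch of the equivalent call is shown below; the function names `get_predictor_and_segmenter` and `automatic_instance_segmentation` and their parameters are assumptions based on that module and may differ between versions.

```python
# Hedged sketch mirroring `$ micro_sam.automatic_segmentation --mode ais --ndim 2`.
# Function names and parameters are assumptions and may differ between versions.
from micro_sam.automatic_segmentation import (
    automatic_instance_segmentation, get_predictor_and_segmenter,
)

# amg=False selects AIS (the additional instance segmentation decoder);
# amg=True would select Segment Anything's automatic mask generation (AMG).
predictor, segmenter = get_predictor_and_segmenter(
    model_type="vit_b_lm", amg=False,
)

# ndim should match the dimensionality of the input image (here: 2d).
segmentation = automatic_instance_segmentation(
    predictor=predictor,
    segmenter=segmenter,
    input_path="data/example_image.tif",  # hypothetical path
    ndim=2,
)
```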

doc/faq.md

Lines changed: 1 addition & 1 deletion
@@ -17,7 +17,7 @@ The installer should work out-of-the-box on Windows and Linux platforms. Please
 
 ### 3. What is the minimum system requirement for `micro_sam`?
 From our experience, the `micro_sam` annotation tools work seamlessly on most laptop or workstation CPUs and with > 8GB RAM.
-You might encounter some slowness for $\leq$ 8GB RAM. The resources `micro_sam`'s annotation tools have been tested on are:
+You might encounter some slowness for ≤ 8GB RAM. The resources `micro_sam`'s annotation tools have been tested on are:
 - Windows:
   - Windows 10 Pro, Intel i5 7th Gen, 8GB RAM
   - Windows 10 Enterprise LTSC, Intel i7 13th Gen, 32GB RAM

doc/installation.md

Lines changed: 1 addition & 1 deletion
@@ -33,7 +33,7 @@ conda activate micro-sam
 ```
 
 This will also install `pytorch` from the `conda-forge` channel. If you have a recent enough operating system, it will automatically install the best suitable `pytorch` version on your system.
-This means it will install the CPU version if you don't have a nVidia GPU, and will install a GPU version if you have.
+This means it will install the CPU version if you don't have a nvidia GPU, and will install a GPU version if you do.
 However, if you have an older operating system, or a CUDA version older than 12, then it may not install the correct version. In this case you will have to specify your CUDA version, for example for CUDA 11, like this:
 ```bash
 conda install -c conda-forge micro_sam "libtorch=*=cuda11*"
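After installation, a quick way to check which `pytorch` build was installed is a short Python snippet (standard `torch` calls only, nothing micro_sam-specific):

```python
import torch

# Prints the installed pytorch version and whether a CUDA-capable GPU is
# visible to it; a CPU-only install prints False on the second line.
print(torch.__version__)
print(torch.cuda.is_available())
```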

doc/python_library.md

Lines changed: 1 addition & 1 deletion
@@ -30,7 +30,7 @@ We reimplement the training logic described in the [Segment Anything publication
 We use this functionality to provide the [finetuned microscopy models](#finetuned-models) and it can also be used to train models on your own data.
 In fact, the best results can be expected when finetuning on your own data, and we found that it does not require much annotated training data to get significant improvements in model performance.
 So a good strategy is to annotate a few images with one of the provided models using our interactive annotation tools and, if the model is not working as well as required for your use-case, finetune on the annotated data.
-We recommend checking out our latest [preprint](https://doi.org/10.1101/2023.08.21.554208) for details on the results on how much data is required for finetuning Segment Anything.
+We recommend checking out our [paper](https://www.nature.com/articles/s41592-024-02580-4) for details on how much data is required for finetuning Segment Anything.
 
 The training logic is implemented in `micro_sam.training` and is based on [torch-em](https://github.com/constantinpape/torch-em). Check out [the finetuning notebook](https://github.com/computational-cell-analytics/micro-sam/blob/master/notebooks/sam_finetuning.ipynb) to see how to use it.
 We also support training an additional decoder for automatic instance segmentation. This yields better results than the automatic mask generation of Segment Anything and is significantly faster.
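As a hedged sketch of how these two pieces fit together: a checkpoint produced by `micro_sam.training.train_sam` can then be loaded for automatic instance segmentation with the additionally trained decoder. The `checkpoint` parameter name and the checkpoint path below are assumptions and may differ between versions.

```python
# Hedged sketch: load a finetuned checkpoint for automatic instance
# segmentation (AIS). The `checkpoint` parameter name is an assumption;
# torch-em based training typically saves weights under
# checkpoints/<name>/best.pt (hypothetical path below).
from micro_sam.automatic_segmentation import get_predictor_and_segmenter

predictor, segmenter = get_predictor_and_segmenter(
    model_type="vit_b",  # the base model that was finetuned
    checkpoint="checkpoints/sam_finetuned/best.pt",  # hypothetical path
    amg=False,  # False selects the additionally trained AIS decoder
)
```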

doc/start_page.md

Lines changed: 1 addition & 1 deletion
@@ -48,6 +48,6 @@ You can also train models on your own data, see [here for details](#training-you
 ## Citation
 
 If you are using `micro_sam` in your research please cite
-- our [preprint](https://doi.org/10.1101/2023.08.21.554208)
+- our [paper](https://www.nature.com/articles/s41592-024-02580-4) (now published in Nature Methods!)
 - and the original [Segment Anything publication](https://arxiv.org/abs/2304.02643).
 - If you use a `vit-tiny` model, please also cite [Mobile SAM](https://arxiv.org/abs/2306.14289).
