Commit a4b64b1

Update inference_module_guide.rst

1 parent d709f55 commit a4b64b1

1 file changed: 11 additions & 11 deletions

docs/source/guides/inference_module_guide.rst
@@ -50,13 +50,13 @@ Interface and functionalities
 
 Inference parameters
 
-* **Loading data** :
+* **Loading data**:
 
 | When launching the module, select either an **image layer** or an **image folder** containing the 3D volumes you wish to label.
 | When loading from a folder: all images with the chosen extension (currently **.tif**) will be labeled.
 | Specify an **output folder**, where the labeled results will be saved.
 
-* **Model selection** :
+* **Model selection**:
 
 | You can then choose from the listed **models** for inference.
 | You may also **load custom weights** rather than the pre-trained ones. Make sure these weights are **compatible** (e.g. produced by the training module for the same model).
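
As an aside on the **Loading data** step above: a minimal sketch of what "label every **.tif** in a folder" amounts to outside the GUI, assuming `tifffile` is available for 3D TIFF I/O. The folder names and the loop are illustrative, not the plugin's internal API.

```python
# Illustrative only: gather every .tif volume in a folder, mirroring the
# module's "all images with the chosen extension will be labeled" behaviour.
from pathlib import Path

import tifffile  # assumption: tifffile is installed for 3D .tif I/O

input_dir = Path("volumes")    # hypothetical input folder
output_dir = Path("results")   # hypothetical output folder
output_dir.mkdir(exist_ok=True)

for tif_path in sorted(input_dir.glob("*.tif")):
    volume = tifffile.imread(tif_path)  # one 3D volume per file
    print(tif_path.name, volume.shape)  # each of these would be labeled
```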
@@ -66,19 +66,19 @@ Interface and functionalities
 Currently the SegResNet and SwinUNetR models require you to provide the size of the images the model was trained with.
 Provided weights use a size of 64; please leave it at the default value if you're not using custom weights.
 
-* **Inference parameters** :
+* **Inference parameters**:
 
 * **Window inference**: You can choose to run inference on the entire image at once (disabled) or divide the image into smaller chunks (enabled), based on your memory constraints.
 * **Window overlap**: Define the overlap between windows to reduce border effects;
   recommended values are 0.1-0.3 for 3D inference.
 * **Keep on CPU**: You can choose to keep the dataset in RAM rather than VRAM to avoid running out of VRAM if you have several images.
 * **Device Selection**: You can choose to run inference on either CPU or GPU. A GPU is recommended for faster inference.
 
-* **Anisotropy** :
+* **Anisotropy**:
 
 For **anisotropic images**, you may set the **resolution of your volume in microns** to view and save the results without anisotropy.
 
-* **Thresholding** :
+* **Thresholding**:
 
 You can perform thresholding to **binarize your labels**.
 All values below the **confidence threshold** will be set to 0.
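
The **Window inference** and **Window overlap** options above describe sliding-window inference. SegResNet and SwinUNetR are MONAI architectures, so a hedged sketch using MONAI's generic `sliding_window_inference` helper illustrates the idea; this is not necessarily how the plugin wires it internally, and the tensor shapes are made up.

```python
# Sketch of windowed inference with overlap via MONAI's generic helper.
# Illustrates the "Window inference"/"Window overlap" options only.
import torch
from monai.inferers import sliding_window_inference
from monai.networks.nets import SegResNet

model = SegResNet(spatial_dims=3, in_channels=1, out_channels=1)
model.eval()

image = torch.rand(1, 1, 128, 128, 128)  # (batch, channel, D, H, W) dummy volume

with torch.no_grad():
    prediction = sliding_window_inference(
        inputs=image,
        roi_size=(64, 64, 64),  # the guide's default window size of 64
        sw_batch_size=1,
        predictor=model,
        overlap=0.2,            # within the recommended 0.1-0.3 range
    )
print(prediction.shape)
```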
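
Similarly, the **Anisotropy** option can be read as a rescale of the z/y/x axes by the voxel resolution. A minimal sketch with `scipy.ndimage.zoom`, assuming an illustrative 5 × 1 × 1 µm resolution; the plugin may resample differently.

```python
# Sketch of removing anisotropy for viewing/saving: upsample the coarse
# axis so all axes share the finest resolution. Values are illustrative.
import numpy as np
from scipy.ndimage import zoom

volume = np.random.rand(50, 256, 256)       # (z, y, x)
resolution = (5.0, 1.0, 1.0)                # microns per voxel, hypothetical

factors = [r / min(resolution) for r in resolution]
isotropic = zoom(volume, factors, order=1)  # linear interpolation
print(isotropic.shape)                      # roughly (250, 256, 256)
```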
@@ -87,7 +87,7 @@ Interface and functionalities
 It is recommended to first run without thresholding. You can then use the napari contrast limits to find a good threshold value,
 and run inference later with your chosen threshold.
 
-* **Instance segmentation** :
+* **Instance segmentation**:
 
 | You can convert the semantic segmentation into instance labels by using either the `Voronoi-Otsu`_, `Watershed`_ or `Connected Components`_ method, as detailed in :ref:`utils_module_guide`.
 | Instance labels will be saved (and shown if applicable) separately from other results.
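
The **Thresholding** behaviour described above is a plain binarization, which a short NumPy snippet captures. The threshold value here is illustrative; as the guide says, pick it by inspecting napari's contrast limits.

```python
# Sketch of the thresholding step: values below the confidence threshold
# become 0, binarizing the prediction.
import numpy as np

prediction = np.random.rand(64, 64, 64)  # stand-in for a model output
threshold = 0.5                          # illustrative; tune via napari contrast limits

binary = (prediction >= threshold).astype(np.uint8)
```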
@@ -98,7 +98,7 @@ Interface and functionalities
 .. _Voronoi-Otsu: https://haesleinhuepf.github.io/BioImageAnalysisNotebooks/20_image_segmentation/11_voronoi_otsu_labeling.html
 
 
-* **Computing objects statistics** :
+* **Computing objects statistics**:
 
 You can choose to compute various stats from the labels and save them to a **`.csv`** file for later use.
 Statistics include individual object details and general metrics.
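
Of the three instance segmentation methods listed earlier, Connected Components is the simplest to sketch: scikit-image's `label` gives each connected foreground object a unique integer ID. A toy example, not the plugin's implementation (which, per the guide, also offers Voronoi-Otsu and Watershed):

```python
# Toy sketch of the Connected Components option via scikit-image.
import numpy as np
from skimage.measure import label

binary = np.zeros((32, 32, 32), dtype=np.uint8)
binary[4:10, 4:10, 4:10] = 1      # first toy object
binary[20:26, 20:26, 20:26] = 1   # second toy object

instances = label(binary)          # unique integer ID per connected object
print(instances.max(), "objects")  # -> 2 objects
```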
@@ -109,7 +109,7 @@ Interface and functionalities
 * Sphericity
 
 
-Global metrics :
+Global metrics:
 
 * Image size
 * Total image volume (pixels)
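
Among the per-object statistics, sphericity is the least obvious; the standard definition is sphericity = π^(1/3) · (6V)^(2/3) / A. A hedged sketch that estimates surface area with marching cubes; the plugin may compute it differently.

```python
# Sketch of per-object sphericity = pi**(1/3) * (6V)**(2/3) / A, with the
# surface area A estimated from a marching-cubes mesh.
import numpy as np
from skimage.measure import marching_cubes, mesh_surface_area

def sphericity(object_mask: np.ndarray) -> float:
    volume = float(object_mask.sum())  # voxel count as volume
    verts, faces, _, _ = marching_cubes(object_mask.astype(float), level=0.5)
    area = mesh_surface_area(verts, faces)  # surface area estimate
    return (np.pi ** (1 / 3)) * (6 * volume) ** (2 / 3) / area

mask = np.zeros((32, 32, 32))
mask[8:24, 8:24, 8:24] = 1  # a cube: sphericity well below 1
print(round(sphericity(mask), 3))
```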
@@ -118,7 +118,7 @@ Interface and functionalities
 * The number of labeled objects
 
 
-* **Display options** :
+* **Display options**:
 
 When running inference on a folder, you can choose to display the results in napari.
 If selected, you may choose how many results to display, and whether to show the original image alongside them.
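
The **Display options** map onto ordinary napari layer calls; a minimal sketch of showing a result alongside its original (illustrative, not the plugin's code):

```python
# Sketch of the display behaviour: show the original volume and its labels
# side by side in napari.
import napari
import numpy as np

original = np.random.rand(64, 64, 64)
labels = (original > 0.7).astype(np.uint8)  # stand-in segmentation result

viewer = napari.Viewer()
viewer.add_image(original, name="original")
viewer.add_labels(labels, name="result")
napari.run()
```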
@@ -151,10 +151,10 @@ Unsupervised model - WNet3D
 | The `WNet3D model` is a fully self-supervised model used to segment images without any labels.
 | It functions similarly to the above models, with a few notable differences.
 
-WNet3D works best with :
+WNet3D has been tested on:
 
 * **MesoSPIM** data (whole-brain samples of mice imaged by mesoSPIM microscopy) with nuclei staining.
-* Other microscopy data with :
+* Other microscopy (e.g., confocal) data with:
 * **Sufficient contrast** between objects and background.
 * **Low to medium crowding** of objects. If all objects are adjacent to each other, instance segmentation methods provided here may not work well.
 