docs/source/guides/inference_module_guide.rst
11 additions, 11 deletions
@@ -50,13 +50,13 @@ Interface and functionalities
Inference parameters

* **Loading data**:

  | When launching the module, select either an **image layer** or an **image folder** containing the 3D volumes you wish to label.
  | When loading from a folder, all images with the chosen extension (currently **.tif**) will be labeled.
  | Specify an **output folder** where the labeled results will be saved.
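
  A minimal sketch of the folder-loading behaviour described above, assuming ``tifffile`` is available for reading the volumes and using hypothetical input/output paths:

  .. code-block:: python

      from pathlib import Path

      import tifffile  # assumed here only to read the .tif volumes

      input_folder = Path("path/to/volumes")    # hypothetical image folder
      output_folder = Path("path/to/results")   # hypothetical output folder
      output_folder.mkdir(parents=True, exist_ok=True)

      # Only files with the chosen extension (currently .tif) are picked up.
      volumes = sorted(input_folder.glob("*.tif"))
      images = [tifffile.imread(path) for path in volumes]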

* **Model selection**:

  | You can then choose from the listed **models** for inference.
  | You may also **load custom weights** rather than the pre-trained ones. Make sure these weights are **compatible** (e.g. produced by the training module for the same model).
@@ -66,19 +66,19 @@ Interface and functionalities

  Currently, the SegResNet and SwinUNetR models require you to provide the size of the images the model was trained with.
  The provided weights use a size of 64; please leave the default value if you are not using custom weights.
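
  As a rough sketch of what "compatible" means, custom weights must match the selected architecture. The example below assumes MONAI's ``SegResNet`` (one of the listed models) and a hypothetical weights file:

  .. code-block:: python

      import torch
      from monai.networks.nets import SegResNet  # assuming the MONAI implementation of a listed model

      # Hypothetical settings: channels and classes must match how the weights were trained.
      model = SegResNet(spatial_dims=3, in_channels=1, out_channels=1)

      state_dict = torch.load("custom_weights.pth", map_location="cpu")  # hypothetical path
      model.load_state_dict(state_dict)  # raises an error if the weights do not fit the model
      model.eval()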

* **Inference parameters**:

  * **Window inference**: You can choose to run inference on the entire image at once (disabled) or divide the image into smaller chunks (enabled), based on your memory constraints.
  * **Window overlap**: Define the overlap between windows to reduce border effects; recommended values are 0.1-0.3 for 3D inference.
  * **Keep on CPU**: You can choose to keep the dataset in RAM rather than VRAM, to avoid running out of VRAM when you have several images.
  * **Device Selection**: You can choose to run inference on either the CPU or GPU. A GPU is recommended for faster inference. A sliding-window sketch is given after this list.
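
  A minimal sliding-window sketch, assuming MONAI's ``sliding_window_inference`` and a stand-in predictor; the window size, overlap, and devices mirror the options above, not the plugin's exact internals:

  .. code-block:: python

      import torch
      from monai.inferers import sliding_window_inference  # assuming MONAI's window-based inference

      volume = torch.rand(1, 1, 96, 96, 96)  # dummy (batch, channel, D, H, W) volume
      run_device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

      with torch.no_grad():
          prediction = sliding_window_inference(
              inputs=volume,
              roi_size=(64, 64, 64),                         # window size (provided weights use 64)
              sw_batch_size=1,
              predictor=lambda patch: torch.sigmoid(patch),  # stand-in for the selected model
              overlap=0.2,                                   # 0.1-0.3 recommended for 3D inference
              sw_device=run_device,                          # device that runs each window
              device=torch.device("cpu"),                    # keep stitched results on CPU ("Keep on CPU")
          )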

* **Anisotropy**:

  For **anisotropic images**, you may set the **resolution of your volume in microns** to view and save the results without anisotropy.
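
  A minimal sketch of removing anisotropy given a voxel resolution in microns, using ``scipy.ndimage.zoom`` as an assumed resampling method (not necessarily what the plugin uses internally):

  .. code-block:: python

      import numpy as np
      from scipy.ndimage import zoom  # assumed here for resampling

      labels = np.zeros((64, 256, 256), dtype=np.uint8)  # dummy anisotropic volume (z, y, x)
      voxel_size = (5.0, 1.0, 1.0)                        # hypothetical resolution in microns per axis

      # Resample every axis to the finest resolution so voxels become isotropic.
      finest = min(voxel_size)
      factors = tuple(size / finest for size in voxel_size)
      isotropic = zoom(labels, zoom=factors, order=0)     # order=0 preserves label values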

* **Thresholding**:

  You can perform thresholding to **binarize your labels**.
  All values below the **confidence threshold** will be set to 0.
@@ -87,7 +87,7 @@ Interface and functionalities

  It is recommended to first run inference without thresholding. You can then use the napari contrast limits to find a good threshold value,
  and run inference again with your chosen threshold.
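
  A minimal sketch of the thresholding step on a dummy prediction, with a hypothetical threshold value found via the napari contrast limits:

  .. code-block:: python

      import numpy as np

      prediction = np.random.rand(64, 64, 64).astype(np.float32)  # dummy model output
      confidence_threshold = 0.5                                   # hypothetical value found with contrast limits

      # Values below the confidence threshold are set to 0; the rest is binarized.
      thresholded = np.where(prediction >= confidence_threshold, prediction, 0.0)
      binary_labels = (thresholded > 0).astype(np.uint8)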

* **Instance segmentation**:

  | You can convert the semantic segmentation into instance labels using either the `Voronoi-Otsu`_, `Watershed`_, or `Connected Components`_ method, as detailed in :ref:`utils_module_guide`.
  | Instance labels will be saved (and shown if applicable) separately from other results.
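
  A minimal connected-components sketch (one of the listed methods), using ``skimage.measure.label`` on a dummy binary segmentation:

  .. code-block:: python

      import numpy as np
      from skimage.measure import label  # connected-components labeling, one of the listed methods

      semantic = np.zeros((32, 64, 64), dtype=np.uint8)  # dummy binary (semantic) segmentation
      semantic[10:20, 20:40, 20:40] = 1
      semantic[25:30, 5:15, 5:15] = 1

      # Each connected foreground region receives its own instance label (1, 2, ...).
      instances = label(semantic)
      print(f"{int(instances.max())} instances found")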