_sources/source/guides/inference_module_guide.rst (19 additions, 10 deletions)
@@ -50,13 +50,13 @@ Interface and functionalities
Inference parameters
-* **Loading data**:
+* **Loading data**:
  |When launching the module, select either an **image layer** or an **image folder** containing the 3D volumes you wish to label.
  |When loading from a folder: all images with the chosen extension (currently **.tif**) will be labeled.
  |Specify an **output folder**, where the labeled results will be saved.
-* **Model selection**:
+* **Model selection**:
  |You can then choose from the listed **models** for inference.
  |You may also **load custom weights** rather than the pre-trained ones. Make sure these weights are **compatible** (e.g. produced from the training module for the same model).
@@ -66,19 +66,19 @@ Interface and functionalities
Currently the SegResNet and SwinUNetR models require you to provide the size of the images the model was trained with.
Provided weights use a size of 64; please leave this at the default value if you're not using custom weights.
-* **Inference parameters**:
+* **Inference parameters**:
  * **Window inference**: You can choose to run inference on the entire image at once (disabled) or divide it into smaller chunks (enabled), based on your memory constraints.
  * **Window overlap**: Define the overlap between windows to reduce border effects;
    recommended values are 0.1-0.3 for 3D inference.
  * **Keep on CPU**: You can choose to keep the dataset in RAM rather than VRAM, to avoid running out of VRAM if you have several images.
  * **Device Selection**: You can choose to run inference on either CPU or GPU. A GPU is recommended for faster inference.
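The windowing options above map closely onto MONAI's sliding-window utility; the following is a minimal sketch of what they amount to, not necessarily the plugin's exact internal call, and ``model``/``volume`` are placeholders::

    import torch
    from monai.inferers import sliding_window_inference

    model = torch.nn.Identity()               # placeholder for a trained 3D model
    volume = torch.rand(1, 1, 128, 128, 128)  # placeholder batch: (B, C, D, H, W)

    with torch.no_grad():
        result = sliding_window_inference(
            inputs=volume,
            roi_size=(64, 64, 64),  # window size; provided weights expect 64
            sw_batch_size=1,
            predictor=model,
            overlap=0.2,            # "Window overlap": 0.1-0.3 recommended for 3D
            device="cpu",           # stitch the output in RAM ("Keep on CPU")
        )
    print(result.shape)             # same spatial shape as the input volume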
-* **Anisotropy**:
+* **Anisotropy**:
  For **anisotropic images** you may set the **resolution of your volume in microns**, to view and save the results without anisotropy.
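As a rough illustration of what such an anisotropy correction amounts to (an assumption made for illustration; the plugin may instead simply set the napari layer scale), resampling a volume to isotropic voxels could look like::

    import numpy as np
    from scipy.ndimage import zoom

    volume = np.random.rand(40, 128, 128)       # placeholder (Z, Y, X) stack
    resolution = (5.0, 1.0, 1.0)                # assumed voxel size in microns per axis
    finest = min(resolution)
    factors = [r / finest for r in resolution]  # upsample the coarse Z axis
    isotropic = zoom(volume, factors, order=1)  # linear interpolation
    print(isotropic.shape)                      # (200, 128, 128)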
-* **Thresholding**:
+* **Thresholding**:
  You can perform thresholding to **binarize your labels**.
  All values below the **confidence threshold** will be set to 0.
@@ -87,7 +87,7 @@ Interface and functionalities
  It is recommended to first run without thresholding. You can then use the napari contrast limits to find a good threshold value,
  and run inference later with your chosen threshold.
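In numpy terms, the thresholding step above is simply the following, with ``probs`` standing in for the model's prediction map::

    import numpy as np

    probs = np.random.rand(64, 64, 64)              # placeholder prediction map
    threshold = 0.6                                 # e.g. found via napari contrast limits
    binary = (probs >= threshold).astype(np.uint8)  # values below the threshold become 0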
-* **Instance segmentation**:
+* **Instance segmentation**:
  |You can convert the semantic segmentation into instance labels by using either the `Voronoi-Otsu`_, `Watershed`_ or `Connected Components`_ method, as detailed in :ref:`utils_module_guide`.
  |Instance labels will be saved (and shown if applicable) separately from other results.
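Of the three methods, Connected Components is the simplest to show; a minimal scikit-image sketch (illustrative only, not the plugin's implementation)::

    import numpy as np
    from skimage.measure import label

    binary = np.zeros((64, 64, 64), dtype=np.uint8)  # placeholder binary segmentation
    binary[10:20, 10:20, 10:20] = 1                  # one object
    binary[40:50, 40:50, 40:50] = 1                  # a second, disjoint object
    instances = label(binary)                        # each connected blob gets a unique id
    print(instances.max())                           # 2 objects found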
This plugin has been developed by Cyril Achard and Maxime Vidal, supervised by Mackenzie Mathis for the `Mathis Laboratory of Adaptive Motor Control`_.
+If you find our code or ideas useful, please cite:
source/guides/inference_module_guide.html (22 additions, 9 deletions)
@@ -495,14 +495,14 @@ <h2>Interface and functionalities<a class="headerlink" href="#interface-and-func
</figcaption>
</figure>
<ul>
-<li><p><strong>Loading data</strong>:</p>
+<li><p><strong>Loading data</strong>:</p>
<div class="line-block">
<div class="line">When launching the module, select either an <strong>image layer</strong> or an <strong>image folder</strong> containing the 3D volumes you wish to label.</div>
<div class="line">When loading from a folder: all images with the chosen extension (currently <strong>.tif</strong>) will be labeled.</div>
<div class="line">Specify an <strong>output folder</strong>, where the labeled results will be saved.</div>
</div>
</li>
-<li><p><strong>Model selection</strong>:</p>
+<li><p><strong>Model selection</strong>:</p>
<div class="line-block">
<div class="line">You can then choose from the listed <strong>models</strong> for inference.</div>
<div class="line">You may also <strong>load custom weights</strong> rather than the pre-trained ones. Make sure these weights are <strong>compatible</strong> (e.g. produced from the training module for the same model).</div>
@@ -516,7 +516,7 @@ <h2>Interface and functionalities<a class="headerlink" href="#interface-and-func
Provided weights use a size of 64; please leave this at the default value if you're not using custom weights.</p>
</div>
<ul class="simple">
-<li><p><strong>Inference parameters</strong>:</p>
+<li><p><strong>Inference parameters</strong>:</p>
<ul>
<li><p><strong>Window inference</strong>: You can choose to run inference on the entire image at once (disabled) or divide it into smaller chunks (enabled), based on your memory constraints.</p></li>
<li><p><strong>Window overlap</strong>: Define the overlap between windows to reduce border effects;
@@ -525,13 +525,13 @@ <h2>Interface and functionalities<a class="headerlink" href="#interface-and-func
<li><p><strong>Device Selection</strong>: You can choose to run inference on either CPU or GPU. A GPU is recommended for faster inference.</p></li>
</ul>
</li>
-<li><p><strong>Anisotropy</strong>:</p></li>
+<li><p><strong>Anisotropy</strong>:</p></li>
</ul>
<blockquote>
<div><p>For <strong>anisotropic images</strong> you may set the <strong>resolution of your volume in microns</strong>, to view and save the results without anisotropy.</p>
</div></blockquote>
<ul>
-<li><p><strong>Thresholding</strong>:</p>
+<li><p><strong>Thresholding</strong>:</p>
<p>You can perform thresholding to <strong>binarize your labels</strong>.
All values below the <strong>confidence threshold</strong> will be set to 0.</p>
</li>
@@ -542,15 +542,15 @@ <h2>Interface and functionalities<a class="headerlink" href="#interface-and-func
and run inference later with your chosen threshold.</p>
<div class="line">You can convert the semantic segmentation into instance labels by using either the <a class="reference external" href="https://haesleinhuepf.github.io/BioImageAnalysisNotebooks/20_image_segmentation/11_voronoi_otsu_labeling.html">Voronoi-Otsu</a>, <a class="reference external" href="https://scikit-image.org/docs/dev/auto_examples/segmentation/plot_watershed.html">Watershed</a> or <a class="reference external" href="https://scikit-image.org/docs/dev/api/skimage.measure.html#skimage.measure.label">Connected Components</a> method, as detailed in <a class="reference internal" href="utils_module_guide.html#utils-module-guide"><span class="std std-ref">Utilities 🛠</span></a>.</div>
<div class="line">Instance labels will be saved (and shown if applicable) separately from other results.</div>
<p>You can choose to compute various stats from the labels and save them to a <strong>`.csv`</strong> file for later use.
555
555
Statistics include individual object details and general metrics.
556
556
For each object :</p>
@@ -559,7 +559,7 @@ <h2>Interface and functionalities<a class="headerlink" href="#interface-and-func
<li><p><span class="math notranslate nohighlight">\(X,Y,Z\)</span> coordinates of the centroid</p></li>
<li><p>Sphericity</p></li>
</ul>
-<p>Global metrics:</p>
+<p>Global metrics:</p>
<ul class="simple">
<li><p>Image size</p></li>
<li><p>Total image volume (pixels)</p></li>
@@ -568,7 +568,7 @@ <h2>Interface and functionalities<a class="headerlink" href="#interface-and-func
<li><p>The number of labeled objects</p></li>
</ul>
</li>
-<li><p><strong>Display options</strong>:</p>
+<li><p><strong>Display options</strong>:</p>
<p>When running inference on a folder, you can choose to display the results in napari.
If selected, you may choose the display quantity, and whether to display the original image alongside the results.</p>
</li>
@@ -603,6 +603,19 @@ <h2>Unsupervised model - WNet3D<a class="headerlink" href="#unsupervised-model-w
<div class="line">The <cite>WNet3D model</cite> is a fully self-supervised model used to segment images without any labels.</div>
<div class="line">It functions similarly to the above models, with a few notable differences.</div>
</div>
+<p>WNet3D has been tested on:</p>
+<ul class="simple">
+<li><p><strong>MesoSPIM</strong> data (whole-brain samples of mice imaged by mesoSPIM microscopy) with nuclei staining.</p></li>
+<li><dl class="simple">
+<dt>Other microscopy (i.e., confocal) data with:</dt><dd><ul>
+<li><p><strong>Sufficient contrast</strong> between objects and background.</p></li>
+<li><p><strong>Low to medium crowding</strong> of objects. If all objects are adjacent to each other, instance segmentation methods provided here may not work well.</p></li>
+</ul>
+</dd>
+</dl>
+</li>
+</ul>
+<p>Noise and object size are less critical, though objects still have to fit within the field of view of the model.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>Our provided, pre-trained model uses an input size of 64x64x64. As such, window inference is always enabled