
Commit 3966e2e

deploy: a088c16
1 parent b87dae1 commit 3966e2e

10 files changed: +136 / -59 lines changed

_sources/source/code/_autosummary/napari_cellseg3d.code_plugins.plugin_convert.rst

Lines changed: 1 addition & 0 deletions
@@ -22,6 +22,7 @@ napari\_cellseg3d.code\_plugins.plugin\_convert
 FragmentUtils
 RemoveSmallUtils
 StatsUtils
+ThresholdGridSearchUtils
 ThresholdUtils
 ToInstanceUtils
 ToSemanticUtils

_sources/source/guides/inference_module_guide.rst

Lines changed: 19 additions & 10 deletions
@@ -50,13 +50,13 @@ Interface and functionalities

 Inference parameters

-* **Loading data** :
+* **Loading data**:

 | When launching the module, select either an **image layer** or an **image folder** containing the 3D volumes you wish to label.
 | When loading from folder : All images with the chosen extension ( currently **.tif**) will be labeled.
 | Specify an **output folder**, where the labelled results will be saved.

-* **Model selection** :
+* **Model selection**:

 | You can then choose from the listed **models** for inference.
 | You may also **load custom weights** rather than the pre-trained ones. Make sure these weights are **compatible** (e.g. produced from the training module for the same model).
@@ -66,19 +66,19 @@ Interface and functionalities
 Currently the SegResNet and SwinUNetR models require you to provide the size of the images the model was trained with.
 Provided weights use a size of 64, please leave it on the default value if you're not using custom weights.

-* **Inference parameters** :
+* **Inference parameters**:

 * **Window inference**: You can choose to use inference on the entire image at once (disabled) or divide the image (enabled) on smaller chunks, based on your memory constraints.
 * **Window overlap**: Define the overlap between windows to reduce border effects;
 recommended values are 0.1-0.3 for 3D inference.
 * **Keep on CPU**: You can choose to keep the dataset in RAM rather than VRAM to avoid running out of VRAM if you have several images.
 * **Device Selection**: You can choose to run inference on either CPU or GPU. A GPU is recommended for faster inference.

-* **Anisotropy** :
+* **Anisotropy**:

 For **anisotropic images** you may set the **resolution of your volume in micron**, to view and save the results without anisotropy.

-* **Thresholding** :
+* **Thresholding**:

 You can perform thresholding to **binarize your labels**.
 All values below the **confidence threshold** will be set to 0.
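The **Window inference**, **Window overlap**, **Keep on CPU** and **Device Selection** options in this hunk map onto MONAI-style sliding-window inference, which the plugin builds on. A minimal sketch of that mechanism; the model choice, window size and overlap values below are illustrative assumptions, not the plugin's actual code or defaults::

    # Sketch of chunked ("window") inference with overlap using MONAI.
    # Model, window size and overlap are illustrative, not plugin defaults.
    import torch
    from monai.inferers import sliding_window_inference
    from monai.networks.nets import SegResNet

    device = "cuda" if torch.cuda.is_available() else "cpu"  # "Device Selection"
    model = SegResNet(spatial_dims=3, in_channels=1, out_channels=1).to(device).eval()

    volume = torch.rand(1, 1, 128, 128, 128)  # placeholder 3D image, shape (N, C, D, H, W)

    with torch.no_grad():
        prediction = sliding_window_inference(
            inputs=volume.to(device),
            roi_size=(64, 64, 64),  # window size; provided weights expect 64
            sw_batch_size=1,
            predictor=model,
            overlap=0.2,            # "Window overlap"; 0.1-0.3 recommended for 3D
            sw_device=device,
            device="cpu",           # stitch the output on CPU ("Keep on CPU")
        )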
@@ -87,7 +87,7 @@ Interface and functionalities
 It is recommended to first run without thresholding. You can then use the napari contrast limits to find a good threshold value,
 and run inference later with your chosen threshold.

-* **Instance segmentation** :
+* **Instance segmentation**:

 | You can convert the semantic segmentation into instance labels by using either the `Voronoi-Otsu`_, `Watershed`_ or `Connected Components`_ method, as detailed in :ref:`utils_module_guide`.
 | Instance labels will be saved (and shown if applicable) separately from other results.
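As a rough illustration of the instance segmentation step mentioned in this hunk, the sketch below shows the generic scikit-image versions of Connected Components and Watershed labeling on a placeholder mask. It is not the plugin's own implementation, which also offers Voronoi-Otsu labeling via pyclEsperanto::

    # Generic sketch: turning a binary (semantic) mask into instance labels
    # with scikit-image. Placeholder data; not the plugin's own code.
    import numpy as np
    from scipy import ndimage as ndi
    from skimage.measure import label
    from skimage.segmentation import watershed
    from skimage.feature import peak_local_max

    semantic = np.random.rand(64, 64, 64) > 0.95  # placeholder binary mask

    # Connected Components: each connected blob receives its own integer ID.
    instances_cc = label(semantic)

    # Watershed: split touching blobs using the distance transform as elevation.
    distance = ndi.distance_transform_edt(semantic)
    peak_coords = peak_local_max(distance, min_distance=3)
    markers = np.zeros(distance.shape, dtype=int)
    markers[tuple(peak_coords.T)] = np.arange(1, len(peak_coords) + 1)
    instances_ws = watershed(-distance, markers, mask=semantic)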
@@ -98,7 +98,7 @@ Interface and functionalities
 .. _Voronoi-Otsu: https://haesleinhuepf.github.io/BioImageAnalysisNotebooks/20_image_segmentation/11_voronoi_otsu_labeling.html


-* **Computing objects statistics** :
+* **Computing objects statistics**:

 You can choose to compute various stats from the labels and save them to a **`.csv`** file for later use.
 Statistics include individual object details and general metrics.
@@ -109,7 +109,7 @@ Interface and functionalities
 * Sphericity


-Global metrics :
+Global metrics:

 * Image size
 * Total image volume (pixels)
@@ -118,7 +118,7 @@ Interface and functionalities
 * The number of labeled objects


-* **Display options** :
+* **Display options**:

 When running inference on a folder, you can choose to display the results in napari.
 If selected, you may choose the display quantity, and whether to display the original image alongside the results.
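The per-object statistics and ``.csv`` export described in the hunks above can be approximated generically with scikit-image and pandas. This is a hedged sketch on placeholder labels; the plugin's actual column names and metrics (for example sphericity) may differ::

    # Generic sketch of per-object statistics saved to CSV; the plugin's exact
    # metrics and column names may differ.
    import numpy as np
    import pandas as pd
    from skimage.measure import label, regionprops_table

    instances = label(np.random.rand(64, 64, 64) > 0.95)  # placeholder instance labels

    stats = pd.DataFrame(
        regionprops_table(instances, properties=("label", "area", "centroid"))
    )
    # "area" is the object volume in voxels for 3D labels; "centroid-0/1/2" are Z, Y, X.
    stats["total_objects"] = instances.max()  # a global metric
    stats.to_csv("object_stats.csv", index=False)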
@@ -151,7 +151,16 @@ Unsupervised model - WNet3D
 | The `WNet3D model` is a fully self-supervised model used to segment images without any labels.
 | It functions similarly to the above models, with a few notable differences.

-.. _WNet3D model: https://arxiv.org/abs/1711.08506
+WNet3D has been tested on:
+
+* **MesoSPIM** data (whole-brain samples of mice imaged by mesoSPIM microscopy) with nuclei staining.
+* Other microscopy (i.e., confocal) data with:
+  * **Sufficient contrast** between objects and background.
+  * **Low to medium crowding** of objects. If all objects are adjacent to each other, instance segmentation methods provided here may not work well.
+
+Noise and object size are less critical, though objects still have to fit within the field of view of the model.
+
+.. _WNet3D model: https://elifesciences.org/reviewed-preprints/99848

 .. note::
     Our provided, pre-trained model uses an input size of 64x64x64. As such, window inference is always enabled

_sources/source/guides/utils_module_guide.rst

Lines changed: 27 additions & 0 deletions
@@ -11,6 +11,21 @@ See `Usage section <https://adaptivemotorcontrollab.github.io/CellSeg3d/welcome.

 You may specify the results directory for saving; afterwards you can run each action on a folder or on the currently selected layer.

+Default Paths for Saving Results
+________________________________
+
+Each utility saves results to a default directory under the user's home directory. The default paths are as follows:
+
+* Artifact Removal: ``~/cellseg3d/artifact_removed``
+* Fragmentation: ``~/cellseg3d/fragmented``
+* Anisotropy Correction: ``~/cellseg3d/anisotropy``
+* Small Object Removal: ``~/cellseg3d/small_removed``
+* Semantic Label Conversion: ``~/cellseg3d/semantic_labels``
+* Instance Label Conversion: ``~/cellseg3d/instance_labels``
+* Thresholding: ``~/cellseg3d/threshold``
+* Statistics: ``~/cellseg3d/stats``
+* Threshold Grid Search: ``~/cellseg3d/threshold_grid_search``
+
 Available actions
 __________________

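All of the defaults added in this hunk live under ``~/cellseg3d`` in the user's home directory. A minimal sketch of locating one of these output folders from a script; the ``.tif`` glob pattern is an illustrative assumption::

    # Sketch: resolving a utility's default output folder from a script.
    from pathlib import Path

    threshold_dir = Path.home() / "cellseg3d" / "threshold"  # matches the list above
    results = sorted(threshold_dir.glob("*.tif"))             # inspect saved results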
@@ -89,6 +104,18 @@ Global metrics :
 | Clears labels that are larger than a given threshold.
 | This is useful for removing artifacts that are larger than the objects of interest.

+11. Find the best threshold
+-----------------------
+| Finds the best threshold for separating objects from the background.
+| Requires a prediction from a model and GT labels as input.
+
+.. caution::
+    If the input prediction is not from the plugin, it will be remapped to the 0-1 range.
+
+| The threshold is found by maximizing the Dice coefficient between the thresholded prediction and the binarized GT labels.
+
+| The value for the best threshold will be displayed, and the prediction will be thresholded and saved with this value.
+
 Source code
 ___________

_sources/welcome.rst

Lines changed: 11 additions & 15 deletions
@@ -9,17 +9,14 @@ Welcome to CellSeg3D!
 Use CellSeg3D to:

 * Review labeled cell volumes from whole-brain samples of mice imaged by mesoSPIM microscopy [1]_
-* Train and use segmentation models from the MONAI project [2]_ or implement your own custom 3D segmentation models using PyTorch.
+* Train and use segmentation models from the MONAI project [2]_
+* Train and use our WNet3D unsupervised model
+* Or implement your own custom 3D segmentation models using PyTorch!

-No labeled data? Try our unsupervised model, based on the `WNet`_ model, to automate your data labelling.
-
-The models provided should be adaptable to other tasks related to detection of 3D objects,
-outside of whole-brain light-sheet microscopy.
-This applies to the unsupervised model as well, feel free to try to generate labels for your own data!

 .. figure:: https://images.squarespace-cdn.com/content/v1/57f6d51c9f74566f55ecf271/0d16a71b-3ff2-477a-9d83-18d96cb1ce28/full_demo.gif?format=500w
     :alt: CellSeg3D demo
-    :width: 500
+    :width: 800
     :align: center

     Demo of the plugin
@@ -145,14 +142,14 @@ Other useful napari plugins

 Acknowledgments & References
 ---------------------------------------------
-This plugin has been developed by Cyril Achard and Maxime Vidal, supervised by Mackenzie Mathis for the `Mathis Laboratory of Adaptive Motor Control`_.
+If you find our code or ideas useful, please cite:
+
+Achard Cyril, Kousi Timokleia, Frey Markus, Vidal Maxime, Paychère Yves, Hofmann Colin, Iqbal Asim, Hausmann Sebastien B, Pagès Stéphane, Mathis Mackenzie Weygandt (2024)
+CellSeg3D: self-supervised 3D cell segmentation for microscopy eLife https://doi.org/10.7554/eLife.99848.1

-We also greatly thank Timokleia Kousi for her contributions to this project and the `Wyss Center`_ for project funding.

-The TRAILMAP models and original weights used here were ported to PyTorch but originate from the `TRAILMAP project on GitHub`_.
-We also provide a model that was trained in-house on mesoSPIM nuclei data in collaboration with Dr. Stephane Pages and Timokleia Kousi.

-This plugin mainly uses the following libraries and software:
+This plugin additionally uses the following libraries and software:

 * `napari`_

@@ -162,9 +159,9 @@ This plugin mainly uses the following libraries and software:

 * `pyclEsperanto`_ (for the Voronoi Otsu labeling) by Robert Haase

-* A new unsupervised 3D model based on the `WNet`_ by Xia and Kulis [3]_

-.. _Mathis Laboratory of Adaptive Motor Control: http://www.mackenziemathislab.org/
+
+.. _Mathis Laboratory of Adaptive Intelligence: http://www.mackenziemathislab.org/
 .. _Wyss Center: https://wysscenter.ch/
 .. _TRAILMAP project on GitHub: https://github.com/AlbertPun/TRAILMAP
 .. _napari: https://napari.org/
@@ -178,4 +175,3 @@ This plugin mainly uses the following libraries and software:

 .. [1] The mesoSPIM initiative: open-source light-sheet microscopes for imaging cleared tissue, Voigt et al., 2019 ( https://doi.org/10.1038/s41592-019-0554-0 )
 .. [2] MONAI Project website ( https://monai.io/ )
-.. [3] W-Net: A Deep Model for Fully Unsupervised Image Segmentation, Xia and Kulis, 2018 ( https://arxiv.org/abs/1711.08506 )

objects.inv

78 Bytes
Binary file not shown.

searchindex.js

Lines changed: 1 addition & 1 deletion
Some generated files are not rendered by default.

source/code/_autosummary/napari_cellseg3d.code_plugins.plugin_convert.html

Lines changed: 6 additions & 3 deletions
@@ -445,13 +445,16 @@ <h1>napari_cellseg3d.code_plugins.plugin_convert</h1>
 <tr class="row-odd"><td><p><code class="xref py py-obj docutils literal notranslate"><span class="pre">StatsUtils</span></code>(viewer[, parent])</p></td>
 <td><p>Widget to save statistics of a labels layer.</p></td>
 </tr>
-<tr class="row-even"><td><p><code class="xref py py-obj docutils literal notranslate"><span class="pre">ThresholdUtils</span></code>(viewer[, parent])</p></td>
+<tr class="row-even"><td><p><code class="xref py py-obj docutils literal notranslate"><span class="pre">ThresholdGridSearchUtils</span></code>(viewer[, parent])</p></td>
+<td><p>Widget to run a grid search for thresholding.</p></td>
+</tr>
+<tr class="row-odd"><td><p><code class="xref py py-obj docutils literal notranslate"><span class="pre">ThresholdUtils</span></code>(viewer[, parent])</p></td>
 <td><p>Creates a ThresholdUtils widget.</p></td>
 </tr>
-<tr class="row-odd"><td><p><code class="xref py py-obj docutils literal notranslate"><span class="pre">ToInstanceUtils</span></code>(viewer[, parent])</p></td>
+<tr class="row-even"><td><p><code class="xref py py-obj docutils literal notranslate"><span class="pre">ToInstanceUtils</span></code>(viewer[, parent])</p></td>
 <td><p>Widget to convert semantic labels to instance labels.</p></td>
 </tr>
-<tr class="row-even"><td><p><code class="xref py py-obj docutils literal notranslate"><span class="pre">ToSemanticUtils</span></code>(viewer[, parent])</p></td>
+<tr class="row-odd"><td><p><code class="xref py py-obj docutils literal notranslate"><span class="pre">ToSemanticUtils</span></code>(viewer[, parent])</p></td>
 <td><p>Widget to create semantic labels from instance labels.</p></td>
 </tr>
 </tbody>

source/guides/inference_module_guide.html

Lines changed: 22 additions & 9 deletions
@@ -495,14 +495,14 @@ <h2>Interface and functionalities<a class="headerlink" href="#interface-and-func
 </figcaption>
 </figure>
 <ul>
-<li><p><strong>Loading data</strong> :</p>
+<li><p><strong>Loading data</strong>:</p>
 <div class="line-block">
 <div class="line">When launching the module, select either an <strong>image layer</strong> or an <strong>image folder</strong> containing the 3D volumes you wish to label.</div>
 <div class="line">When loading from folder : All images with the chosen extension ( currently <strong>.tif</strong>) will be labeled.</div>
 <div class="line">Specify an <strong>output folder</strong>, where the labelled results will be saved.</div>
 </div>
 </li>
-<li><p><strong>Model selection</strong> :</p>
+<li><p><strong>Model selection</strong>:</p>
 <div class="line-block">
 <div class="line">You can then choose from the listed <strong>models</strong> for inference.</div>
 <div class="line">You may also <strong>load custom weights</strong> rather than the pre-trained ones. Make sure these weights are <strong>compatible</strong> (e.g. produced from the training module for the same model).</div>
@@ -516,7 +516,7 @@ <h2>Interface and functionalities<a class="headerlink" href="#interface-and-func
 Provided weights use a size of 64, please leave it on the default value if you’re not using custom weights.</p>
 </div>
 <ul class="simple">
-<li><p><strong>Inference parameters</strong> :</p>
+<li><p><strong>Inference parameters</strong>:</p>
 <ul>
 <li><p><strong>Window inference</strong>: You can choose to use inference on the entire image at once (disabled) or divide the image (enabled) on smaller chunks, based on your memory constraints.</p></li>
 <li><p><strong>Window overlap</strong>: Define the overlap between windows to reduce border effects;
@@ -525,13 +525,13 @@ <h2>Interface and functionalities<a class="headerlink" href="#interface-and-func
 <li><p><strong>Device Selection</strong>: You can choose to run inference on either CPU or GPU. A GPU is recommended for faster inference.</p></li>
 </ul>
 </li>
-<li><p><strong>Anisotropy</strong> :</p></li>
+<li><p><strong>Anisotropy</strong>:</p></li>
 </ul>
 <blockquote>
 <div><p>For <strong>anisotropic images</strong> you may set the <strong>resolution of your volume in micron</strong>, to view and save the results without anisotropy.</p>
 </div></blockquote>
 <ul>
-<li><p><strong>Thresholding</strong> :</p>
+<li><p><strong>Thresholding</strong>:</p>
 <p>You can perform thresholding to <strong>binarize your labels</strong>.
 All values below the <strong>confidence threshold</strong> will be set to 0.</p>
 </li>
@@ -542,15 +542,15 @@ <h2>Interface and functionalities<a class="headerlink" href="#interface-and-func
 and run inference later with your chosen threshold.</p>
 </div>
 <ul>
-<li><p><strong>Instance segmentation</strong> :</p>
+<li><p><strong>Instance segmentation</strong>:</p>
 <div class="line-block">
 <div class="line">You can convert the semantic segmentation into instance labels by using either the <a class="reference external" href="https://haesleinhuepf.github.io/BioImageAnalysisNotebooks/20_image_segmentation/11_voronoi_otsu_labeling.html">Voronoi-Otsu</a>, <a class="reference external" href="https://scikit-image.org/docs/dev/auto_examples/segmentation/plot_watershed.html">Watershed</a> or <a class="reference external" href="https://scikit-image.org/docs/dev/api/skimage.measure.html#skimage.measure.label">Connected Components</a> method, as detailed in <a class="reference internal" href="utils_module_guide.html#utils-module-guide"><span class="std std-ref">Utilities 🛠</span></a>.</div>
 <div class="line">Instance labels will be saved (and shown if applicable) separately from other results.</div>
 </div>
 </li>
 </ul>
 <ul>
-<li><p><strong>Computing objects statistics</strong> :</p>
+<li><p><strong>Computing objects statistics</strong>:</p>
 <p>You can choose to compute various stats from the labels and save them to a <strong>`.csv`</strong> file for later use.
 Statistics include individual object details and general metrics.
 For each object :</p>
@@ -559,7 +559,7 @@ <h2>Interface and functionalities<a class="headerlink" href="#interface-and-func
 <li><p><span class="math notranslate nohighlight">\(X,Y,Z\)</span> coordinates of the centroid</p></li>
 <li><p>Sphericity</p></li>
 </ul>
-<p>Global metrics :</p>
+<p>Global metrics:</p>
 <ul class="simple">
 <li><p>Image size</p></li>
 <li><p>Total image volume (pixels)</p></li>
@@ -568,7 +568,7 @@ <h2>Interface and functionalities<a class="headerlink" href="#interface-and-func
 <li><p>The number of labeled objects</p></li>
 </ul>
 </li>
-<li><p><strong>Display options</strong> :</p>
+<li><p><strong>Display options</strong>:</p>
 <p>When running inference on a folder, you can choose to display the results in napari.
 If selected, you may choose the display quantity, and whether to display the original image alongside the results.</p>
 </li>
@@ -603,6 +603,19 @@ <h2>Unsupervised model - WNet3D<a class="headerlink" href="#unsupervised-model-w
 <div class="line">The <cite>WNet3D model</cite> is a fully self-supervised model used to segment images without any labels.</div>
 <div class="line">It functions similarly to the above models, with a few notable differences.</div>
 </div>
+<p>WNet3D has been tested on:</p>
+<ul class="simple">
+<li><p><strong>MesoSPIM</strong> data (whole-brain samples of mice imaged by mesoSPIM microscopy) with nuclei staining.</p></li>
+<li><dl class="simple">
+<dt>Other microscopy (i.e., confocal) data with:</dt><dd><ul>
+<li><p><strong>Sufficient contrast</strong> between objects and background.</p></li>
+<li><p><strong>Low to medium crowding</strong> of objects. If all objects are adjacent to each other, instance segmentation methods provided here may not work well.</p></li>
+</ul>
+</dd>
+</dl>
+</li>
+</ul>
+<p>Noise and object size are less critical, though objects still have to fit within the field of view of the model.</p>
 <div class="admonition note">
 <p class="admonition-title">Note</p>
 <p>Our provided, pre-trained model uses an input size of 64x64x64. As such, window inference is always enabled
