
Commit 2acbf7f

deploy: c98d734
1 parent 9d3cb4f commit 2acbf7f


6 files changed: +101 -38 lines changed


_sources/source/guides/cropping_module_guide.rst

Lines changed: 1 addition & 1 deletion
@@ -59,7 +59,7 @@ Once you have launched the review process, you will gain control over three slid
 you to **adjust the position** of the cropped volumes and labels in the x,y and z positions.
 
 .. note::
-    * If your **cropped volume isnt visible**, consider changing the **colormap** of the image and the cropped
+    * If your **cropped volume isn't visible**, consider changing the **colormap** of the image and the cropped
       volume to improve their visibility.
     * You may want to adjust the **opacity** and **contrast thresholds** depending on your image.
     * If the image appears empty:
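The note in this hunk suggests changing the colormap, opacity and contrast thresholds when the cropped volume is hard to see. As a minimal, illustrative sketch only (not part of the committed docs), assuming a napari session is already open and the cropped layer is named "cropped_volume" (a hypothetical name), the same properties can be set from the napari console:

import napari

viewer = napari.current_viewer()          # assumes napari is already running
layer = viewer.layers["cropped_volume"]   # hypothetical layer name
layer.colormap = "inferno"                # switch to a higher-contrast colormap
layer.opacity = 0.7                       # make overlapping layers visible
layer.contrast_limits = (0, 500)          # adjust to your image's intensity range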

_sources/source/guides/training_wnet.rst

Lines changed: 51 additions & 17 deletions
@@ -4,22 +4,50 @@ Walkthrough - WNet3D training
 ===============================
 
 This plugin provides a reimplemented, custom version of the WNet3D model from `WNet, A Deep Model for Fully Unsupervised Image Segmentation`_.
+
 For training your model, you can choose among:
 
 * Directly within the plugin
 * The provided Jupyter notebook (locally)
-* Our Colab notebook (inspired by ZeroCostDL4Mic)
+* Our Colab notebook (inspired by https://github.com/HenriquesLab/ZeroCostDL4Mic)
+
+Selecting training data
+-------------------------
+
+The WNet3D **does not require a large amount of data to train**, but **choosing the right data** to train this unsupervised model **is crucial**.
+
+You may find below some guidelines, based on our own data and testing.
+
+The WNet3D is designed to segment objects based on their brightness, and is particularly well-suited for images with a clear contrast between objects and background.
+
+The WNet3D is not suitable for images with artifacts; therefore, care should be taken that the images are clean and that the objects are at least somewhat distinguishable from the background.
+
+
+.. important::
+    For optimal performance, the following should be avoided for training:
+
+    - Images with very large, bright regions
+    - Almost-empty and empty images
+    - Images with large empty regions or "holes"
 
-The WNet3D does not require a large amount of data to train, but during inference images should be similar to those
-the model was trained on; you can retrain from our pretrained model to your image dataset to quickly reach good performance.
+    However, the model may accommodate:
+
+    - Uneven brightness distribution
+    - Varied object shapes and radii
+    - Noisy images
+    - Uneven illumination across the image
+
+For optimal results, during inference, images should be similar to those the model was trained on; however, this is not a strict requirement.
+
+You may also retrain from our pretrained model on your image dataset to quickly reach good performance; simply check "Use pre-trained weights" in the training module, and lower the learning rate.
 
 .. note::
-    - The WNet3D relies on brightness to distinguish objects from the background. For better results, use image regions with minimal artifacts. If you notice many artifacts, consider training on one of the supervised models.
-    - The model has two losses, the **`SoftNCut loss`**, which clusters pixels according to brightness, and a reconstruction loss, either **`Mean Square Error (MSE)`** or **`Binary Cross Entropy (BCE)`**. Unlike the method described in the original paper, these losses are added in a weighted sum and the backward pass is performed for the whole model at once. The SoftNcuts and BCE are bounded between 0 and 1; the MSE may take large positive values. It is recommended to watch for the weighted sum of losses to be **close to one on the first epoch**, for training stability.
-    - For good performance, you should wait for the SoftNCut to reach a plateau; the reconstruction loss must also decrease but is generally less critical.
+    - The WNet3D relies on brightness to distinguish objects from the background. For better results, use image regions with minimal artifacts. If you notice many artifacts, consider trying one of our supervised models (for lightsheet microscopy).
+    - The model has two losses, the **`SoftNCut loss`**, which clusters pixels according to brightness, and a reconstruction loss, either **`Mean Square Error (MSE)`** or **`Binary Cross Entropy (BCE)`**.
+    - For good performance, wait for the SoftNCut to reach a plateau; the reconstruction loss should also be decreasing overall, but this is generally less critical for segmentation performance.
 
 Parameters
-----------
+-------------
 
 .. figure:: ../images/training_tab_4.png
     :scale: 100 %
@@ -29,7 +57,7 @@ Parameters
 
 _`When using the WNet3D training module`, the **Advanced** tab contains a set of additional options:
 
-- **Number of classes** : Dictates the segmentation classes (default is 2). Increasing the number of classes will result in a more progressive segmentation according to brightness; can be useful if you have "halos" around your objects or artifacts with a significantly different brightness.
+- **Number of classes** : Dictates the segmentation classes (default is 2). Increasing the number of classes will result in a more progressive segmentation according to brightness; can be useful if you have "halos" around your objects, or to approximate boundary labels.
 - **Reconstruction loss** : Choose between MSE or BCE (default is MSE). MSE is more precise but also sensitive to outliers; BCE is more robust against outliers at the cost of precision.
 
 - NCuts parameters:
@@ -43,22 +71,28 @@ _`When using the WNet3D training module`, the **Advanced** tab contains a set of
 
 - Weights for the sum of losses :
     - **NCuts weight** : Sets the weight of the NCuts loss (default is 0.5).
-    - **Reconstruction weight** : Sets the weight for the reconstruction loss (default is 0.5*1e-2).
+    - **Reconstruction weight** : Sets the weight for the reconstruction loss (default is 5*1e-3).
 
-.. note::
-    The weight of the reconstruction loss should be adjusted to ensure the weighted sum is around one during the first epoch;
-    ideally the reconstruction loss should be of the same order of magnitude as the NCuts loss after being multiplied by its weight.
+.. important::
+    The weight of the reconstruction loss should be adjusted to ensure that both losses are balanced.
+
+    This balance can be assessed using the live view of training outputs:
+    if the NCuts loss is "taking over", causing the segmentation to only label very large, brighter versus dimmer regions, the reconstruction loss should be increased.
+
+    This will help the model to focus on the details of the objects, rather than just the overall brightness of the volume.
 
 Common issues troubleshooting
 ------------------------------
-If you do not find a satisfactory answer here, please do not hesitate to `open an issue`_ on GitHub.
 
-- **The NCuts loss explodes after a few epochs** : Lower the learning rate, first by a factor of two, then ten.
+.. important::
+    If you do not find a satisfactory answer here, please do not hesitate to `open an issue`_ on GitHub.
+
+
+- **The NCuts loss "explodes" after a few epochs** : Lower the learning rate, for example first by a factor of two, then by ten.
 
-- **The NCuts loss does not converge and is unstable** :
-  The normalization step might not be adapted to your images. Disable normalization and change intensity_sigma according to the distribution of values in your image. For reference, by default images are remapped to values between 0 and 100, and intensity_sigma=1.
+- **Reconstruction (decoder) performance is poor** : First, try increasing the weight of the reconstruction loss. If this is ineffective, switch to BCE loss and set the scaling factor of the reconstruction loss to 0.5, OR adjust the weight of the MSE loss.
 
-- **Reconstruction (decoder) performance is poor** : switch to BCE and set the scaling factor of the reconstruction loss to 0.5, OR adjust the weight of the MSE loss to make it closer to 1 in the weighted sum.
+- **Segmentation only separates the brighter versus dimmer regions** : Increase the weight of the reconstruction loss.
 
 
 .. _WNet, A Deep Model for Fully Unsupervised Image Segmentation: https://arxiv.org/abs/1711.08506
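The guide changes above describe WNet3D's two losses (SoftNCuts and a reconstruction loss) being combined with weights of 0.5 and 5*1e-3 by default. The following is an illustrative, PyTorch-style sketch of such a weighted sum, not the plugin's actual implementation; the function name and arguments are hypothetical, and only the default weights are taken from the guide.

import torch

def combined_wnet_loss(
    softncuts_loss: torch.Tensor,
    reconstruction_loss: torch.Tensor,
    ncuts_weight: float = 0.5,
    reconstruction_weight: float = 5e-3,
) -> torch.Tensor:
    # Weighted sum of the two losses; a single backward pass can then be
    # taken on the returned scalar.
    return ncuts_weight * softncuts_loss + reconstruction_weight * reconstruction_loss

# If the segmentation only separates bright from dim regions (the NCuts loss
# "taking over"), the guide suggests increasing the reconstruction weight, e.g.:
# loss = combined_wnet_loss(ncuts, rec, reconstruction_weight=5e-2)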

objects.inv

26 Bytes
Binary file not shown.

searchindex.js

Lines changed: 1 addition & 1 deletion
Some generated files are not rendered by default.

source/guides/cropping_module_guide.html

Lines changed: 1 addition & 1 deletion
@@ -502,7 +502,7 @@ <h2>Interface &amp; functionalities<a class="headerlink" href="#interface-functi
 <div class="admonition note">
 <p class="admonition-title">Note</p>
 <ul class="simple">
-<li><p>If your <strong>cropped volume isnt visible</strong>, consider changing the <strong>colormap</strong> of the image and the cropped
+<li><p>If your <strong>cropped volume isn’t visible</strong>, consider changing the <strong>colormap</strong> of the image and the cropped
 volume to improve their visibility.</p></li>
 <li><p>You may want to adjust the <strong>opacity</strong> and <strong>contrast thresholds</strong> depending on your image.</p></li>
 <li><dl class="simple">

source/guides/training_wnet.html

Lines changed: 47 additions & 18 deletions
@@ -422,6 +422,7 @@ <h2> Contents </h2>
 </div>
 <nav aria-label="Page">
 <ul class="visible nav section-nav flex-column">
+<li class="toc-h2 nav-item toc-entry"><a class="reference internal nav-link" href="#selecting-training-data">Selecting training data</a></li>
 <li class="toc-h2 nav-item toc-entry"><a class="reference internal nav-link" href="#parameters">Parameters</a></li>
 <li class="toc-h2 nav-item toc-entry"><a class="reference internal nav-link" href="#common-issues-troubleshooting">Common issues troubleshooting</a></li>
 </ul>
@@ -437,23 +438,46 @@ <h2> Contents </h2>
 
 <section id="walkthrough-wnet3d-training">
 <span id="training-wnet"></span><h1>Walkthrough - WNet3D training<a class="headerlink" href="#walkthrough-wnet3d-training" title="Permalink to this heading">#</a></h1>
-<p>This plugin provides a reimplemented, custom version of the WNet3D model from <a class="reference external" href="https://arxiv.org/abs/1711.08506">WNet, A Deep Model for Fully Unsupervised Image Segmentation</a>.
-For training your model, you can choose among:</p>
+<p>This plugin provides a reimplemented, custom version of the WNet3D model from <a class="reference external" href="https://arxiv.org/abs/1711.08506">WNet, A Deep Model for Fully Unsupervised Image Segmentation</a>.</p>
+<p>For training your model, you can choose among:</p>
 <ul class="simple">
 <li><p>Directly within the plugin</p></li>
 <li><p>The provided Jupyter notebook (locally)</p></li>
-<li><p>Our Colab notebook (inspired by ZeroCostDL4Mic)</p></li>
+<li><p>Our Colab notebook (inspired by <a class="github reference external" href="https://github.com/HenriquesLab/ZeroCostDL4Mic">HenriquesLab/ZeroCostDL4Mic</a>)</p></li>
 </ul>
-<p>The WNet3D does not require a large amount of data to train, but during inference images should be similar to those
-the model was trained on; you can retrain from our pretrained model to your image dataset to quickly reach good performance.</p>
+<section id="selecting-training-data">
+<h2>Selecting training data<a class="headerlink" href="#selecting-training-data" title="Permalink to this heading">#</a></h2>
+<p>The WNet3D <strong>does not require a large amount of data to train</strong>, but <strong>choosing the right data</strong> to train this unsupervised model <strong>is crucial</strong>.</p>
+<p>You may find below some guidelines, based on our own data and testing.</p>
+<p>The WNet3D is designed to segment objects based on their brightness, and is particularly well-suited for images with a clear contrast between objects and background.</p>
+<p>The WNet3D is not suitable for images with artifacts; therefore, care should be taken that the images are clean and that the objects are at least somewhat distinguishable from the background.</p>
+<div class="admonition important">
+<p class="admonition-title">Important</p>
+<p>For optimal performance, the following should be avoided for training:</p>
+<ul class="simple">
+<li><p>Images with very large, bright regions</p></li>
+<li><p>Almost-empty and empty images</p></li>
+<li><p>Images with large empty regions or “holes”</p></li>
+</ul>
+<p>However, the model may accommodate:</p>
+<ul class="simple">
+<li><p>Uneven brightness distribution</p></li>
+<li><p>Varied object shapes and radii</p></li>
+<li><p>Noisy images</p></li>
+<li><p>Uneven illumination across the image</p></li>
+</ul>
+</div>
+<p>For optimal results, during inference, images should be similar to those the model was trained on; however, this is not a strict requirement.</p>
+<p>You may also retrain from our pretrained model on your image dataset to quickly reach good performance; simply check “Use pre-trained weights” in the training module, and lower the learning rate.</p>
 <div class="admonition note">
 <p class="admonition-title">Note</p>
 <ul class="simple">
-<li><p>The WNet3D relies on brightness to distinguish objects from the background. For better results, use image regions with minimal artifacts. If you notice many artifacts, consider training on one of the supervised models.</p></li>
-<li><p>The model has two losses, the <strong>`SoftNCut loss`</strong>, which clusters pixels according to brightness, and a reconstruction loss, either <strong>`Mean Square Error (MSE)`</strong> or <strong>`Binary Cross Entropy (BCE)`</strong>. Unlike the method described in the original paper, these losses are added in a weighted sum and the backward pass is performed for the whole model at once. The SoftNcuts and BCE are bounded between 0 and 1; the MSE may take large positive values. It is recommended to watch for the weighted sum of losses to be <strong>close to one on the first epoch</strong>, for training stability.</p></li>
-<li><p>For good performance, you should wait for the SoftNCut to reach a plateau; the reconstruction loss must also decrease but is generally less critical.</p></li>
+<li><p>The WNet3D relies on brightness to distinguish objects from the background. For better results, use image regions with minimal artifacts. If you notice many artifacts, consider trying one of our supervised models (for lightsheet microscopy).</p></li>
+<li><p>The model has two losses, the <strong>`SoftNCut loss`</strong>, which clusters pixels according to brightness, and a reconstruction loss, either <strong>`Mean Square Error (MSE)`</strong> or <strong>`Binary Cross Entropy (BCE)`</strong>.</p></li>
+<li><p>For good performance, wait for the SoftNCut to reach a plateau; the reconstruction loss should also be decreasing overall, but this is generally less critical for segmentation performance.</p></li>
 </ul>
 </div>
+</section>
 <section id="parameters">
 <h2>Parameters<a class="headerlink" href="#parameters" title="Permalink to this heading">#</a></h2>
 <figure class="align-right" id="id1">
@@ -464,7 +488,7 @@ <h2>Parameters<a class="headerlink" href="#parameters" title="Permalink to this
 </figure>
 <p><span class="target" id="when-using-the-wnet3d-training-module">When using the WNet3D training module</span>, the <strong>Advanced</strong> tab contains a set of additional options:</p>
 <ul class="simple">
-<li><p><strong>Number of classes</strong> : Dictates the segmentation classes (default is 2). Increasing the number of classes will result in a more progressive segmentation according to brightness; can be useful if you have “halos” around your objects or artifacts with a significantly different brightness.</p></li>
+<li><p><strong>Number of classes</strong> : Dictates the segmentation classes (default is 2). Increasing the number of classes will result in a more progressive segmentation according to brightness; can be useful if you have “halos” around your objects, or to approximate boundary labels.</p></li>
 <li><p><strong>Reconstruction loss</strong> : Choose between MSE or BCE (default is MSE). MSE is more precise but also sensitive to outliers; BCE is more robust against outliers at the cost of precision.</p></li>
 <li><dl class="simple">
 <dt>NCuts parameters:</dt><dd><ul>
@@ -487,26 +511,30 @@ <h2>Parameters<a class="headerlink" href="#parameters" title="Permalink to this
 <li><dl class="simple">
 <dt>Weights for the sum of losses :</dt><dd><ul>
 <li><p><strong>NCuts weight</strong> : Sets the weight of the NCuts loss (default is 0.5).</p></li>
-<li><p><strong>Reconstruction weight</strong> : Sets the weight for the reconstruction loss (default is 0.5*1e-2).</p></li>
+<li><p><strong>Reconstruction weight</strong> : Sets the weight for the reconstruction loss (default is 5*1e-3).</p></li>
 </ul>
 </dd>
 </dl>
 </li>
 </ul>
-<div class="admonition note">
-<p class="admonition-title">Note</p>
-<p>The weight of the reconstruction loss should be adjusted to ensure the weighted sum is around one during the first epoch;
-ideally the reconstruction loss should be of the same order of magnitude as the NCuts loss after being multiplied by its weight.</p>
+<div class="admonition important">
+<p class="admonition-title">Important</p>
+<p>The weight of the reconstruction loss should be adjusted to ensure that both losses are balanced.</p>
+<p>This balance can be assessed using the live view of training outputs:
+if the NCuts loss is “taking over”, causing the segmentation to only label very large, brighter versus dimmer regions, the reconstruction loss should be increased.</p>
+<p>This will help the model to focus on the details of the objects, rather than just the overall brightness of the volume.</p>
 </div>
 </section>
 <section id="common-issues-troubleshooting">
 <h2>Common issues troubleshooting<a class="headerlink" href="#common-issues-troubleshooting" title="Permalink to this heading">#</a></h2>
+<div class="admonition important">
+<p class="admonition-title">Important</p>
 <p>If you do not find a satisfactory answer here, please do not hesitate to <a class="reference external" href="https://github.com/AdaptiveMotorControlLab/CellSeg3d/issues">open an issue</a> on GitHub.</p>
+</div>
 <ul class="simple">
-<li><p><strong>The NCuts loss explodes after a few epochs</strong> : Lower the learning rate, first by a factor of two, then ten.</p></li>
-<li><p><strong>The NCuts loss does not converge and is unstable</strong> :
-The normalization step might not be adapted to your images. Disable normalization and change intensity_sigma according to the distribution of values in your image. For reference, by default images are remapped to values between 0 and 100, and intensity_sigma=1.</p></li>
-<li><p><strong>Reconstruction (decoder) performance is poor</strong> : switch to BCE and set the scaling factor of the reconstruction loss to 0.5, OR adjust the weight of the MSE loss to make it closer to 1 in the weighted sum.</p></li>
+<li><p><strong>The NCuts loss “explodes” after a few epochs</strong> : Lower the learning rate, for example first by a factor of two, then by ten.</p></li>
+<li><p><strong>Reconstruction (decoder) performance is poor</strong> : First, try increasing the weight of the reconstruction loss. If this is ineffective, switch to BCE loss and set the scaling factor of the reconstruction loss to 0.5, OR adjust the weight of the MSE loss.</p></li>
+<li><p><strong>Segmentation only separates the brighter versus dimmer regions</strong> : Increase the weight of the reconstruction loss.</p></li>
 </ul>
 </section>
 </section>
@@ -574,6 +602,7 @@ <h2>Common issues troubleshooting<a class="headerlink" href="#common-issues-trou
 </div>
 <nav class="bd-toc-nav page-toc">
 <ul class="visible nav section-nav flex-column">
+<li class="toc-h2 nav-item toc-entry"><a class="reference internal nav-link" href="#selecting-training-data">Selecting training data</a></li>
 <li class="toc-h2 nav-item toc-entry"><a class="reference internal nav-link" href="#parameters">Parameters</a></li>
 <li class="toc-h2 nav-item toc-entry"><a class="reference internal nav-link" href="#common-issues-troubleshooting">Common issues troubleshooting</a></li>
 </ul>
