_sources/source/guides/training_wnet.rst
51 additions & 17 deletions
@@ -4,22 +4,50 @@ Walkthrough - WNet3D training
 ===============================

 This plugin provides a reimplemented, custom version of the WNet3D model from `WNet, A Deep Model for Fully Unsupervised Image Segmentation`_.
+
 For training your model, you can choose among:

 * Directly within the plugin
 * The provided Jupyter notebook (locally)
-* Our Colab notebook (inspired by ZeroCostDL4Mic)
+* Our Colab notebook (inspired by https://github.com/HenriquesLab/ZeroCostDL4Mic)
+
+Selecting training data
+-------------------------
+
+The WNet3D **does not require a large amount of data to train**, but **choosing the right data** to train this unsupervised model **is crucial**.
+
+You may find below some guidelines, based on our own data and testing.
+
+The WNet3D is designed to segment objects based on their brightness, and is particularly well-suited for images with a clear contrast between objects and background.
+
+The WNet3D is not suitable for images with artifacts; therefore, care should be taken that the images are clean and that the objects are at least somewhat distinguishable from the background.
+
+
+.. important::
+    For optimal performance, the following should be avoided for training:
+
+    - Images with very large, bright regions
+    - Almost-empty and empty images
+    - Images with large empty regions or "holes"

-The WNet3D does not require a large amount of data to train, but during inference images should be similar to those
-the model was trained on; you can retrain from our pretrained model to your image dataset to quickly reach good performance.
+    However, the model may accommodate:
+
+    - Uneven brightness distribution
+    - Varied object shapes and radii
+    - Noisy images
+    - Uneven illumination across the image
+
+For optimal results, during inference, images should be similar to those the model was trained on; however, this is not a strict requirement.
+
+You may also retrain from our pretrained model on your image dataset to quickly reach good performance: simply check "Use pre-trained weights" in the training module and lower the learning rate.

 .. note::
-    - The WNet3D relies on brightness to distinguish objects from the background. For better results, use image regions with minimal artifacts. If you notice many artifacts, consider training on one of the supervised models.
-    - The model has two losses, the **`SoftNCut loss`**, which clusters pixels according to brightness, and a reconstruction loss, either **`Mean Square Error (MSE)`** or **`Binary Cross Entropy (BCE)`**. Unlike the method described in the original paper, these losses are added in a weighted sum and the backward pass is performed for the whole model at once. The SoftNcuts and BCE are bounded between 0 and 1; the MSE may take large positive values. It is recommended to watch for the weighted sum of losses to be **close to one on the first epoch**, for training stability.
-    - For good performance, you should wait for the SoftNCut to reach a plateau; the reconstruction loss must also decrease but is generally less critical.
+    - The WNet3D relies on brightness to distinguish objects from the background. For better results, use image regions with minimal artifacts. If you notice many artifacts, consider trying one of our supervised models (for lightsheet microscopy).
+    - The model has two losses, the **`SoftNCut loss`**, which clusters pixels according to brightness, and a reconstruction loss, either **`Mean Square Error (MSE)`** or **`Binary Cross Entropy (BCE)`**; see the sketch below for how the two are combined in a weighted sum.
+    - For good performance, wait for the SoftNCut to reach a plateau; the reconstruction loss should also be decreasing overall, but this is generally less critical for segmentation performance.
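
To make the combination of the two losses concrete, here is a minimal sketch of the weighted-sum objective described in this guide, assuming a PyTorch-style training step. The function and variable names, and the use of ``nn.MSELoss``/``nn.BCELoss`` as stand-ins for the reconstruction term, are illustrative assumptions rather than the plugin's actual API; the weight values match the defaults listed under "Parameters" below.

.. code-block:: python

    import torch.nn as nn

    # Default weights listed later in this guide (illustrative constants).
    NCUTS_WEIGHT = 0.5
    RECONSTRUCTION_WEIGHT = 5e-3

    # Reconstruction criterion: MSE by default, BCE as the alternative.
    reconstruction_criterion = nn.MSELoss()  # or nn.BCELoss()

    def training_step(model, soft_ncuts_loss, volume, optimizer):
        """One combined update: both losses are summed and backpropagated together."""
        optimizer.zero_grad()
        segmentation, reconstruction = model(volume)             # encoder + decoder outputs
        ncuts = soft_ncuts_loss(segmentation, volume)            # SoftNCut term (brightness-based clustering)
        rec = reconstruction_criterion(reconstruction, volume)   # how well the input volume is rebuilt
        total = NCUTS_WEIGHT * ncuts + RECONSTRUCTION_WEIGHT * rec
        total.backward()
        optimizer.step()
        return ncuts.item(), rec.item(), total.item()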

 Parameters
-----------
+-------------

 .. figure:: ../images/training_tab_4.png
    :scale: 100 %
@@ -29,7 +57,7 @@ Parameters

 _`When using the WNet3D training module`, the **Advanced** tab contains a set of additional options:

-- **Number of classes** : Dictates the segmentation classes (default is 2). Increasing the number of classes will result in a more progressive segmentation according to brightness; can be useful if you have "halos" around your objects or artifacts with a significantly different brightness.
+- **Number of classes** : Dictates the segmentation classes (default is 2). Increasing the number of classes will result in a more progressive segmentation according to brightness; can be useful if you have "halos" around your objects, or to approximate boundary labels.
 - **Reconstruction loss** : Choose between MSE or BCE (default is MSE). MSE is more precise but also sensitive to outliers; BCE is more robust against outliers at the cost of precision.

 - NCuts parameters:
@@ -43,22 +71,28 @@ _`When using the WNet3D training module`, the **Advanced** tab contains a set of

 - Weights for the sum of losses :
     - **NCuts weight** : Sets the weight of the NCuts loss (default is 0.5).
-    - **Reconstruction weight** : Sets the weight for the reconstruction loss (default is 0.5*1e-2).
+    - **Reconstruction weight** : Sets the weight for the reconstruction loss (default is 5*1e-3).

-.. note::
-    The weight of the reconstruction loss should be adjusted to ensure the weighted sum is around one during the first epoch;
-    ideally the reconstruction loss should be of the same order of magnitude as the NCuts loss after being multiplied by its weight.
+.. important::
+    The weight of the reconstruction loss should be adjusted to ensure that both losses are balanced.
+
+    This balance can be assessed using the live view of training outputs:
+    if the NCuts loss is "taking over", causing the segmentation to only label very large, brighter versus dimmer regions, the reconstruction loss should be increased.
+
+    This will help the model to focus on the details of the objects, rather than just the overall brightness of the volume.
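
As a rough complement to the live view, the same balance can be checked numerically by comparing the two weighted terms on an early batch. This is a small, hypothetical helper; the names and the tolerance threshold are assumptions, not part of the plugin:

.. code-block:: python

    def check_loss_balance(ncuts_value, reconstruction_value,
                           ncuts_weight=0.5, reconstruction_weight=5e-3,
                           tolerance=10.0):
        """Print a hint if one weighted loss term dominates the other by more than `tolerance` times."""
        weighted_ncuts = ncuts_weight * ncuts_value
        weighted_rec = reconstruction_weight * reconstruction_value
        ratio = weighted_ncuts / max(weighted_rec, 1e-12)
        if ratio > tolerance:
            print(f"NCuts term dominates ({ratio:.1f}x): consider raising the reconstruction weight.")
        elif ratio < 1.0 / tolerance:
            print(f"Reconstruction term dominates ({1.0 / ratio:.1f}x): consider raising the NCuts weight.")
        else:
            print("The weighted loss terms are roughly balanced.")

    # Example with hypothetical first-epoch loss values:
    check_loss_balance(ncuts_value=0.9, reconstruction_value=50.0)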

 Common issues troubleshooting
 ------------------------------
-If you do not find a satisfactory answer here, please do not hesitate to `open an issue`_ on GitHub.

-- **The NCuts loss explodes after a few epochs** : Lower the learning rate, first by a factor of two, then ten.
+.. important::
+    If you do not find a satisfactory answer here, please do not hesitate to `open an issue`_ on GitHub.
+
+
+- **The NCuts loss "explodes" after a few epochs** : Lower the learning rate, for example start with a factor of two, then ten.

-- **The NCuts loss does not converge and is unstable** :
-  The normalization step might not be adapted to your images. Disable normalization and change intensity_sigma according to the distribution of values in your image. For reference, by default images are remapped to values between 0 and 100, and intensity_sigma=1.
+- **Reconstruction (decoder) performance is poor** : First, try increasing the weight of the reconstruction loss. If this is ineffective, switch to BCE loss and set the scaling factor of the reconstruction loss to 0.5, OR adjust the weight of the MSE loss.

-- **Reconstruction (decoder) performance is poor** : switch to BCE and set the scaling factor of the reconstruction loss to 0.5, OR adjust the weight of the MSE loss to make it closer to 1 in the weighted sum.
+- **Segmentation only separates the brighter versus dimmer regions** : Increase the weight of the reconstruction loss.
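
For the normalization-related instability covered by the removed troubleshooting item above (by default, images are remapped to values between 0 and 100 with intensity_sigma=1), one possible starting point is to derive intensity_sigma from the spread of the remapped intensities. This heuristic is purely an illustrative assumption, not a documented recommendation:

.. code-block:: python

    import numpy as np

    def suggest_intensity_sigma(volume: np.ndarray) -> float:
        """Remap intensities to the default 0-100 range, then use their standard
        deviation as a candidate intensity_sigma (heuristic only)."""
        v = volume.astype(np.float32)
        v = (v - v.min()) / max(float(v.max() - v.min()), 1e-12) * 100.0
        return float(v.std())

    # Example on a random volume standing in for real data:
    rng = np.random.default_rng(0)
    print(suggest_intensity_sigma(rng.normal(500.0, 120.0, size=(32, 64, 64))))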


 .. _WNet, A Deep Model for Fully Unsupervised Image Segmentation: https://arxiv.org/abs/1711.08506