_sources/source/guides/training_wnet.rst (9 additions & 12 deletions)
@@ -18,24 +18,21 @@ The WNet3D **does not require a large amount of data to train**, but **choosing the right data** to train this unsupervised model **is crucial**.
You may find below some guidelines, based on our own data and testing.
- The WNet3D is designed to segment objects based on their brightness, and is particularly well-suited for images with a clear contrast between objects and background.
-
- The WNet3D is not suitable for images with artifacts, therefore care should be taken that the images are clean and that the objects are at least somewhat distinguishable from the background.
+ The WNet3D is a self-supervised learning approach for 3D cell segmentation, and relies on the assumption that structural and morphological features of cells can be inferred directly from unlabeled data. This involves leveraging inherent properties such as spatial coherence and local contrast in imaging volumes to distinguish cellular structures. This approach assumes that meaningful representations of cellular boundaries and nuclei can emerge solely from raw 3D volumes. Thus, we strongly recommend that you use WNet3D on stacks that have clear foreground/background segregation and limited noise. Even if your final samples have noise, it is best to train on data that is as clean as possible.
.. important::
For optimal performance, the following should be avoided for training:
- - Images with very large, bright regions
- - Almost-empty and empty images
- - Images with large empty regions or "holes"
+ - Images with over-exposed pixels/artifacts you do not want the model to learn!
+ - Almost-empty and/or fully empty images, especially if noise is present (it will learn to segment very small objects!).
- However, the model may be accomodate:
+ However, the model may accommodate:
- - Uneven brightness distribution
- - Varied object shapes and radius
- - Noisy images
- - Uneven illumination across the image
+ - Uneven brightness distribution in your image!
+ - Varied object shapes and radii!
+ - Noisy images (as long as resolution is sufficient and boundaries are clear)!
+ - Uneven illumination across the image!
For optimal results, during inference, images should be similar to those the model was trained on; however this is not a strict requirement.
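Since the guidelines in this hunk reduce to measurable properties of a stack (saturation, emptiness, foreground/background contrast), a quick screening pass over candidate volumes can help apply them in practice. The sketch below is illustrative only and not part of CellSeg3D: `check_stack` is a hypothetical helper, the thresholds are assumptions to tune per dataset, and it presumes `numpy`, `tifffile`, and `scikit-image` are installed.

```python
# Minimal screening sketch for candidate WNet3D training stacks.
# Not part of CellSeg3D; all thresholds are illustrative assumptions.
import numpy as np
import tifffile
from skimage.filters import threshold_otsu

def check_stack(path, max_saturated=0.001, min_foreground=0.01, min_contrast=3.0):
    """Return a list of reasons a stack may be unsuitable for training."""
    raw = tifffile.imread(path)
    vol = raw.astype(np.float32)
    issues = []

    # Over-exposed pixels/artifacts: fraction of voxels at the dtype ceiling.
    if np.issubdtype(raw.dtype, np.integer):
        saturated = (raw == np.iinfo(raw.dtype).max).mean()
        if saturated > max_saturated:
            issues.append(f"{saturated:.2%} saturated voxels (over-exposure?)")

    # Almost-empty stacks: foreground fraction from an Otsu split.
    fg = vol > threshold_otsu(vol)
    if fg.mean() < min_foreground:
        issues.append("almost empty; noise may be learned as tiny objects")

    # Clear foreground/background segregation: mean-intensity ratio.
    if fg.any() and (~fg).any():
        contrast = vol[fg].mean() / (vol[~fg].mean() + 1e-8)
        if contrast < min_contrast:
            issues.append(f"weak foreground/background contrast ({contrast:.1f}x)")

    return issues
```

A stack that returns an empty list is not guaranteed to train well, but one that fails these checks likely falls under the cases the admonition above says to avoid.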
@@ -88,7 +85,7 @@ Common issues troubleshooting
If you do not find a satisfactory answer here, please do not hesitate to `open an issue`_ on GitHub.
- - **The NCuts loss "explodes" after a few epochs** : Lower the learning rate, for example start with a factor of two, then ten.
+ - **The NCuts loss "explodes" upward after a few epochs** : Lower the learning rate, for example start with a factor of two, then ten.
- **Reconstruction (decoder) performance is poor** : First, try increasing the weight of the reconstruction loss. If this is ineffective, switch to BCE loss and set the scaling factor of the reconstruction loss to 0.5, OR adjust the weight of the MSE loss.
- **Segmentation only separates the brighter versus dimmer regions** : Increase the weight of the reconstruction loss.
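To make these remedies concrete, here is a minimal PyTorch-style sketch of the knobs involved. This is not CellSeg3D's actual API: `ncuts_loss_fn` stands in for a soft normalized-cut implementation supplied by the caller, and the default learning rate is an assumed placeholder. It mirrors the advice above: divide the learning rate by two (then ten) if the NCuts loss explodes, and raise the reconstruction weight, or switch from MSE to BCE with a 0.5 scaling, if the decoder underperforms or the model only splits bright from dim regions.

```python
# Illustrative sketch only; not CellSeg3D's training code.
import torch
import torch.nn as nn

def make_optimizer(model: nn.Module, base_lr: float = 2e-5, lr_divisor: float = 2.0):
    # NCuts loss "explodes": lower the learning rate, e.g. base_lr / 2, then / 10.
    return torch.optim.Adam(model.parameters(), lr=base_lr / lr_divisor)

def combined_loss(probs, recon, volumes, ncuts_loss_fn,
                  w_ncuts: float = 1.0, w_rec: float = 0.5, use_bce: bool = True):
    # Poor reconstruction, or segmentation that only separates brighter from
    # dimmer regions: increase w_rec, or switch MSE -> BCE with w_rec = 0.5.
    # (BCE assumes volumes and reconstructions are scaled to [0, 1].)
    rec_criterion = nn.BCELoss() if use_bce else nn.MSELoss()
    return w_ncuts * ncuts_loss_fn(probs, volumes) + w_rec * rec_criterion(recon, volumes)
```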