docs/source/guides/detailed_walkthrough.rst (+2, -2)
@@ -1,6 +1,6 @@
 .. _detailed_walkthrough:
 
-Detailed walkthrough - Supervised learning
+Walkthrough - Supervised learning
 ==========================================
 
 This guide will show you step-by-step how to use the plugin's workflow, beginning with human annotated datasets, to generating predictions on new volumes.
@@ -116,7 +116,7 @@ In most cases this should left enabled.
 * **VNet** is a larger (than SegResNet) CNN from MONAI designed for medical image segmentation.
 * **TRAILMAP** is our implementation in PyTorch additionally trained on mouse cortical neural nuclei from mesoSPIM data.
 * **SwinUNetR** is a MONAI implementation of the SwinUNetR model. It is costly in compute and memory, but can achieve high performance.
-* **WNet** is our reimplementation of an unsupervised model, which can be used to produce segmentation without labels.
+* **WNet3D** is our extension of an unsupervised model, which can be used to produce segmentation without labels. See :ref:`training_wnet` for more information.
 
 
 * **The loss** : For 3D volume object detection, the Dice or Dice-focal Loss is the most efficient.
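
As a point of reference for the Dice and Dice-focal losses mentioned in the hunk above, here is a minimal sketch using MONAI's ``DiceLoss`` and ``DiceFocalLoss``; the tensor shapes and options are illustrative, not the plugin's exact configuration::

    import torch
    from monai.losses import DiceFocalLoss, DiceLoss

    pred = torch.randn(1, 2, 64, 64, 64)              # model logits: (batch, classes, D, H, W)
    target = torch.randint(0, 2, (1, 1, 64, 64, 64))  # integer labels: (batch, 1, D, H, W)

    dice = DiceLoss(to_onehot_y=True, softmax=True)
    dice_focal = DiceFocalLoss(to_onehot_y=True, softmax=True)

    print(dice(pred, target).item(), dice_focal(pred, target).item())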
@@ -155,10 +155,11 @@ Our provided, pre-trained model uses an input size of 64x64x64. As such, window inference is always enabled
 and set to 64. If you want to use a different size, you will have to train your own model using the options listed in :ref:`training_wnet`.
 Additionally, window inference and the number of classes are for now fixed in the plugin to support our pre-trained model only (2 classes and window size 64).
+
 For the best inference performance, the model should be retrained on images of the same modality as the ones you want to segment.
 Please see :ref:`training_wnet` for more details on how to train your own model.
 
 .. hint::
-|WNet, as an unsupervised model, may not always output the background class in the same dimension.
+|WNet3D, as an unsupervised model, may not always output the background class in the same dimension.
 |This might cause the result from inference to appear densely populated.
 |The plugin will automatically attempt to show the foreground class, but this might not always succeed.
 |If the displayed output seems dominated by the background, you can manually adjust the visible class. To do this, **use the slider positioned at the bottom of the napari window**.
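
The hunk above states that window inference is fixed at 64 for the pre-trained model; below is a minimal sketch of what such windowed inference looks like with MONAI's ``sliding_window_inference``. The model and volume here are stand-ins, not the plugin's own objects::

    import torch
    from monai.inferers import sliding_window_inference

    model = torch.nn.Conv3d(1, 2, kernel_size=3, padding=1)  # stand-in for a trained network
    volume = torch.randn(1, 1, 128, 96, 80)                  # (batch, channel, D, H, W)

    with torch.no_grad():
        logits = sliding_window_inference(
            inputs=volume,
            roi_size=(64, 64, 64),  # matches the pre-trained model's 64x64x64 input
            sw_batch_size=1,
            predictor=model,
            overlap=0.25,
        )
    foreground = logits.softmax(dim=1)[:, 1]  # probability map for one of the 2 classes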

docs/source/guides/review_module_guide.rst (+5, -5)
@@ -1,14 +1,14 @@
 .. _review_module_guide:
 
-Review🔍
+Labeling🔍
 =================================
 
 .. figure:: ../images/plugin_review.png
 :align:center
 
 Layout of the review module
 
-**Review** allows you to inspect your labels, which may be manually created or predicted by a model, and make necessary corrections.
+**Labeling** allows you to inspect your labels, which may be manually created or predicted by a model, and make necessary corrections.
 The system will save the updated status of each file in a csv file.
 Additionally, the time taken per slice review is logged, enabling efficient monitoring.
 
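
The csv log mentioned in the hunk above can also be inspected outside napari; here is a hypothetical sketch with pandas. The file name and the column names ("status", "time_taken") are assumptions for illustration, not the plugin's documented schema::

    import pandas as pd

    # Path and column names below are assumptions, not the plugin's documented schema.
    log = pd.read_csv("review_log.csv")
    print(log.head())
    print("slices marked as checked:", (log["status"] == "checked").sum())
    print("mean time per slice (s):", log["time_taken"].mean())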
@@ -38,7 +38,7 @@ Launching the review process
 - If an identical CSV file already exists, it will be used. If not, a new one will be generated.
 - If you choose to create a new dataset, a new CSV will always be created. If multiple copies already exist, a sequential number will be appended to the new file's name.
 
-4. **Beginning the Review:**
+4. **Beginning the labeling:**
 Press **`Start reviewing`** once you are ready to start the review process.
 
 .. warning::
@@ -51,9 +51,9 @@ Interface & functionalities
 .. figure:: ../images/review_process_example.png
 :align:center
 
-Interface of the review process.
+Interface for the labeling process.
 
-Once you have launched the review process, you will have access to the following functionalities:
+Once you have launched the labeling process, you will have access to the following functionalities:
@@ -36,7 +36,7 @@ VNet `Fully Convolutional Neural Networks for Volumetric Medical Ima
 .. _TRAILMAP project on GitHub: https://github.com/AlbertPun/TRAILMAP
 .. _3DUnet for Pytorch: https://github.com/wolny/pytorch-3dunet
 .. _Swin UNETR, Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images: https://arxiv.org/abs/2201.01266
-.. _WNet, A Deep Model for Fully Unsupervised Image Segmentation: https://arxiv.org/abs/1711.08506
+.. _WNet3D, A Deep Model for Fully Unsupervised Image Segmentation: https://arxiv.org/abs/1711.08506
 
 Training
 ===================
@@ -167,7 +167,7 @@ ____________________
 1) **Advanced** tab
 ___________________
 
-This tab is only available with WNet training. For more information please see the :ref:`WNet parameters list <When using the WNet training module>` section.
+This tab is only available with WNet3D training. For more information please see the :ref:`WNet3D parameters list <When using the WNet3D training module>` section.
 
 Running the training
 ____________________
@@ -195,7 +195,7 @@ The model's inputs (image, label) and outputs (raw & binarized) will also be dis
 Unsupervised model
 ==============================================
 
-The training of our custom WNet implementation is now available as part of the Training module.
+The training of our custom WNet3D implementation is now available as part of the Training module.
 
 Please see the :ref:`training_wnet` section for more information.

docs/source/guides/training_wnet.rst (+6, -6)
@@ -1,20 +1,20 @@
 .. _training_wnet:
 
-Advanced : WNet training
-========================
+Walkthrough - WNet3D training
+===============================
 
-This plugin provides a reimplemented, custom version of the WNet model from `WNet, A Deep Model for Fully Unsupervised Image Segmentation`_.
+This plugin provides a reimplemented, custom version of the WNet3D model from `WNet, A Deep Model for Fully Unsupervised Image Segmentation`_.
 For training your model, you can choose among:
 
 * Directly within the plugin
 * The provided Jupyter notebook (locally)
 * Our Colab notebook (inspired by ZeroCostDL4Mic)
 
-The WNet does not require a large amount of data to train, but during inference images should be similar to those
+The WNet3D does not require a large amount of data to train, but during inference images should be similar to those
 the model was trained on; you can retrain from our pretrained model to your image dataset to quickly reach good performance.
 
 .. note::
-- The WNet relies on brightness to distinguish objects from the background. For better results, use image regions with minimal artifacts. If you notice many artifacts, consider training on one of the supervised models.
+- The WNet3D relies on brightness to distinguish objects from the background. For better results, use image regions with minimal artifacts. If you notice many artifacts, consider training on one of the supervised models.
 - The model has two losses, the **`SoftNCut loss`**, which clusters pixels according to brightness, and a reconstruction loss, either **`Mean Square Error (MSE)`** or **`Binary Cross Entropy (BCE)`**. Unlike the method described in the original paper, these losses are added in a weighted sum and the backward pass is performed for the whole model at once. The SoftNcuts and BCE are bounded between 0 and 1; the MSE may take large positive values. It is recommended to watch for the weighted sum of losses to be **close to one on the first epoch**, for training stability.
 - For good performance, you should wait for the SoftNCut to reach a plateau; the reconstruction loss must also decrease but is generally less critical.
 
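
The note above describes a weighted sum of the SoftNCut and reconstruction losses with a single backward pass; here is a rough sketch of such a training step. The ``soft_ncut_loss`` callable, the model returning both class probabilities and a reconstruction, and the weights are placeholders rather than the plugin's actual implementation::

    import torch
    import torch.nn.functional as F

    w_ncuts, w_rec = 0.5, 0.5  # placeholder weights; aim for a summed loss near 1 on the first epoch

    def training_step(model, image, optimizer, soft_ncut_loss, use_bce=False):
        optimizer.zero_grad()
        class_probs, reconstruction = model(image)   # encoder (segmentation) and decoder outputs
        ncuts = soft_ncut_loss(class_probs, image)   # bounded in [0, 1]
        if use_bce:
            rec = F.binary_cross_entropy(reconstruction.clamp(0, 1), image.clamp(0, 1))
        else:
            rec = F.mse_loss(reconstruction, image)  # can take large positive values
        loss = w_ncuts * ncuts + w_rec * rec         # weighted sum of both losses
        loss.backward()                              # one backward pass for the whole model
        optimizer.step()
        return loss.item()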
@@ -27,7 +27,7 @@ Parameters
 
 Advanced tab
 
-_`When using the WNet training module`, the **Advanced** tab contains a set of additional options:
+_`When using the WNet3D training module`, the **Advanced** tab contains a set of additional options:
 
 - **Number of classes** : Dictates the segmentation classes (default is 2). Increasing the number of classes will result in a more progressive segmentation according to brightness; can be useful if you have "halos" around your objects or artifacts with a significantly different brightness.
 - **Reconstruction loss** : Choose between MSE or BCE (default is MSE). MSE is more precise but also sensitive to outliers; BCE is more robust against outliers at the cost of precision.
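
Regarding the **Number of classes** option above: with more than two classes the clusters follow brightness, so one possible way to extract a foreground mask afterwards is to keep the class whose voxels are brightest in the input image. This is purely illustrative and not the plugin's own post-processing logic::

    import torch

    def pick_foreground(class_probs, image):
        """class_probs: (B, n_classes, D, H, W) softmax output; image: (B, 1, D, H, W)."""
        labels = class_probs.argmax(dim=1, keepdim=True)  # hard class assignment per voxel
        brightness = [
            image[labels == c].mean() if (labels == c).any() else image.new_tensor(0.0)
            for c in range(class_probs.shape[1])
        ]
        fg = int(torch.stack(brightness).argmax())        # brightest cluster taken as foreground
        return (labels == fg).squeeze(1).float()          # binary foreground mask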