Commit eda500c: Small docs update for WNet (#68)
1 parent 6b70a02, commit eda500c

10 files changed, +34 -33 lines changed
docs/TODO.md
Lines changed: 1 addition & 1 deletion

@@ -2,7 +2,7 @@
 TODO:
 - [ ] Add a way to get the current version of the library
 - [x] Update all modules
-- [x] Better WNet tutorial
+- [x] Better WNet3D tutorial
 - [x] Setup GH Actions
 - [ ] Add a bibliography
 )

docs/_toc.yml
Lines changed: 1 addition & 1 deletion

@@ -14,9 +14,9 @@ parts:
   - caption : Walkthroughs
     chapters:
     - file: source/guides/detailed_walkthrough.rst
+    - file: source/guides/training_wnet.rst
   - caption : Advanced guides
     chapters:
-    - file: source/guides/training_wnet.rst
     - file: source/guides/custom_model_template.rst
   - caption : Code
     chapters:

docs/source/guides/detailed_walkthrough.rst
Lines changed: 2 additions & 2 deletions

@@ -1,6 +1,6 @@
 .. _detailed_walkthrough:

-Detailed walkthrough - Supervised learning
+Walkthrough - Supervised learning
 ==========================================

 This guide will show you step-by-step how to use the plugin's workflow, beginning with human annotated datasets, to generating predictions on new volumes.
@@ -116,7 +116,7 @@ In most cases this should left enabled.
 * **VNet** is a larger (than SegResNet) CNN from MONAI designed for medical image segmentation.
 * **TRAILMAP** is our implementation in PyTorch additionally trained on mouse cortical neural nuclei from mesoSPIM data.
 * **SwinUNetR** is a MONAI implementation of the SwinUNetR model. It is costly in compute and memory, but can achieve high performance.
-* **WNet** is our reimplementation of an unsupervised model, which can be used to produce segmentation without labels.
+* **WNet3D** is our extension of an unsupervised model, which can be used to produce segmentation without labels. See :ref:`training_wnet` for more information.


 * **The loss** : For 3D volume object detection, the Dice or Dice-focal Loss is the most efficient.

docs/source/guides/inference_module_guide.rst
Lines changed: 9 additions & 8 deletions

@@ -23,7 +23,7 @@ Model Link to original paper
 ============== ================================================================================================
 SwinUNetR      `Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images`_
 SegResNet      `3D MRI brain tumor segmentation using autoencoder regularization`_
-WNet           `WNet, A Deep Model for Fully Unsupervised Image Segmentation`_
+WNet3D         `WNet3D, A Deep Model for Fully Unsupervised Image Segmentation`_
 TRAILMAP_MS    An implementation of the `TRAILMAP project on GitHub`_ using `3DUNet for PyTorch`_
 VNet           `Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation`_
 ============== ================================================================================================
@@ -33,10 +33,10 @@ VNet `Fully Convolutional Neural Networks for Volumetric Medical Ima
 .. _TRAILMAP project on GitHub: https://github.com/AlbertPun/TRAILMAP
 .. _3DUnet for Pytorch: https://github.com/wolny/pytorch-3dunet
 .. _Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images: https://arxiv.org/abs/2201.01266
-.. _WNet, A Deep Model for Fully Unsupervised Image Segmentation: https://arxiv.org/abs/1711.08506
+.. _WNet3D, A Deep Model for Fully Unsupervised Image Segmentation: https://arxiv.org/abs/1711.08506

 .. note::
-    For WNet-specific instruction please refer to the appropriate section below.
+    For WNet3D-specific instruction please refer to the appropriate section below.


 Interface and functionalities
@@ -142,23 +142,24 @@ In the ``notebooks`` folder you will find a plotting guide for cell statistics d
 Simply load the csv file in the notebook and use the provided functions to plot the desired statistics.


-Unsupervised model - WNet
--------------------------
+Unsupervised model - WNet3D
+--------------------------------

-| The `WNet model`_ is a fully unsupervised model used to segment images without any labels.
+| The `WNet3D model`_ is a fully unsupervised model used to segment images without any labels.
 | It functions similarly to the above models, with a few notable differences.

-.. _WNet model: https://arxiv.org/abs/1711.08506
+.. _WNet3D model: https://arxiv.org/abs/1711.08506

 .. note::
     Our provided, pre-trained model uses an input size of 64x64x64. As such, window inference is always enabled
     and set to 64. If you want to use a different size, you will have to train your own model using the options listed in :ref:`training_wnet`.
     Additionally, window inference and the number of classes are for now fixed in the plugin to support our pre-trained model only (2 classes and window size 64).
+
     For the best inference performance, the model should be retrained on images of the same modality as the ones you want to segment.
     Please see :ref:`training_wnet` for more details on how to train your own model.

 .. hint::
-    | WNet, as an unsupervised model, may not always output the background class in the same dimension.
+    | WNet3D, as an unsupervised model, may not always output the background class in the same dimension.
     | This might cause the result from inference to appear densely populated.
     | The plugin will automatically attempt to show the foreground class, but this might not always succeed.
     | If the displayed output seems dominated by the background, you can manually adjust the visible class. To do this, **use the slider positioned at the bottom of the napari window**.
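The fixed-size window inference described in the note above can be sketched roughly as follows. This is a simplified illustration, not the plugin's actual code: `fake_model` is a hypothetical stand-in for the pre-trained WNet3D, and the tiling omits the overlap/blending a real sliding-window inferer would apply.

```python
import numpy as np

def fake_model(patch):
    # Hypothetical stand-in for WNet3D: marks voxels brighter than the
    # patch mean as foreground (1.0), the rest as background (0.0).
    return (patch > patch.mean()).astype(np.float32)

def window_inference(volume, window=64):
    # Run the model on non-overlapping 64-voxel windows, as the plugin
    # does for its pre-trained model (input size fixed at 64x64x64).
    out = np.zeros_like(volume, dtype=np.float32)
    zs, ys, xs = volume.shape
    for z in range(0, zs, window):
        for y in range(0, ys, window):
            for x in range(0, xs, window):
                patch = volume[z:z + window, y:y + window, x:x + window]
                out[z:z + window, y:y + window, x:x + window] = fake_model(patch)
    return out

vol = np.random.rand(128, 128, 128).astype(np.float32)
seg = window_inference(vol)  # same shape as the input volume
```

A production implementation would typically overlap windows and blend the predictions at the seams; this sketch only shows why inputs are processed 64 voxels at a time.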

docs/source/guides/installation_guide.rst
Lines changed: 4 additions & 4 deletions

@@ -70,7 +70,7 @@ Successful installation will add the napari-cellseg3D plugin to napari’s Plugi


 M1/M2 (ARM64) Mac installation
--------------------------------
+--------------------------------------------
 .. _ARM64_Mac_installation:

 For ARM64 Macs, we recommend using our custom CONDA environment. This is particularly important for M1 or M2 MacBooks.
@@ -112,10 +112,10 @@ OR to install from source:
     pip install -e .

 Optional requirements
----------------------
+------------------------------

 Additional functionalities
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
+______________________________

 Several additional functionalities are available optionally. To install them, use the following commands:

@@ -141,7 +141,7 @@ Several additional functionalities are available optionally. To install them, us
     pip install napari-cellseg3D[onnx-gpu]

 Development requirements
-~~~~~~~~~~~~~~~~~~~~~~~~~~~
+______________________________

 - Building the documentation:
docs/source/guides/review_module_guide.rst
Lines changed: 5 additions & 5 deletions

@@ -1,14 +1,14 @@
 .. _review_module_guide:

-Review🔍
+Labeling🔍
 =================================

 .. figure:: ../images/plugin_review.png
    :align: center

    Layout of the review module

-**Review** allows you to inspect your labels, which may be manually created or predicted by a model, and make necessary corrections.
+**Labeling** allows you to inspect your labels, which may be manually created or predicted by a model, and make necessary corrections.
 The system will save the updated status of each file in a csv file.
 Additionally, the time taken per slice review is logged, enabling efficient monitoring.

@@ -38,7 +38,7 @@ Launching the review process
 - If an identical CSV file already exists, it will be used. If not, a new one will be generated.
 - If you choose to create a new dataset, a new CSV will always be created. If multiple copies already exist, a sequential number will be appended to the new file's name.

-4. **Beginning the Review:**
+4. **Beginning the labeling:**
    Press **`Start reviewing`** once you are ready to start the review process.

 .. warning::
@@ -51,9 +51,9 @@ Interface & functionalities
 .. figure:: ../images/review_process_example.png
    :align: center

-   Interface of the review process.
+   Interface for the labeling process.

-Once you have launched the review process, you will have access to the following functionalities:
+Once you have launched the labeling process, you will have access to the following functionalities:

 .. hlist::
    :columns: 1

docs/source/guides/training_module_guide.rst
Lines changed: 4 additions & 4 deletions

@@ -26,7 +26,7 @@ Model Link to original paper
 ============== ================================================================================================
 SegResNet      `3D MRI brain tumor segmentation using autoencoder regularization`_
 SwinUNetR      `Swin UNETR, Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images`_
-WNet           `WNet, A Deep Model for Fully Unsupervised Image Segmentation`_
+WNet3D         `WNet3D, A Deep Model for Fully Unsupervised Image Segmentation`_
 TRAILMAP_MS    An implementation of the `TRAILMAP project on GitHub`_ using `3DUNet for PyTorch`_
 VNet           `Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation`_
 ============== ================================================================================================
@@ -36,7 +36,7 @@ VNet `Fully Convolutional Neural Networks for Volumetric Medical Ima
 .. _TRAILMAP project on GitHub: https://github.com/AlbertPun/TRAILMAP
 .. _3DUnet for Pytorch: https://github.com/wolny/pytorch-3dunet
 .. _Swin UNETR, Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images: https://arxiv.org/abs/2201.01266
-.. _WNet, A Deep Model for Fully Unsupervised Image Segmentation: https://arxiv.org/abs/1711.08506
+.. _WNet3D, A Deep Model for Fully Unsupervised Image Segmentation: https://arxiv.org/abs/1711.08506

 Training
 ===================
@@ -167,7 +167,7 @@ ____________________
 1) **Advanced** tab
 ___________________

-This tab is only available with WNet training. For more information please see the :ref:`WNet parameters list <When using the WNet training module>` section.
+This tab is only available with WNet3D training. For more information please see the :ref:`WNet3D parameters list <When using the WNet3D training module>` section.

 Running the training
 ____________________
@@ -195,7 +195,7 @@ The model's inputs (image, label) and outputs (raw & binarized) will also be dis
 Unsupervised model
 ==============================================

-The training of our custom WNet implementation is now available as part of the Training module.
+The training of our custom WNet3D implementation is now available as part of the Training module.

 Please see the :ref:`training_wnet` section for more information.

docs/source/guides/training_wnet.rst
Lines changed: 6 additions & 6 deletions

@@ -1,20 +1,20 @@
 .. _training_wnet:

-Advanced : WNet training
-========================
+Walkthrough - WNet3D training
+===============================

-This plugin provides a reimplemented, custom version of the WNet model from `WNet, A Deep Model for Fully Unsupervised Image Segmentation`_.
+This plugin provides a reimplemented, custom version of the WNet3D model from `WNet, A Deep Model for Fully Unsupervised Image Segmentation`_.
 For training your model, you can choose among:

 * Directly within the plugin
 * The provided Jupyter notebook (locally)
 * Our Colab notebook (inspired by ZeroCostDL4Mic)

-The WNet does not require a large amount of data to train, but during inference images should be similar to those
+The WNet3D does not require a large amount of data to train, but during inference images should be similar to those
 the model was trained on; you can retrain from our pretrained model to your image dataset to quickly reach good performance.

 .. note::
-   - The WNet relies on brightness to distinguish objects from the background. For better results, use image regions with minimal artifacts. If you notice many artifacts, consider training on one of the supervised models.
+   - The WNet3D relies on brightness to distinguish objects from the background. For better results, use image regions with minimal artifacts. If you notice many artifacts, consider training on one of the supervised models.
    - The model has two losses, the **`SoftNCut loss`**, which clusters pixels according to brightness, and a reconstruction loss, either **`Mean Square Error (MSE)`** or **`Binary Cross Entropy (BCE)`**. Unlike the method described in the original paper, these losses are added in a weighted sum and the backward pass is performed for the whole model at once. The SoftNcuts and BCE are bounded between 0 and 1; the MSE may take large positive values. It is recommended to watch for the weighted sum of losses to be **close to one on the first epoch**, for training stability.
    - For good performance, you should wait for the SoftNCut to reach a plateau; the reconstruction loss must also decrease but is generally less critical.

@@ -27,7 +27,7 @@ Parameters

 Advanced tab

-_`When using the WNet training module`, the **Advanced** tab contains a set of additional options:
+_`When using the WNet3D training module`, the **Advanced** tab contains a set of additional options:

 - **Number of classes** : Dictates the segmentation classes (default is 2). Increasing the number of classes will result in a more progressive segmentation according to brightness; can be useful if you have "halos" around your objects or artifacts with a significantly different brightness.
 - **Reconstruction loss** : Choose between MSE or BCE (default is MSE). MSE is more precise but also sensitive to outliers; BCE is more robust against outliers at the cost of precision.
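The weighted-sum loss scheme the docs describe (one scalar combining SoftNCut and reconstruction loss, then a single backward pass for the whole model) can be sketched as follows. The weight values and the example loss numbers are illustrative assumptions, not the plugin's actual defaults.

```python
def combined_loss(softncut_loss, reconstruction_loss, w_ncut=0.5, w_rec=0.5):
    """Weighted sum of the two WNet3D losses.

    Unlike the alternating optimization of the original WNet paper, the
    two terms are folded into one scalar, so calling .backward() on the
    result (with tensor inputs) updates the whole model at once.
    Weights w_ncut/w_rec are illustrative, not the plugin's defaults.
    """
    return w_ncut * softncut_loss + w_rec * reconstruction_loss

# SoftNCut and BCE are bounded in [0, 1]; per the note above, the
# weighted sum should stay close to 1 on the first epoch for stability.
total = combined_loss(0.9, 0.8)
```

With plain floats this just computes the scalar; in training, the same function applied to PyTorch loss tensors yields a differentiable total on which a single backward pass is run.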

docs/welcome.rst
Lines changed: 1 addition & 1 deletion

@@ -160,7 +160,7 @@ This plugin mainly uses the following libraries and software:

 * `pyclEsperanto`_ (for the Voronoi Otsu labeling) by Robert Haase

-* A custom re-implementation of the `WNet`_ by Xia and Kulis [3]_
+* A new unsupervised 3D model based on the `WNet`_ by Xia and Kulis [3]_

 .. _Mathis Laboratory of Adaptive Motor Control: http://www.mackenziemathislab.org/
 .. _Wyss Center: https://wysscenter.ch/

napari_cellseg3d/napari.yaml
Lines changed: 1 addition & 1 deletion

@@ -27,7 +27,7 @@ contributions:

   widgets:
   - command: napari_cellseg3d.load
-    display_name: Review
+    display_name: Labeling

   - command: napari_cellseg3d.infer
     display_name: Inference
