
Commit 0c3450a

Started docs update
1 parent e4b10a3 commit 0c3450a

8 files changed: +80 additions, -30 deletions


docs/res/code/model_framework.rst

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ Class : ModelFramework
 Methods
 **********************
 .. autoclass:: napari_cellseg3d.code_models.model_framework::ModelFramework
-   :members: __init__, send_log, save_log, save_log_to_path, display_status_report, create_train_dataset_dict, get_model, get_available_models, get_device, empty_cuda_cache
+   :members: __init__, send_log, save_log, save_log_to_path, display_status_report, create_train_dataset_dict, get_available_models, get_device, empty_cuda_cache
    :noindex:

docs/res/code/plugin_model_training.rst

Lines changed: 2 additions & 2 deletions
@@ -11,12 +11,12 @@ Class : Trainer
 Methods
 **********************
 .. autoclass:: napari_cellseg3d.code_plugins.plugin_model_training::Trainer
-   :members: __init__, get_loss, check_ready, send_log, start, on_start, on_finish, on_error, on_yield, plot_loss, update_loss_plot
+   :members: __init__, check_ready, send_log, start, on_start, on_finish, on_error, on_yield, update_loss_plot
    :noindex:



 Attributes
 *********************
 .. autoclass:: napari_cellseg3d.code_plugins.plugin_model_training::Trainer
-   :members: _viewer, worker, loss_dict, canvas, train_loss_plot, dice_metric_plot
+   :members: _viewer, worker, canvas

docs/res/code/workers.rst

Lines changed: 30 additions & 4 deletions
@@ -10,7 +10,7 @@ Class : LogSignal

 Attributes
 ************************
-.. autoclass:: napari_cellseg3d.code_models.workers::LogSignal
+.. autoclass:: napari_cellseg3d.code_models.workers_utils::LogSignal
    :members: log_signal
    :noindex:

@@ -24,21 +24,47 @@ Class : InferenceWorker

 Methods
 ************************
-.. autoclass:: napari_cellseg3d.code_models.workers::InferenceWorker
+.. autoclass:: napari_cellseg3d.code_models.worker_inference::InferenceWorker
    :members: __init__, log, create_inference_dict, inference
    :noindex:

 .. _here: https://napari-staging-site.github.io/guides/stable/threading.html


-Class : TrainingWorker
+Class : TrainingWorkerBase
 -------------------------------------------

 .. important::
    Inherits from :py:class:`napari.qt.threading.GeneratorWorker`

 Methods
 ************************
-.. autoclass:: napari_cellseg3d.code_models.workers::TrainingWorker
+.. autoclass:: napari_cellseg3d.code_models.worker_training::TrainingWorkerBase
    :members: __init__, log, train
    :noindex:
+
+
+Class : WNetTrainingWorker
+-------------------------------------------
+
+.. important::
+    Inherits from :py:class:`TrainingWorkerBase`
+
+Methods
+************************
+.. autoclass:: napari_cellseg3d.code_models.worker_training::WNetTrainingWorker
+    :members: __init__, train, eval, get_patch_dataset, get_dataset_eval, get_dataset
+    :noindex:
+
+
+Class : SupervisedTrainingWorker
+-------------------------------------------
+
+.. important::
+    Inherits from :py:class:`TrainingWorkerBase`
+
+Methods
+************************
+.. autoclass:: napari_cellseg3d.code_models.worker_training::SupervisedTrainingWorker
+    :members: __init__, train
+    :noindex:
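
Note: all three training workers ultimately inherit napari's GeneratorWorker, so they are driven through Qt signals rather than direct calls. A minimal sketch of that pattern, using only the public napari.qt.threading API; the generator body and report dict are illustrative, not the plugin's actual classes:

from napari.qt.threading import thread_worker

@thread_worker
def train_generator():
    # Stand-in for a train() method that, like the workers above,
    # periodically yields a progress report back to the main thread.
    for epoch in range(3):
        yield {"epoch": epoch, "loss": 1.0 / (epoch + 1)}

worker = train_generator()  # a GeneratorWorker instance (needs a running Qt event loop)
worker.yielded.connect(lambda report: print("progress:", report))  # delivered on the main thread
worker.errored.connect(lambda exc: print("failed:", exc))
worker.start()  # runs the generator in a worker thread and returns immediately

This cross-thread delivery is what lets a GUI-side callback such as Trainer.on_yield update plots safely while training runs in the background.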

docs/res/guides/detailed_walkthrough.rst

Lines changed: 8 additions & 9 deletions
@@ -120,9 +120,9 @@ Finally, the last tab lets you choose :

 * SegResNet is a lightweight model (low memory requirements) from MONAI originally designed for 3D fMRI data.
 * VNet is a larger (than SegResNet) CNN from MONAI designed for medical image segmentation.
-* TRAILMAP is our PyTorch implementation of a 3D CNN model trained for axonal detection in cleared tissue.
 * TRAILMAP_MS is our implementation in PyTorch additionally trained on mouse cortical neural nuclei from mesoSPIM data.
-* Note, the code is very modular, so it is relatively straightforward to use (and contribute) your model as well.
+* SwinUNetR is a MONAI implementation of the SwinUNetR model. It is costly in compute and memory, but can achieve high performance.
+* WNet is our reimplementation of an unsupervised model, which can be used to produce segmentation without labels.


 * The loss : for object detection in 3D volumes you'll likely want to use the Dice or Dice-focal Loss.
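
Note: both recommended losses ship with MONAI; a minimal sketch, with illustrative tensor shapes:

import torch
from monai.losses import DiceFocalLoss, DiceLoss

pred = torch.randn(1, 1, 64, 64, 64)                      # raw logits for one 3D volume
target = torch.randint(0, 2, (1, 1, 64, 64, 64)).float()  # binary ground-truth labels

dice = DiceLoss(sigmoid=True)             # sigmoid=True applies it to the logits internally
dice_focal = DiceFocalLoss(sigmoid=True)  # adds a focal term, useful for sparse objects
print(dice(pred, target).item(), dice_focal(pred, target).item())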
@@ -239,13 +239,12 @@ Scoring, review, analysis
 ----------------------------


-.. Using the metrics utility module, you can compare the model's predictions to any ground truth
-   labels you might have.
-   Simply provide your prediction and ground truth labels, and compute the results.
-   A Dice metric of 1 indicates perfect matching, whereas a score of 0 indicates complete mismatch.
-   Select which score **you consider as sub-optimal**, and all results below this will be **shown in napari**.
-   If at any time the **orientation of your prediction labels changed compared to the ground truth**, check the
-   "Find best orientation" option to compensate for it.
+.. Using the metrics utility module, you can compare the model's predictions to any ground truth labels you might have.
+   Simply provide your prediction and ground truth labels, and compute the results.
+   A Dice metric of 1 indicates perfect matching, whereas a score of 0 indicates complete mismatch.
+   Select which score **you consider as sub-optimal**, and all results below this will be **shown in napari**.
+   If at any time the **orientation of your prediction labels changed compared to the ground truth**, check the
+   "Find best orientation" option to compensate for it.


 Labels review
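
Note: the Dice metric described in the commented block has a simple closed form, Dice = 2|A ∩ B| / (|A| + |B|). A small stand-alone sketch, not the plugin's own implementation:

import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|): 1 is a perfect match, 0 a complete mismatch."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom > 0 else 1.0

a = np.zeros((4, 4), dtype=bool); a[:2] = True   # 8 foreground pixels
b = np.zeros((4, 4), dtype=bool); b[1:3] = True  # 8 pixels, 4 overlapping
print(dice_score(a, b))                          # 2*4 / (8+8) = 0.5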

docs/res/guides/training_module_guide.rst

Lines changed: 4 additions & 3 deletions
@@ -4,7 +4,7 @@ Training module guide - Unsupervised models
 ==============================================

 .. important::
-    The WNet training is for now only available in the provided jupyter notebook, in the ``notebooks`` folder.
+    The WNet training is for now available as part of the plugin in the Training module.
     Please see the :ref:`training_wnet` section for more information.

 Training module guide - Supervised models
@@ -25,14 +25,15 @@ Model          Link to original paper
 ============== ================================================================================================
 VNet           `Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation`_
 SegResNet      `3D MRI brain tumor segmentation using autoencoder regularization`_
-TRAILMAP_MS    A PyTorch implementation of the `TRAILMAP project on GitHub`_ pretrained with MesoSpim data
-TRAILMAP       An implementation of the `TRAILMAP project on GitHub`_ using a `3DUNet for PyTorch`_
+TRAILMAP_MS    An implementation of the `TRAILMAP project on GitHub`_ using `3DUNet for PyTorch`_
+SwinUNetR      `Swin UNETR, Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images`_
 ============== ================================================================================================

 .. _Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation: https://arxiv.org/pdf/1606.04797.pdf
 .. _3D MRI brain tumor segmentation using autoencoder regularization: https://arxiv.org/pdf/1810.11654.pdf
 .. _TRAILMAP project on GitHub: https://github.com/AlbertPun/TRAILMAP
 .. _3DUnet for Pytorch: https://github.com/wolny/pytorch-3dunet
+.. _Swin UNETR, Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images: https://arxiv.org/abs/2201.01266

 .. important::
    | The machine learning models used by this program require all images of a dataset to be of the same size.

docs/res/guides/training_wnet.rst

Lines changed: 30 additions & 6 deletions
@@ -15,21 +15,45 @@ the model was trained on; you can retrain from our pretrained model to your set
 The model has two losses, the SoftNCut loss which clusters pixels according to brightness, and a reconstruction loss, either
 Mean Square Error (MSE) or Binary Cross Entropy (BCE).
 Unlike the original paper, these losses are added in a weighted sum and the backward pass is performed for the whole model at once.
-The SoftNcuts is bounded between 0 and 1; the MSE may take large values.
+The SoftNcuts is bounded between 0 and 1; the MSE may take large positive values.

-For good performance, one should wait for the SoftNCut to reach a plateau, the reconstruction loss must also diminish but it's generally less critical.
+For good performance, one should wait for the SoftNCut to reach a plateau; the reconstruction loss must also diminish but it's generally less critical.

+Parameters
+-------------------------------
+
+When using the WNet training module, additional options will be provided in the Advanced tab of the training module:
+
+- Number of classes : number of classes to segment (default 2). Additional classes will result in a more progressive segmentation according to brightness; can be useful if you have "halos" around your objects or artifacts with a significantly different brightness.
+- Reconstruction loss : either MSE or BCE (default MSE). MSE is more sensitive to outliers, but can be more precise; BCE is more robust to outliers but can be less precise.
+
+- NCuts parameters:
+    - Intensity sigma : standard deviation of the feature similarity term (brightness here, default 1)
+    - Spatial sigma : standard deviation of the spatial proximity term (default 4)
+    - Radius : radius of the loss computation in pixels (default 2)
+
+.. note::
+    Intensity sigma depends on pixel values in the image. The default of 1 is tailored to images being mapped between 0 and 100, which is done automatically by the plugin.
+.. note::
+    Raising the radius might improve performance in some cases, but will also greatly increase computation time.
+
+- Weights for the sum of losses :
+    - NCuts weight : weight of the NCuts loss (default 0.5)
+    - Reconstruction weight : weight of the reconstruction loss (default 0.5*1e-2)
+
+.. note::
+    The weight of the reconstruction loss should be adjusted according to its empirical value; ideally the reconstruction loss should be of the same order of magnitude as the NCuts loss after being multiplied by its weight.

 Common issues troubleshooting
 ------------------------------
-If you do not find a satisfactory answer here, please `open an issue`_ !
+If you do not find a satisfactory answer here, please do not hesitate to `open an issue`_ on GitHub.

-- **The NCuts loss explodes after a few epochs** : Lower the learning rate
+- **The NCuts loss explodes after a few epochs** : Lower the learning rate, first by a factor of two, then ten.

 - **The NCuts loss does not converge and is unstable** :
-  The normalization step might not be adapted to your images. Disable normalization and change intensity_sigma according to the distribution of values in your image; for reference, by default images are remapped to values between 0 and 100, and intensity_sigma=1.
+  The normalization step might not be adapted to your images. Disable normalization and change intensity_sigma according to the distribution of values in your image. For reference, by default images are remapped to values between 0 and 100, and intensity_sigma=1.

-- **Reconstruction (decoder) performance is poor** : switch to BCE and set the scaling factor of the reconstruction loss ot 0.5, OR adjust the weight of the MSE loss to make it closer to 1.
+- **Reconstruction (decoder) performance is poor** : switch to BCE and set the scaling factor of the reconstruction loss to 0.5, OR adjust the weight of the MSE loss to make it closer to 1 in the weighted sum.


 .. _WNet, A Deep Model for Fully Unsupervised Image Segmentation: https://arxiv.org/abs/1711.08506
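
Note: the weighted sum documented above can be made concrete with a short sketch; soft_ncuts stands in for the plugin's SoftNCuts implementation, and the weights are the documented defaults (NCuts 0.5, reconstruction 0.5*1e-2). With those defaults, an MSE around 50 contributes 50 * 0.5e-2 = 0.25, the same order of magnitude as the bounded NCuts term, as the last note recommends.

import torch
import torch.nn.functional as F

def wnet_objective(enc_out, dec_out, image, soft_ncuts,
                   w_ncuts: float = 0.5, w_rec: float = 0.5e-2) -> torch.Tensor:
    """Weighted sum of the two losses; one backward pass covers the whole model."""
    ncuts = soft_ncuts(enc_out, image)  # bounded between 0 and 1
    rec = F.mse_loss(dec_out, image)    # may take large positive values
    return w_ncuts * ncuts + w_rec * rec

# intensity_sigma=1 assumes inputs remapped to [0, 100] (done automatically by
# the plugin); the equivalent manual remap would be:
# image = (image - image.min()) / (image.max() - image.min()) * 100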

napari_cellseg3d/_tests/test_training.py

Lines changed: 1 addition & 1 deletion
@@ -124,7 +124,7 @@ def test_unsupervised_training(make_napari_viewer_proxy):
         {"image": im_path_str, "label": im_path_str}
     ]
     widget.worker._get_data()
-    eval_res = widget.worker._eval(
+    eval_res = widget.worker.eval(
         model=WNetFixture(),
         epoch=-10,
     )

napari_cellseg3d/code_models/worker_training.py

Lines changed: 4 additions & 4 deletions
@@ -108,10 +108,10 @@ def set_download_log(self, widget):
         self.downloader.log_widget = widget

     def log(self, text):
-        """Sends a signal that ``text`` should be logged
+        """Sends a Qt signal that the provided text should be logged
         Goes in a Log object, defined in :py:mod:`napari_cellseg3d.interface
         Sends a signal to the main thread to log the text.
-        Signal is defined in napari_cellseg3d.workers_utils.LogSignal
+        Signal is defined in napari_cellseg3d.workers_utils.LogSignal.

         Args:
             text (str): text to logged
@@ -653,7 +653,7 @@ def train(
             ):
                 model.eval()
                 self.log("Validating...")
-                yield self._eval(model, epoch)  # validation
+                yield self.eval(model, epoch)  # validation

             if self._abort_requested:
                 self.dataloader = None
@@ -736,7 +736,7 @@ def train(
             self.quit()
             raise e

-    def _eval(self, model, epoch) -> TrainingReport:
+    def eval(self, model, epoch) -> TrainingReport:
         with torch.no_grad():
             device = self.config.device
             for _k, val_data in enumerate(self.eval_dataloader):
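
Note: the log docstring above refers to signal-based logging through LogSignal. A minimal sketch of that Qt pattern, assuming only the log_signal attribute documented in workers.rst; the other names are illustrative:

from qtpy.QtCore import QObject, Signal

class LogSignal(QObject):
    log_signal = Signal(str)  # Qt delivers emissions to slots on the main thread

class WorkerSketch:
    """Illustrative stand-in for a worker that logs via the signal."""
    def __init__(self):
        self._signals = LogSignal()
        self.log_signal = self._signals.log_signal

    def log(self, text: str) -> None:
        self.log_signal.emit(text)  # safe to call from the worker thread

worker = WorkerSketch()
worker.log_signal.connect(print)  # in the plugin, the slot would be the Log widget
worker.log("Validating...")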
