Commit f92e6b1

updated docs

1 parent 7abeac4 commit f92e6b1

File tree: 11 files changed, +161 −44 lines changed

docs/index.rst

Lines changed: 3 additions & 1 deletion

@@ -28,9 +28,11 @@ Welcome to napari-cellseg-3d's documentation!
    res/code/interface
    res/code/plugin_base
    res/code/plugin_review
+   res/code/launch_review
    res/code/plugin_dock
    res/code/plugin_crop
-   res/code/launch_review
+   res/code/plugin_convert
+   res/code/plugin_metrics
    res/code/model_framework
    res/code/model_workers
    res/code/model_instance_seg

docs/res/code/plugin_convert.rst

Lines changed: 21 additions & 0 deletions

@@ -0,0 +1,21 @@
+plugin_convert.py
+==================================
+
+
+Class : ConvertUtils
+------------------------------------------
+
+.. important::
+    Inherits from : :doc:`plugin_base`
+
+Methods
+**********************
+.. autoclass:: napari_cellseg_3d.plugin_convert::ConvertUtils
+    :members: __init__, build, folder_to_semantic, layer_to_semantic, folder_to_instance, layer_to_instance, layer_remove_small, folder_remove_small, check_ready_folder, check_ready_layer
+    :noindex:
+
+Attributes
+*********************
+
+.. autoclass:: napari_cellseg_3d.plugin_convert::ConvertUtils
+    :members: _viewer

docs/res/code/plugin_metrics.rst

Lines changed: 22 additions & 0 deletions

@@ -0,0 +1,22 @@
+plugin_metrics.py
+==================================
+
+
+Class : MetricsUtils
+------------------------------------------
+
+.. important::
+    Inherits from : :doc:`plugin_base`
+
+Methods
+**********************
+
+.. autoclass:: napari_cellseg_3d.plugin_metrics::MetricsUtils
+    :members: __init__, build, plot_dice, remove_plots, compute_dice
+    :noindex:
+
+Attributes
+*********************
+
+.. autoclass:: napari_cellseg_3d.plugin_metrics::MetricsUtils
+    :members: _viewer, layout, canvas, plots

docs/res/guides/convert_module_guide.rst

Lines changed: 14 additions & 1 deletion

@@ -15,4 +15,17 @@ You can :
     This will convert instance labels with unique IDs per object into 0/1 semantic labels, for example for training.

 * Remove small objects :
-    You can specify a size threshold in pixels; all objects smaller than this size will be removed in the image.
+    You can specify a size threshold in pixels; all objects smaller than this size will be removed from the image.
+
+
+Source code
+-------------------------------------------------
+
+* :doc:`../code/plugin_base`
+* :doc:`../code/plugin_convert`
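The two conversions this guide describes can be sketched roughly as follows. This is an illustrative numpy/scipy sketch, not the plugin's actual implementation; the function names are hypothetical:

```python
import numpy as np
from scipy import ndimage

def instance_to_semantic(labels: np.ndarray) -> np.ndarray:
    """Collapse per-object instance IDs into a 0/1 semantic mask."""
    return (labels > 0).astype(np.uint8)

def remove_small_objects(mask: np.ndarray, min_size: int) -> np.ndarray:
    """Zero out connected components smaller than min_size pixels."""
    labeled, n_objects = ndimage.label(mask > 0)
    out = mask.copy()
    for obj_id in range(1, n_objects + 1):
        region = labeled == obj_id
        if region.sum() < min_size:
            out[region] = 0
    return out
```

The same logic applies to 3D volumes unchanged, since `ndimage.label` handles arrays of any dimensionality.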

docs/res/guides/inference_module_guide.rst

Lines changed: 32 additions & 20 deletions

@@ -8,50 +8,61 @@ to automatically label cells.

 Currently, the following pre-trained models are available :

-=========== ================================================================================================
-Model       Link to original paper
-=========== ================================================================================================
-VNet        `Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation`_
-SegResNet   `3D MRI brain tumor segmentation using autoencoder regularization`_
-TRAILMAP    An emulation in Pytorch of the `TRAILMAP project on GitHub`_
-=========== ================================================================================================
+============== ================================================================================================
+Model          Link to original paper
+============== ================================================================================================
+VNet           `Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation`_
+SegResNet      `3D MRI brain tumor segmentation using autoencoder regularization`_
+TRAILMAP_test  An emulation of the `TRAILMAP project on GitHub`_ using a custom copy in Pytorch
+TRAILMAP       An emulation of the `TRAILMAP project on GitHub`_ using `3DUnet for Pytorch`_
+============== ================================================================================================

 .. _Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation: https://arxiv.org/pdf/1606.04797.pdf
 .. _3D MRI brain tumor segmentation using autoencoder regularization: https://arxiv.org/pdf/1810.11654.pdf
 .. _TRAILMAP project on GitHub: https://github.com/AlbertPun/TRAILMAP
+.. _3DUnet for Pytorch: https://github.com/wolny/pytorch-3dunet

 Interface and functionalities
 --------------------------------

 .. image:: ../images/inference_plugin_layout.png
    :align: right
+   :scale: 40%

-* Loading data : When launching the module, you will be asked to provide an image folder containing all the volumes you'd like to be labeled.
-  All images with the chosen (**.tif** or **.tiff** currently supported) extension in this folder will be labeled.
-  You can then choose an output folder, where all the results will be saved.
+* **Loading data** :

+   | When launching the module, you will be asked to provide an image folder containing all the volumes you'd like to be labeled.
+   | All images in this folder with the chosen extension (**.tif** or **.tiff** currently supported) will be labeled.
+   | You can then choose an output folder, where all the results will be saved.

-* Model choice : You can then choose one of the selected models above, which will be used for inference.
+* **Model choice** :

+   | You can then choose one of the provided models above, which will be used for inference.
+   | You may also choose to load custom weights rather than the pre-trained ones; simply ensure they are compatible (e.g. produced by the training module for the same model).

-* Anisotropy : If you want to see your results without anisotropy when you have anisotropic images, you can specify that you have anisotropic data
-  and set the resolution of your image in micron, this wil save & show the results without anisotropy.
+* **Anisotropy** :

+   | If you have anisotropic images and want to see your results without anisotropy, you can specify that you have anisotropic data
+   | and set the resolution of your image in microns; this will save and show the results without anisotropy.

-* Thresholding : You can perform thresholding to binarize your labels, all values beneath the confidence threshold will be set to 0 using this.
-  If you wish to use instance segmentation it is recommended to use threshlding.
+* **Thresholding** :

+   | You can perform thresholding to binarize your labels; all values beneath the confidence threshold will be set to 0.
+   | If you wish to use instance segmentation, it is recommended to use thresholding.

-* Instance segmentatin : You can convert the semantic segmentation into instance labels by using either the watershed or connected components method.
-  You can set the probability threshhold from which a pixel is considered as a valid instance, as well as the minimum size in pixels for objects. All smaller objects will be removed.
-  Instance labels will be saved (and shown if applicable) separately from other results.
+* **Instance segmentation** :

+   | You can convert the semantic segmentation into instance labels by using either the watershed or connected components method.
+   | You can set the probability threshold above which a pixel is considered a valid instance, as well as the minimum size in pixels for objects. All smaller objects will be removed.
+   | Instance labels will be saved (and shown if applicable) separately from other results.

-* Viewing results : You can also select whether you'd like to see the results in napari afterwards; by default the first image processed will be displayed,
-  but you can choose to display up to ten at once. You can also request to see the originals.
+* **Viewing results** :

+   | You can also select whether you'd like to see the results in napari afterwards.
+   | By default the first image processed will be displayed, but you can choose to display up to ten at once. You can also request to see the originals.

 When you are done choosing your parameters, you can press the **Start** button to begin the inference process.

@@ -65,6 +76,7 @@ On the left side, a progress bar and a log will keep you informed on the process.

 | ``{original_name}_{model}_{date & time}_pred{id}.file_ext``
 | For example, using a VNet on the third image of a folder, called "somatomotor.tif", will yield the following name :
 | *somatomotor_VNet_2022_04_06_15_49_42_pred3.tif*
+| Instance labels will have the "Instance_seg" prefix added to the name.

 .. hint::

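The thresholding and connected-components steps that the inference guide above describes can be sketched as follows. This is a minimal illustration with numpy/scipy under assumed behavior, not the plugin's actual code; the watershed variant is omitted and the function names are hypothetical:

```python
import numpy as np
from scipy import ndimage

def threshold_probabilities(probs: np.ndarray, confidence: float) -> np.ndarray:
    """Binarize a probability map: values below the confidence threshold become 0."""
    return (probs >= confidence).astype(np.uint8)

def connected_component_instances(binary: np.ndarray, min_size: int) -> np.ndarray:
    """Assign a unique ID to each connected component, dropping objects
    smaller than min_size pixels (as the instance segmentation option does)."""
    labeled, n_objects = ndimage.label(binary)
    for obj_id in range(1, n_objects + 1):
        if (labeled == obj_id).sum() < min_size:
            labeled[labeled == obj_id] = 0
    return labeled
```

In the plugin these steps run on 3D prediction volumes; the sketch works identically on 2D or 3D arrays.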
docs/res/guides/metrics_module_guide.rst

Lines changed: 12 additions & 1 deletion

@@ -25,4 +25,15 @@ Pairs with a low score will be displayed on the viewer for checking, ground truth

 .. note::
     Due to changes in orientation of images after running inference, the utility will rotate and flip images to find the best Dice coefficient
     to compensate. If you have small images with a very large number of labels, this can lead to an inexact metric being computed.
-    Images with a low score might be in the wrong orientation as well when displayed for comparison.
+    Images with a low score might also be in the wrong orientation when displayed for comparison.
+
+
+Source code
+-------------------------------------------------
+
+* :doc:`../code/plugin_base`
+* :doc:`../code/plugin_metrics`
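For reference, the Dice coefficient this utility reports can be sketched as the textbook definition below. This is a plain numpy illustration, not necessarily the module's exact implementation (which, as noted above, also searches over rotations and flips):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2*|A ∩ B| / (|A| + |B|) on binary masks; 1.0 means perfect overlap."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    # eps guards against division by zero when both masks are empty
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)
```

A score near 1 indicates the prediction closely matches the ground truth; pairs with low scores are the ones flagged for review.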

docs/res/guides/review_module_guide.rst

Lines changed: 11 additions & 9 deletions

@@ -11,21 +11,23 @@ and correct them if needed. It then saves the status of each file in a csv, for

 Launching the review process
 ---------------------------------

-First, you will be asked to load your images and labels; you can use the checkbox above the Open buttons to
-choose whether you want to load a single 3D **.tif** image or a folder of 2D images as a 3D stack.
-Folders can be stacks of either **.png** or **.tif** files, ideally numbered with the index of the slice at the end.
+* Data paths :
+    First, you will be asked to load your images and labels; you can use the checkbox above the Open buttons to
+    choose whether you want to load a single 3D **.tif** image or a folder of 2D images as a 3D stack.
+    Folders can be stacks of either **.png** or **.tif** files, ideally numbered with the index of the slice at the end.

 .. note::
     Only single .tif files or folders of several .png or .tif files are supported.

-You can then provide a model name, which will be used in the csv file recording the status of each slice.
+* Model name :
+    You can then provide a model name, which will be used to name the csv file recording the status of each slice.

-If a corresponding csv file exists already, it will be used. If not, a new one will be created.
+    If a corresponding csv file already exists, it will be used. If not, a new one will be created.
+    If you choose to create a new dataset, a new csv will be created no matter what,
+    with a trailing number if several copies of it already exist.

-If you choose to create a new dataset, a new csv will be created no matter what,
-with a trailing number if several copies of it already exists.
-
-Once you are ready, you can press **Run** to start the review process.
+* Start :
+    Once you are ready, you can press **Start reviewing** to start the review process.

 .. note::
     You can find the csv file containing the annotation status **in the same folder as the labels**

docs/res/guides/training_module_guide.rst

Lines changed: 29 additions & 10 deletions

@@ -6,37 +6,56 @@ Training module guide

 This module allows you to train pre-defined Pytorch models for cell segmentation.
 Pre-defined models are stored in napari-cellseg-3d/models.

+Currently, the following pre-defined models are available :
+
+============== ================================================================================================
+Model          Link to original paper
+============== ================================================================================================
+VNet           `Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation`_
+SegResNet      `3D MRI brain tumor segmentation using autoencoder regularization`_
+TRAILMAP_test  An emulation of the `TRAILMAP project on GitHub`_ using a custom copy in Pytorch
+TRAILMAP       An emulation of the `TRAILMAP project on GitHub`_ using `3DUnet for Pytorch`_
+============== ================================================================================================
+
+.. _Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation: https://arxiv.org/pdf/1606.04797.pdf
+.. _3D MRI brain tumor segmentation using autoencoder regularization: https://arxiv.org/pdf/1810.11654.pdf
+.. _TRAILMAP project on GitHub: https://github.com/AlbertPun/TRAILMAP
+.. _3DUnet for Pytorch: https://github.com/wolny/pytorch-3dunet
+
 .. important::
-    The machine learning models used by this program require all images of a dataset to all be of the same size.
+    The machine learning models used by this program require all images of a dataset to be of the same size.
     Please ensure that all the images you are loading are of the **same size**, or use **"extract patches" (in the augmentation tab)** with an appropriately small size
-    to ensure all images being used are of a proper size.
+    to ensure all images being used by the model are of a workable size.

 The training module is comprised of several tabs.


-1) The first one, **Data**, will let you choose :
+1) The first one, **Data**, will let you set :

-    * The images folder
-    * The labels folder
+    * The path to the images folder
+    * The path to the labels folder
+    * The path to the results folder

 2) The second tab, **Augmentation**, lets you define dataset and augmentation parameters such as :

    * Whether to use images "as is" (**requires all images to be of the same size and cubic**) or extract patches.

    * If you're extracting patches :
-       * The size of patches to be extracted (ideally, please use a value **close to a pwoer of two**, such as 120 or 60.
+       * The size of patches to be extracted (ideally, please use a value **close to a power of two**, such as 120 or 60).
        * The number of samples to extract from each of your images to ensure correct size and perform data augmentation. A larger number will likely mean better performance, but longer training and larger memory usage.
    * Whether to perform data augmentation or not (elastic deformations, intensity shifts, random flipping, etc.). A rule of thumb for augmentation is :
-       * If you're using the patch extraction method, enable it if you have more than 10 samples per image with at least 5 images
+       * If you're using the patch extraction method, enable it if you are using more than 10 samples per image with at least 5 images
       * If you have a large dataset and are not using patch extraction, enable it.


 3) The third contains training-related parameters :
-    * The model to use for training
+
+    * The model to use for training (see table above)
     * The loss function used for training (see table below)
     * The batch size (larger means quicker training and possibly better performance, but increased memory usage)
     * The number of epochs (a possibility is to start with 60 epochs, and decrease or increase depending on performance.)
     * The epoch interval for validation (for example, if set to two, the module will use the validation dataset to evaluate the model with the Dice metric every two epochs.)
+
 If the Dice metric is better on that validation interval, the model weights will be saved in the results folder.

 The available loss functions are :

@@ -68,9 +87,9 @@ perform data augmentation if you chose to, select a CUDA device if one is present.

 **The training will stop after the next validation interval is performed, to save the latest model should it be better.**

 .. note::
-    You can save the log to record the losses and validation metrics numerical value at each step. This log is autosaved as well when training completes.
+    You can save the log to record the numerical values of the losses and validation metrics at each step. This log is also autosaved when training completes.

-    After two validations steps have been performed, the training loss values and validation metrics will be automatically plotted
+    After two validation steps have been performed (depending on the interval you set), the training loss values and validation metrics will be automatically plotted
     and shown in napari every time a validation step completes.
     This plot is automatically saved each time validation is performed, for now. The final version is stored separately in the results folder.
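The validation schedule described above (evaluate every N epochs, keep the weights only when the Dice metric improves) can be sketched as follows. This is a hypothetical pure-Python helper to illustrate the checkpointing rule, not the module's actual training loop:

```python
def checkpoint_epochs(total_epochs: int, val_interval: int, dice_by_epoch: dict) -> list:
    """Return the epochs at which model weights would be saved: validation runs
    every `val_interval` epochs, and weights are kept only when Dice improves."""
    best_dice = float("-inf")
    saved_at = []
    for epoch in range(1, total_epochs + 1):
        if epoch % val_interval != 0:
            continue  # no validation this epoch
        dice = dice_by_epoch[epoch]  # hypothetical: Dice on the validation set
        if dice > best_dice:
            best_dice = dice
            saved_at.append(epoch)  # the real module writes weights to the results folder here
    return saved_at
```

For example, with an interval of 2 and Dice scores of 0.5, 0.7, 0.6, 0.8 at epochs 2, 4, 6, 8, weights would be saved at epochs 2, 4, and 8 (epoch 6 regresses, so it is skipped).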

(2 binary files changed: 0 Bytes; −99.1 KB)
