.. _Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation: https://arxiv.org/pdf/1606.04797.pdf
.. _3D MRI brain tumor segmentation using autoencoder regularization: https://arxiv.org/pdf/1810.11654.pdf
.. _TRAILMAP project on GitHub: https://github.com/AlbertPun/TRAILMAP
.. _3DUnet for Pytorch: https://github.com/wolny/pytorch-3dunet
Interface and functionalities
--------------------------------
.. image:: ../images/inference_plugin_layout.png
    :align: right
    :scale: 40%
* **Loading data** :

  | When launching the module, you will be asked to provide an image folder containing all the volumes you'd like to be labeled.
  | All images with the chosen extension (**.tif** or **.tiff** are currently supported) in this folder will be labeled.
  | You can then choose an output folder, where all the results will be saved.
* **Model choice** :

  | You can then choose one of the provided models above, which will be used for inference.
  | You may also choose to load custom weights rather than the pre-trained ones; simply ensure they are compatible (e.g. produced by the training module for the same model).
* **Anisotropy** :

  | If you have anisotropic images and want to see your results without anisotropy, you can specify that you have anisotropic data
  | and set the resolution of your image in microns; this will save and show the results without anisotropy.
* **Thresholding** :

  | You can perform thresholding to binarize your labels; all values beneath the confidence threshold will be set to 0.
  | If you wish to use instance segmentation, it is recommended to use thresholding.

* **Instance segmentation** :

  | You can convert the semantic segmentation into instance labels by using either the watershed or the connected components method.
  | You can set the probability threshold above which a pixel is considered a valid instance, as well as the minimum size in pixels for objects. All smaller objects will be removed.
  | Instance labels will be saved (and shown, if applicable) separately from other results.

* **Viewing results** :

  | You can also select whether you'd like to see the results in napari afterwards.
  | By default the first image processed will be displayed, but you can choose to display up to ten at once. You can also request to see the originals.
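As a rough illustration of thresholding followed by instance labeling via connected components, here is a minimal NumPy/SciPy sketch. This is an illustrative approximation only, not the plugin's actual implementation; the `binarize_and_label` helper and all parameter values are hypothetical.

```python
import numpy as np
from scipy import ndimage  # assumed available; provides connected-components labeling

def binarize_and_label(probabilities, threshold, min_size):
    """Threshold a probability volume, then label connected components,
    removing every object smaller than `min_size` voxels."""
    # Thresholding: values beneath the confidence threshold become background.
    binary = probabilities >= threshold
    # Connected components: each connected foreground region gets its own id.
    instances, _ = ndimage.label(binary)
    # Minimum size filter: zero out objects below the voxel count limit.
    ids, counts = np.unique(instances, return_counts=True)
    for obj_id, count in zip(ids, counts):
        if obj_id != 0 and count < min_size:
            instances[instances == obj_id] = 0
    return instances

# Toy volume with one small (2x2x2) and one large (3x3x3) object.
volume = np.zeros((8, 8, 8), dtype=np.float32)
volume[1:3, 1:3, 1:3] = 0.9
volume[5:8, 5:8, 5:8] = 0.8

labels = binarize_and_label(volume, threshold=0.5, min_size=10)
# Only the 27-voxel object survives the 10-voxel minimum size filter.
print(len(np.unique(labels)) - 1)  # → 1
```

The watershed method differs in that it can split touching objects into separate instances, at the cost of an extra seeding step.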
When you are done choosing your parameters, you can press the **Start** button to begin the inference process.
On the left side, a progress bar and a log will keep you informed on the process.
.. important::
    The machine learning models used by this program require all images of a dataset to be of the same size.
    Please ensure that all the images you are loading are of the **same size**, or use **"extract patches"** (in the augmentation tab) with an appropriately small size
    to ensure all images being used by the model are of a workable size.
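Since the models require identically sized images, it can help to verify this before launching training. A minimal sketch, assuming the volumes have already been loaded as NumPy arrays (in practice you would read each **.tif** with a reader such as `tifffile`; the `all_same_shape` helper is hypothetical, not part of the plugin):

```python
import numpy as np

def all_same_shape(volumes):
    """Return True when every volume in the list has the same shape."""
    return len({vol.shape for vol in volumes}) <= 1

# Two matching volumes and one mismatched volume.
a = np.zeros((64, 64, 64))
b = np.zeros((64, 64, 64))
c = np.zeros((32, 64, 64))

print(all_same_shape([a, b]))     # → True
print(all_same_shape([a, b, c]))  # → False
```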
The training module is composed of several tabs.

1) The first one, **Data**, will let you set :

   * The path to the images folder
   * The path to the labels folder
   * The path to the results folder
2) The second tab, **Augmentation**, lets you define dataset and augmentation parameters such as :

   * Whether to use images "as is" (**requires all images to be of the same size and cubic**) or to extract patches.

   * If you're extracting patches :

     * The size of patches to be extracted (ideally, please use a value **close to a power of two**, such as 120 or 60).
     * The number of samples to extract from each of your images to ensure correct size and perform data augmentation. A larger number will likely mean better performance, but longer training and larger memory usage.

   * Whether to perform data augmentation or not (elastic deformations, intensity shifts, random flipping, etc.). A rule of thumb for augmentation is :

     * If you're using the patch extraction method, enable it if you are using more than 10 samples per image with at least 5 images.
     * If you have a large dataset and are not using patch extraction, enable it.
3) The third contains training-related parameters :

   * The model to use for training (see table above)
   * The loss function used for training (see table below)
   * The batch size (larger means quicker training and possibly better performance, but increased memory usage)
   * The number of epochs (a possibility is to start with 60 epochs, and decrease or increase depending on performance)
   * The epoch interval for validation (for example, if set to two, the module will use the validation dataset to evaluate the model with the Dice metric every two epochs)

   If the Dice metric is better on that validation interval, the model weights will be saved in the results folder.
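The Dice metric used for validation measures the overlap between predicted and ground-truth labels, 2|A ∩ B| / (|A| + |B|). A minimal NumPy sketch of the formula (illustrative only, not the module's actual implementation):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps avoids division by zero when both masks are empty.
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# |A ∩ B| = 1, |A| = 2, |B| = 1, so Dice = 2 * 1 / 3 ≈ 0.667.
a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [0, 0]])
print(round(dice_coefficient(a, b), 3))  # → 0.667
```

A Dice score of 1.0 means perfect overlap; disjoint masks score 0.0.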
The available loss functions are :
perform data augmentation if you chose to, select a CUDA device if one is present.
**The training will stop after the next validation interval is performed, to save the latest model should it be better.**
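The save-on-improvement behaviour described above can be sketched as follows. This is a hypothetical illustration; `run_validation_checkpointing` and its arguments are invented for this example and are not the module's API.

```python
def run_validation_checkpointing(epochs, validation_interval, dice_per_epoch):
    """Return the epochs at which weights would be saved: validation runs
    every `validation_interval` epochs, and weights are kept only when the
    Dice metric improves on the previous best."""
    best_dice = -1.0
    saved_at = []
    for epoch in range(1, epochs + 1):
        if epoch % validation_interval == 0:
            dice = dice_per_epoch[epoch - 1]
            if dice > best_dice:
                best_dice = dice
                # Here the real module writes the weights to the results folder.
                saved_at.append(epoch)
    return saved_at

# Validation every 2 epochs; weights are saved only when Dice improves.
print(run_validation_checkpointing(6, 2, [0.1, 0.5, 0.4, 0.6, 0.7, 0.55]))  # → [2, 4]
```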
.. note::
    You can save the log to record the numerical values of the losses and validation metrics at each step. This log is also autosaved when training completes.

After two validation steps have been performed (depending on the interval you set), the training loss values and validation metrics will be automatically plotted
and shown in napari every time a validation step completes.
For now, this plot is automatically saved each time validation is performed. The final version is stored separately in the results folder.