* Loading data : When launching the module, you will be asked to provide an image folder containing all the volumes you'd like to be labeled.
  All images in this folder with the chosen extension (**.tif** or **.tiff** are currently supported) will be labeled (see the first sketch after this list).
  You can then choose an output folder, where all the results will be saved.

* Model choice : You can then choose one of the models selected above, which will be used for inference.

* Anisotropy : If you have anisotropic images and want to see your results without anisotropy, you can specify that you have anisotropic data
  and set the resolution of your images in microns; this will save & show the results without anisotropy (see the second sketch after this list).

* Thresholding : You can perform thresholding to binarize your labels; all values beneath the confidence threshold will be set to 0 (see the second sketch after this list).
  If you wish to use instance segmentation, it is recommended to use thresholding.

* Viewing results : You can also select whether you'd like to see the results in napari afterwards; by default the first image processed will be displayed,
  but you can choose to display up to ten at once. You can also request to see the originals.
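
As an illustration, here is a minimal sketch of how the volumes to label might be gathered, assuming a folder of **.tif**/**.tiff** files (the folder path below is hypothetical, and this is not the plugin's exact code) :

.. code-block:: python

    from pathlib import Path

    image_folder = Path("path/to/volumes")  # hypothetical input folder
    to_label = sorted(
        p for p in image_folder.iterdir()
        if p.suffix.lower() in (".tif", ".tiff")
    )
    print(f"{len(to_label)} volumes will be labeled")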
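
And a hedged sketch of the anisotropy correction and thresholding steps, under the assumption that anisotropy is handled by rescaling with the voxel resolution (all values below are illustrative, not the plugin's exact code) :

.. code-block:: python

    import numpy as np
    from scipy.ndimage import zoom

    prediction = np.random.rand(10, 100, 100)  # stand-in for an anisotropic volume

    # Anisotropy : rescale to isotropic spacing using the per-axis resolution in microns
    resolution_zyx = (5.0, 1.0, 1.0)           # illustrative voxel size along z, y, x
    finest = min(resolution_zyx)
    isotropic = zoom(prediction, [r / finest for r in resolution_zyx], order=1)

    # Thresholding : every value beneath the confidence threshold is set to 0
    threshold = 0.5                            # illustrative confidence threshold
    binarized = np.where(isotropic >= threshold, isotropic, 0)
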
When you are done choosing your parameters, you can press the **Start** button to begin the inference process.
Once it has finished, results will be saved then displayed in napari; each output will be paired with its original.
On the left side, a progress bar and a log will keep you informed on the process.

.. note::
    | The files will be saved using the following format :
    | For example, using a VNet on the third image of a folder, called "somatomotor.tif", will yield the following name :
    | *somatomotor_VNet_2022_04_06_15_49_42_pred3.tif*

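A hedged sketch of this naming scheme (the exact implementation may differ; the values below are taken from the example above) :

.. code-block:: python

    from datetime import datetime

    original_name = "somatomotor"  # original file name, without extension
    model_name = "VNet"
    image_number = 3               # position of the image in the folder
    timestamp = datetime.now().strftime("%Y_%m_%d_%H_%M_%S")

    result_name = f"{original_name}_{model_name}_{timestamp}_pred{image_number}.tif"
    # e.g. somatomotor_VNet_2022_04_06_15_49_42_pred3.tif
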
.. hint::
    | **Results** will be displayed using the **twilight shifted** colormap if raw, or **turbo** if thresholding has been applied, whereas the **original** image will be shown in the **inferno** colormap.
    | Feel free to change the **colormap** or **contrast** when viewing results to ensure you can properly see the labels.
    | You'll most likely want to use **3D view** and **grid mode** in napari when checking results more broadly.

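For reference, a minimal sketch of displaying a result with these colormaps in napari (the arrays are placeholders, and the underscored colormap names are assumed napari identifiers) :

.. code-block:: python

    import napari
    import numpy as np

    viewer = napari.Viewer()
    original = np.random.rand(64, 64, 64)  # placeholder volumes
    result = np.random.rand(64, 64, 64)
    viewer.add_image(original, name="original", colormap="inferno")
    viewer.add_image(result, name="prediction", colormap="twilight_shifted")  # "turbo" if thresholded
    napari.run()
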
.. image:: images/inference_results_example.png

.. note::
    You can save the log after the worker is finished to easily remember which parameters you ran inference with.

Training module guide
=====================

This module allows you to train pre-defined PyTorch models for cell segmentation.
Pre-defined models are stored in napari-cellseg-annotator/models.

.. important::
    The machine learning models used by this program require all images of a dataset to be of the same size.
    Please ensure that all the images you are loading are of the **same size**, or use the **"extract patches"** option (in the augmentation tab) with an appropriately small size
    to ensure all images being used are of a proper size.

The training module is composed of several tabs.

1) The first one, **Data**, will let you choose :

* The images folder
* The labels folder

2) The second tab, **Augmentation**, lets you define dataset and augmentation parameters such as :

* Whether to use images "as is" (**requires all images to be of the same size and cubic**) or extract patches.

* If you're extracting patches :

    * The size of patches to be extracted (ideally, please use a value **close to a power of two**, such as 120 or 60).
    * The number of samples to extract from each of your images to ensure correct size and perform data augmentation. A larger number will likely mean better performance, but longer training and larger memory usage.

* Whether to perform data augmentation or not (elastic deformations, intensity shifts, random flipping, etc.). A rule of thumb for augmentation (see the sketch after this list) :

    * If you're using the patch extraction method, enable it if you have more than 10 samples per image, with at least 5 images.
    * If you have a large dataset and are not using patch extraction, enable it.
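
As an illustration, here is a hedged sketch of what patch extraction and augmentation can look like with MONAI transforms (parameter values are illustrative, not the module's exact settings) :

.. code-block:: python

    from monai.transforms import (
        Compose,
        Rand3DElasticd,
        RandFlipd,
        RandShiftIntensityd,
        RandSpatialCropSamplesd,
    )

    patch_size = (120, 120, 120)  # ideally close to a power of two

    transforms = Compose([
        # Patch extraction : several random samples per volume
        RandSpatialCropSamplesd(
            keys=["image", "label"], roi_size=patch_size,
            num_samples=10, random_size=False,
        ),
        # Augmentation : elastic deformations, intensity shifts, random flipping
        Rand3DElasticd(
            keys=["image", "label"], sigma_range=(3, 5),
            magnitude_range=(50, 100), prob=0.3,
        ),
        RandShiftIntensityd(keys=["image"], offsets=0.1, prob=0.5),
        RandFlipd(keys=["image", "label"], prob=0.5, spatial_axis=0),
    ])
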
3) The third tab contains training-related parameters :

* The model to use for training
* The loss function used for training (see table below)
* The batch size (larger means quicker training and possibly better performance, but increased memory usage)
* The number of epochs (a possibility is to start with 60 epochs, and decrease or increase depending on performance)
* The epoch interval for validation (for example, if set to two, the module will use the validation dataset to evaluate the model with the dice metric every two epochs).
  If the dice metric is better on that validation interval, the model weights will be saved in the results folder (see the sketch below).

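To make the validation-interval behaviour concrete, here is a hedged sketch (``model``, ``train_one_epoch`` and ``validate`` are hypothetical placeholders, not the module's API) :

.. code-block:: python

    import torch
    from monai.metrics import DiceMetric

    validation_interval = 2
    best_dice = -1.0
    dice_metric = DiceMetric(include_background=False, reduction="mean")

    for epoch in range(60):
        train_one_epoch(model)                   # hypothetical helper
        if (epoch + 1) % validation_interval == 0:
            dice = validate(model, dice_metric)  # hypothetical helper
            if dice > best_dice:
                # better dice on this validation interval : save the weights
                best_dice = dice
                torch.save(model.state_dict(), "results/best_metric_model.pth")
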
The available loss functions are :

Tversky loss : `Tversky Loss from MONAI`_ with ``sigmoid=true``

.. _Tversky Loss from MONAI: https://docs.monai.io/en/stable/losses.html#tverskyloss
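
For example, the Tversky loss above corresponds to the following MONAI call (tensor shapes are illustrative) :

.. code-block:: python

    import torch
    from monai.losses import TverskyLoss

    loss_function = TverskyLoss(sigmoid=True)  # sigmoid=true, as listed above
    prediction = torch.randn(1, 1, 64, 64, 64)                # raw logits
    target = torch.randint(0, 2, (1, 1, 64, 64, 64)).float()  # binary labels
    loss = loss_function(prediction, target)
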
Once you are ready, press the Start button to begin training. The module will automatically load your dataset,
perform data augmentation if you chose to, select a CUDA device if one is present, and train the model (see the sketch below).

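The device selection mentioned above typically boils down to something like this sketch :

.. code-block:: python

    import torch

    # Use a CUDA device if one is present, otherwise fall back to the CPU
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
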
.. note::
    You can stop the training at any time by clicking on the start button again.

    **The training will stop after the next validation interval is performed, to save the latest model should it be better.**

.. note::
    You can save the log to record the numerical values of the losses and validation metrics at each step. This log is also autosaved when training completes.

After two validation steps have been performed, the training loss values and validation metrics will be automatically plotted
and shown in napari every time a validation step completes (see the sketch below).
For now, this plot is automatically saved each time validation is performed. The final version is stored separately in the results folder.
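
As an illustration of such a plot, here is a minimal sketch (all numbers are placeholders; the module generates its own figure) :

.. code-block:: python

    import matplotlib.pyplot as plt

    epochs = [1, 2, 3, 4, 5, 6]                     # placeholder values
    train_loss = [0.9, 0.7, 0.55, 0.45, 0.4, 0.37]  # placeholder values
    val_epochs = [2, 4, 6]
    val_dice = [0.50, 0.65, 0.72]                   # placeholder values

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
    ax1.plot(epochs, train_loss)
    ax1.set_title("Training loss")
    ax2.plot(val_epochs, val_dice)
    ax2.set_title("Validation metric (dice)")
    fig.savefig("training_plot.png")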