docs/res/guides/training_module_guide.rst
The training module is composed of several tabs.
* Whether to use pre-trained weights that are provided; if you choose to do so, the model will be initialized with the specified weights, possibly improving performance (transfer learning).
  You can also load custom weights; simply ensure they are compatible with the model.
* The proportion of the dataset to keep for training versus validation; if you have a large dataset, you can set it to a lower value to have more accurate validation steps.
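The train/validation split can be sketched as follows (``split_dataset`` is a hypothetical helper for illustration, not part of the module's API):

```python
import random

def split_dataset(image_paths, train_proportion=0.8, seed=42):
    """Split a list of image paths into training and validation sets.

    train_proportion is the fraction of the dataset kept for training;
    the remainder is used for validation.
    """
    paths = list(image_paths)
    random.Random(seed).shuffle(paths)  # deterministic shuffle for reproducibility
    cutoff = int(len(paths) * train_proportion)
    return paths[:cutoff], paths[cutoff:]

train, val = split_dataset([f"img_{i}.tif" for i in range(10)], train_proportion=0.8)
print(len(train), len(val))  # 8 2
```

With a large dataset, lowering ``train_proportion`` leaves more images for validation, making the validation metric less noisy.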
2) The second tab, **Augmentation**, lets you define dataset and augmentation parameters such as:
* Whether to use images "as is" (**requires all images to be of the same size and cubic**) or extract patches.
* The size of patches to be extracted (ideally, use a value **close to a power of two**, such as 120 or 60, to ensure correct sizing).
* The number of samples to extract from each of your images. A larger number will likely mean better performance, but longer training and higher memory usage.
* Whether to perform data augmentation or not (elastic deformations, intensity shifts, random flipping, etc.). A rule of thumb for augmentation is:
  * If you're using the patch extraction method, enable it if you are using more than 10 samples per image with at least 5 images.
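Conceptually, the simpler augmentations above amount to random flips and intensity shifts; a minimal NumPy sketch (illustrative only, the module's actual transforms such as elastic deformation are richer):

```python
import numpy as np

def augment(volume, rng):
    """Apply simple random augmentations to a 3D volume:
    a random flip along each axis and a small additive intensity shift."""
    for axis in range(volume.ndim):
        if rng.random() < 0.5:
            volume = np.flip(volume, axis=axis)
    shift = rng.uniform(-0.1, 0.1)  # random intensity shift
    return volume + shift

rng = np.random.default_rng(0)
vol = np.zeros((8, 8, 8), dtype=np.float32)
out = augment(vol, rng)
print(out.shape)  # (8, 8, 8)
```

Augmentation synthesizes new training variations from existing images, which is why it pays off most when you already have enough distinct samples for the variations to generalize.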
* The **model** to use for training (see table above)
* The **loss function** used for training (see table below)
* The **learning rate** of the optimizer. Setting it to a lower value can improve performance if you're using pre-trained weights.
* The **batch size** (larger means quicker training and possibly better performance but increased memory usage)
* The **number of epochs** (a good starting point is 60 epochs; increase or decrease depending on performance).
* The **epoch interval** for validation (for example, if set to two, the module will evaluate the model on the validation dataset with the Dice metric every two epochs).
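The Dice metric used at each validation step measures the overlap between the predicted and ground-truth masks; a minimal sketch for binary masks (``dice_score`` is a hypothetical helper, not the module's implementation):

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    """Dice coefficient between two binary masks:
    2 * |pred AND target| / (|pred| + |target|). Ranges from 0 to 1."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_score(a, b), 3))  # 0.667
```

A score of 1 means the prediction matches the ground truth exactly; monitoring it at the chosen epoch interval shows whether the model is still improving on held-out data.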