|
71 | 71 | "- CascadeRCNN\n",
|
72 | 72 | "- CascadeRPN\n",
|
73 | 73 | "- DCN\n",
|
| 74 | + "- Detectors\n", |
| 75 | + "- DoubleHeads\n", |
| 76 | + "- DynamicRCNN\n", |
| 77 | + "- EmpiricalAttention\n", |
| 78 | + "- FCOS\n", |
| 79 | + "- FoveaBox\n", |
| 80 | + "- FSAF\n", |
| 81 | + "- GHM\n", |
| 82 | + "- LibraRCNN\n", |
| 83 | + "- PaFPN\n", |
| 84 | + "- PISA\n", |
| 85 | + "- RegNet\n", |
| 86 | + "- RepPoints\n", |
| 87 | + "- Res2Net\n", |
| 88 | + "- SABL\n", |
| 89 | + "- VFNet\n", |
74 | 90 | "\n",
|
75 | 91 | "**Pixel Classification**\n",
|
76 | 92 | "\n",
|
|
81 | 97 | "- APCNet\n",
|
82 | 98 | "- CCNet\n",
|
83 | 99 | "- CGNet\n",
|
84 |
| - "- HRNet\n" |
| 100 | + "- HRNet\n", |
| 101 | + "- DeepLabV3Plus\n", |
| 102 | + "- DMNet\n", |
| 103 | + "- DNLNet\n", |
| 104 | + "- EMANet\n", |
| 105 | + "- FastSCNN\n", |
| 106 | + "- FCN\n", |
| 107 | + "- GCNet\n", |
| 108 | + "- MobileNetV2\n", |
| 109 | + "- NonLocalNet\n", |
| 110 | + "- OCRNet\n", |
| 111 | + "- PSANet\n", |
| 112 | + "- SemFPN\n", |
| 113 | + "- UperNet\n" |
85 | 114 | ]
|
86 | 115 | },
|
87 | 116 | {
|
|
128 | 157 | "cell_type": "markdown",
|
129 | 158 | "metadata": {},
|
130 | 159 | "source": [
|
131 |
| - "Prepare data for `AutoDL` class using `prepare_data()` in `arcgis.learn`" |
| 160 | + "Prepare the data for the `AutoDL` class using `prepare_data()` in `arcgis.learn`. The recommended value for the `batch_size` parameter is `None`, as the `AutoDL` class automatically evaluates the batch size based on the GPU capacity."
132 | 161 | ]
|
133 | 162 | },
|
134 | 163 | {
|
|
137 | 166 | "metadata": {},
|
138 | 167 | "outputs": [],
|
139 | 168 | "source": [
|
140 |
| - "data = prepare_data(\"path_to_data_folder\", batch_size=2)" |
| 169 | + "data = prepare_data(\"path_to_data_folder\", batch_size=None)" |
141 | 170 | ]
|
142 | 171 | },
|
143 | 172 | {
|
|
169 | 198 | "    The list of models that will be used in the training process. If the user does not provide the parameter value, the `AutoDL` class selects all of the supported networks; however, the user can select one or more networks by passing the network names as strings in a list.\n",
|
170 | 199 | "\n",
|
171 | 200 | " - _Supported Object Detection models_\n",
|
172 |
| - " - SingleShotDetector, RetinaNet, FasterRCNN, YOLOv3, ATSS, CARAFE, CascadeRCNN, CascadeRPN, DCN\n", |
| 201 | + " - SingleShotDetector, RetinaNet, FasterRCNN, YOLOv3, ATSS, CARAFE, CascadeRCNN, CascadeRPN, DCN, Detectors, DoubleHeads, DynamicRCNN, EmpiricalAttention, FCOS, FoveaBox, FSAF, GHM, LibraRCNN, PaFPN, PISA, RegNet, RepPoints, Res2Net, SABL, VFNet\n", |
| 202 | + " \n", |
173 | 203 | "    - _Supported Pixel Classification models_\n",
|
174 |
| - " - DeepLab, UnetClassifier, PSPNetClassifier, ANN, APCNet, CCNet, CGNet, HRNet\n", |
| 204 | + " - DeepLab, UnetClassifier, PSPNetClassifier, ANN, APCNet, CCNet, CGNet, HRNet, DeepLabV3Plus, DMNet, DNLNet, EMANet, FastSCNN, FCN, GCNet, MobileNetV2, NonLocalNet, OCRNet, PSANet, SemFPN, UperNet\n", |
175 | 205 | " \n",
|
176 | 206 | "\n",
|
177 | 207 | "- `verbose` (Optional Parameter): Optional Boolean.\n",
|
|
191 | 221 | "metadata": {},
|
192 | 222 | "source": [
|
193 | 223 | "- **Basic**\n",
|
194 |
| - " - In this mode we iterate through all of the supported networks once with the default backbone, train it with the passed data, and calculate the network performance. At the end of each iteration, the function will save the model to the disk. The maximum number of epochs to train each network is 20, however, if the remaining time left to process the network is less than than the expected time (minimum time required to train the network), the program will automatically calculate the maximum number of epochs to train the network.\n", |
| 224 | + "    - In this mode, the function iterates through all of the supported networks once with the default backbone, trains each with the passed data, and calculates the network's performance. At the end of each iteration, the function saves the model to disk. Based on the allotted time, the program automatically calculates the maximum number of epochs to train each network. However, training will stop early if the model stops improving for 5 epochs; a minimum decrease of 0.001 in the validation loss is required to count as an improvement.\n", |
195 | 225 | " \n",
|
196 | 226 | " \n",
|
197 | 227 | "- **Advanced**\n",
|
198 |
| - " - To be used when the user wants to tune the hyper parameters of two best performing networks from the basic mode. This mode will divide the total time into two halves. In the first half, it works like basic mode, where it will iterate through all of the supported networks once. In the second half, it checks for the two best performing networks. The program then trains the selected networks with different supported backbones. At the end of each iteration, the function will save the model to the disk. The maximum number of epochs to train each network is 20, however, if the remaining time left to process the network is less than the expected time (minimum time required to train the network), the program will automatically calculate the number of epochs to train the network.\n", |
| 228 | + "    - To be used when the user wants to tune the hyper-parameters of the two best-performing networks from the basic mode. This mode divides the total time into two halves. In the first half, it works like basic mode, iterating through all of the supported networks once. In the second half, it selects the two best-performing networks and trains them with the different supported backbones. At the end of each iteration, the function saves the model to disk. Based on the allotted time, the program automatically calculates the maximum number of epochs to train each network. However, training will stop early if the model stops improving for 5 epochs; a minimum decrease of 0.001 in the validation loss is required to count as an improvement.\n", |
| 229 | + "    - In this mode, `optuna` is used to tune the hyper-parameters of the network.\n", |
199 | 230 | " "
|
200 | 231 | ]
|
201 | 232 | },
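The early-stopping rule described above (training halts after 5 epochs without improvement, where an improvement must reduce the validation loss by at least 0.001) can be sketched in plain Python. This is an illustrative sketch only; the function name and its parameters are hypothetical and not part of the `arcgis.learn` API:

```python
# Illustrative sketch of the early-stopping rule described above;
# not the actual arcgis.learn implementation.
def epochs_trained(val_losses, patience=5, min_delta=0.001):
    """Return how many epochs run before early stopping triggers."""
    best = float("inf")
    stale = 0  # epochs since the last real improvement
    for epoch, loss in enumerate(val_losses, start=1):
        if best - loss >= min_delta:  # improved by at least min_delta
            best = loss
            stale = 0
        else:
            stale += 1
        if stale >= patience:  # no improvement for `patience` epochs
            return epoch
    return len(val_losses)

# Losses plateau after epoch 3, so training stops at epoch 8 (3 + 5 stale epochs).
print(epochs_trained([0.9, 0.7, 0.5, 0.4995, 0.4994, 0.4993,
                      0.4992, 0.4991, 0.4990, 0.4989]))  # → 8
```

Note that drops smaller than `min_delta` (0.4995 → 0.4994 above) do not reset the patience counter, so a slowly plateauing model is still stopped.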
|
|
212 | 243 | "cell_type": "markdown",
|
213 | 244 | "metadata": {},
|
214 | 245 | "source": [
|
215 |
| - "<p>When the <code>AutoDL</code> class is initialized, it calculates the number of images that can be processed in the given time and the time required to process the all of the data. The output of the cell above can then be used to analyze and update the <code>total_time_limit</code> and <code>networks</code> parameters while initializing the class. </p><p><strong>Here is an example of the output</strong> </p><ul><li>Given time to process the dataset is: 5.0 hours</li><li>Number of images that can be processed in the given time: 290</li><li>Time required to process the entire dataset of 3000 images is 52 hours</li></ul><p>This explains how many images can be processed to train all of the selected networks in the selected mode within the given time, as well as provides an estimate of the time that <code>AutoDL</code> will take to train all of the selected networks with the entire dataset.</p>" |
| 246 | + "<p>When the <code>AutoDL</code> class is initialized, it calculates the number of images that can be processed in the given time and the time required to process all of the data. The output of the cell above can then be used to analyze and update the <code>total_time_limit</code> and <code>networks</code> parameters while initializing the class. </p><p><strong>Here is an example of the output</strong> </p><ul><li>Given time to process the dataset is: 5.0 hours</li><li>Number of images that can be processed in the given time: 290</li><li>Time required to process the entire dataset of 3000 images is 52 hours</li></ul><p>This explains how many images can be processed to train all of the selected networks in the selected mode within the given time, as well as provides an estimate of the time that <code>AutoDL</code> will take to train all of the selected networks with the entire dataset.</p>" |
216 | 247 | ]
|
217 | 248 | },
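The full-dataset estimate in the example output follows from simple proportionality: if 290 images fit in the given 5 hours, then 3000 images need roughly 3000 / 290 × 5 ≈ 52 hours. A quick sketch of that arithmetic (the function name is illustrative, not part of the `AutoDL` API):

```python
# Illustrative sketch of the time estimate shown in the example output;
# not the actual arcgis.learn implementation.
def estimate_total_hours(total_images, images_in_budget, budget_hours):
    """Extrapolate the time needed for the full dataset by proportionality."""
    return total_images / images_in_budget * budget_hours

# 290 images fit in 5 hours, so 3000 images need about 52 hours.
print(round(estimate_total_hours(3000, 290, 5.0)))  # → 52
```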
|
218 | 249 | {
|
|
319 | 350 | "dl.score()"
|
320 | 351 | ]
|
321 | 352 | },
|
| 353 | + { |
| 354 | + "cell_type": "markdown", |
| 355 | + "metadata": {}, |
| 356 | + "source": [ |
| 357 | + "### Report \n", |
| 358 | + "<a id='report'></a>" |
| 359 | + ] |
| 360 | + }, |
| 361 | + { |
| 362 | + "cell_type": "code", |
| 363 | + "execution_count": null, |
| 364 | + "metadata": {}, |
| 365 | + "outputs": [], |
| 366 | + "source": [ |
| 367 | + "dl.report()" |
| 368 | + ] |
| 369 | + }, |
| 370 | + { |
| 371 | + "cell_type": "markdown", |
| 372 | + "metadata": {}, |
| 373 | + "source": [ |
| 374 | + "This method returns a detailed HTML report of the networks trained by `AutoDL`. In the `basic` mode, it shows a leaderboard of all the networks based on their performance during the training phase, along with important parameter details and charts for the best evaluated model. Additionally, in the `advanced` mode, it shows details of all the `optuna`-based trials, with the hyper-tuned parameter details and a feature-importance chart for each of the models evaluated during the `advanced` mode." |
| 375 | + ] |
| 376 | + }, |
322 | 377 | {
|
323 | 378 | "cell_type": "markdown",
|
324 | 379 | "metadata": {},
|
|
428 | 483 | "metadata": {},
|
429 | 484 | "source": [
|
430 | 485 | "- The load method is used to load a saved model from the disk using the `AutoDL` class. It accepts the following parameters:\n",
|
431 |
| - " - path: Path to the ESRI Model Definition (.emd) file\n", |
| 486 | + "      - path: Path to the ESRI Model Definition (.emd) or Deep Learning Package (.dlpk) file\n", |
432 | 487 | " - data: Returned data object from `prepare_data` function"
|
433 | 488 | ]
|
434 | 489 | },
|
|
552 | 607 | "name": "python",
|
553 | 608 | "nbconvert_exporter": "python",
|
554 | 609 | "pygments_lexer": "ipython3",
|
555 |
| - "version": "3.9.15" |
| 610 | + "version": "3.9.16" |
556 | 611 | }
|
557 | 612 | },
|
558 | 613 | "nbformat": 4,
|
|