Commit 5d96e47

committed
PM feedback
1 parent 03515d8 commit 5d96e47

1 file changed: 15 additions, 12 deletions

articles/machine-learning/how-to-auto-train-image-models.md

Lines changed: 15 additions & 12 deletions
@@ -186,7 +186,7 @@ automl_image_config = AutoMLImageConfig(compute_target=compute_target)

 With support for computer vision tasks, you can control the model algorithm and sweep hyperparameters. These model algorithms and hyperparameters are passed in as the parameter space for the sweep.

-The model algorithm is required and is passed in via the `model_name` parameter. You can either specify a single `model_name` or choose between multiple. In addition to controlling the model algorithm, you can also tune hyperparameters used for model training. While many of the hyperparameters exposed are model-agnostic, there are instances where hyperparameters are task-specific or model-specific. [Learn more about the available hyperparameters for these instances](reference-automl-images-hyperparameters.md).
+The model algorithm is required and is passed in via the `model_name` parameter. You can either specify a single `model_name` or choose between multiple.

 ### Supported model algorithms

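The required `model_name` selection described above can be sketched roughly as follows. This is a minimal sketch against the Azure ML Python SDK v1: `GridParameterSampling` and `choice` come from `azureml.train.hyperdrive`, and `compute_target`, `training_dataset`, and `validation_dataset` are assumed to have been defined earlier in the article.

```python
from azureml.train.automl import AutoMLImageConfig
from azureml.train.hyperdrive import GridParameterSampling, choice

# Pass the required model algorithm through the parameter space via `model_name`.
parameter_space = {"model_name": choice("yolov5")}

automl_image_config = AutoMLImageConfig(
    task="image-object-detection",      # task assumed from this tutorial
    compute_target=compute_target,      # defined earlier in the article
    training_data=training_dataset,     # assumed to exist
    validation_data=validation_dataset, # assumed to exist
    hyperparameter_sampling=GridParameterSampling(parameter_space),
    iterations=1,
)
```

Passing more than one value to `choice` would make the sweep try each listed model algorithm.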
@@ -198,6 +198,9 @@ Image classification<br> (multi-class and multi-label)| **MobileNet**: Light-wei
 Object detection | **YOLOv5**: One stage object detection model <br> **Faster RCNN ResNet FPN**: Two stage object detection models <br> **RetinaNet ResNet FPN**: address class imbalance with Focal Loss <br> <br>*Note: Refer to [`model_size` hyperparameter](reference-automl-images-hyperparameters.md#model-specific-hyperparameters) for YOLOv5 model sizes.*| ***`yolov5`\**** <br> `fasterrcnn_resnet18_fpn` <br> `fasterrcnn_resnet34_fpn` <br> `fasterrcnn_resnet50_fpn` <br> `fasterrcnn_resnet101_fpn` <br> `fasterrcnn_resnet152_fpn` <br> `retinanet_resnet50_fpn`
 Instance segmentation | **MaskRCNN ResNet FPN**| `maskrcnn_resnet18_fpn` <br> `maskrcnn_resnet34_fpn` <br> ***`maskrcnn_resnet50_fpn`\**** <br> `maskrcnn_resnet101_fpn` <br> `maskrcnn_resnet152_fpn`

+
+In addition to controlling the model algorithm, you can also tune hyperparameters used for model training. While many of the hyperparameters exposed are model-agnostic, there are instances where hyperparameters are task-specific or model-specific. [Learn more about the available hyperparameters for these instances](reference-automl-images-hyperparameters.md).
+
 ### Data augmentation

 In general, deep learning model performance can often improve with more data. Data augmentation is a practical technique to amplify the data size and variability of a dataset, which helps to prevent overfitting and improves the model's generalization ability on unseen data. Automated ML applies different data augmentation techniques based on the computer vision task, before feeding input images to the model. Currently, there is no exposed hyperparameter to control data augmentations.
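A combined sweep over the model algorithm and a model-agnostic hyperparameter can be sketched as follows. This is a sketch against the Azure ML Python SDK v1; the sampling, early-termination, and concurrency settings shown here are illustrative values, and `compute_target`, `training_dataset`, and `validation_dataset` are assumed from earlier in the article.

```python
from azureml.train.automl import AutoMLImageConfig
from azureml.train.hyperdrive import (
    BanditPolicy,
    RandomParameterSampling,
    choice,
    uniform,
)

# Sweep both the model algorithm and a model-agnostic hyperparameter.
parameter_space = {
    "model_name": choice("fasterrcnn_resnet50_fpn", "yolov5"),
    "learning_rate": uniform(0.0001, 0.001),
}

automl_image_config = AutoMLImageConfig(
    task="image-object-detection",
    compute_target=compute_target,       # from earlier in the article
    training_data=training_dataset,      # assumed to exist
    validation_data=validation_dataset,  # assumed to exist
    hyperparameter_sampling=RandomParameterSampling(parameter_space),
    iterations=10,                       # illustrative sweep size
    max_concurrent_iterations=2,
    # Stop poorly performing configurations early (illustrative policy).
    early_termination_policy=BanditPolicy(
        evaluation_interval=2, slack_factor=0.2, delay_evaluation=6
    ),
)
```

Model-specific hyperparameters (such as `model_size` for YOLOv5) would go in the same parameter space; see the hyperparameter reference linked above.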
@@ -300,23 +303,13 @@ arguments = ["--early_stopping", 1, "--evaluation_frequency", 2]
 automl_image_config = AutoMLImageConfig(arguments=arguments)
 ```

-## Submit the run
-
-When you have your `AutoMLImageConfig` object ready, you can submit the experiment.
-
-```python
-ws = Workspace.from_config()
-experiment = Experiment(ws, "Tutorial-automl-image-object-detection")
-automl_image_run = experiment.submit(automl_image_config)
-```
-
 ## Incremental training (optional)

 Once the training run is done, you have the option to further train the model by loading the trained model checkpoint. You can either use the same dataset or a different one for incremental training.

 There are two available options for incremental training. You can:

-* Pass the run ID that you want to load the checkpoint from
+* Pass the run ID that you want to load the checkpoint from.
 * Pass the checkpoints through a FileDataset.

 ### Pass the checkpoint via run ID
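The body of this section is elided in the diff. For orientation, handing a checkpoint over by run ID looks roughly like the following sketch, which assumes the SDK v1 `checkpoint_run_id` parameter of `AutoMLImageConfig`; the run ID string is a placeholder, and the datasets and compute target are assumed from earlier in the article.

```python
from azureml.train.automl import AutoMLImageConfig

# Placeholder: the ID of a previously completed AutoML image training run.
checkpoint_run_id = "<run_id_of_previous_training_run>"

automl_image_config = AutoMLImageConfig(
    task="image-object-detection",
    compute_target=compute_target,        # from earlier in the article
    training_data=training_dataset,       # same dataset or a different one
    validation_data=validation_dataset,
    checkpoint_run_id=checkpoint_run_id,  # resume from this run's checkpoint
    iterations=1,
)
```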
@@ -373,6 +366,16 @@ automl_image_run.wait_for_completion(wait_post_processing=True)

 ```

+## Submit the run
+
+When you have your `AutoMLImageConfig` object ready, you can submit the experiment.
+
+```python
+ws = Workspace.from_config()
+experiment = Experiment(ws, "Tutorial-automl-image-object-detection")
+automl_image_run = experiment.submit(automl_image_config)
+```
+
 ## Outputs and evaluation metrics

 The automated ML training runs generate output model files, evaluation metrics, logs, and deployment artifacts such as the scoring file and the environment file. These can be viewed from the outputs, logs, and metrics tabs of the child runs.
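Inspecting those outputs after submission might look like the following sketch, assuming the SDK v1 `AutoMLRun` helpers (`get_best_child`, `get_metrics`, `get_file_names`) and a completed run named `automl_image_run` as in the submission snippet above. It requires a live workspace, so it is illustrative only.

```python
# Wait for the sweep to finish, then inspect the best-performing child run.
automl_image_run.wait_for_completion(wait_post_processing=True)
best_child_run = automl_image_run.get_best_child()

# Evaluation metrics logged by the child run.
metrics = best_child_run.get_metrics()

# Model files, logs, scoring file, and environment file under outputs/.
print(best_child_run.get_file_names())
```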

0 commit comments
