Commit 0db5ea3

Author: shubham soni
Commit message: document changes
Parent: 270997c

7 files changed: 28 additions, 28 deletions

articles/machine-learning/how-to-auto-train-image-models.md

Lines changed: 14 additions & 14 deletions
@@ -10,7 +10,7 @@ ms.subservice: automl
 ms.custom: event-tier1-build-2022, ignite-2022
 ms.topic: how-to
 ms.date: 07/13/2022
-#Customer intent: I'm a data scientist with ML knowledge in the computer vision space, looking to build ML models using image data in Azure Machine Learning with full control of the model algorithm, hyperparameters, and training and deployment environments.
+#Customer intent: I'm a data scientist with ML knowledge in the computer vision space, looking to build ML models using image data in Azure Machine Learning with full control of the model architecture, hyperparameters, and training and deployment environments.
 ---

 # Set up AutoML to train computer vision models
@@ -285,18 +285,18 @@ Automatic sweeps can yield competitive results for many datasets. Additionally,

 An AutoML training job uses a primary metric for model optimization and hyperparameter tuning. The primary metric depends on the task type as shown below; other primary metric values are currently not supported.

-* `accuracy` for IMAGE_CLASSIFICATION
-* `iou` for IMAGE_CLASSIFICATION_MULTILABEL
-* `mean_average_precision` for IMAGE_OBJECT_DETECTION
-* `mean_average_precision` for IMAGE_INSTANCE_SEGMENTATION
+* Accuracy for image classification
+* Intersection over union for image classification multilabel
+* Mean average precision for image object detection
+* Mean average precision for image instance segmentation

 ### Job limits

 You can control the resources spent on your AutoML Image training job by specifying the `timeout_minutes`, `max_trials` and the `max_concurrent_trials` for the job in limit settings as described in the below example.

 Parameter | Detail
 -----|----
-`max_trials` | Parameter for maximum number of configurations to sweep. Must be an integer between 1 and 1000. When exploring just the default hyperparameters for a given model algorithm, set this parameter to 1. The default value is 1.
+`max_trials` | Parameter for maximum number of configurations to sweep. Must be an integer between 1 and 1000. When exploring just the default hyperparameters for a given model architecture, set this parameter to 1. The default value is 1.
 `max_concurrent_trials`| Maximum number of runs that can run concurrently. If specified, must be an integer between 1 and 100. The default value is 1. <br><br> **NOTE:** <li> The number of concurrent runs is gated on the resources available in the specified compute target. Ensure that the compute target has the available resources for the desired concurrency. <li> `max_concurrent_trials` is capped at `max_trials` internally. For example, if user sets `max_concurrent_trials=4`, `max_trials=2`, values would be internally updated as `max_concurrent_trials=2`, `max_trials=2`.
 `timeout_minutes`| The amount of time in minutes before the experiment terminates. If none specified, default experiment timeout_minutes is seven days (maximum 60 days)

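For reference, a minimal `limits` block wiring up the three parameters described in the table above might look like the following sketch (the values shown are illustrative, not defaults):

```yaml
limits:
  timeout_minutes: 60        # stop the experiment after one hour
  max_trials: 10             # sweep at most 10 hyperparameter configurations
  max_concurrent_trials: 2   # run at most 2 trials at a time; capped at max_trials
```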
@@ -357,26 +357,26 @@ A number of runs between 10 and 20 will likely work well on many datasets. The [

 ### Individual runs

-In individual runs, you directly control the model algorithm and hyperparameters. The model algorithm is passed via the `model_name` parameter.
+In individual runs, you directly control the model architecture and hyperparameters. The model architecture is passed via the `model_name` parameter.

-#### Supported model algorithms
+#### Supported model architectures

 The following table summarizes the supported models for each computer vision task.

-Task | Model algorithms | String literal syntax<br> ***`default_model`\**** denoted with \*
+Task | model architectures | String literal syntax<br> ***`default_model`\**** denoted with \*
 ---|----------|----------
 Image classification<br> (multi-class and multi-label)| **MobileNet**: Light-weighted models for mobile applications <br> **ResNet**: Residual networks<br> **ResNeSt**: Split attention networks<br> **SE-ResNeXt50**: Squeeze-and-Excitation networks<br> **ViT**: Vision transformer networks| `mobilenetv2` <br>`resnet18` <br>`resnet34` <br> `resnet50` <br> `resnet101` <br> `resnet152` <br> `resnest50` <br> `resnest101` <br> `seresnext` <br> `vits16r224` (small) <br> ***`vitb16r224`\**** (base) <br>`vitl16r224` (large)|
 Object detection | **YOLOv5**: One stage object detection model <br> **Faster RCNN ResNet FPN**: Two stage object detection models <br> **RetinaNet ResNet FPN**: address class imbalance with Focal Loss <br> <br>*Note: Refer to [`model_size` hyperparameter](reference-automl-images-hyperparameters.md#model-specific-hyperparameters) for YOLOv5 model sizes.*| ***`yolov5`\**** <br> `fasterrcnn_resnet18_fpn` <br> `fasterrcnn_resnet34_fpn` <br> `fasterrcnn_resnet50_fpn` <br> `fasterrcnn_resnet101_fpn` <br> `fasterrcnn_resnet152_fpn` <br> `retinanet_resnet50_fpn`
 Instance segmentation | **MaskRCNN ResNet FPN**| `maskrcnn_resnet18_fpn` <br> `maskrcnn_resnet34_fpn` <br> ***`maskrcnn_resnet50_fpn`\**** <br> `maskrcnn_resnet101_fpn` <br> `maskrcnn_resnet152_fpn`


-In addition to controlling the model algorithm, you can also tune hyperparameters used for model training. While many of the hyperparameters exposed are model-agnostic, there are instances where hyperparameters are task-specific or model-specific. [Learn more about the available hyperparameters for these instances](reference-automl-images-hyperparameters.md).
+In addition to controlling the model architecture, you can also tune hyperparameters used for model training. While many of the hyperparameters exposed are model-agnostic, there are instances where hyperparameters are task-specific or model-specific. [Learn more about the available hyperparameters for these instances](reference-automl-images-hyperparameters.md).

 # [Azure CLI](#tab/cli)

 [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]

-If you wish to use the default hyperparameter values for a given algorithm (say yolov5), you can specify it using the model_name key in the training_parameters section. For example,
+If you wish to use the default hyperparameter values for a given architecture (say yolov5), you can specify it using the model_name key in the training_parameters section. For example,

 ```yaml
 training_parameters:
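The hunk above cuts off inside the YAML example; from the surrounding prose, the block presumably continues with the `model_name` key, along the lines of:

```yaml
training_parameters:
  model_name: yolov5
```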
@@ -386,7 +386,7 @@ training_parameters:

 [!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]

-If you wish to use the default hyperparameter values for a given algorithm (say yolov5), you can specify it using the model_name parameter in the set_training_parameters method of the task specific `automl` job. For example,
+If you wish to use the default hyperparameter values for a given architecture (say yolov5), you can specify it using the model_name parameter in the set_training_parameters method of the task specific `automl` job. For example,

 ```python
 image_object_detection_job.set_training_parameters(model_name="yolov5")
@@ -439,9 +439,9 @@ search_space:

 #### Define the parameter search space

-You can define the model algorithms and hyperparameters to sweep in the parameter space. You can either specify a single model algorithm or multiple ones.
+You can define the model architectures and hyperparameters to sweep in the parameter space. You can either specify a single model architecture or multiple ones.

-* See [Individual runs](#individual-runs) for the list of supported model algorithms for each task type.
+* See [Individual runs](#individual-runs) for the list of supported model architectures for each task type.
 * See [Hyperparameters for computer vision tasks](reference-automl-images-hyperparameters.md) hyperparameters for each computer vision task type.
 * See [details on supported distributions for discrete and continuous hyperparameters](how-to-tune-hyperparameters.md#define-the-search-space).

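To make the search-space shape concrete, here is a sketch of a two-bucket `search_space` (the architecture names come from the table above; the hyperparameter names and ranges are illustrative assumptions):

```yaml
search_space:
  - model_name:
      type: choice
      values: [yolov5]
    learning_rate:          # assumed model-agnostic hyperparameter, swept continuously
      type: uniform
      min_value: 0.0001
      max_value: 0.01
  - model_name:
      type: choice
      values: [fasterrcnn_resnet50_fpn]
    optimizer:              # assumed model-agnostic hyperparameter, swept discretely
      type: choice
      values: [sgd, adam, adamw]
```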
articles/machine-learning/how-to-configure-auto-train.md

Lines changed: 3 additions & 3 deletions
@@ -207,9 +207,9 @@ Classification | Regression | Time Series Forecasting

 With additional algorithms below.

-* [Image Classification Multi-class Algorithms](how-to-auto-train-image-models.md#supported-model-algorithms)
-* [Image Classification Multi-label Algorithms](how-to-auto-train-image-models.md#supported-model-algorithms)
-* [Image Object Detection Algorithms](how-to-auto-train-image-models.md#supported-model-algorithms)
+* [Image Classification Multi-class Algorithms](how-to-auto-train-image-models.md#supported-model-architectures)
+* [Image Classification Multi-label Algorithms](how-to-auto-train-image-models.md#supported-model-architectures)
+* [Image Object Detection Algorithms](how-to-auto-train-image-models.md#supported-model-architectures)
 * [NLP Text Classification Multi-label Algorithms](how-to-auto-train-nlp-models.md#language-settings)
 * [NLP Text Named Entity Recognition (NER) Algorithms](how-to-auto-train-nlp-models.md#language-settings)

articles/machine-learning/how-to-inference-onnx-automl-image-models.md

Lines changed: 4 additions & 4 deletions
@@ -145,7 +145,7 @@ env = Environment(
 )
 ```

-Use the following model specific arguments to submit the script. For more details on arguments, refer to [model specific hyperparameters](how-to-auto-train-image-models.md#configure-experiments) and for supported object detection model names refer to the [supported model algorithm section](how-to-auto-train-image-models.md#supported-model-algorithms).
+Use the following model specific arguments to submit the script. For more details on arguments, refer to [model specific hyperparameters](how-to-auto-train-image-models.md#configure-experiments) and for supported object detection model names refer to the [supported model architecture section](how-to-auto-train-image-models.md#supported-model-architectures).

 To get the argument values needed to create the batch scoring model, refer to the scoring scripts generated under the outputs folder of the AutoML training runs. Use the hyperparameter values available in the model settings variable inside the scoring file for the best child run.

@@ -778,7 +778,7 @@ assert batch_size == img_data.shape[0]

 # [Object detection with Faster R-CNN or RetinaNet](#tab/object-detect-cnn)

-For object detection with the Faster R-CNN algorithm, follow the same preprocessing steps as image classification, except for image cropping. You can resize the image with height `600` and width `800`. You can get the expected input height and width with the following code.
+For object detection with the Faster R-CNN architecture, follow the same preprocessing steps as image classification, except for image cropping. You can resize the image with height `600` and width `800`. You can get the expected input height and width with the following code.

 ```python
 batch, channel, height_onnx, width_onnx = session.get_inputs()[0].shape
@@ -841,7 +841,7 @@ assert batch_size == img_data.shape[0]

 # [Object detection with YOLO](#tab/object-detect-yolo)

-For object detection with the YOLO algorithm, follow the same preprocessing steps as image classification, except for image cropping. You can resize the image with height `600` and width `800`, and get the expected input height and width with the following code.
+For object detection with the YOLO architecture, follow the same preprocessing steps as image classification, except for image cropping. You can resize the image with height `600` and width `800`, and get the expected input height and width with the following code.

 ```python
 batch, channel, height_onnx, width_onnx = session.get_inputs()[0].shape
@@ -1144,7 +1144,7 @@ for image_idx, class_idx in zip(image_wise_preds[0], image_wise_preds[1]):
 print('image: {}, class_index: {}, class_name: {}'.format(image_files[image_idx], class_idx, classes[class_idx]))
 ```

-For multi-class and multi-label classification, you can follow the same steps mentioned earlier for all the supported algorithms in AutoML.
+For multi-class and multi-label classification, you can follow the same steps mentioned earlier for all the supported architectures in AutoML.


 # [Object detection with Faster R-CNN or RetinaNet](#tab/object-detect-cnn)

articles/machine-learning/reference-automl-images-cli-classification.md

Lines changed: 1 addition & 1 deletion
@@ -44,7 +44,7 @@ The source JSON schema can be found at https://azuremlsdk2.blob.core.windows.net
 | `validation_data` | object | The validation data to be used within the job. It should contain both training features and label column (optionally a sample weights column). If `validation_data` is specified, then `training_data` and `target_column_name` parameters must be specified. For more information on keys and their descriptions, see [Training or validation data](#training-or-validation-data) section. For an example, see [Consume data](./how-to-auto-train-image-models.md?tabs=cli#consume-data) section| | |
 | `validation_data_size` | float | What fraction of the data to hold out for validation when user validation data isn't specified. | A value in range (0.0, 1.0) | |
 | `limits` | object | Dictionary of limit configurations of the job. The key is name for the limit within the context of the job and the value is limit value. For more information, see [Configure your experiment settings](./how-to-auto-train-image-models.md?tabs=cli#job-limits) section. | | |
-| `training_parameters` | object | Dictionary containing training parameters for the job. Provide an object that has keys as listed in following sections. <br> - [Model agnostic hyperparameters](./reference-automl-images-hyperparameters.md#model-agnostic-hyperparameters) <br> - [Image classification (multi-class and multi-label) specific hyperparameters](./reference-automl-images-hyperparameters.md#image-classification-multi-class-and-multi-label-specific-hyperparameters). <br> <br> For an example, see [Supported model algorithms](./how-to-auto-train-image-models.md?tabs=cli#supported-model-algorithms) section. | | |
+| `training_parameters` | object | Dictionary containing training parameters for the job. Provide an object that has keys as listed in following sections. <br> - [Model agnostic hyperparameters](./reference-automl-images-hyperparameters.md#model-agnostic-hyperparameters) <br> - [Image classification (multi-class and multi-label) specific hyperparameters](./reference-automl-images-hyperparameters.md#image-classification-multi-class-and-multi-label-specific-hyperparameters). <br> <br> For an example, see [Supported model architectures](./how-to-auto-train-image-models.md?tabs=cli#supported-model-architectures) section. | | |
 | `sweep` | object | Dictionary containing sweep parameters for the job. It has two keys - `sampling_algorithm` (**required**) and `early_termination`. For more information and an example, see [Sampling methods for the sweep](./how-to-auto-train-image-models.md?tabs=cli#sampling-methods-for-the-sweep), [Early termination policies](./how-to-auto-train-image-models.md?tabs=cli#early-termination-policies) sections. | | |
 | `search_space` | object | Dictionary of the hyperparameter search space. The key is the name of the hyperparameter and the value is the parameter expression. The user can find the possible hyperparameters from parameters specified for `training_parameters` key. For an example, see [Sweeping hyperparameters for your model](./how-to-auto-train-image-models.md?tabs=cli#manually-sweeping-model-hyperparameters) section. | | |
 | `search_space.<hyperparameter>` | object | There are two types of hyperparameters: <br> - **Discrete Hyperparameters**: Discrete hyperparameters are specified as a [`choice`](./reference-yaml-job-sweep.md#choice) among discrete values. `choice` can be one or more comma-separated values, a `range` object, or any arbitrary `list` object. Advanced discrete hyperparameters can also be specified using a distribution - [`randint`](./reference-yaml-job-sweep.md#randint), [`qlognormal`, `qnormal`](./reference-yaml-job-sweep.md#qlognormal-qnormal), [`qloguniform`, `quniform`](./reference-yaml-job-sweep.md#qloguniform-quniform). For more information, see this [section](./how-to-tune-hyperparameters.md#discrete-hyperparameters). <br> - **Continuous hyperparameters**: Continuous hyperparameters are specified as a distribution over a continuous range of values. Currently supported distributions are - [`lognormal`, `normal`](./reference-yaml-job-sweep.md#lognormal-normal), [`loguniform`](./reference-yaml-job-sweep.md#loguniform), [`uniform`](./reference-yaml-job-sweep.md#uniform). For more information, see this [section](./how-to-tune-hyperparameters.md#continuous-hyperparameters). <br> <br> See [Parameter expressions](./reference-yaml-job-sweep.md#parameter-expressions) for the set of possible expressions to use. | | |
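As a concrete illustration of the discrete and continuous expression kinds described in the last row, a hypothetical `search_space` entry might look like this sketch (hyperparameter names and ranges are illustrative assumptions):

```yaml
search_space:
  - model_name:
      type: choice              # discrete: choose among listed values
      values: [vitb16r224, seresnext]
    learning_rate:
      type: uniform             # continuous: uniform over a range
      min_value: 0.001
      max_value: 0.01
    number_of_epochs:
      type: choice              # discrete again; a distribution like randint would also work
      values: [15, 30]
```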

articles/machine-learning/reference-automl-images-cli-object-detection.md

Lines changed: 1 addition & 1 deletion
@@ -32,7 +32,7 @@ For information on all the keys in Yaml syntax, see [Yaml syntax](./reference-au
 | --- | ---- | ----------- | -------------- | ------------- |
 | `task` | const | **Required.** The type of AutoML task. | `image_object_detection` | `image_object_detection` |
 | `primary_metric` | string | The metric that AutoML will optimize for model selection. |`mean_average_precision` | `mean_average_precision` |
-| `training_parameters` | object | Dictionary containing training parameters for the job. Provide an object that has keys as listed in following sections. <br> - [Model Specific Hyperparameters](./reference-automl-images-hyperparameters.md#model-specific-hyperparameters) for yolov5 (if you're using yolov5 for object detection) <br> - [Model agnostic hyperparameters](./reference-automl-images-hyperparameters.md#model-agnostic-hyperparameters) <br> - [Object detection and instance segmentation task specific hyperparameters](./reference-automl-images-hyperparameters.md#object-detection-and-instance-segmentation-task-specific-hyperparameters). <br> <br> For an example, see [Supported model algorithms](./how-to-auto-train-image-models.md?tabs=cli#supported-model-algorithms) section.| | |
+| `training_parameters` | object | Dictionary containing training parameters for the job. Provide an object that has keys as listed in following sections. <br> - [Model Specific Hyperparameters](./reference-automl-images-hyperparameters.md#model-specific-hyperparameters) for yolov5 (if you're using yolov5 for object detection) <br> - [Model agnostic hyperparameters](./reference-automl-images-hyperparameters.md#model-agnostic-hyperparameters) <br> - [Object detection and instance segmentation task specific hyperparameters](./reference-automl-images-hyperparameters.md#object-detection-and-instance-segmentation-task-specific-hyperparameters). <br> <br> For an example, see [Supported model architectures](./how-to-auto-train-image-models.md?tabs=cli#supported-model-architectures) section.| | |

 ## Remarks

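Putting the keys from this table together, a minimal job spec might begin like the following sketch (the `model_size` and `number_of_epochs` values are illustrative; `model_size` applies to yolov5 as noted in the hyperparameter reference linked above):

```yaml
task: image_object_detection
primary_metric: mean_average_precision
training_parameters:
  model_name: yolov5
  model_size: medium       # assumed yolov5-specific hyperparameter value
  number_of_epochs: 30     # assumed model-agnostic hyperparameter value
```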
articles/machine-learning/reference-automl-images-hyperparameters.md

Lines changed: 3 additions & 3 deletions
@@ -22,11 +22,11 @@ ms.date: 01/18/2022

 Learn which hyperparameters are available specifically for computer vision tasks in automated ML experiments.

-With support for computer vision tasks, you can control the model algorithm and sweep hyperparameters. These model algorithms and hyperparameters are passed in as the parameter space for the sweep. While many of the hyperparameters exposed are model-agnostic, there are instances where hyperparameters are model-specific or task-specific.
+With support for computer vision tasks, you can control the model architecture and sweep hyperparameters. These model architectures and hyperparameters are passed in as the parameter space for the sweep. While many of the hyperparameters exposed are model-agnostic, there are instances where hyperparameters are model-specific or task-specific.

 ## Model-specific hyperparameters

-This table summarizes hyperparameters specific to the `yolov5` algorithm.
+This table summarizes hyperparameters specific to the `yolov5` architecture.

 | Parameter name | Description | Default |
 | ------------- |-------------|----|
@@ -98,7 +98,7 @@ The following table summarizes hyperparmeters for image classification (multi-cl
 The following hyperparameters are for object detection and instance segmentation tasks.

 > [!WARNING]
-> These parameters are not supported with the `yolov5` algorithm. See the [model specific hyperparameters](#model-specific-hyperparameters) section for `yolov5` supported hyperparmeters.
+> These parameters are not supported with the `yolov5` architecture. See the [model specific hyperparameters](#model-specific-hyperparameters) section for `yolov5` supported hyperparmeters.

 | Parameter name | Description | Default |
 | ------------- |-------------|-----|
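The practical upshot of the warning above is that a sweep mixing yolov5 with other detectors needs separate search-space buckets; a sketch (hyperparameter names follow this reference; the values are illustrative assumptions):

```yaml
search_space:
  # yolov5 bucket: only yolov5-specific and model-agnostic hyperparameters
  - model_name:
      type: choice
      values: [yolov5]
    model_size:
      type: choice
      values: [small, medium]
  # non-yolo bucket: detection task-specific hyperparameters are allowed here
  - model_name:
      type: choice
      values: [fasterrcnn_resnet50_fpn]
    min_size:                 # assumed task-specific hyperparameter name
      type: choice
      values: [600, 800]
```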
