articles/machine-learning/how-to-auto-train-image-models.md (+14 −14)
@@ -10,7 +10,7 @@ ms.subservice: automl
 ms.custom: event-tier1-build-2022, ignite-2022
 ms.topic: how-to
 ms.date: 07/13/2022
-#Customer intent: I'm a data scientist with ML knowledge in the computer vision space, looking to build ML models using image data in Azure Machine Learning with full control of the model algorithm, hyperparameters, and training and deployment environments.
+#Customer intent: I'm a data scientist with ML knowledge in the computer vision space, looking to build ML models using image data in Azure Machine Learning with full control of the model architecture, hyperparameters, and training and deployment environments.
 ---

 # Set up AutoML to train computer vision models
@@ -285,18 +285,18 @@ Automatic sweeps can yield competitive results for many datasets. Additionally,
 An AutoML training job uses a primary metric for model optimization and hyperparameter tuning. The primary metric depends on the task type as shown below; other primary metric values are currently not supported.
 * Intersection over union for image classification multilabel
+* Mean average precision for image object detection
+* Mean average precision for image instance segmentation

 ### Job limits

 You can control the resources spent on your AutoML Image training job by specifying the `timeout_minutes`, `max_trials` and the `max_concurrent_trials` for the job in limit settings as described in the below example.

 Parameter | Detail
 -----|----
-`max_trials` | Parameter for the maximum number of configurations to sweep. Must be an integer between 1 and 1000. When exploring just the default hyperparameters for a given model algorithm, set this parameter to 1. The default value is 1.
+`max_trials` | Parameter for the maximum number of configurations to sweep. Must be an integer between 1 and 1000. When exploring just the default hyperparameters for a given model architecture, set this parameter to 1. The default value is 1.
 `max_concurrent_trials` | Maximum number of runs that can run concurrently. If specified, must be an integer between 1 and 100. The default value is 1. <br><br>**NOTE:**<li> The number of concurrent runs is gated on the resources available in the specified compute target. Ensure that the compute target has the available resources for the desired concurrency. <li>`max_concurrent_trials` is capped at `max_trials` internally. For example, if the user sets `max_concurrent_trials=4`, `max_trials=2`, the values would be internally updated as `max_concurrent_trials=2`, `max_trials=2`.
 `timeout_minutes` | The amount of time in minutes before the experiment terminates. If not specified, the default experiment timeout_minutes is seven days (maximum 60 days).
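The code these limit settings feed into is collapsed in this diff view. As a hedged sketch of how they're supplied through the Python SDK v2 `automl` job factory (workspace details, compute name, and data paths below are placeholder assumptions, not values from the article):

```python
from azure.ai.ml import Input, MLClient, automl
from azure.identity import DefaultAzureCredential

# Placeholder workspace details -- substitute your own.
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

# Image object detection job; the primary metric matches the list above.
image_job = automl.image_object_detection(
    compute="gpu-cluster",  # assumed compute target name
    experiment_name="automl-image-demo",
    training_data=Input(type="mltable", path="./data/training-mltable-folder"),
    validation_data=Input(type="mltable", path="./data/validation-mltable-folder"),
    target_column_name="label",
    primary_metric="mean_average_precision",
)

# The limit settings described in the table above.
image_job.set_limits(
    timeout_minutes=120,      # experiment terminates after two hours
    max_trials=10,            # at most 10 configurations swept
    max_concurrent_trials=2,  # capped at max_trials internally
)

returned_job = ml_client.jobs.create_or_update(image_job)
```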
@@ -357,26 +357,26 @@ A number of runs between 10 and 20 will likely work well on many datasets. The [
 ### Individual runs

-In individual runs, you directly control the model algorithm and hyperparameters. The model algorithm is passed via the `model_name` parameter.
+In individual runs, you directly control the model architecture and hyperparameters. The model architecture is passed via the `model_name` parameter.

-#### Supported model algorithms
+#### Supported model architectures

 The following table summarizes the supported models for each computer vision task.

-Task | Model algorithms | String literal syntax<br>***`default_model`\**** denoted with \*
+Task | Model architectures | String literal syntax<br>***`default_model`\**** denoted with \*

-In addition to controlling the model algorithm, you can also tune hyperparameters used for model training. While many of the hyperparameters exposed are model-agnostic, there are instances where hyperparameters are task-specific or model-specific. [Learn more about the available hyperparameters for these instances](reference-automl-images-hyperparameters.md).
+In addition to controlling the model architecture, you can also tune hyperparameters used for model training. While many of the hyperparameters exposed are model-agnostic, there are instances where hyperparameters are task-specific or model-specific. [Learn more about the available hyperparameters for these instances](reference-automl-images-hyperparameters.md).
-If you wish to use the default hyperparameter values for a given algorithm (say yolov5), you can specify it using the model_name key in the training_parameters section. For example,
+If you wish to use the default hyperparameter values for a given architecture (say yolov5), you can specify it using the model_name key in the training_parameters section. For example,

-If you wish to use the default hyperparameter values for a given algorithm (say yolov5), you can specify it using the model_name parameter in the set_training_parameters method of the task specific `automl` job. For example,
+If you wish to use the default hyperparameter values for a given architecture (say yolov5), you can specify it using the model_name parameter in the set_training_parameters method of the task specific `automl` job. For example,
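The examples these two sentences introduce are collapsed in this view. A minimal sketch of the SDK variant, reusing the `image_job` object from the limits sketch above:

```python
# One trial, default hyperparameters for the yolov5 architecture.
image_job.set_limits(max_trials=1)
image_job.set_training_parameters(model_name="yolov5")
```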
-You can define the model algorithms and hyperparameters to sweep in the parameter space. You can either specify a single model algorithm or multiple ones.
+You can define the model architectures and hyperparameters to sweep in the parameter space. You can either specify a single model architecture or multiple ones.

-* See [Individual runs](#individual-runs) for the list of supported model algorithms for each task type.
+* See [Individual runs](#individual-runs) for the list of supported model architectures for each task type.
 * See [Hyperparameters for computer vision tasks](reference-automl-images-hyperparameters.md) for the hyperparameters available for each computer vision task type.
 * See [details on supported distributions for discrete and continuous hyperparameters](how-to-tune-hyperparameters.md#define-the-search-space).
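The example defining such a parameter space is collapsed under this hunk. As a hedged sketch in the Python SDK v2 (the `SearchSpace` helper, `extend_search_space`, and `set_sweep` methods are assumed from `azure.ai.ml`; architecture names and ranges are illustrative, not recommendations):

```python
from azure.ai.ml.automl import SearchSpace
from azure.ai.ml.sweep import BanditPolicy, Choice, Uniform

# Two architectures, each with its own hyperparameter ranges.
image_job.extend_search_space(
    [
        SearchSpace(
            model_name=Choice(["yolov5"]),
            learning_rate=Uniform(0.0001, 0.01),
            model_size=Choice(["small", "medium"]),
        ),
        SearchSpace(
            model_name=Choice(["fasterrcnn_resnet50_fpn"]),
            learning_rate=Uniform(0.0001, 0.001),
            optimizer=Choice(["sgd", "adam", "adamw"]),
        ),
    ]
)

# Random sampling over the space, with bandit early termination.
image_job.set_sweep(
    sampling_algorithm="random",
    early_termination=BanditPolicy(
        evaluation_interval=2, slack_factor=0.2, delay_evaluation=6
    ),
)
```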
articles/machine-learning/how-to-inference-onnx-automl-image-models.md (+4 −4)
@@ -145,7 +145,7 @@ env = Environment(
 )
 ```

-Use the following model specific arguments to submit the script. For more details on arguments, refer to [model specific hyperparameters](how-to-auto-train-image-models.md#configure-experiments) and for supported object detection model names refer to the [supported model algorithm section](how-to-auto-train-image-models.md#supported-model-algorithms).
+Use the following model specific arguments to submit the script. For more details on arguments, refer to [model specific hyperparameters](how-to-auto-train-image-models.md#configure-experiments) and for supported object detection model names refer to the [supported model architecture section](how-to-auto-train-image-models.md#supported-model-architectures).

 To get the argument values needed to create the batch scoring model, refer to the scoring scripts generated under the outputs folder of the AutoML training runs. Use the hyperparameter values available in the model settings variable inside the scoring file for the best child run.
 # [Object detection with Faster R-CNN or RetinaNet](#tab/object-detect-cnn)

-For object detection with the Faster R-CNN algorithm, follow the same preprocessing steps as image classification, except for image cropping. You can resize the image with height `600` and width `800`. You can get the expected input height and width with the following code.
+For object detection with the Faster R-CNN architecture, follow the same preprocessing steps as image classification, except for image cropping. You can resize the image with height `600` and width `800`. You can get the expected input height and width with the following code.

 # [Object detection with YOLO](#tab/object-detect-yolo)

-For object detection with the YOLO algorithm, follow the same preprocessing steps as image classification, except for image cropping. You can resize the image with height `600` and width `800`, and get the expected input height and width with the following code.
+For object detection with the YOLO architecture, follow the same preprocessing steps as image classification, except for image cropping. You can resize the image with height `600` and width `800`, and get the expected input height and width with the following code.
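The code both tabs refer to is collapsed in this view. A minimal sketch of reading the expected input shape from the exported ONNX model with `onnxruntime` (the model path is a placeholder):

```python
import onnxruntime

# Load the ONNX model exported by the AutoML run (placeholder path).
session = onnxruntime.InferenceSession("./outputs/model.onnx")

# AutoML image models expose input shape as [batch, channel, height, width].
batch, channel, height_onnx, width_onnx = session.get_inputs()[0].shape
print(height_onnx, width_onnx)  # e.g., 600 800 per the defaults above
```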
articles/machine-learning/reference-automl-images-cli-classification.md (+1 −1)
@@ -44,7 +44,7 @@ The source JSON schema can be found at https://azuremlsdk2.blob.core.windows.net
 |`validation_data`| object | The validation data to be used within the job. It should contain both training features and label column (optionally a sample weights column). If `validation_data` is specified, then `training_data` and `target_column_name` parameters must be specified. For more information on keys and their descriptions, see [Training or validation data](#training-or-validation-data) section. For an example, see [Consume data](./how-to-auto-train-image-models.md?tabs=cli#consume-data) section. |||
 |`validation_data_size`| float | What fraction of the data to hold out for validation when user validation data isn't specified. | A value in range (0.0, 1.0) ||
 |`limits`| object | Dictionary of limit configurations of the job. The key is the name for the limit within the context of the job and the value is the limit value. For more information, see [Configure your experiment settings](./how-to-auto-train-image-models.md?tabs=cli#job-limits) section. |||
-|`training_parameters`| object | Dictionary containing training parameters for the job. Provide an object that has keys as listed in following sections. <br> - [Model agnostic hyperparameters](./reference-automl-images-hyperparameters.md#model-agnostic-hyperparameters) <br> - [Image classification (multi-class and multi-label) specific hyperparameters](./reference-automl-images-hyperparameters.md#image-classification-multi-class-and-multi-label-specific-hyperparameters). <br> <br> For an example, see [Supported model algorithms](./how-to-auto-train-image-models.md?tabs=cli#supported-model-algorithms) section. |||
+|`training_parameters`| object | Dictionary containing training parameters for the job. Provide an object that has keys as listed in following sections. <br> - [Model agnostic hyperparameters](./reference-automl-images-hyperparameters.md#model-agnostic-hyperparameters) <br> - [Image classification (multi-class and multi-label) specific hyperparameters](./reference-automl-images-hyperparameters.md#image-classification-multi-class-and-multi-label-specific-hyperparameters). <br> <br> For an example, see [Supported model architectures](./how-to-auto-train-image-models.md?tabs=cli#supported-model-architectures) section. |||
 |`sweep`| object | Dictionary containing sweep parameters for the job. It has two keys - `sampling_algorithm` (**required**) and `early_termination`. For more information and an example, see [Sampling methods for the sweep](./how-to-auto-train-image-models.md?tabs=cli#sampling-methods-for-the-sweep), [Early termination policies](./how-to-auto-train-image-models.md?tabs=cli#early-termination-policies) sections. |||
 |`search_space`| object | Dictionary of the hyperparameter search space. The key is the name of the hyperparameter and the value is the parameter expression. The user can find the possible hyperparameters from parameters specified for `training_parameters` key. For an example, see [Sweeping hyperparameters for your model](./how-to-auto-train-image-models.md?tabs=cli#manually-sweeping-model-hyperparameters) section. |||
 | `search_space.<hyperparameter>` | object | There are two types of hyperparameters: <br> - **Discrete Hyperparameters**: Discrete hyperparameters are specified as a [`choice`](./reference-yaml-job-sweep.md#choice) among discrete values. `choice` can be one or more comma-separated values, a `range` object, or any arbitrary `list` object. Advanced discrete hyperparameters can also be specified using a distribution - [`randint`](./reference-yaml-job-sweep.md#randint), [`qlognormal`, `qnormal`](./reference-yaml-job-sweep.md#qlognormal-qnormal), [`qloguniform`, `quniform`](./reference-yaml-job-sweep.md#qloguniform-quniform). For more information, see this [section](./how-to-tune-hyperparameters.md#discrete-hyperparameters). <br> - **Continuous hyperparameters**: Continuous hyperparameters are specified as a distribution over a continuous range of values. Currently supported distributions are - [`lognormal`, `normal`](./reference-yaml-job-sweep.md#lognormal-normal), [`loguniform`](./reference-yaml-job-sweep.md#loguniform), [`uniform`](./reference-yaml-job-sweep.md#uniform). For more information, see this [section](./how-to-tune-hyperparameters.md#continuous-hyperparameters). <br> <br> See [Parameter expressions](./reference-yaml-job-sweep.md#parameter-expressions) for the set of possible expressions to use. | | |
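As a hedged Python-side counterpart to the two expression types in the row above (class names assumed from `azure.ai.ml.sweep`; the variable names are illustrative):

```python
from azure.ai.ml.sweep import Choice, QUniform, Uniform

number_of_epochs = Choice([15, 30])        # discrete: choice among values
gradient_steps = QUniform(100, 1000, q=100)  # discrete: quantized uniform draws
learning_rate = Uniform(0.0001, 0.01)      # continuous: uniform over a range
```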
 |`task`| const |**Required.** The type of AutoML task. |`image_object_detection`|`image_object_detection`|
 |`primary_metric`| string | The metric that AutoML will optimize for model selection. |`mean_average_precision`|`mean_average_precision`|
-|`training_parameters`| object | Dictionary containing training parameters for the job. Provide an object that has keys as listed in following sections. <br> - [Model Specific Hyperparameters](./reference-automl-images-hyperparameters.md#model-specific-hyperparameters) for yolov5 (if you're using yolov5 for object detection) <br> - [Model agnostic hyperparameters](./reference-automl-images-hyperparameters.md#model-agnostic-hyperparameters) <br> - [Object detection and instance segmentation task specific hyperparameters](./reference-automl-images-hyperparameters.md#object-detection-and-instance-segmentation-task-specific-hyperparameters). <br> <br> For an example, see [Supported model algorithms](./how-to-auto-train-image-models.md?tabs=cli#supported-model-algorithms) section.|||
+|`training_parameters`| object | Dictionary containing training parameters for the job. Provide an object that has keys as listed in following sections. <br> - [Model Specific Hyperparameters](./reference-automl-images-hyperparameters.md#model-specific-hyperparameters) for yolov5 (if you're using yolov5 for object detection) <br> - [Model agnostic hyperparameters](./reference-automl-images-hyperparameters.md#model-agnostic-hyperparameters) <br> - [Object detection and instance segmentation task specific hyperparameters](./reference-automl-images-hyperparameters.md#object-detection-and-instance-segmentation-task-specific-hyperparameters). <br> <br> For an example, see [Supported model architectures](./how-to-auto-train-image-models.md?tabs=cli#supported-model-architectures) section.|||
articles/machine-learning/reference-automl-images-hyperparameters.md (+3 −3)
@@ -22,11 +22,11 @@ ms.date: 01/18/2022
 Learn which hyperparameters are available specifically for computer vision tasks in automated ML experiments.

-With support for computer vision tasks, you can control the model algorithm and sweep hyperparameters. These model algorithms and hyperparameters are passed in as the parameter space for the sweep. While many of the hyperparameters exposed are model-agnostic, there are instances where hyperparameters are model-specific or task-specific.
+With support for computer vision tasks, you can control the model architecture and sweep hyperparameters. These model architectures and hyperparameters are passed in as the parameter space for the sweep. While many of the hyperparameters exposed are model-agnostic, there are instances where hyperparameters are model-specific or task-specific.

 ## Model-specific hyperparameters

-This table summarizes hyperparameters specific to the `yolov5` algorithm.
+This table summarizes hyperparameters specific to the `yolov5` architecture.

 | Parameter name | Description | Default |
 | ------------- |-------------|----|
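As a hedged illustration of supplying such values through the SDK (reusing the `image_job` object from the earlier sketches; `model_size` as a yolov5-specific hyperparameter and `learning_rate` as a model-agnostic one are assumptions based on this reference):

```python
image_job.set_training_parameters(
    model_name="yolov5",
    model_size="medium",  # assumed yolov5-specific scale setting
    learning_rate=0.005,  # model-agnostic hyperparameter, illustrative value
)
```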
@@ -98,7 +98,7 @@ The following table summarizes hyperparameters for image classification (multi-cl
 The following hyperparameters are for object detection and instance segmentation tasks.

 > [!WARNING]
-> These parameters are not supported with the `yolov5` algorithm. See the [model specific hyperparameters](#model-specific-hyperparameters) section for `yolov5` supported hyperparameters.
+> These parameters are not supported with the `yolov5` architecture. See the [model specific hyperparameters](#model-specific-hyperparameters) section for `yolov5` supported hyperparameters.