You can control the resources spent on your AutoML Image training job by specifying `timeout_minutes`, `max_trials`, and `max_concurrent_trials` for the job in the limit settings, as described in the following example.

Parameter | Detail
-----|----
`max_trials` | Maximum number of configurations to sweep. Must be an integer between 1 and 1000. When exploring just the default hyperparameters for a given model algorithm, set this parameter to 1. The default value is 1.
`max_concurrent_trials` | Maximum number of runs that can execute concurrently. If not specified, all runs launch in parallel. If specified, must be an integer between 1 and 100. <br><br>**NOTE:** The number of concurrent runs is gated on the resources available in the specified compute target. Ensure that the compute target has the available resources for the desired concurrency. The default value is 1.
`timeout_minutes` | The amount of time in minutes before the experiment terminates. If not specified, the default experiment `timeout_minutes` is seven days (maximum 60 days).

You can optionally specify the maximum time budget for your AutoML Image training job using the `timeout_minutes` parameter in the `limits` settings: the amount of time in minutes before the experiment terminates. If not specified, the default experiment timeout is seven days (maximum 60 days). For example:
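
The following is a minimal sketch using the Azure Machine Learning Python SDK v2 (`azure-ai-ml`), not this article's own sample: the compute name, experiment name, data paths, and limit values are illustrative placeholders.

```python
from azure.ai.ml import Input, automl
from azure.ai.ml.constants import AssetTypes

# Illustrative MLTable inputs; substitute your own registered training and validation data.
training_data = Input(type=AssetTypes.MLTABLE, path="./data/training-mltable-folder")
validation_data = Input(type=AssetTypes.MLTABLE, path="./data/validation-mltable-folder")

# Create an AutoML image object detection job (compute and experiment names are placeholders).
image_object_detection_job = automl.image_object_detection(
    compute="gpu-cluster",
    experiment_name="automl-image-limits-example",
    training_data=training_data,
    validation_data=validation_data,
    target_column_name="label",
)

# Limit settings: cap the number of trials, their concurrency, and the overall runtime.
image_object_detection_job.set_limits(
    max_trials=10,            # sweep at most 10 configurations
    max_concurrent_trials=2,  # run at most 2 trials at a time
    timeout_minutes=60,       # end the experiment after 60 minutes
)
```

You can then submit the configured job with `ml_client.jobs.create_or_update(image_object_detection_job)`, using an `MLClient` as shown elsewhere in the article.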
You can automatically end poorly performing runs with an early termination policy.
Learn more about [how to configure the early termination policy for your hyperparameter sweep](how-to-tune-hyperparameters.md#early-termination).
### Resources for the sweep
You can control the resources spent on your hyperparameter sweep by specifying `max_trials` and `max_concurrent_trials` for the sweep.
> [!NOTE]
> For a complete sweep configuration sample, please refer to this [tutorial](tutorial-auto-train-image-models.md#hyperparameter-sweeping-for-image-tasks).

You can configure all the sweep-related parameters as shown in the example below.
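
The following is a minimal sketch with the Python SDK v2, assuming the `image_object_detection_job` object from the earlier limits example; the policy values are illustrative, and `max_trials`/`max_concurrent_trials` are set through the limit settings described in the Job Limits section.

```python
from azure.ai.ml.sweep import BanditPolicy

# Trial count and concurrency belong to the job limits (see the table above).
image_object_detection_job.set_limits(max_trials=10, max_concurrent_trials=2)

# Sweep settings: how the search space is sampled and when weak trials are stopped early.
image_object_detection_job.set_sweep(
    sampling_algorithm="random",
    early_termination=BanditPolicy(
        evaluation_interval=2,  # apply the policy every 2 evaluations
        slack_factor=0.2,       # stop trials more than 20% behind the current best
        delay_evaluation=6,     # let every trial report 6 evaluations first
    ),
)
```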
**articles/machine-learning/tutorial-auto-train-image-models.md**
In your AutoML job, you can specify the model algorithms by using the `model_name` parameter.
In this example, we will train an object detection model with `yolov5` and `fasterrcnn_resnet50_fpn`, both of which are pretrained on COCO, a large-scale object detection, segmentation, and captioning dataset that contains thousands of labeled images with over 80 label categories.
### Job Limits
You can control the resources spent on your AutoML Image training job by specifying `timeout_minutes`, `max_trials`, and `max_concurrent_trials` for the job in the limit settings. Please refer to the [detailed description of the Job Limits parameters](./how-to-auto-train-image-models.md#job-limits).
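
A minimal sketch of those limit settings with the Python SDK v2, assuming the `image_object_detection_job` object created earlier in the tutorial; the timeout value is illustrative, while the trial counts match the sweep described below.

```python
# Job limits: 10 trials in total, two at a time, with an illustrative overall timeout.
image_object_detection_job.set_limits(
    timeout_minutes=60 * 24,  # illustrative one-day cap on the experiment
    max_trials=10,
    max_concurrent_trials=2,
)
```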
You can perform a hyperparameter sweep over a defined search space to find the optimal model.
The following code defines the search space in preparation for the hyperparameter sweep for each defined algorithm, `yolov5` and `fasterrcnn_resnet50_fpn`. In the search space, specify the range of values for `learning_rate`, `optimizer`, `lr_scheduler`, and so on, for AutoML to choose from as it attempts to generate a model with the optimal primary metric. If hyperparameter values are not specified, then default values are used for each algorithm.
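
A minimal sketch of such a search space with the Python SDK v2; the exact hyperparameter names, import paths, and ranges can vary by SDK version, so treat these values as illustrative rather than the tutorial's exact sample.

```python
from azure.ai.ml.automl import SearchSpace
from azure.ai.ml.sweep import Choice, Uniform

image_object_detection_job.extend_search_space(
    [
        # Candidate configurations for the YOLOv5 algorithm.
        SearchSpace(
            model_name=Choice(["yolov5"]),
            learning_rate=Uniform(0.0001, 0.01),
        ),
        # Candidate configurations for Faster R-CNN with a ResNet-50 FPN backbone.
        SearchSpace(
            model_name=Choice(["fasterrcnn_resnet50_fpn"]),
            learning_rate=Uniform(0.0001, 0.001),
            optimizer=Choice(["sgd", "adam", "adamw"]),
        ),
    ]
)
```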
For the tuning settings, use random sampling to pick samples from this parameter space by setting `sampling_algorithm` to `random`. The job limits configured above tell automated ML to try a total of 10 trials with these different samples, running two trials at a time on our compute target, which was set up using four nodes. The more parameters the search space has, the more trials you need to find optimal models.
The Bandit early termination policy is also used. This policy terminates poorly performing configurations, that is, those configurations that are not within 20% slack of the best-performing configuration, which significantly saves compute resources.
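
A minimal sketch of those tuning settings with the Python SDK v2, assuming the same job object as above; it mirrors the sweep settings sketched for the how-to article earlier. The 0.2 slack factor corresponds to the 20% slack described here, while the evaluation interval and delay are illustrative.

```python
from azure.ai.ml.sweep import BanditPolicy

image_object_detection_job.set_sweep(
    sampling_algorithm="random",  # random sampling over the search space defined above
    early_termination=BanditPolicy(
        evaluation_interval=2,  # illustrative policy cadence
        slack_factor=0.2,       # terminate trials not within 20% of the best metric
        delay_evaluation=6,     # illustrative grace period before the policy applies
    ),
)
```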