
Commit 380918d

addressing PR comments
1 parent 8d66848 commit 380918d

2 files changed (+9 lines, −13 lines)

articles/machine-learning/how-to-auto-train-image-models.md

Lines changed: 7 additions & 3 deletions
@@ -317,7 +317,13 @@ The primary metric used for model optimization and hyperparameter tuning depends
 
 ### Job Limits
 
-You can control the resources spent on your Vision training by specifying the `timeout_minutes`, `max_trials` and the `max_concurrent_trials` for the job in limit settings as described in the below example. [detailed description on Job Limits parameters](./tutorial-auto-train-image-models.md#job-limits).
+You can control the resources spent on your AutoML Image training job by specifying the `timeout_minutes`, `max_trials`, and `max_concurrent_trials` for the job in limit settings, as described in the example below.
+
+Parameter | Detail
+-----|----
+`max_trials` | Required parameter for the maximum number of configurations to sweep. Must be an integer between 1 and 1000. When exploring just the default hyperparameters for a given model algorithm, set this parameter to 1.
+`max_concurrent_trials`| Maximum number of runs that can run concurrently. If not specified, all runs launch in parallel. If specified, must be an integer between 1 and 100. <br><br> **NOTE:** The number of concurrent runs is gated on the resources available in the specified compute target. Ensure that the compute target has the available resources for the desired concurrency.
+`timeout_minutes`| The amount of time in minutes before the experiment terminates. If not specified, the default experiment timeout is seven days (maximum of 60 days).
 
 # [CLI v2](#tab/CLI-v2)
 
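For context on how these limit settings are wired up in code, here is a minimal sketch using the Azure ML Python SDK v2 (`azure-ai-ml`); the compute target name, experiment name, data paths, and limit values are illustrative assumptions rather than part of this change:

```python
# Minimal sketch, assuming the Azure ML Python SDK v2 (azure-ai-ml) and an
# AutoML image object detection job like the one used in these articles.
from azure.ai.ml import Input, automl
from azure.ai.ml.constants import AssetTypes

# Hypothetical MLTable inputs; replace the paths with your own data assets.
training_data = Input(type=AssetTypes.MLTABLE, path="./data/training-mltable-folder")
validation_data = Input(type=AssetTypes.MLTABLE, path="./data/validation-mltable-folder")

image_object_detection_job = automl.image_object_detection(
    compute="gpu-cluster",              # hypothetical compute target name
    experiment_name="automl-image-od",  # hypothetical experiment name
    training_data=training_data,
    validation_data=validation_data,
    target_column_name="label",
    primary_metric="mean_average_precision",
)

# The three limit settings from the table above: at most 10 trials,
# 2 running concurrently, and a 60-minute overall timeout.
image_object_detection_job.set_limits(
    timeout_minutes=60,
    max_trials=10,
    max_concurrent_trials=2,
)
```

In the CLI v2 tab referenced above, the same three settings would go under the job's `limits` section of the YAML.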

@@ -375,8 +381,6 @@ You can automatically end poorly performing runs with an early termination polic
 
 Learn more about [how to configure the early termination policy for your hyperparameter sweep](how-to-tune-hyperparameters.md#early-termination).
 
-### Resources for the sweep
-
 > [!NOTE]
 > For a complete sweep configuration sample, please refer to this [tutorial](tutorial-auto-train-image-models.md#hyperparameter-sweeping-for-image-tasks).
 
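As a rough illustration of the early termination policy referenced above, a minimal SDK v2 sketch (assuming the `azure-ai-ml` package and the same `image_object_detection_job` object from the earlier sketch; the policy values are illustrative):

```python
# Minimal sketch, assuming azure-ai-ml SDK v2 and the image_object_detection_job
# object created earlier. The Bandit policy values below are illustrative.
from azure.ai.ml.sweep import BanditPolicy

image_object_detection_job.set_sweep(
    sampling_algorithm="random",
    early_termination=BanditPolicy(
        evaluation_interval=2,  # check trials every 2 evaluation intervals
        slack_factor=0.2,       # stop trials more than 20% worse than the best so far
        delay_evaluation=6,     # give every trial 6 intervals before the first check
    ),
)
```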
articles/machine-learning/tutorial-auto-train-image-models.md

Lines changed: 2 additions & 10 deletions
@@ -308,15 +308,7 @@ In this example, we will train an object detection model with `yolov5` and `fast
 
 ### Job Limits
 
-You can control the resources spent on your Vision training by specifying the `timeout_minutes`, `max_trials` and the `max_concurrent_trials` for the job in limit settings.
-
-Parameter | Detail
------|----
-`max_trials` | Required parameter for maximum number of configurations to sweep. Must be an integer between 1 and 1000. When exploring just the default hyperparameters for a given model algorithm, set this parameter to 1.
-`max_concurrent_trials`| Maximum number of runs that can run concurrently. If not specified, all runs launch in parallel. If specified, must be an integer between 1 and 100. <br><br> **NOTE:** The number of concurrent runs is gated on the resources available in the specified compute target. Ensure that the compute target has the available resources for the desired concurrency.
-`timeout_minutes`| The amount of time in minutes before the experiment terminates. If none specified, default experiment timeout_minutes is seven days (maximum 60 days)
-
-
+You can control the resources spent on your AutoML Image training job by specifying the `timeout_minutes`, `max_trials`, and `max_concurrent_trials` for the job in limit settings. See the [detailed description of the Job Limits parameters](./how-to-auto-train-image-models.md#job-limits).
 # [CLI v2](#tab/CLI-v2)
 
 [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
@@ -340,7 +332,7 @@ You can perform a hyperparameter sweep over a defined search space to find the o
 
 The following code, defines the search space in preparation for the hyperparameter sweep for each defined algorithm, `yolov5` and `fasterrcnn_resnet50_fpn`. In the search space, specify the range of values for `learning_rate`, `optimizer`, `lr_scheduler`, etc., for AutoML to choose from as it attempts to generate a model with the optimal primary metric. If hyperparameter values are not specified, then default values are used for each algorithm.
 
-For the tuning settings, use random sampling to pick samples from this parameter space by using the `random` sampling_algorithm. Doing so, tells automated ML to try a total of 10 trials with these different samples, running two trials at a time on our compute target, which was set up using four nodes. The more parameters the search space has, the more trials you need to find optimal models.
+For the tuning settings, use random sampling to pick samples from this parameter space by using the `random` sampling_algorithm. The job limits configured above tell automated ML to try a total of 10 trials with these different samples, running two trials at a time on our compute target, which was set up using four nodes. The more parameters the search space has, the more trials you need to find optimal models.
 
 The Bandit early termination policy is also used. This policy terminates poor performing configurations; that is, those configurations that are not within 20% slack of the best performing configuration, which significantly saves compute resources.
 
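To make the search-space and tuning discussion concrete, a minimal SDK v2 sketch (assuming `azure-ai-ml` and the same `image_object_detection_job` object as in the earlier sketches; the hyperparameter names and ranges below are illustrative, not the tutorial's exact values):

```python
# Minimal sketch, assuming azure-ai-ml SDK v2 and the image_object_detection_job
# object from the earlier sketches. Hyperparameter choices and ranges are illustrative.
from azure.ai.ml.automl import SearchSpace
from azure.ai.ml.sweep import Choice, Uniform

image_object_detection_job.extend_search_space(
    [
        SearchSpace(
            model_name=Choice(["yolov5"]),
            learning_rate=Uniform(0.0001, 0.01),
            model_size=Choice(["small", "medium"]),  # yolov5-specific setting
        ),
        SearchSpace(
            model_name=Choice(["fasterrcnn_resnet50_fpn"]),
            learning_rate=Uniform(0.0001, 0.001),
            optimizer=Choice(["sgd", "adam", "adamw"]),
            min_size=Choice([600, 800]),  # fasterrcnn-specific setting
        ),
    ]
)
```

Combined with the limits sketched earlier (`max_trials=10`, `max_concurrent_trials=2`) and `sampling_algorithm="random"`, this is what drives the 10 trials, two at a time, described above.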