Commit 12caaff

Address review pt 2
1 parent f19b7c2 commit 12caaff

7 files changed: 26 additions & 24 deletions

articles/machine-learning/concept-automl-forecasting-at-scale.md

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@ ms.service: machine-learning
 ms.subservice: automl
 ms.topic: conceptual
 ms.custom: contperf-fy21q1, automl, FY21Q4-aml-seo-hack, sdkv2, event-tier1-build-2022
-ms.date: 05/31/2023
+ms.date: 08/01/2023
 show_latex: true
 ---

articles/machine-learning/concept-automl-forecasting-deep-learning.md

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@ ms.service: machine-learning
 ms.subservice: automl
 ms.topic: conceptual
 ms.custom: contperf-fy21q1, automl, FY21Q4-aml-seo-hack, sdkv2, event-tier1-build-2022
-ms.date: 02/24/2023
+ms.date: 08/01/2023
 show_latex: true
 ---

articles/machine-learning/concept-automl-forecasting-evaluation.md

Lines changed: 2 additions & 2 deletions
@@ -10,7 +10,7 @@ ms.service: machine-learning
 ms.subservice: automl
 ms.topic: conceptual
 ms.custom: contperf-fy21q1, automl, FY21Q4-aml-seo-hack, sdkv2, event-tier1-build-2022
-ms.date: 05/17/2023
+ms.date: 08/01/2023
 show_latex: true
 ---

@@ -31,7 +31,7 @@ The diagram shows two important inference parameters:
 * The **context length**, or the amount of history that the model requires to make a forecast,
 * The **forecast horizon**, which is how far ahead in time the forecaster is trained to predict.
 
-Forecasting models generally use some amount of historical information, the context, to make predictions ahead in time up to the forecast horizon. **When the context is part of the training data, AutoML saves what it needs to make forecasts**, so there is no need to explicitly provide it.
+Forecasting models usually use some historical information, the context, to make predictions ahead in time up to the forecast horizon. **When the context is part of the training data, AutoML saves what it needs to make forecasts**, so there is no need to explicitly provide it.
 
 There are two other inference scenarios that are more complicated:
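A minimal, framework-agnostic sketch of the context/horizon relationship described in the changed paragraph (plain NumPy, not the AutoML API; the averaging rule is only a toy forecaster):

```python
import numpy as np

def toy_forecast(series: np.ndarray, context_length: int, forecast_horizon: int) -> np.ndarray:
    """Use the last `context_length` observations (the context) to predict
    the next `forecast_horizon` values (here: just repeat the context mean)."""
    context = series[-context_length:]
    return np.full(forecast_horizon, context.mean())

history = np.array([10.0, 12.0, 11.0, 13.0, 12.0, 14.0])
print(toy_forecast(history, context_length=3, forecast_horizon=2))  # [13. 13.]
```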

articles/machine-learning/how-to-auto-train-forecast.md

Lines changed: 15 additions & 13 deletions
@@ -10,7 +10,7 @@ ms.service: machine-learning
 ms.subservice: automl
 ms.topic: how-to
 ms.custom: contperf-fy21q1, automl, FY21Q4-aml-seo-hack, sdkv2, event-tier1-build-2022, build-2023, devx-track-python
-ms.date: 06/19/2023
+ms.date: 08/01/2023
 show_latex: true
 ---

@@ -203,7 +203,7 @@ forecasting_job = automl.forecasting(
     experiment_name="sdk-v2-automl-forecasting-job",
     training_data=my_training_data_input,
     target_column_name=target_column_name,
-    primary_metric="NormalizedRootMeanSquaredError",
+    primary_metric="normalized_root_mean_squared_error",
     n_cross_validations="auto",
 )
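For context on the renamed metric value, a minimal sketch of how the surrounding job definition might look in SDK v2; the compute name, target column, and data path are illustrative assumptions, not part of this change:

```python
from azure.ai.ml import Input, automl
from azure.ai.ml.constants import AssetTypes

# Illustrative MLTable input; substitute your own training data
my_training_data_input = Input(type=AssetTypes.MLTABLE, path="./data/training-mltable-folder")

forecasting_job = automl.forecasting(
    compute="forecast-cluster",            # assumed compute cluster name
    experiment_name="sdk-v2-automl-forecasting-job",
    training_data=my_training_data_input,
    target_column_name="demand",           # assumed target column
    primary_metric="normalized_root_mean_squared_error",
    n_cross_validations="auto",
)
```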

@@ -259,7 +259,7 @@ Forecasting tasks have many settings that are specific to forecasting. The most
 
 # [Python SDK](#tab/python)
 
-Use the [set_forecast_settings()](/python/api/azure-ai-ml/azure.ai.ml.automl.forecastingjob#azure-ai-ml-automl-forecastingjob-set-forecast-settings) method of a ForecastingJob to configure these settings:
+Use the [ForecastingJob](/python/api/azure-ai-ml/azure.ai.ml.automl.forecastingjob#azure-ai-ml-automl-forecastingjob-set-forecast-settings) methods to configure these settings:
 
 ```python
 # Forecasting specific configuration
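A brief sketch of the kind of forecast settings that code block goes on to configure, continuing with the `forecasting_job` created earlier in the article; the column names, frequency, and horizon below are placeholder assumptions:

```python
# Forecasting specific configuration (placeholder values)
forecasting_job.set_forecast_settings(
    time_column_name="timestamp",            # assumed time column
    forecast_horizon=24,                     # predict 24 periods ahead
    frequency="H",                           # assumed hourly sampling
    time_series_id_column_names=["store"],   # assumed series identifier column
)
```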
@@ -350,7 +350,7 @@ There are two optional settings that control the model space where AutoML search
 ```python
 # Only search ExponentialSmoothing and ElasticNet models
 forecasting_job.set_training(
-    allowed_training_algorithms=["exponential_smoothing", "elastic_net"]
+    allowed_training_algorithms=["ExponentialSmoothing", "ElasticNet"]
 )
 ```

@@ -386,7 +386,7 @@ forecasting:
 # training settings
 # Only search ExponentialSmoothing and ElasticNet models
 training:
-  allowed_training_algorithms: ["exponential_smoothing", "elastic_net"]
+  allowed_training_algorithms: ["ExponentialSmoothing", "ElasticNet"]
   # other training settings
 ```

@@ -409,13 +409,13 @@ forecasting_job.set_training(
 # training settings
 # Search over all model classes except Prophet
 training:
-  blocked_training_algorithms: ["prophet"]
+  blocked_training_algorithms: ["Prophet"]
   # other training settings
 ```
 
 ---
 
-Now, the job searches over all model classes _except_ Prophet. For a list of forecasting model names that are accepted in `allowed_training_algorithms` and `blocked_training_algorithms`, see the [training properties](reference-automated-ml-forecasting.md#training) reference documentation.
+Now, the job searches over all model classes _except_ Prophet. For a list of forecasting model names that are accepted in `allowed_training_algorithms` and `blocked_training_algorithms`, see the [training properties](reference-automated-ml-forecasting.md#training) reference documentation. Either, but not both, of `allowed_training_algorithms` and `blocked_training_algorithms` can be applied to a training run.
 
 #### Enable deep learning
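A short Python SDK counterpart to the YAML above, illustrating the rule that a run uses either the allow list or the block list, not both (continuing with the `forecasting_job` defined earlier in the article):

```python
# Option 1: allow list - search only the named model classes
forecasting_job.set_training(
    allowed_training_algorithms=["ExponentialSmoothing", "ElasticNet"]
)

# Option 2: block list - search every model class except the named ones
# (apply only one of the two options to a given training run)
forecasting_job.set_training(
    blocked_training_algorithms=["Prophet"]
)
```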

@@ -442,7 +442,7 @@ forecasting_job.set_training(
 # training settings
 # Include TCNForecaster models in the model search
 training:
-  enable_dnn_training: True
+  enable_dnn_training: true
   # other training settings
 ```

@@ -741,7 +741,7 @@ returned_job.services["Studio"].endpoint
 
 # [Azure CLI](#tab/cli)
 
-In following CLI command, we assume the job YAML configuration is at the path, `./automl-forecasting-job.yml`:
+In the following CLI command, we assume the job YAML configuration is in the current working directory at the path, `./automl-forecasting-job.yml`. If you run the command from a different directory, you will need to change the path accordingly.
 
 ```azurecli
 run_id=$(az ml job create --file automl-forecasting-job.yml)
@@ -819,7 +819,7 @@ def forecasting_train_and_evaluate_factory(
     target_column_name,
     time_column_name,
     forecast_horizon,
-    primary_metric='NormalizedRootMeanSquaredError',
+    primary_metric='normalized_root_mean_squared_error',
     cv_folds='auto'
 ):
     # Configure the training node of the pipeline
@@ -1039,11 +1039,12 @@ The many models components in AutoML enable you to train and manage millions of
 
 ### Many models training configuration
 
-The many models training component accepts a YAML format configuration file of AutoML training settings. The component applies these settings to each AutoML instance it launches. This YAML file has the same specification as the [Forecasting Job](reference-automated-ml-forecasting.md) plus one additional parameter named `partition_column_names`.
+The many models training component accepts a YAML format configuration file of AutoML training settings. The component applies these settings to each AutoML instance it launches. This YAML file has the same specification as the [Forecasting Job](reference-automated-ml-forecasting.md) plus additional parameters `partition_column_names` and `allow_multi_partitions`.
 
 Parameter|Description
 --|--
 | **partition_column_names** | Column names in the data that, when grouped, define the data partitions. Many models launches an independent training job on each partition.
+| **allow_multi_partitions** | An optional flag that allows training one model per partition when each partition contains more than one unique time series. The default value is False.
 
 The following sample provides a configuration template:
 ```yml
@@ -1063,7 +1064,7 @@ forecasting:
   forecast_horizon: 28
 
 training:
-  blocked_training_algorithms: ["extreme_random_trees"]
+  blocked_training_algorithms: ["ExtremeRandomTrees"]
 
 limits:
   timeout_minutes: 15
@@ -1074,6 +1075,7 @@ limits:
   enable_early_termination: true
 
 partition_column_names: ["state", "store"]
+allow_multi_partitions: false
 ```
 
 In subsequent examples, we assume that the configuration is stored at the path, `./automl_settings_mm.yml`.
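To make the two many-models parameters concrete, a small pandas sketch with invented sample data: grouping by `partition_column_names` yields one training job per group, and `allow_multi_partitions` governs groups that still contain more than one unique time series:

```python
import pandas as pd

# Invented sales data: (state, store) defines a partition; the extra "sku"
# column means a partition can still hold several distinct time series.
df = pd.DataFrame({
    "state": ["CA", "CA", "CA", "WA", "WA"],
    "store": ["s1", "s1", "s2", "s3", "s3"],
    "sku":   ["a",  "b",  "a",  "a",  "b"],
    "sales": [10,   12,   7,    5,    6],
})

partition_column_names = ["state", "store"]
for keys, part in df.groupby(partition_column_names):
    n_series = part["sku"].nunique()
    # allow_multi_partitions applies to partitions like ("CA", "s1"),
    # which contains more than one unique series (2 SKUs).
    print(keys, "rows:", len(part), "unique series:", n_series)
```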
@@ -1337,7 +1339,7 @@ forecasting:
   forecast_horizon: 28
 
 training:
-  blocked_training_algorithms: ["extreme_random_trees"]
+  blocked_training_algorithms: ["ExtremeRandomTrees"]
 
 limits:
   timeout_minutes: 15

articles/machine-learning/how-to-automl-forecasting-faq.md

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@ ms.service: machine-learning
 ms.subservice: automl
 ms.topic: faq
 ms.custom: contperf-fy21q1, automl, FY21Q4-aml-seo-hack, sdkv2, event-tier1-build-2022
-ms.date: 01/27/2023
+ms.date: 08/01/2023
 ---
 
 # Frequently asked questions about forecasting in AutoML

articles/machine-learning/how-to-configure-auto-train.md

Lines changed: 5 additions & 5 deletions
@@ -8,7 +8,7 @@ ms.reviewer: ssalgado
 services: machine-learning
 ms.service: machine-learning
 ms.subservice: automl
-ms.date: 06/19/2023
+ms.date: 08/01/2023
 ms.topic: how-to
 ms.custom: devx-track-python, automl, sdkv2, event-tier1-build-2022, ignite-2022
 show_latex: true
@@ -667,7 +667,7 @@ from azure.ai.ml.constants import TabularTrainingMode
 
 # Set the training mode to distributed
 classification_job.set_training(
-    allowed_training_algorithms=["light_gbm"],
+    allowed_training_algorithms=["LightGBM"],
     training_mode=TabularTrainingMode.DISTRIBUTED
 )
@@ -683,7 +683,7 @@ classification_job.set_limits(
 ```yml
 # Set the training mode to distributed
 training:
-  allowed_training_algorithms: ["light_gbm"]
+  allowed_training_algorithms: ["LightGBM"]
   training_mode: distributed
 
 # Distribute training across 4 nodes for each trial
@@ -717,7 +717,7 @@ from azure.ai.ml.constants import TabularTrainingMode
 # Set the training mode to distributed
 forecasting_job.set_training(
     enable_dnn_training=True,
-    allowed_training_algorithms=["tcn_forecaster"],
+    allowed_training_algorithms=["TCNForecaster"],
     training_mode=TabularTrainingMode.DISTRIBUTED
 )
@@ -735,7 +735,7 @@ forecasting_job.set_limits(
 ```yml
 # Set the training mode to distributed
 training:
-  allowed_training_algorithms: ["tcn_forecaster"]
+  allowed_training_algorithms: ["TCNForecaster"]
   training_mode: distributed
 
 # Distribute training across 4 nodes
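For quick reference, the two distributed-training configurations touched in this file, collected into one Python sketch (the `classification_job` and `forecasting_job` objects are assumed to be created elsewhere, as in the article):

```python
from azure.ai.ml.constants import TabularTrainingMode

# Classification: distribute LightGBM training across nodes
classification_job.set_training(
    allowed_training_algorithms=["LightGBM"],
    training_mode=TabularTrainingMode.DISTRIBUTED,
)

# Forecasting: distributed deep learning restricted to TCNForecaster
forecasting_job.set_training(
    enable_dnn_training=True,
    allowed_training_algorithms=["TCNForecaster"],
    training_mode=TabularTrainingMode.DISTRIBUTED,
)
```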

articles/machine-learning/how-to-understand-automated-ml.md

Lines changed: 1 addition & 1 deletion
@@ -8,7 +8,7 @@ ms.author: magoswam
 ms.reviewer: ssalgado
 ms.service: machine-learning
 ms.subservice: automl
-ms.date: 07/20/2023
+ms.date: 08/01/2023
 ms.topic: how-to
 ms.custom: contperf-fy21q2, automl, event-tier1-build-2022
 ---
