This article introduces concepts related to model inference and evaluation in forecasting tasks. For instructions and examples for training forecasting models in AutoML, see [Set up AutoML to train a time-series forecasting model with SDK and CLI](./how-to-auto-train-forecast.md).
#customer intent: As a data scientist, I want to train time-series forecasting models and understand the options available for training them by using AutoML.
---
## Orchestrate training, inference, and evaluation with components and pipelines
Your ML workflow likely requires more than just training. Two other common tasks are inference, which retrieves model predictions on newer data, and evaluation, which measures model accuracy on a test set with known target values. You can orchestrate both of these tasks in Azure Machine Learning along with training jobs. To support inference and evaluation, Azure Machine Learning provides [components](concept-component.md), which are self-contained pieces of code that perform one step in an Azure Machine Learning [pipeline](concept-ml-pipelines.md).
# [Python SDK](#tab/python)
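The following sketch shows one way to retrieve forecasting inference and metrics components from the `azureml` registry and chain them in a pipeline with the Python SDK (v2). The component names, input and output names, and placeholder paths are illustrative assumptions rather than a definitive recipe; look up the actual components and their interfaces in the registry before you use them.

```python
from azure.ai.ml import Input, MLClient
from azure.ai.ml.constants import AssetTypes
from azure.ai.ml.dsl import pipeline
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()

# Client for your workspace; this is what submits the pipeline job.
ml_client = MLClient(
    credential,
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Client for the public "azureml" registry that hosts reusable components.
ml_client_registry = MLClient(credential=credential, registry_name="azureml")

# Component names are assumptions for illustration; search the registry for
# the forecasting inference and metrics components that fit your scenario.
inference_component = ml_client_registry.components.get(
    name="automl_forecasting_inference", label="latest"
)
compute_metrics_component = ml_client_registry.components.get(
    name="compute_metrics", label="latest"
)

@pipeline(description="Forecast on a test set, then score the predictions")
def inference_and_evaluation(test_data, trained_model):
    # Step 1: generate forecasts from the trained model on the test data.
    # The input and output names on both steps are assumed for this sketch.
    inference_step = inference_component(
        test_data=test_data,
        model_path=trained_model,
    )
    # Step 2: compare the forecasts against the known target values.
    metrics_step = compute_metrics_component(
        ground_truth=test_data,
        prediction=inference_step.outputs.inference_output,
        task="tabular-forecasting",
    )
    return {"metrics": metrics_step.outputs.evaluation_result}

pipeline_job = inference_and_evaluation(
    test_data=Input(type=AssetTypes.MLTABLE, path="<path-to-test-mltable>"),
    trained_model=Input(type=AssetTypes.MLFLOW_MODEL, path="<model-asset-or-path>"),
)
pipeline_job.settings.default_compute = "<compute-cluster-name>"
returned_job = ml_client.jobs.create_or_update(
    pipeline_job, experiment_name="forecast-inference-evaluation"
)
```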
The many models components in AutoML enable you to train and manage millions of models in parallel. For more information on many models concepts, see [Many models](concept-automl-forecasting-at-scale.md#many-models).
### Many models training configuration
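As a hedged sketch, you can pull the many models training component from the same `azureml` registry as the other AutoML components. The component name and version label below are assumptions, so verify them against the registry; your forecasting settings, including `partition_column_names`, are then supplied as inputs to this component inside a pipeline. The HTS components described later in this article follow the same pattern.

```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Connect to the public "azureml" registry that hosts the AutoML pipeline components.
ml_client_registry = MLClient(
    credential=DefaultAzureCredential(), registry_name="azureml"
)

# Component name assumed for illustration; confirm the exact name and latest
# version of the many models training component in the registry.
many_models_train_component = ml_client_registry.components.get(
    name="automl_many_models_training",
    label="latest",
)

# Call the returned component inside a @pipeline function, passing your
# training data and AutoML forecasting settings (including
# partition_column_names) as inputs.
```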
> [!NOTE]
> The many models training and inference components conditionally partition your data according to the `partition_column_names` setting so that each partition is in its own file. This process can be very slow or fail when your data is very large. In that case, we recommend partitioning your data manually before you run many models training or inference.

> [!NOTE]
> The default parallelism limit for a many models run within a subscription is 320. If your workload requires a higher limit, contact Microsoft support.
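If you partition manually, a minimal pandas sketch along the following lines (the column names and paths are placeholders) writes each partition to its own file, which is the layout described in the preceding note. The same approach applies to the `hierarchy_column_names` setting that the HTS components use.

```python
import pathlib

import pandas as pd

# Placeholder inputs: adjust the source file, output folder, and partition
# columns (for HTS, use your hierarchy columns instead) to match your data.
source_path = "train.csv"
output_dir = pathlib.Path("partitioned_train")
partition_column_names = ["state", "store_id"]

df = pd.read_csv(source_path)
output_dir.mkdir(parents=True, exist_ok=True)

# Write one CSV per unique combination of the partition columns so that the
# many models or HTS components don't have to repartition the data themselves.
for keys, partition in df.groupby(partition_column_names):
    if not isinstance(keys, tuple):
        keys = (keys,)
    file_name = "_".join(str(key) for key in keys) + ".csv"
    partition.to_csv(output_dir / file_name, index=False)
```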
The hierarchical time series (HTS) components in AutoML enable you to train a large number of models on data with hierarchical structure. For more information, see the [HTS article section](concept-automl-forecasting-at-scale.md#hierarchical-time-series-forecasting).
### HTS training configuration
> [!NOTE]
> The HTS training and inference components conditionally partition your data according to the `hierarchy_column_names` setting so that each partition is in its own file. This process can be very slow or fail when your data is very large. In that case, we recommend partitioning your data manually before you run HTS training or inference.

> [!NOTE]
> The default parallelism limit for a hierarchical time series run within a subscription is 320. If your workload requires a higher limit, contact Microsoft support.
## Forecast at scale: distributed DNN training
- To learn how distributed training works for forecasting tasks, see [Distributed DNN training](concept-automl-forecasting-at-scale.md#distributed-dnn-training-preview).
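As a rough, hedged illustration of the configuration involved, enabling the DNN-based `TCNForecaster` and requesting multiple nodes on a forecasting job in the Python SDK looks something like the following. The property names reflect our understanding of the SDK, and the paths and column names are placeholders; confirm the details against the linked article.

```python
from azure.ai.ml import Input, automl
from azure.ai.ml.constants import AssetTypes

# Configure a forecasting job; paths and column names are placeholders.
forecasting_job = automl.forecasting(
    compute="<gpu-cluster-name>",
    experiment_name="distributed-tcn-forecast",
    training_data=Input(type=AssetTypes.MLTABLE, path="<path-to-training-mltable>"),
    target_column_name="<target-column>",
    primary_metric="normalized_root_mean_squared_error",
)
forecasting_job.set_forecast_settings(
    time_column_name="<time-column>",
    forecast_horizon=24,
)

# Restrict training to the DNN model and allow it to scale across nodes.
# These settings are assumptions to verify against the distributed DNN
# training documentation linked above.
forecasting_job.set_training(
    enable_dnn_training=True,
    allowed_training_algorithms=["TCNForecaster"],
)
forecasting_job.set_limits(
    max_nodes=4,
    max_concurrent_trials=2,
    timeout_minutes=180,
)
```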