Commit 2bf86a6

Merge pull request #285373 from ssalgadodev/patch-147
Update concept-automl-forecasting-at-scale.md
2 parents d7c26ca + 82e4247 commit 2bf86a6

File tree: 1 file changed (+2 -2 lines)


articles/machine-learning/concept-automl-forecasting-at-scale.md

Lines changed: 2 additions & 2 deletions
@@ -10,15 +10,15 @@ ms.service: azure-machine-learning
 ms.subservice: automl
 ms.topic: conceptual
 ms.custom: automl, sdkv2
-ms.date: 08/01/2023
+ms.date: 08/21/2024
 show_latex: true
 ---
 
 # Forecasting at scale: many models and distributed training
 
 This article is about training forecasting models on large quantities of historical data. Instructions and examples for training forecasting models in AutoML can be found in our [set up AutoML for time series forecasting](./how-to-auto-train-forecast.md) article.
 
-Time series data can be large due to the number of series in the data, the number of historical observations, or both. **Many models** and hierarchical time series, or **HTS**, are scaling solutions for the former scenario, where the data consists of a large number of time series. In these cases, it can be beneficial for model accuracy and scalability to partition the data into groups and train a large number of independent models in parallel on the groups. Conversely, there are scenarios where one or a small number of high-capacity models is better. **Distributed DNN training** targets this case. We review concepts around these scenarios in the remainder of the article.
+Time series data can be large due to the number of series in the data, the number of historical observations, or both. **Many models** and hierarchical time series, or **HTS**, are scaling solutions for the former scenario, where the data consists of a large number of time series. In these cases, it can be beneficial for model accuracy and scalability to partition the data into groups and train a large number of independent models in parallel on the groups. Conversely, there are scenarios where one or a few high-capacity models are better. **Distributed DNN training** targets this case. We review concepts around these scenarios in the remainder of the article.
 
 ## Many models
 
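The paragraph revised in this diff describes the many models idea: partition a large collection of time series into groups and train independent models on the groups in parallel. The following minimal sketch illustrates that general pattern in plain Python with pandas, scikit-learn, and joblib. It is not the Azure Machine Learning many models or HTS API; the column names (`series_id`, `t`, `y`), the synthetic data, and the linear-regression stand-in are assumptions for illustration only.

```python
# Illustrative sketch only -- not the Azure ML many models / HTS components.
# Partition a long-format panel of time series by its series identifier and
# fit one independent model per partition in parallel.
import numpy as np
import pandas as pd
from joblib import Parallel, delayed
from sklearn.linear_model import LinearRegression

def make_example_data(n_series=4, n_obs=48):
    """Build a small long-format panel: one row per (series_id, time step)."""
    rng = np.random.default_rng(0)
    frames = []
    for sid in range(n_series):
        t = np.arange(n_obs)
        y = 10 + 0.5 * sid * t + rng.normal(scale=1.0, size=n_obs)
        frames.append(pd.DataFrame({"series_id": sid, "t": t, "y": y}))
    return pd.concat(frames, ignore_index=True)

def fit_one(series_id, group):
    """Train one independent model on a single series partition."""
    X = group[["t"]].to_numpy()
    y = group["y"].to_numpy()
    return series_id, LinearRegression().fit(X, y)

data = make_example_data()
# One independent model per series, trained in parallel across partitions.
results = Parallel(n_jobs=-1)(
    delayed(fit_one)(sid, grp) for sid, grp in data.groupby("series_id")
)
models = dict(results)
print({sid: round(float(m.coef_[0]), 2) for sid, m in models.items()})
```

In the Azure Machine Learning solutions named in the paragraph, the partitioning and parallel training are handled by AutoML pipeline components rather than hand-written code like this; the article being updated and its linked how-to cover the supported setup.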