
Commit 867a0ef

Update how-to-auto-train-forecast.md
1 parent 60afa1e commit 867a0ef

File tree

1 file changed: +7 -1 lines changed


articles/machine-learning/how-to-auto-train-forecast.md

Lines changed: 7 additions & 1 deletion
@@ -1314,7 +1314,13 @@ For a more detailed example, see the [demand forecasting with many models notebo

#### Training considerations for a many models run

-The many models training and inference components conditionally partition your data according to the `partition_column_names` setting so each partition is in its own file. This process can be very slow or fail when data is very large. The recommendation is to partition your data manually before you run many models training or inference.
+- The many models training and inference components conditionally partition your data according to the `partition_column_names` setting, so that each partition is in its own file. This process can be very slow, or it can fail, when the data is very large. The recommendation is to partition your data manually before you run many models training or inference (see the first sketch after the note below).
+- During many models training, models are automatically registered in the workspace, so manual registration isn't required. Models are named based on the partition they were trained on, and neither the names nor the tags are customizable; these properties are used to automatically detect the models during inference.
+- Deploying each model individually doesn't scale, so the `PipelineComponentBatchDeployment` class is provided to ease the deployment process (see the second sketch after the note below). Refer to the [demand forecasting with many models notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/pipelines/1k_demand_forecast_pipeline/aml-demand-forecast-mm-pipeline/aml-demand-forecast-mm-pipeline.ipynb) to see this process in action.
+- During inference, the appropriate models (latest version) are automatically selected based on the partitions in the inference data. By default, the latest models are selected from the experiment identified by `training_experiment_name`, but you can override this behavior and select models from a particular training run by also providing `train_run_id` (see the third sketch after the note below).

> [!NOTE]
> The default parallelism limit for a many models run within a subscription is set to 320. If your workload requires a higher limit, you can contact Microsoft support.
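
The first sketch illustrates the manual partitioning recommendation from the first bullet. It assumes pandas and illustrative partition columns `store_id` and `product_id`; the input file and output layout are placeholders, not part of the article or the commit.

```python
# Minimal sketch: manually partition data before a many models run so the
# training/inference components don't have to repartition it themselves.
# ASSUMPTIONS: pandas is available; the column names and file paths below
# are illustrative and should be adapted to your own data.
from pathlib import Path

import pandas as pd

partition_column_names = ["store_id", "product_id"]  # your partition columns
df = pd.read_parquet("sales.parquet")  # hypothetical input dataset

out_dir = Path("partitioned_data")
out_dir.mkdir(exist_ok=True)

# Write one file per unique combination of the partition columns.
for keys, group in df.groupby(partition_column_names):
    keys = keys if isinstance(keys, tuple) else (keys,)
    file_name = "_".join(str(k) for k in keys) + ".parquet"
    group.to_parquet(out_dir / file_name, index=False)
```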
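The second sketch shows one way to create a `PipelineComponentBatchDeployment` with the Azure Machine Learning Python SDK v2. The endpoint name, component file, and compute name are assumptions; the linked notebook is the authoritative end-to-end example.

```python
# Sketch: serve all the per-partition models through one batch deployment.
# ASSUMPTIONS: names such as "forecast-batch-endpoint", "cpu-cluster", and
# the component YAML path are placeholders; see the linked notebook for the
# complete flow.
from azure.ai.ml import MLClient, load_component
from azure.ai.ml.entities import BatchEndpoint, PipelineComponentBatchDeployment
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<WORKSPACE_NAME>",
)

# Create the batch endpoint that fronts the deployment.
endpoint = BatchEndpoint(name="forecast-batch-endpoint")
ml_client.batch_endpoints.begin_create_or_update(endpoint).result()

# The pipeline component wraps the many models inference steps.
inference_component = load_component(source="inference_pipeline.yml")

deployment = PipelineComponentBatchDeployment(
    name="mm-forecast-deployment",
    endpoint_name=endpoint.name,
    component=inference_component,
    settings={"default_compute": "cpu-cluster"},
)
ml_client.batch_deployments.begin_create_or_update(deployment).result()
```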
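The third sketch shows how `training_experiment_name` and `train_run_id` might be wired into a many models inference step. The component name `automl_many_models_inference` and the input names shown are assumptions based on the pattern in the linked notebook; verify them against the notebook before use.

```python
# Sketch: select models for inference by experiment (and optionally by run).
# ASSUMPTIONS: the component name "automl_many_models_inference" and the input
# names (raw_data, partition_column_names, ...) are illustrative; confirm them
# against the linked notebook.
from azure.ai.ml import Input, MLClient
from azure.ai.ml.dsl import pipeline
from azure.identity import DefaultAzureCredential

# Many models components are published in the shared "azureml" registry.
registry_client = MLClient(
    credential=DefaultAzureCredential(), registry_name="azureml"
)
inference_component = registry_client.components.get(
    name="automl_many_models_inference", label="latest"
)

@pipeline(description="Many models inference (sketch)")
def mm_inference(raw_data: Input):
    inference_component(
        raw_data=raw_data,
        partition_column_names="store_id,product_id",
        # By default, the latest model per partition comes from this experiment...
        training_experiment_name="mm-forecast-training",
        # ...optionally pinned to one specific training run.
        train_run_id="<TRAIN_RUN_ID>",
    )
```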
