
Commit 4bb3ff4

checkpoint
1 parent 85c0f79 commit 4bb3ff4


1 file changed: +3 -1 lines changed


articles/machine-learning/how-to-auto-train-forecast.md

Lines changed: 3 additions & 1 deletion
@@ -298,7 +298,9 @@ Use the best model iteration to forecast values for data that wasn't used to tra
 
 ### Evaluating model accuracy with a rolling forecast
 
-Before you put a model into production, you should evaluate its accuracy on a test set held out from the training data. A best practice procedure is a so-called rolling evaluation which rolls the trained forecaster forward in time over the test set, averaging error metrics over several prediction windows to obtain statistically robust estimates for some set of chosen metrics. Ideally, the test set for the evaluation is long relative to the model's forecast horizon; estimates of forecasting error may otherwise be statistically noisy and, therefore, less reliable.
+Before you put a model into production, you should evaluate its accuracy on a test set held out from the training data. A best practice is a rolling evaluation, which rolls the trained forecaster forward in time over the test set, averaging error metrics over several prediction windows to obtain statistically robust estimates for a chosen set of metrics. Ideally, the test set for the evaluation is long relative to the model's forecast horizon. Estimates of forecasting error may otherwise be statistically noisy and, therefore, less reliable.
+
+For example, suppose you train a model on historical daily sales to predict demand up to two weeks (14 days) into the future. If there is sufficient historical data available, you might reserve the final several months, or even a year, of the data for the test set. The rolling evaluation begins by generating a 14-day-ahead forecast for the first two weeks of the test set. Then, the forecaster is advanced by some number of days into the test set, and you generate another 14-day-ahead forecast from the new position. The process continues until you reach the end of the test set. We refer to the current position of the forecaster as the _forecast origin_.
 
 ### Prediction into the future
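The rolling evaluation added in this commit lends itself to a short illustration. Below is a minimal Python sketch of the 14-day scenario described above; the `model.forecast` method, the `sales` and `date` column names, and the 7-day step between origins are all assumptions made for illustration, not part of the article or any Azure ML API.

```python
import numpy as np
import pandas as pd

HORIZON = 14  # length of each forecast window, in days
STEP = 7      # days the forecast origin advances between windows

def rolling_evaluation(model, test_df, target_col="sales", time_col="date"):
    """Average mean absolute error over rolling forecast windows.

    `model.forecast(features_df)` is a hypothetical interface that
    returns one prediction per input row; substitute your model's
    actual prediction call.
    """
    test_df = test_df.sort_values(time_col).reset_index(drop=True)
    window_errors = []
    origin = 0  # index of the current forecast origin in the test set
    while origin + HORIZON <= len(test_df):
        window = test_df.iloc[origin : origin + HORIZON]
        y_true = window[target_col].to_numpy()
        y_pred = model.forecast(window.drop(columns=[target_col]))
        window_errors.append(np.mean(np.abs(y_true - np.asarray(y_pred))))
        origin += STEP  # roll the forecast origin forward
    return float(np.mean(window_errors))
```

A smaller `STEP` produces more, overlapping prediction windows and a smoother error estimate, at the cost of more forecast calls.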
