In this article, you learn how to use the designer to create a batch prediction pipeline. Batch prediction lets you continuously score large datasets on-demand using a web service that can be triggered from any HTTP library.

In this how-to, you learn to do the following tasks:

> [!div class="checklist"]
> * Create and publish a batch inference pipeline
> * Consume a pipeline endpoint
> * Manage endpoint versions

To learn how to set up batch scoring services using the SDK, see the accompanying [how-to](how-to-run-batch-predictions.md).

## Prerequisites

This how-to assumes you already have a training pipeline. For a guided introduction to the designer, complete [part one of the designer tutorial](tutorial-designer-automobile-price-train-score.md).
## Create a batch inference pipeline

Your training pipeline must be run at least once before you can create an inferencing pipeline.
1. Go to the **Designer** tab in your workspace.

1. Select the training pipeline that trains the model you want to use to make predictions.

1. Select **Run** to run the pipeline.

Now that the training pipeline has been run, you can create a batch inference pipeline.

1. Next to **Run**, select the new dropdown **Create inference pipeline**.

The result is a default batch inference pipeline.
### Add a pipeline parameter

To create predictions on new data, you can either manually connect a different dataset in this pipeline draft view or create a parameter for your dataset. Parameters let you change the behavior of the batch inferencing process at runtime.

In this section, you create a dataset parameter to specify a different dataset to make predictions on.

1. Select the dataset module.

1. A pane will appear to the right of the canvas. At the bottom of the pane, select **Set as pipeline parameter**.

   Enter a name for the parameter, or accept the default value.
## Publish your batch inferencing pipeline

Now you're ready to deploy the inferencing pipeline, which makes it available for others to use.

1. Select the **Publish** button.

1. In the dialog that appears, expand the drop-down for **PipelineEndpoint**, and select **New PipelineEndpoint**.

1. Provide an endpoint name and optional description.

   Near the bottom of the dialog, you can see the parameter you configured with a default value of the dataset ID used during training.

## Consume an endpoint

Now you have a published pipeline with a dataset parameter. The pipeline will use the trained model created in the training pipeline to score the dataset you provide as a parameter.
### Submit a pipeline run

In this section, you set up a manual pipeline run and alter the pipeline parameter to score new data. If you prefer to trigger the run from code, a minimal SDK sketch follows these steps.

1. After the deployment is complete, go to the **Endpoints** section.

1. Select **Pipeline endpoints**.

1. Select the name of the endpoint you created.

   This screen shows all of the pipelines published under this endpoint.
1. Select the pipeline you published.

   The pipeline details page shows you a detailed run history and connection string information for your pipeline.

1. Select **Run** to create a manual run of the pipeline.

   In the run setup, you can provide a description for the run and change the value for any pipeline parameters.

1. Change the parameter to use a different dataset.
1. Select **Run** to run the pipeline.
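
The sketch below uses the Azure Machine Learning Python SDK to trigger the same kind of run from code. The endpoint name, experiment name, dataset name, and parameter name (`batch_dataset`) are placeholders for the values you chose earlier, and passing a registered dataset as the parameter value is an assumption you may need to adapt to your pipeline.

```python
from azureml.core import Dataset, Workspace
from azureml.pipeline.core import PipelineEndpoint

# Connect to the workspace; assumes a config.json downloaded from the studio.
ws = Workspace.from_config()

# Look up the pipeline endpoint published from the designer (placeholder name).
endpoint = PipelineEndpoint.get(workspace=ws, name="designer-batch-scoring")

# A registered dataset to score (placeholder name).
scoring_data = Dataset.get_by_name(ws, name="new-scoring-data")

# Submit a run, overriding the dataset parameter created earlier.
# "batch_dataset" is a placeholder for whatever you named your pipeline parameter.
run = endpoint.submit(
    experiment_name="designer-batch-scoring-runs",
    pipeline_parameters={"batch_dataset": scoring_data},
)
run.wait_for_completion(show_output=True)
```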
### Use the REST endpoint
You can find information on how to consume pipeline endpoints and published pipelines in the **Endpoints** section.

To make a REST call, you will need an OAuth 2.0 bearer-type authentication header.
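
As an illustration, the sketch below submits a run over HTTP with the Python `requests` library. Copy the REST URL from the endpoint's details page; the experiment name and the parameter assignment shown here are placeholders, and the exact payload accepted for dataset parameters may differ for your endpoint.

```python
import requests
from azureml.core.authentication import InteractiveLoginAuthentication

# Placeholder: paste the REST endpoint URL from the pipeline endpoint's details page.
rest_endpoint = "<REST endpoint URL>"

# Build an OAuth 2.0 bearer-type authentication header for the signed-in user.
auth = InteractiveLoginAuthentication()
auth_header = auth.get_authentication_header()

response = requests.post(
    rest_endpoint,
    headers=auth_header,
    json={
        "ExperimentName": "designer-batch-scoring-runs",  # placeholder experiment name
        # Placeholder parameter name and value; dataset parameters may expect a dataset ID instead.
        "ParameterAssignments": {"batch_dataset": "new-scoring-data"},
    },
)
response.raise_for_status()
print("Submitted pipeline run:", response.json().get("Id"))
```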
## Versioning endpoints
The designer assigns a version to each subsequent pipeline that you publish to an endpoint. You can specify the pipeline version that you want to execute as a parameter in your REST call. If you don't specify a version number, the designer will use the default pipeline.

When you publish a pipeline, you can choose to make it the new default pipeline for that endpoint.
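
As a sketch of how version selection can look from the Python SDK rather than a raw REST call, `PipelineEndpoint.submit` accepts a `pipeline_version` argument; treat the argument and values below as assumptions to verify against your SDK version, and note that the endpoint and experiment names are placeholders.

```python
from azureml.core import Workspace
from azureml.pipeline.core import PipelineEndpoint

ws = Workspace.from_config()
endpoint = PipelineEndpoint.get(workspace=ws, name="designer-batch-scoring")  # placeholder name

# Run a specific published version instead of the endpoint's default pipeline.
run = endpoint.submit(
    experiment_name="designer-batch-scoring-runs",  # placeholder experiment name
    pipeline_version="0",  # versions are assigned when you publish; "0" is just an example
)
```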