In [part one of this tutorial](tutorial-designer-automobile-price-train-score.md), you trained a linear regression model that predicts car prices. In this second part, you use the Azure Machine Learning designer to deploy the model so that others can use it.
> [!NOTE]
> Designer supports two types of components: classic prebuilt components (v1) and custom components (v2). These two types of components are NOT compatible.
>
>Classic prebuilt components are intended primarily for data processing and traditional machine learning tasks like regression and classification. This type of component continues to be supported but will not have any new components added.
>
>Custom components allow you to wrap your own code as a component. They support sharing components across workspaces and seamless authoring across Studio, CLI v2, and SDK v2 interfaces.
>
>For new projects, we highly suggest that you use custom components, which are compatible with Azure Machine Learning V2 and will keep receiving new updates.
>
>This article applies to classic prebuilt components and isn't compatible with CLI v2 and SDK v2.
### Create a real-time inference pipeline
1. Select **Jobs** from the sidebar menu, then open the pipeline job that you created. On the detail page, above the pipeline canvas, select the ellipses **...** then choose **Create inference pipeline** > **Real-time inference pipeline**.
:::image type="content" source="media/tutorial-designer-automobile-price-deploy/create-real-time-inference.png" alt-text="Screenshot of create inference pipeline in pipeline job detail page." lightbox="media/tutorial-designer-automobile-price-deploy/create-real-time-inference.png":::
In the dialog box that appears, you can select from any existing Azure Kubernetes Service (AKS) clusters to deploy your model to. If you don't have an AKS cluster, use the following steps to create one.
1. Go to the **Compute** page by selecting **Compute** in the sidebar menu.
1. On the navigation ribbon, select **Kubernetes Clusters**. Then select **+ New** > **AksCompute**.
:::image type="content" source="./media/tutorial-designer-automobile-price-deploy/new-inference-cluster.png" alt-text="Screenshot showing how to get to the new inference cluster pane.":::
:::image type="content" source="./media/tutorial-designer-automobile-price-deploy/setup-endpoint.png" alt-text="Screenshot showing how to set up a new real-time endpoint.":::
You can also change the **Advanced** settings for your real-time endpoint.
|Advanced setting|Description|
|---|---|
After deployment finishes, you can view your real-time endpoint by going to the **Endpoints** page.
1. Select **Endpoints** on the sidebar menu, then select the endpoint you deployed.
- In the **Details** tab, you can see more information such as the REST URI, Swagger definition, status, and tags.
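Once the endpoint is running, any application can call it over HTTP using that REST URI. The following sketch uses only Python's standard library; the URI, key, and input columns are hypothetical placeholders, so take the real values and input schema from the endpoint's **Consume** tab:

```python
import json
import urllib.request

# Hypothetical placeholders -- copy the real REST URI and key
# from the endpoint's Consume tab.
scoring_uri = "http://<your-endpoint>.<region>.cloudapp.azure.com/score"
api_key = "<your-key>"

# One input row; the column names must match what the inference
# pipeline's web service input expects.
payload = {
    "Inputs": {
        "WebServiceInput0": [
            {"make": "audi", "body-style": "sedan", "horsepower": 102}
        ]
    }
}
body = json.dumps(payload).encode("utf-8")
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {api_key}",
}

request = urllib.request.Request(scoring_uri, data=body, headers=headers)
# Uncomment once the placeholders above are filled in:
# with urllib.request.urlopen(request) as response:
#     print(json.loads(response.read()))
```

The request is only constructed here, not sent; filling in the real URI and key and uncommenting the last two lines performs the actual scoring call.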
Enter a name and select the **File** type.
:::image type="content" source="./media/how-to-run-batch-predictions-designer/register-train-model-as-dataset-2.png" alt-text="Screenshot of register as a data asset with new data asset selected." lightbox="./media/how-to-run-batch-predictions-designer/register-train-model-as-dataset-2.png":::
1. After the dataset registers successfully, open your inference pipeline draft, or clone the previous inference pipeline job into a new draft. In the inference pipeline draft, replace the previously trained model, shown as the **MD-xxxx** node connected to the **Score Model** component, with the newly registered dataset.
1. After modifying your inference pipeline with the newly trained model or transformation, submit it. When the job completes, deploy it to the existing online endpoint that you deployed previously.
:::image type="content" source="./media/tutorial-designer-automobile-price-deploy/deploy-to-existing-endpoint.png" alt-text="Screenshot showing how to replace existing real-time endpoint." lightbox="./media/tutorial-designer-automobile-price-deploy/deploy-to-existing-endpoint.png":::
articles/machine-learning/v1/tutorial-designer-automobile-price-train-score.md
To learn more about the designer, see [What is Azure Machine Learning designer?](concept-designer.md)
> [!NOTE]
> Designer supports two types of components: classic prebuilt components (v1) and custom components (v2). These two types of components are NOT compatible.
>
>Classic prebuilt components are intended primarily for data processing and traditional machine learning tasks like regression and classification. This type of component continues to be supported but will not have any new components added.
>
>Custom components allow you to wrap your own code as a component. They support sharing components across workspaces and seamless authoring across Studio, CLI v2, and SDK v2 interfaces.
>
>For new projects, we highly suggest that you use custom components, which are compatible with Azure Machine Learning V2 and will keep receiving new updates.
>
>This article applies to classic prebuilt components and isn't compatible with CLI v2 and SDK v2.
1. Sign in to the [Azure Machine Learning designer](https://ml.azure.com), and select the workspace you want to use.
1. Select **Designer** from the sidebar menu. Under **Classic prebuilt**, choose **Create a new pipeline using classic prebuilt components**.
:::image type="content" source="./media/tutorial-designer-automobile-price-train-score/launch-designer.png" alt-text="Screenshot of the visual workspace showing how to access the designer." lightbox="./media/tutorial-designer-automobile-price-train-score/launch-designer.png":::
There are several sample datasets included in the designer for you to experiment with. For this tutorial, use **Automobile price data (Raw)**.
1. To the left of the pipeline canvas is a palette of datasets and components. Select **Component** > **Sample data**.
1. Select the dataset **Automobile price data (Raw)**, and drag it onto the canvas.
### Remove a column
When you train a model, you must handle missing data. In this dataset, the **normalized-losses** column is missing many values, so you exclude that column from the model altogether.
1. In the datasets and component palette to the left of the canvas, select **Component** and search for the **Select Columns in Dataset** component.
1. Select **Edit column** to the right of the pane.
1. In the **Columns to be cleaned** window that appears, expand the drop-down menu next to **Include**. Select **All columns**.
1. Select **Save**.
1. In the **Clean Missing Data** component details pane, under **Cleaning mode**, select **Remove entire row**.
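Taken together, the two cleaning components behave like the following pandas sketch. This is a hypothetical illustration of the logic on toy data, not the designer's internal code:

```python
import pandas as pd

# Toy stand-in for a few rows of Automobile price data (Raw).
df = pd.DataFrame({
    "normalized-losses": [164, None, None, 150],
    "horsepower": [111.0, 154.0, None, 102.0],
    "price": [13495, 16500, 16500, 13950],
})

# Select Columns in Dataset: exclude the mostly empty column.
df = df.drop(columns=["normalized-losses"])

# Clean Missing Data with cleaning mode "Remove entire row":
# drop every row that still has a missing value in any column.
df = df.dropna()

print(df.shape)  # (3, 2): one row dropped, one column removed
```

Note the order matters: dropping **normalized-losses** first keeps rows that were missing only that column, so less data is discarded in the row-removal step.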
1. Select the arrow icon under **Pipeline interface** to the right of the canvas to open the component details pane. Alternatively, you can double-click the **Split Data** component to open the details pane.
1. In the **Split Data** details pane, set the **Fraction of rows in the first output dataset** to *0.7*.
This option splits 70 percent of the data to train the model and 30 percent for testing it. The 70 percent dataset is accessible through the left output port. The remaining data is available through the right output port.
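The same 70/30 split can be sketched in plain Python to make the fraction concrete. This is a toy illustration of the idea, not what the **Split Data** component runs internally:

```python
import random

rows = list(range(100))  # stand-in for the cleaned dataset's rows

random.seed(0)           # fixed seed, analogous to the component's random seed option
random.shuffle(rows)

fraction = 0.7           # Fraction of rows in the first output dataset
cut = int(fraction * len(rows))
train, test = rows[:cut], rows[cut:]

print(len(train), len(test))  # 70 30
```

Shuffling before cutting matters: without it, an ordered dataset (for example, sorted by price) would put systematically different cars in the two partitions.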
In **Basics**, you can configure the experiment, the job display name, the job description, and so on.
In **Inputs & Outputs**, you can assign values to the inputs and outputs that are promoted to the pipeline level. It's empty in this example because we didn't promote any input or output to the pipeline level.
In **Runtime settings**, you can configure the default datastore and default compute for the pipeline. These are the defaults for all components in the pipeline. However, if you set a different compute or datastore for a component explicitly, the system respects the component-level setting. Otherwise, it uses the default.
The **Review + Submit** step is your last chance to review all settings before you submit. The wizard remembers your last configuration if you've submitted the pipeline before.
After you submit the pipeline job, a message appears at the top with a link to the job detail. You can select this link to review the job details.