
Commit 89b8c28

Minor edits
1 parent cf679ca commit 89b8c28

3 files changed: 24 additions and 24 deletions


articles/machine-learning/v1/tutorial-designer-automobile-price-deploy.md

Lines changed: 11 additions & 11 deletions
@@ -13,20 +13,20 @@ ms.date: 06/09/2025
 ms.custom: UpdateFrequency5, designer
 ---

-# Tutorial: Use the designer to deploy a machine learning model
+# Tutorial: Deploy a machine learning model using designer

 [!INCLUDE [v1 deprecation](../includes/sdk-v1-deprecation.md)]

 In [part one of this tutorial](tutorial-designer-automobile-price-train-score.md), you trained a linear regression model that predicts car prices. In this second part, you use the Azure Machine Learning designer to deploy the model so that others can use it.

 > [!NOTE]
-> Designer supports two types of components, classic prebuilt components (v1) and custom components (v2). These two types of components are NOT compatible.
+> Designer supports two types of components: classic prebuilt components (v1) and custom components (v2). These two types of components are NOT compatible.
 >
->Classic prebuilt components provide prebuilt components majorly for data processing and traditional machine learning tasks like regression and classification. This type of component continues to be supported but will not have any new components added.
+>Classic prebuilt components are intended primarily for data processing and traditional machine learning tasks like regression and classification. This type of component continues to be supported but will not have any new components added.
 >
 >Custom components allow you to wrap your own code as a component. It supports sharing components across workspaces and seamless authoring across Studio, CLI v2, and SDK v2 interfaces.
 >
->For new projects, we highly suggest you use custom components, which are compatible with Azure Machine Learning V2 and will keep receiving new updates.
+>For new projects, we highly suggest that you use custom components, which are compatible with Azure Machine Learning V2 and will keep receiving new updates.
 >
 >This article applies to classic prebuilt components and isn't compatible with CLI v2 and SDK v2.
@@ -54,7 +54,7 @@ To deploy your pipeline, you must first convert the training pipeline into a rea

 ### Create a real-time inference pipeline

-1. Select **Jobs** from the side navigation panel, then open the pipeline job that you created. On the detail page, above the pipeline canvas, select the ellipses **...** then choose **Create inference pipeline** > **Real-time inference pipeline**.
+1. Select **Jobs** from the sidebar menu, then open the pipeline job that you created. On the detail page, above the pipeline canvas, select the ellipses **...** then choose **Create inference pipeline** > **Real-time inference pipeline**.

 :::image type="content" source="media/tutorial-designer-automobile-price-deploy/create-real-time-inference.png" alt-text="Screenshot of create inference pipeline in pipeline job detail page." lightbox="media/tutorial-designer-automobile-price-deploy/create-real-time-inference.png":::
@@ -87,9 +87,9 @@ To deploy your pipeline, you must first convert the training pipeline into a rea

 In the dialog box that appears, you can select from any existing Azure Kubernetes Service (AKS) clusters to deploy your model to. If you don't have an AKS cluster, use the following steps to create one.

-1. Go to the **Compute** page by selecting **Compute** in the side navigation bar.
+1. Go to the **Compute** page by selecting **Compute** in the sidebar menu.

-1. On the navigation ribbon, select **Kubernetes Clusters** > **+ New** > **AksCompute**.
+1. On the navigation ribbon, select **Kubernetes Clusters**. Then select **+ New** > **AksCompute**.

 :::image type="content" source="./media/tutorial-designer-automobile-price-deploy/new-inference-cluster.png" alt-text="Screenshot showing how to get to the new inference cluster pane.":::

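For readers who prefer scripting this step, a minimal sketch of provisioning an equivalent AKS inference cluster with the (deprecated) Python SDK v1 might look like the following. The workspace config file, VM size, node count, and cluster name are illustrative assumptions, not values required by the tutorial.

```python
from azureml.core import Workspace
from azureml.core.compute import AksCompute, ComputeTarget

ws = Workspace.from_config()  # assumes a config.json downloaded from the workspace

# Provisioning options roughly mirror the studio dialog; values are illustrative.
prov_config = AksCompute.provisioning_configuration(
    vm_size="Standard_D3_v2",
    agent_count=3,
)

aks_target = ComputeTarget.create(
    workspace=ws,
    name="aks-compute",  # hypothetical cluster name
    provisioning_configuration=prov_config,
)
aks_target.wait_for_completion(show_output=True)
```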
@@ -116,7 +116,7 @@ After your AKS service finishes provisioning, return to the real-time inferencin

 :::image type="content" source="./media/tutorial-designer-automobile-price-deploy/setup-endpoint.png" alt-text="Screenshot showing how to set up a new real-time endpoint.":::

-You can also change the **Advanced** setting for your real-time endpoint.
+You can also change the **Advanced** settings for your real-time endpoint.

 |Advanced setting|Description|
 |---|---|
@@ -144,7 +144,7 @@ After your AKS service finishes provisioning, return to the real-time inferencin

 After deployment finishes, you can view your real-time endpoint by going to the **Endpoints** page.

-1. Select **Endpoints** on the side navigation bar, then select the endpoint you deployed.
+1. Select **Endpoints** on the sidebar menu, then select the endpoint you deployed.

 - In the **Details** tab, you can see more information such as the REST URI, Swagger definition, status, and tags.

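As a rough sketch of consuming the endpoint through the REST URI shown on the **Details** tab: the scoring URI and key below are placeholders, and the payload shape is only illustrative; the exact field and column names come from the endpoint's Swagger definition.

```python
import requests

scoring_uri = "<REST-URI-from-the-Details-tab>"    # placeholder
api_key = "<authentication-key-for-the-endpoint>"  # placeholder

# Illustrative payload only; check the Swagger definition for the real input schema.
payload = {
    "Inputs": {
        "WebServiceInput0": [
            {"make": "audi", "fuel-type": "gas", "horsepower": 102}
        ]
    },
    "GlobalParameters": {},
}

headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {api_key}",
}

response = requests.post(scoring_uri, json=payload, headers=headers, timeout=30)
response.raise_for_status()
print(response.json())
```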
@@ -172,7 +172,7 @@ You can update the online endpoint with new model trained in the designer. On th

 Input a name and select **File** type.

-:::image type="content" source="./media/how-to-run-batch-predictions-designer/register-train-model-as-dataset-2.png" alt-text="Screenshot of register as a data asset with new data asset selected.":::
+:::image type="content" source="./media/how-to-run-batch-predictions-designer/register-train-model-as-dataset-2.png" alt-text="Screenshot of register as a data asset with new data asset selected." lightbox="./media/how-to-run-batch-predictions-designer/register-train-model-as-dataset-2.png":::

 1. After the dataset registers successfully, open your inference pipeline draft, or clone the previous inference pipeline job into a new draft. In the inference pipeline draft, replace the previous trained model shown as **MD-xxxx** node connected to the **Score Model** component with the newly registered dataset.

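A hedged SDK v1 sketch of the same registration step: the datastore name and output path below are hypothetical placeholders for wherever your pipeline job wrote the trained model, and the asset name is the one you would type in the dialog.

```python
from azureml.core import Workspace, Dataset, Datastore

ws = Workspace.from_config()
datastore = Datastore.get(ws, "workspaceblobstore")  # default blob datastore

# Hypothetical path: point this at the trained-model output folder of your pipeline job.
model_files = Dataset.File.from_files(
    path=(datastore, "azureml/<job-id>/trained_model_outputs/")
)

registered = model_files.register(
    workspace=ws,
    name="retrained-automobile-price-model",  # File-type data asset name
    create_new_version=True,
)
print(registered.name, registered.version)
```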
@@ -188,7 +188,7 @@ You can update the online endpoint with new model trained in the designer. On th

 1. After modifying your inference pipeline with the newly trained model or transformation, submit it. When the job is completed, deploy it to the existing online endpoint deployed previously.

-:::image type="content" source="./media/tutorial-designer-automobile-price-deploy/deploy-to-existing-endpoint.png" alt-text="Screenshot showing how to replace existing real-time endpoint.":::
+:::image type="content" source="./media/tutorial-designer-automobile-price-deploy/deploy-to-existing-endpoint.png" alt-text="Screenshot showing how to replace existing real-time endpoint." lightbox="./media/tutorial-designer-automobile-price-deploy/deploy-to-existing-endpoint.png":::

 ## Limitations
articles/machine-learning/v1/tutorial-designer-automobile-price-train-score.md

Lines changed: 13 additions & 13 deletions
@@ -22,13 +22,13 @@ This article explains how to use the Azure Machine Learning designer to train a
 To learn more about the designer, see [What is Azure Machine Learning designer?](concept-designer.md)

 > [!NOTE]
-> Designer supports two types of components, classic prebuilt components (v1) and custom components (v2). These two types of components are NOT compatible.
+> Designer supports two types of components: classic prebuilt components (v1) and custom components (v2). These two types of components are NOT compatible.
 >
->Classic prebuilt components provide prebuilt components majorly for data processing and traditional machine learning tasks like regression and classification. This type of component continues to be supported but will not have any new components added.
+>Classic prebuilt components are intended primarily for data processing and traditional machine learning tasks like regression and classification. This type of component continues to be supported but will not have any new components added.
 >
 >Custom components allow you to wrap your own code as a component. It supports sharing components across workspaces and seamless authoring across Studio, CLI v2, and SDK v2 interfaces.
 >
->For new projects, we highly suggest you use custom components, which are compatible with Azure Machine Learning V2 and will keep receiving new updates.
+>For new projects, we highly suggest that you use custom components, which are compatible with Azure Machine Learning V2 and will keep receiving new updates.
 >
 >This article applies to classic prebuilt components and isn't compatible with CLI v2 and SDK v2.
@@ -66,7 +66,7 @@ You need an Azure Machine Learning workspace to use the designer. The workspace

 1. Sign in to the [Azure Machine Learning designer](https://ml.azure.com), and select the workspace you want to use.

-1. Select **Designer** from the sidebar menu. On the **Classic prebuilt** tab, select **Create a new pipeline using classic prebuilt components**.
+1. Select **Designer** from the sidebar menu. Under **Classic prebuilt**, choose **Create a new pipeline using classic prebuilt components**.

 :::image type="content" source="./media/tutorial-designer-automobile-price-train-score/launch-designer.png" alt-text="Screenshot of the visual workspace showing how to access the designer." lightbox="./media/tutorial-designer-automobile-price-train-score/launch-designer.png":::
@@ -78,7 +78,7 @@ You need an Azure Machine Learning workspace to use the designer. The workspace

 There are several sample datasets included in the designer for you to experiment with. For this tutorial, use **Automobile price data (Raw)**.

-1. To the left of the pipeline canvas is a palette of datasets and components. Select **Component** -> **Sample data**.
+1. To the left of the pipeline canvas is a palette of datasets and components. Select **Component** > **Sample data**.

 1. Select the dataset **Automobile price data (Raw)**, and drag it onto the canvas.
@@ -100,7 +100,7 @@ Datasets typically require some preprocessing before analysis. You might have no

 ### Remove a column

-When you train a model, you have to do something about the data that's missing. In this dataset, the **normalized-losses** column is missing many values, so you exclude that column from the model altogether.
+When you train a model, you must do something about the data that's missing. In this dataset, the **normalized-losses** column is missing many values, so you exclude that column from the model altogether.

 1. In the datasets and component palette to the left of the canvas, select **Component** and search for the **Select Columns in Dataset** component.
@@ -154,9 +154,9 @@ Your dataset still has missing values after you remove the **normalized-losses**

 1. Select **Edit column** to the right of the pane.

-1. In the **Columns to be cleaned** window that appears, expand the drop-down menu next to **Include**. Select, **All columns**
+1. In the **Columns to be cleaned** window that appears, expand the drop-down menu next to **Include**. Select **All columns**.

-1. Select **Save**
+1. Select **Save**.

 1. In the **Clean Missing Data** component details pane, under **Cleaning mode**, select **Remove entire row**.

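Outside the designer, the same two preprocessing steps reduce to a few lines of pandas; the file name and the "?" missing-value marker are assumptions based on the original UCI automobile dataset, not something the tutorial requires.

```python
import pandas as pd

# Hypothetical local copy of the raw dataset; "?" marks missing values in the UCI source.
df = pd.read_csv("automobile-price-data-raw.csv", na_values="?")

# Exclude the sparsely populated normalized-losses column ...
df = df.drop(columns=["normalized-losses"])

# ... then mimic the "Remove entire row" cleaning mode for the remaining missing values.
df = df.dropna()
print(df.shape)
```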
@@ -191,7 +191,7 @@ Splitting data is a common task in machine learning. You'll split your data into

 1. Select the arrow icon under **Pipeline interface** to the right of the canvas to open the component details pane. Alternatively, you can double-click the **Split Data** component to open the details pane.

-1. In the **Split Data** details pane, set the **Fraction of rows in the first output dataset** to 0.7.
+1. In the **Split Data** details pane, set the **Fraction of rows in the first output dataset** to *0.7*.

 This option splits 70 percent of the data to train the model and 30 percent for testing it. The 70 percent dataset is accessible through the left output port. The remaining data is available through the right output port.

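For comparison, a quick sketch of the same 70/30 split in code, assuming the cleaned data is available locally as a CSV (hypothetical file name) rather than as a designer dataset.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("automobile-price-data-cleaned.csv")  # hypothetical cleaned export

# 70 percent of rows for training and 30 percent for testing, mirroring the
# "Fraction of rows in the first output dataset" setting of 0.7.
train_df, test_df = train_test_split(df, train_size=0.7, random_state=42)
print(len(train_df), len(test_df))
```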
@@ -271,13 +271,13 @@ Use the **Evaluate Model** component to evaluate how well your model scored the

 :::image type="content" source="./media/tutorial-designer-automobile-price-train-score/submission-wizard.png" alt-text="Screenshot showing submission wizard." lightbox ="./media/tutorial-designer-automobile-price-train-score/submission-wizard.png":::

-In **Basics** step, you can configure the experiment, job display name, job description, etc.
+In **Basics**, you can configure the experiment, job display name, job description, etc.

-In **Inputs & Outputs** step, you can assign value to the inputs and outputs that are promoted to pipeline level. It's empty in this example because we didn't promote any input or output to pipeline level.
+In **Inputs & Outputs**, you can assign value to the inputs and outputs that are promoted to pipeline level. It's empty in this example because we didn't promote any input or output to pipeline level.

-In **Runtime settings**, you can configure the default datastore and default compute to the pipeline. It's the default datastore and compute for all components in the pipeline. However, if you set a different compute or datastore for a component explicitly, the system respects the component-level setting. Otherwise, it uses the default.
+In **Runtime settings**, you can configure the default datastore and default compute to the pipeline. It's the default datastore and compute for all components in the pipeline. However, if you set a different compute or datastore for a component explicitly, the system respects the component-level setting. Otherwise, it uses the default.

-The **Review + Submit** step is the last step to review all settings before submit. The wizard remembers your last configuration if you ever submit the pipeline.
+The **Review + Submit** step is the last step to review all settings before submit. The wizard remembers your last configuration if you ever submit the pipeline.

 After submitting the pipeline job, there is a message on the top with a link to the job detail. You can select this link to review the job details.