
Commit d12f537

Merge pull request #7019 from s-polly/stp_ML-freshness_9-10

ML Freshness pass

2 parents 68825a4 + 88418dd

4 files changed (+75, -64 lines)

articles/machine-learning/how-to-create-attach-compute-cluster.md

Lines changed: 2 additions & 2 deletions

@@ -10,7 +10,7 @@ ms.custom: devx-track-azurecli, cliv2, sdkv2, build-2023
 ms.author: scottpolly
 author: s-polly
 ms.reviewer: vijetaj
-ms.date: 05/03/2024
+ms.date: 09/11/2025
 ---

 # Create an Azure Machine Learning compute cluster

@@ -255,7 +255,7 @@ For information on how to configure a managed identity with your compute cluster

 ## Troubleshooting

-There's a chance that some users who created their Azure Machine Learning workspace from the Azure portal before the GA release might not be able to create AmlCompute in that workspace. You can either raise a support request against the service or create a new workspace through the portal or the SDK to unblock yourself immediately.
+There's a chance that some users who created their Azure Machine Learning workspace from the Azure portal before the GA release might not be able to create compute in that workspace. You can either raise a support request against the service or create a new workspace through the portal or the SDK to unblock yourself immediately.

 [!INCLUDE [retiring vms](./includes/retiring-vms.md)]
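
The troubleshooting note above points to the SDK as a workaround when a workspace can't create compute. A minimal sketch of creating a compute cluster with the v2 Python SDK follows; the workspace values, cluster name, and VM size are placeholders, not part of this change.

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import AmlCompute
from azure.identity import DefaultAzureCredential

# Placeholder workspace details; replace with your own values.
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<WORKSPACE_NAME>",
)

# Define a CPU cluster that scales down to zero nodes when idle.
cluster = AmlCompute(
    name="cpu-cluster",
    size="Standard_DS3_v2",
    min_instances=0,
    max_instances=4,
    idle_time_before_scale_down=120,
)

# Creating or updating the cluster is a long-running operation.
ml_client.compute.begin_create_or_update(cluster).result()
```

Setting `min_instances` to zero keeps the cluster from accruing compute charges while it sits idle.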

articles/machine-learning/how-to-schedule-pipeline-job.md

Lines changed: 4 additions & 4 deletions

@@ -8,7 +8,7 @@ ms.subservice: mlops
 ms.author: lagayhar
 author: lgayhardt
 ms.reviewer: keli19
-ms.date: 09/09/2024
+ms.date: 09/11/2025
 ms.topic: how-to
 ---

@@ -56,7 +56,7 @@ This article shows you how to create, retrieve, update, and deactivate schedules

 ## Create a schedule

-When you have a pipeline job with satisfying performance and outputs, you can set up a schedule to automatically trigger the job on a regular basis. To do so, you must create a schedule that associates the job with a trigger. The trigger can be either a `recurrence` pattern or a `cron` expression that specifies the interval and frequency to run the job.
+When you have a pipeline job with satisfying performance and outputs, you can set up a schedule to automatically trigger the job regularly. To do so, you must create a schedule that associates the job with a trigger. The trigger can be either a `recurrence` pattern or a `cron` expression that specifies the interval and frequency to run the job.

 In both cases, you need to define a pipeline job first, either inline or by specifying an existing pipeline job. You can define pipelines in YAML and run them from the CLI, author pipelines inline in Python, or compose pipelines in Azure Machine Learning studio. You can create pipeline jobs locally or from existing jobs in the workspace.

@@ -455,7 +455,7 @@ You can also apply [Azure CLI JMESPath query](/cli/azure/query-azure-cli) to que

 ---

-## Role-based access control (RBAC) support
+## Role-based access controls (RBAC) support

 Because schedules are used for production, it's important to reduce the possibility and impact of misoperation. Workspace admins can restrict access to schedule creation and management in a workspace.

@@ -470,7 +470,7 @@ Admins can configure the following action rules related to schedules in the Azur
 ## Cost considerations

 Schedules are billed based on the number of schedules. Each schedule creates a logic app that Azure Machine Learning hosts on behalf of (HOBO) the user.
-Therefore the logic app cannot be shown as a resource under the user's subscription in Azure portal.
+Therefore the logic app can't be shown as a resource under the user's subscription in Azure portal.

 On the other hand, the logic app charges back to the user's Azure subscription. HOBO resource costs are billed using the same meter emitted by the original resource provider. Charges appear under the host resource, which is the Azure Machine Learning workspace.
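
For context on the "Create a schedule" hunk above, here's a minimal sketch of a cron-triggered schedule with the v2 Python SDK; the workspace values, job name, and timing are placeholders rather than anything from this change.

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import CronTrigger, JobSchedule
from azure.identity import DefaultAzureCredential

# Placeholder workspace details; replace with your own values.
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<WORKSPACE_NAME>",
)

# Reuse an existing pipeline job as the job to trigger (job name is hypothetical).
pipeline_job = ml_client.jobs.get("example-pipeline-job")

# Run every weekday at 09:00 UTC; fields are minute, hour, day, month, weekday.
trigger = CronTrigger(expression="0 9 * * 1-5", time_zone="UTC")

schedule = JobSchedule(
    name="weekday-training-schedule",
    trigger=trigger,
    create_job=pipeline_job,
)

# Creating or updating the schedule is a long-running operation.
ml_client.schedules.begin_create_or_update(schedule).result()
```

A `RecurrenceTrigger` with a frequency and interval works the same way if you prefer a recurrence pattern over a cron expression.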

articles/machine-learning/tutorial-deploy-model.md

Lines changed: 30 additions & 25 deletions

@@ -9,7 +9,7 @@ ms.topic: tutorial
 author: s-polly
 ms.author: scottpolly
 ms.reviewer: sehan
-ms.date: 09/13/2024
+ms.date: 09/10/2025
 ms.custom:
 - mlops
 - devx-track-python #add more custom tags

@@ -20,7 +20,8 @@ ms.custom:
 # Deploy a model as an online endpoint

 [!INCLUDE [sdk v2](includes/machine-learning-sdk-v2.md)]
-Learn to deploy a model to an online endpoint, using Azure Machine Learning Python SDK v2.
+
+Learn to deploy a model to an online endpoint using Azure Machine Learning Python SDK v2.

 In this tutorial, you deploy and use a model that predicts the likelihood of a customer defaulting on a credit card payment.

@@ -38,7 +39,7 @@ The steps you take are:
 > * Get details of the second deployment
 > * Roll out the new deployment and delete the first one

-This video shows how to get started in Azure Machine Learning studio so that you can follow the steps in the tutorial. The video shows how to create a notebook, create a compute instance, and clone the notebook. The steps are also described in the following sections.
+This video shows how to get started in Azure Machine Learning studio so you can follow the steps in the tutorial. The video shows how to create a notebook, create a compute instance, and clone the notebook. The steps are also described in the following sections.

 > [!VIDEO https://learn-video.azurefd.net/vod/player?id=7d0e09a5-c319-4e6a-85e2-c9601a0fba68]

@@ -57,20 +58,20 @@ This video shows how to get started in Azure Machine Learning studio so that you
 [!INCLUDE [notebook set kernel](includes/prereq-set-kernel.md)]

 > [!NOTE]
->- Serverless Spark Compute doesn't have `Python 3.10 - SDK v2` installed by default. We recommend that users create a compute instance and select it before proceeding with the tutorial.
+> Serverless Spark Compute doesn't have `Python 3.10 - SDK v2` installed by default. We recommend that you create a compute instance and select it before proceeding with the tutorial.

 <!-- nbstart https://raw.githubusercontent.com/Azure/azureml-examples/main/tutorials/get-started-notebooks/deploy-model.ipynb -->


 ## Create handle to workspace

-Before you dive in the code, you need a way to reference your workspace. Create `ml_client` for a handle to the workspace and use the `ml_client` to manage resources and jobs.
+Before you dive into the code, you need a way to reference your workspace. Create `ml_client` for a handle to the workspace and use the `ml_client` to manage resources and jobs.

 In the next cell, enter your Subscription ID, Resource Group name, and Workspace name. To find these values:

 1. In the upper right Azure Machine Learning studio toolbar, select your workspace name.
 1. Copy the value for workspace, resource group, and subscription ID into the code.
-1. You need to copy one value, close the area and paste, then come back for the next one.
+1. You need to copy one value, close the area, paste, then come back for the next one.


 ```python

@@ -99,7 +100,7 @@ If you already completed the earlier training tutorial, [Train a model](tutorial

 If you didn't complete the training tutorial, you need to register the model. Registering your model before deployment is a recommended best practice.

-The following code specifies the `path` (where to upload files from) inline. If you [cloned the tutorials folder](quickstart-create-resources.md#learn-from-sample-notebooks), then run the following code as-is. Otherwise, download the files and metadata for the model from the [credit_defaults_model folder](https://github.com/Azure/azureml-examples/tree/main/tutorials/get-started-notebooks/deploy/credit_defaults_model). Save the files you downloaded into a local version of the *credit_defaults_model* folder on your computer and update the path in the following code to the location of the downloaded files.
+The following code specifies the `path` (where to upload files from) inline. If you [cloned the tutorials folder](quickstart-create-resources.md#learn-from-sample-notebooks), run the following code as-is. Otherwise, download the files and metadata for the model from the [credit_defaults_model folder](https://github.com/Azure/azureml-examples/tree/main/tutorials/get-started-notebooks/deploy/credit_defaults_model). Save the files you downloaded into a local version of the *credit_defaults_model* folder on your computer and update the path in the following code to the location of the downloaded files.

 The SDK automatically uploads the files and registers the model.
@@ -148,26 +149,25 @@ Now that you have a registered model, you can create an endpoint and deployment.

 ## Endpoints and deployments

-After you train a machine learning model, you need to deploy it so that others can use it for inferencing. For this purpose, Azure Machine Learning allows you to create **endpoints** and add **deployments** to them.
+After you train a machine learning model, you need to deploy it so others can use it for inferencing. For this purpose, Azure Machine Learning allows you to create **endpoints** and add **deployments** to them.

 An **endpoint**, in this context, is an HTTPS path that provides an interface for clients to send requests (input data) to a trained model and receive the inferencing (scoring) results from the model. An endpoint provides:

 - Authentication using "key or token" based auth
 - [TLS(SSL)](https://simple.wikipedia.org/wiki/Transport_Layer_Security) termination
 - A stable scoring URI (endpoint-name.region.inference.ml.azure.com)

-
 A **deployment** is a set of resources required for hosting the model that does the actual inferencing.

 A single endpoint can contain multiple deployments. Endpoints and deployments are independent Azure Resource Manager resources that appear in the Azure portal.

-Azure Machine Learning allows you to implement [online endpoints](concept-endpoints-online.md) for real-time inferencing on client data, and [batch endpoints](concept-endpoints-batch.md) for inferencing on large volumes of data over a period of time.
+Azure Machine Learning allows you to implement [online endpoints](concept-endpoints-online.md) for real-time inferencing on client data and [batch endpoints](concept-endpoints-batch.md) for inferencing on large volumes of data over a period of time.

 In this tutorial, you go through the steps of implementing a _managed online endpoint_. Managed online endpoints work with powerful CPU and GPU machines in Azure in a scalable, fully managed way that frees you from the overhead of setting up and managing the underlying deployment infrastructure.

 ## Create an online endpoint

-Now that you have a registered model, it's time to create your online endpoint. The endpoint name needs to be unique in the entire Azure region. For this tutorial, you create a unique name using a universally unique identifier [`UUID`](https://en.wikipedia.org/wiki/Universally_unique_identifier). For more information on the endpoint naming rules, see [endpoint limits](how-to-manage-quotas.md#azure-machine-learning-online-endpoints-and-batch-endpoints).
+Now that you have a registered model, it's time to create your online endpoint. The endpoint name needs to be unique in the entire Azure region. For this tutorial, you create a unique name using a universally unique identifier [`UUID`](https://en.wikipedia.org/wiki/Universally_unique_identifier). For more information on endpoint naming rules, see [endpoint limits](how-to-manage-quotas.md#azure-machine-learning-online-endpoints-and-batch-endpoints).


 ```python

@@ -177,12 +177,12 @@ import uuid
 online_endpoint_name = "credit-endpoint-" + str(uuid.uuid4())[:8]
 ```

-First, define the endpoint, using the `ManagedOnlineEndpoint` class.
+First, define the endpoint using the `ManagedOnlineEndpoint` class.



 > [!TIP]
-> * `auth_mode` : Use `key` for key-based authentication. Use `aml_token` for Azure Machine Learning token-based authentication. A `key` doesn't expire, but `aml_token` does expire. For more information on authenticating, see [Authenticate clients for online endpoints](how-to-authenticate-online-endpoint.md).
+> * `auth_mode`: Use `key` for key-based authentication. Use `aml_token` for Azure Machine Learning token-based authentication. A `key` doesn't expire, but `aml_token` does expire. For more information on authenticating, see [Authenticate clients for online endpoints](how-to-authenticate-online-endpoint.md).
 >
 > * Optionally, you can add a description and tags to your endpoint.
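
The hunk above references the `ManagedOnlineEndpoint` definition without showing it in full; a sketch of the shape it takes, assuming the `online_endpoint_name` built in the earlier cell (the description and tag values are illustrative):

```python
from azure.ai.ml.entities import ManagedOnlineEndpoint

# Define the endpoint; key-based auth keeps the example simple.
endpoint = ManagedOnlineEndpoint(
    name=online_endpoint_name,  # unique name built earlier with uuid
    description="Online endpoint for credit default predictions",
    auth_mode="key",  # or "aml_token" for token-based auth
    tags={"training_dataset": "credit_defaults"},
)
```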
@@ -201,7 +201,7 @@ endpoint = ManagedOnlineEndpoint(
 )
 ```

-Using the `MLClient` created earlier, create the endpoint in the workspace. This command starts the endpoint creation and returns a confirmation response while the endpoint creation continues.
+Using the `MLClient` created earlier, create the endpoint in the workspace. This command starts the endpoint creation and returns a confirmation response while endpoint creation continues.

 > [!NOTE]
 > Expect the endpoint creation to take approximately 2 minutes.

@@ -233,15 +233,15 @@ The key aspects of a deployment include:
 - `endpoint_name` - Name of the endpoint that will contain the deployment.
 - `model` - The model to use for the deployment. This value can be either a reference to an existing versioned model in the workspace or an inline model specification.
 - `environment` - The environment to use for the deployment (or to run the model). This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification. The environment can be a Docker image with Conda dependencies or a Dockerfile.
-- `code_configuration` - the configuration for the source code and scoring script.
-- `path`- Path to the source code directory for scoring the model.
+- `code_configuration` - The configuration for the source code and scoring script.
+- `path` - Path to the source code directory for scoring the model.
 - `scoring_script` - Relative path to the scoring file in the source code directory. This script executes the model on a given input request. For an example of a scoring script, see [Understand the scoring script](how-to-deploy-online-endpoints.md#understand-the-scoring-script) in the "Deploy an ML model with an online endpoint" article.
 - `instance_type` - The VM size to use for the deployment. For the list of supported sizes, see [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md).
 - `instance_count` - The number of instances to use for the deployment.

 ### Deployment using an MLflow model

-Azure Machine Learning supports no-code deployment of a model created and logged with MLflow. This means that you don't have to provide a scoring script or an environment during model deployment, as the scoring script and environment are automatically generated when training an MLflow model. If you were using a custom model, though, you'd have to specify the environment and scoring script during deployment.
+Azure Machine Learning supports no-code deployment of a model created and logged with MLflow. This means you don't have to provide a scoring script or an environment during model deployment, as the scoring script and environment are automatically generated when training an MLflow model. If you were using a custom model, though, you'd have to specify the environment and scoring script during deployment.

 > [!IMPORTANT]
 > If you typically deploy models using scoring scripts and custom environments and want to achieve the same functionality using MLflow models, we recommend reading [Guidelines for deploying MLflow models](how-to-deploy-mlflow-models.md).
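
The MLflow hunk notes that a custom (non-MLflow) model needs an explicit environment and scoring script; a hedged sketch of that alternative follows, where the base image, conda file, and scoring-script paths are assumptions rather than anything from this change.

```python
from azure.ai.ml.entities import (
    CodeConfiguration,
    Environment,
    ManagedOnlineDeployment,
)

# For a custom model, you supply the environment and scoring script yourself.
custom_env = Environment(
    name="credit-scoring-env",
    image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest",  # assumed base image
    conda_file="./environment/conda.yaml",  # assumed local conda spec
)

custom_deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name=online_endpoint_name,
    model=model,
    environment=custom_env,
    code_configuration=CodeConfiguration(
        code="./src",  # assumed folder containing the scoring script
        scoring_script="score.py",
    ),
    instance_type="Standard_DS3_v2",
    instance_count=1,
)
```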
@@ -251,7 +251,7 @@ Azure Machine Learning supports no-code deployment of a model created and logged
 Begin by creating a single deployment that handles 100% of the incoming traffic. Choose an arbitrary color name (*blue*) for the deployment. To create the deployment for the endpoint, use the `ManagedOnlineDeployment` class.

 > [!NOTE]
-> No need to specify an environment or scoring script as the model to deploy is an MLflow model.
+> You don't need to specify an environment or scoring script as the model to deploy is an MLflow model.


 ```python

@@ -261,7 +261,7 @@ from azure.ai.ml.entities import ManagedOnlineDeployment
 model = ml_client.models.get(name=registered_model_name, version=latest_model_version)

 # define an online deployment
-# if you run into an out of quota error, change the instance_type to a comparable VM that is available.\
+# if you run into an out of quota error, change the instance_type to a comparable VM that is available.
 # Learn more on https://azure.microsoft.com/en-us/pricing/details/machine-learning/.
 blue_deployment = ManagedOnlineDeployment(
 name="blue",

@@ -272,7 +272,7 @@ blue_deployment = ManagedOnlineDeployment(
 )
 ```

-Using the `MLClient` created earlier, now create the deployment in the workspace. This command starts the deployment creation and returns a confirmation response while the deployment creation continues.
+Using the `MLClient` created earlier, create the deployment in the workspace. This command starts the deployment creation and returns a confirmation response while deployment creation continues.


 ```python

@@ -288,6 +288,7 @@ ml_client.online_endpoints.begin_create_or_update(endpoint).result()
 ```

 ## Check the status of the endpoint
+
 You can check the status of the endpoint to see whether the model was deployed without error:

@@ -312,7 +313,7 @@ print(endpoint.scoring_uri)

 ## Test the endpoint with sample data

-Now that the model is deployed to the endpoint, you can run inference with it. Begin by creating a sample request file that follows the design expected in the run method found in the scoring script.
+Now that the model is deployed to the endpoint, you can run inference with it. Start by creating a sample request file that follows the design expected in the run method found in the scoring script.


 ```python

@@ -323,7 +324,7 @@ deploy_dir = "./deploy"
 os.makedirs(deploy_dir, exist_ok=True)
 ```

-Now, create the file in the deploy directory. The following code cell uses IPython magic to write the file into the directory you just created.
+Now create the file in the deploy directory. The following code cell uses IPython magic to write the file into the directory you created.


 ```python

@@ -359,7 +360,8 @@ ml_client.online_endpoints.invoke(
 ```

 ## Get logs of the deployment
-Check the logs to see whether the endpoint/deployment were invoked successfully.
+
+Check the logs to see whether the endpoint/deployment was invoked successfully.
 If you face errors, see [Troubleshooting online endpoints deployment](how-to-troubleshoot-online-endpoints.md).

@@ -381,7 +383,7 @@ In this example, you deploy the same model version, using a more powerful comput
 model = ml_client.models.get(name=registered_model_name, version=latest_model_version)

 # define an online deployment using a more powerful instance type
-# if you run into an out of quota error, change the instance_type to a comparable VM that is available.\
+# if you run into an out of quota error, change the instance_type to a comparable VM that is available.
 # Learn more on https://azure.microsoft.com/en-us/pricing/details/machine-learning/.
 green_deployment = ManagedOnlineDeployment(
 name="green",

@@ -415,6 +417,7 @@ ml_client.online_deployments.begin_create_or_update(green_deployment).result()
 ```

 ## Update traffic allocation for deployments
+
 You can split production traffic between deployments. You might first want to test the `green` deployment with sample data, just like you did for the `blue` deployment. Once you've tested your green deployment, allocate a small percentage of traffic to it.

@@ -457,6 +460,7 @@ If you open the metrics for the online endpoint, you can set up the page to see
 For more information on how to view online endpoint metrics, see [Monitor online endpoints](how-to-monitor-online-endpoints.md#use-metrics).

 ## Send all traffic to the new deployment
+
 Once you're fully satisfied with your `green` deployment, switch all traffic to it.

@@ -466,6 +470,7 @@ ml_client.begin_create_or_update(endpoint).result()
 ```

 ## Delete the old deployment
+
 Remove the old (blue) deployment:


@@ -477,7 +482,7 @@ ml_client.online_deployments.begin_delete(

 ## Clean up resources

-If you aren't going use the endpoint and deployment after completing this tutorial, you should delete them.
+If you aren't going to use the endpoint and deployment after completing this tutorial, you should delete them.

 > [!NOTE]
 > Expect the complete deletion to take approximately 20 minutes.
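
To round out the delete and clean-up hunks, a sketch of removing the blue deployment and then the endpoint itself; it assumes the same `ml_client` and names used earlier in the notebook.

```python
# Remove the now-unused blue deployment.
ml_client.online_deployments.begin_delete(
    name="blue",
    endpoint_name=online_endpoint_name,
).result()

# When you're done with the tutorial, delete the endpoint
# (this also removes any remaining deployments).
ml_client.online_endpoints.begin_delete(name=online_endpoint_name).result()
```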
