# Create an Azure Machine Learning compute cluster
## Troubleshooting
Some users who created their Azure Machine Learning workspace from the Azure portal before the GA release might not be able to create compute in that workspace. To unblock yourself immediately, either raise a support request against the service or create a new workspace through the portal or the SDK.
---

*articles/machine-learning/how-to-schedule-pipeline-job.md*
ms.subservice: mlops
ms.author: lagayhar
author: lgayhardt
ms.reviewer: keli19
ms.date: 09/11/2025
ms.topic: how-to
---
## Create a schedule
When a pipeline job has satisfactory performance and outputs, you can set up a schedule to automatically trigger the job on a regular basis. To do so, create a schedule that associates the job with a trigger. The trigger can be either a `recurrence` pattern or a `cron` expression that specifies the interval and frequency for running the job.
In both cases, you need to define a pipeline job first, either inline or by specifying an existing pipeline job. You can define pipelines in YAML and run them from the CLI, author pipelines inline in Python, or compose pipelines in Azure Machine Learning studio. You can create pipeline jobs locally or from existing jobs in the workspace.
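For example, with the CLI a cron-based schedule can be defined in YAML. The following is a sketch only: the schedule name, times, and the pipeline definition path `./simple-pipeline-job.yml` are illustrative placeholders, not values from this article.

```yaml
$schema: https://azuremlschemas.azureedge.net/latest/schedule.schema.json
name: simple_cron_job_schedule            # illustrative name
display_name: Simple cron job schedule
description: run the pipeline job every day at 10:15
trigger:
  type: cron
  expression: "15 10 * * *"               # minute hour day month weekday
  time_zone: "UTC"
create_job: ./simple-pipeline-job.yml     # assumed existing pipeline job YAML
```

A `recurrence` trigger follows the same shape, with `type: recurrence` plus `frequency` and `interval` fields instead of a cron expression.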
---
## Role-based access control (RBAC) support
Because schedules are used for production, it's important to reduce the possibility and impact of mistakes. Workspace admins can restrict access to schedule creation and management in a workspace.
## Cost considerations
Schedules are billed based on the number of schedules. Each schedule creates a logic app that Azure Machine Learning hosts on behalf of (HOBO) the user.
Therefore, the logic app isn't shown as a resource under the user's subscription in the Azure portal.
However, the logic app's cost is charged back to the user's Azure subscription. HOBO resource costs are billed using the same meter emitted by the original resource provider. Charges appear under the host resource, which is the Azure Machine Learning workspace.
---
Learn to deploy a model to an online endpoint using Azure Machine Learning Python SDK v2.
In this tutorial, you deploy and use a model that predicts the likelihood of a customer defaulting on a credit card payment.
The steps you take are:
> * Get details of the second deployment
> * Roll out the new deployment and delete the first one
This video shows how to get started in Azure Machine Learning studio so you can follow the steps in the tutorial. The video shows how to create a notebook, create a compute instance, and clone the notebook. The steps are also described in the following sections.
[!INCLUDE [notebook set kernel](includes/prereq-set-kernel.md)]
> [!NOTE]
> Serverless Spark Compute doesn't have `Python 3.10 - SDK v2` installed by default. We recommend that you create a compute instance and select it before proceeding with the tutorial.
Before you dive into the code, you need a way to reference your workspace. Create `ml_client` for a handle to the workspace and use the `ml_client` to manage resources and jobs.
In the next cell, enter your Subscription ID, Resource Group name, and Workspace name. To find these values:
1. In the upper-right Azure Machine Learning studio toolbar, select your workspace name.
1. Copy the value for workspace, resource group, and subscription ID into the code.
1. Copy one value at a time: close the area, paste the value, then come back for the next one.
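A minimal handle-creation sketch follows. It assumes the `azure-ai-ml` and `azure-identity` packages are installed; the three angle-bracket values are placeholders for the IDs you copied above.

```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Create a handle to the workspace; authentication happens lazily on first use.
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",      # placeholder
    resource_group_name="<RESOURCE_GROUP>",   # placeholder
    workspace_name="<AML_WORKSPACE_NAME>",    # placeholder
)
```

Creating the client doesn't contact the workspace; the first resource or job operation you run through `ml_client` does.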
If you didn't complete the training tutorial, you need to register the model. Registering your model before deployment is a best practice.
The following code specifies the `path` (where to upload files from) inline. If you [cloned the tutorials folder](quickstart-create-resources.md#learn-from-sample-notebooks), run the following code as-is. Otherwise, download the files and metadata for the model from the [credit_defaults_model folder](https://github.com/Azure/azureml-examples/tree/main/tutorials/get-started-notebooks/deploy/credit_defaults_model). Save the files you downloaded into a local version of the *credit_defaults_model* folder on your computer and update the path in the following code to the location of the downloaded files.
The SDK automatically uploads the files and registers the model.
Now that you have a registered model, you can create an endpoint and deployment.
## Endpoints and deployments
After you train a machine learning model, you need to deploy it so others can use it for inferencing. For this purpose, Azure Machine Learning allows you to create **endpoints** and add **deployments** to them.
An **endpoint**, in this context, is an HTTPS path that provides an interface for clients to send requests (input data) to a trained model and receive the inferencing (scoring) results from the model. An endpoint provides:
- A stable scoring URI (endpoint-name.region.inference.ml.azure.com)
A **deployment** is a set of resources required for hosting the model that does the actual inferencing.
A single endpoint can contain multiple deployments. Endpoints and deployments are independent Azure Resource Manager resources that appear in the Azure portal.
Azure Machine Learning allows you to implement [online endpoints](concept-endpoints-online.md) for real-time inferencing on client data and [batch endpoints](concept-endpoints-batch.md) for inferencing on large volumes of data over a period of time.
In this tutorial, you go through the steps of implementing a _managed online endpoint_. Managed online endpoints work with powerful CPU and GPU machines in Azure in a scalable, fully managed way that frees you from the overhead of setting up and managing the underlying deployment infrastructure.
## Create an online endpoint
Now that you have a registered model, it's time to create your online endpoint. The endpoint name needs to be unique in the entire Azure region. For this tutorial, you create a unique name using a universally unique identifier [`UUID`](https://en.wikipedia.org/wiki/Universally_unique_identifier). For more information on endpoint naming rules, see [endpoint limits](how-to-manage-quotas.md#azure-machine-learning-online-endpoints-and-batch-endpoints).
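For example, a unique name can be built by appending a short UUID suffix to a readable prefix. The `credit-endpoint-` prefix here is an illustrative choice, not a requirement.

```python
import uuid

# Append the first 8 characters of a random UUID to keep the name
# readable while making a region-wide collision very unlikely.
online_endpoint_name = "credit-endpoint-" + str(uuid.uuid4())[:8]
print(online_endpoint_name)
```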
First, define the endpoint using the `ManagedOnlineEndpoint` class.
> [!TIP]
> * `auth_mode`: Use `key` for key-based authentication. Use `aml_token` for Azure Machine Learning token-based authentication. A `key` doesn't expire, but `aml_token` does expire. For more information on authenticating, see [Authenticate clients for online endpoints](how-to-authenticate-online-endpoint.md).
>
> * Optionally, you can add a description and tags to your endpoint.
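A minimal endpoint definition might look like the following sketch. It assumes `online_endpoint_name` from the previous step and the `azure-ai-ml` package; the description and tag values are illustrative.

```python
from azure.ai.ml.entities import ManagedOnlineEndpoint

# Define (not yet create) the endpoint with key-based authentication.
endpoint = ManagedOnlineEndpoint(
    name=online_endpoint_name,
    description="online endpoint for credit default predictions",  # illustrative
    auth_mode="key",  # or "aml_token" for token-based authentication
    tags={"training_dataset": "credit_defaults"},  # optional, illustrative tag
)
```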
Using the `MLClient` created earlier, create the endpoint in the workspace. This command starts the endpoint creation and returns a confirmation response while endpoint creation continues.
> [!NOTE]
> Expect the endpoint creation to take approximately 2 minutes.
The key aspects of a deployment include:

- `endpoint_name` - Name of the endpoint that will contain the deployment.
- `model` - The model to use for the deployment. This value can be either a reference to an existing versioned model in the workspace or an inline model specification.
- `environment` - The environment to use for the deployment (or to run the model). This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification. The environment can be a Docker image with Conda dependencies or a Dockerfile.
- `code_configuration` - The configuration for the source code and scoring script.
    - `path` - Path to the source code directory for scoring the model.
    - `scoring_script` - Relative path to the scoring file in the source code directory. This script executes the model on a given input request. For an example of a scoring script, see [Understand the scoring script](how-to-deploy-online-endpoints.md#understand-the-scoring-script) in the "Deploy an ML model with an online endpoint" article.
- `instance_type` - The VM size to use for the deployment. For the list of supported sizes, see [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md).
- `instance_count` - The number of instances to use for the deployment.
### Deployment using an MLflow model
Azure Machine Learning supports no-code deployment of a model created and logged with MLflow. This means you don't have to provide a scoring script or an environment during model deployment, as the scoring script and environment are automatically generated when training an MLflow model. If you were using a custom model, though, you'd have to specify the environment and scoring script during deployment.
> [!IMPORTANT]
> If you typically deploy models using scoring scripts and custom environments and want to achieve the same functionality using MLflow models, we recommend reading [Guidelines for deploying MLflow models](how-to-deploy-mlflow-models.md).
Begin by creating a single deployment that handles 100% of the incoming traffic. Choose an arbitrary color name (*blue*) for the deployment. To create the deployment for the endpoint, use the `ManagedOnlineDeployment` class.
> [!NOTE]
> You don't need to specify an environment or scoring script, because the model to deploy is an MLflow model.
```python
from azure.ai.ml.entities import ManagedOnlineDeployment

model = ml_client.models.get(name=registered_model_name, version=latest_model_version)

# define an online deployment
# if you run into an out of quota error, change the instance_type to a comparable VM that is available.
# Learn more on https://azure.microsoft.com/en-us/pricing/details/machine-learning/.
blue_deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name=online_endpoint_name,
    model=model,
    instance_type="Standard_DS3_v2",  # example VM size; adjust if quota is limited
    instance_count=1,
)
```
Using the `MLClient` created earlier, create the deployment in the workspace. This command starts the deployment creation and returns a confirmation response while deployment creation continues.
You can check the status of the endpoint to see whether the model was deployed without error:
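A minimal status check, as a sketch that assumes `ml_client` and `online_endpoint_name` from the earlier cells:

```python
# Retrieve the endpoint and inspect its state and traffic allocation.
endpoint = ml_client.online_endpoints.get(name=online_endpoint_name)

print(endpoint.provisioning_state)  # "Succeeded" once creation completes
print(endpoint.traffic)             # deployment name -> traffic percentage
```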
## Test the endpoint with sample data
Now that the model is deployed to the endpoint, you can run inference with it. Start by creating a sample request file that follows the design expected in the run method found in the scoring script.
```python
import os

deploy_dir = "./deploy"
os.makedirs(deploy_dir, exist_ok=True)
```
Now create the file in the deploy directory. The following code cell uses IPython magic to write the file into the directory you created.
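A sketch of what that cell produces follows. The notebook uses the `%%writefile` magic, but plain Python writes the same file; the column indices and feature values below are placeholders, not the real credit-defaults schema.

```python
import json
import os

deploy_dir = "./deploy"
os.makedirs(deploy_dir, exist_ok=True)  # directory from the previous cell

# Placeholder request body; the real columns and values must match the
# schema that the scoring script's run method expects.
sample_request = {
    "input_data": {
        "columns": [0, 1, 2],
        "index": [0],
        "data": [[20000, 2, 1]],
    }
}

with open(os.path.join(deploy_dir, "sample-request.json"), "w") as f:
    json.dump(sample_request, f)
```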
You can split production traffic between deployments. You might first want to test the `green` deployment with sample data, just like you did for the `blue` deployment. Once you've tested your green deployment, allocate a small percentage of traffic to it.
For more information on how to view online endpoint metrics, see [Monitor online endpoints](how-to-monitor-online-endpoints.md#use-metrics).
## Send all traffic to the new deployment
Once you're fully satisfied with your `green` deployment, switch all traffic to it.
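As a sketch, assuming `ml_client`, the `endpoint` object, and deployments named `blue` and `green` from the earlier steps:

```python
# Route 100% of traffic to the green deployment, then update the endpoint.
endpoint.traffic = {"blue": 0, "green": 100}
ml_client.online_endpoints.begin_create_or_update(endpoint).result()
```

After the update completes, the `blue` deployment no longer receives traffic and can be deleted.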