Commit bb289ff
Merge pull request #199288 from Blackmist/endpoint-sdk-v2

---
title: Deploy machine learning models to managed online endpoint using Python SDK v2 (preview)
titleSuffix: Azure Machine Learning
description: Learn to deploy your machine learning model to Azure using Python SDK v2 (preview).
services: machine-learning
ms.service: machine-learning
ms.subservice: mlops
ms.author: ssambare
ms.reviewer: larryfr
author: shivanissambare
ms.date: 05/25/2022
ms.topic: how-to
ms.custom: how-to, devplatv2, sdkv2, deployment
---

# Deploy and score a machine learning model with a managed online endpoint using Python SDK v2 (preview)

[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]

> [!IMPORTANT]
> SDK v2 is currently in public preview.
> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).

In this article, you learn how to deploy your machine learning model to a managed online endpoint and get predictions. You'll begin by deploying a model on your local machine to debug any errors, and then you'll deploy and test it in Azure.

## Prerequisites

* If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/) today.
* The [Azure Machine Learning SDK v2 for Python](/python/api/overview/azure/ml/installv2).
* You must have an Azure resource group, and you (or the service principal you use) must have Contributor access to it.
* You must have an Azure Machine Learning workspace.
* To deploy locally, you must install [Docker Engine](https://docs.docker.com/engine/) on your local computer. We highly recommend this option because it makes it easier to debug issues.

### Clone examples repository

To run the training examples, first clone the examples repository and change into the `sdk` directory:

```bash
git clone --depth 1 https://github.com/Azure/azureml-examples
cd azureml-examples/sdk
```

> [!TIP]
> Use `--depth 1` to clone only the latest commit to the repository, which reduces the time needed to complete the operation.

## Connect to Azure Machine Learning workspace

The [workspace](concept-workspace.md) is the top-level resource for Azure Machine Learning, providing a centralized place to work with all the artifacts you create when you use Azure Machine Learning. In this section, we'll connect to the workspace in which you'll perform deployment tasks.

1. Import the required libraries:

    ```python
    # import required libraries
    from azure.ai.ml import MLClient
    from azure.ai.ml.entities import (
        ManagedOnlineEndpoint,
        ManagedOnlineDeployment,
        Model,
        Environment,
        CodeConfiguration,
    )
    from azure.identity import DefaultAzureCredential
    ```

1. Configure workspace details and get a handle to the workspace:

    To connect to a workspace, we need identifier parameters: a subscription ID, resource group, and workspace name. We'll use these details in the `MLClient` from `azure.ai.ml` to get a handle to the required Azure Machine Learning workspace. This example uses the [default Azure authentication](/python/api/azure-identity/azure.identity.defaultazurecredential).

    ```python
    # enter details of your AML workspace
    subscription_id = "<SUBSCRIPTION_ID>"
    resource_group = "<RESOURCE_GROUP>"
    workspace = "<AML_WORKSPACE_NAME>"
    ```

    ```python
    # get a handle to the workspace
    ml_client = MLClient(
        DefaultAzureCredential(), subscription_id, resource_group, workspace
    )
    ```

## Create local endpoint and deployment

> [!NOTE]
> To deploy locally, [Docker Engine](https://docs.docker.com/engine/install/) must be installed.
> Docker Engine must be running. Docker Engine typically starts when the computer starts. If it doesn't, you can [troubleshoot Docker Engine](https://docs.docker.com/config/daemon/#start-the-daemon-manually).

1. Create a local endpoint:

    The goal of a local endpoint deployment is to validate and debug your code and configuration before you deploy to Azure. Local deployment has the following limitations:

    * Local endpoints don't support traffic rules, authentication, or probe settings.
    * Local endpoints support only one deployment per endpoint.

    ```python
    # Creating a local endpoint
    import datetime

    local_endpoint_name = "local-" + datetime.datetime.now().strftime("%m%d%H%M%f")

    # create an online endpoint
    endpoint = ManagedOnlineEndpoint(
        name=local_endpoint_name, description="this is a sample local endpoint"
    )
    ```

    ```python
    ml_client.online_endpoints.begin_create_or_update(endpoint, local=True)
    ```

1. Create a local deployment:

    The example contains all the files needed to deploy a model on an online endpoint. To deploy a model, you must have:

    * Model files (or the name and version of a model that's already registered in your workspace). In the example, we have a scikit-learn model that does regression.
    * The code that's required to score the model. In this case, we have a `score.py` file.
    * An environment in which your model runs. As you'll see, the environment might be a Docker image with Conda dependencies, or it might be a Dockerfile.
    * Settings to specify the instance type and scaling capacity.

    **Key aspects of deployment**

    * `name` - Name of the deployment.
    * `endpoint_name` - Name of the endpoint to create the deployment under.
    * `model` - The model to use for the deployment. This value can be either a reference to an existing versioned model in the workspace or an inline model specification.
    * `environment` - The environment to use for the deployment. This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification.
    * `code_configuration` - The configuration for the source code and scoring script.
        * `path` - Path to the source code directory for scoring the model.
        * `scoring_script` - Relative path to the scoring file in the source code directory.
    * `instance_type` - The VM size to use for the deployment. For the list of supported sizes, see [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md).
    * `instance_count` - The number of instances to use for the deployment.

    ```python
    model = Model(path="../model-1/model/sklearn_regression_model.pkl")
    env = Environment(
        conda_file="../model-1/environment/conda.yml",
        image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1",
    )

    blue_deployment = ManagedOnlineDeployment(
        name="blue",
        endpoint_name=local_endpoint_name,
        model=model,
        environment=env,
        code_configuration=CodeConfiguration(
            code="../model-1/onlinescoring", scoring_script="score.py"
        ),
        instance_type="Standard_F2s_v2",
        instance_count=1,
    )
    ```

    ```python
    ml_client.online_deployments.begin_create_or_update(
        deployment=blue_deployment, local=True
    )
    ```

## Verify the local deployment succeeded

1. Check the status to see whether the model was deployed without error:

    ```python
    ml_client.online_endpoints.get(name=local_endpoint_name, local=True)
    ```

1. Get logs:

    ```python
    ml_client.online_deployments.get_logs(
        name="blue", endpoint_name=local_endpoint_name, local=True, lines=50
    )
    ```

## Invoke the local endpoint

Invoke the endpoint to score the model by using the convenience `invoke` command and passing query parameters that are stored in a JSON file:

```python
ml_client.online_endpoints.invoke(
    endpoint_name=local_endpoint_name,
    request_file="../model-1/sample-request.json",
    local=True,
)
```
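The exact schema of the request file is defined by the example's `score.py`, so the shape below is only an illustrative guess, not the repository's actual `sample-request.json`. This sketch shows how you could build such a file yourself, assuming the scoring script expects rows of numeric features under a `"data"` key:

```python
import json

# Illustrative only: the request schema is dictated by the scoring script.
# Here we assume it accepts a 2D array of feature rows under a "data" key.
sample_request = {"data": [[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]]}

# Write the payload to a file that can be passed as request_file=...
with open("sample-request.json", "w") as f:
    json.dump(sample_request, f)
```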

## Deploy your online endpoint to Azure

Next, deploy your online endpoint to Azure.

1. Configure online endpoint:

    > [!TIP]
    > * `endpoint_name`: The name of the endpoint. It must be unique in the Azure region. For more information on the naming rules, see [managed online endpoint limits](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints).
    > * `auth_mode`: Use `key` for key-based authentication. Use `aml_token` for Azure Machine Learning token-based authentication. A `key` doesn't expire, but `aml_token` does expire. For more information on authenticating, see [Authenticate to an online endpoint](how-to-authenticate-online-endpoint.md).
    > * Optionally, you can add a description and tags to your endpoint.

    ```python
    # Creating a unique endpoint name with current datetime to avoid conflicts
    import datetime

    online_endpoint_name = "endpoint-" + datetime.datetime.now().strftime("%m%d%H%M%f")

    # create an online endpoint
    endpoint = ManagedOnlineEndpoint(
        name=online_endpoint_name,
        description="this is a sample online endpoint",
        auth_mode="key",
        tags={"foo": "bar"},
    )
    ```

1. Create the endpoint:

    Using the `MLClient` created earlier, we'll now create the endpoint in the workspace. This command starts the endpoint creation and returns a confirmation response while the endpoint creation continues.

    ```python
    ml_client.begin_create_or_update(endpoint)
    ```

1. Configure online deployment:

    A deployment is a set of resources required for hosting the model that does the actual inferencing. We'll create a deployment for our endpoint using the `ManagedOnlineDeployment` class.

    ```python
    model = Model(path="../model-1/model/sklearn_regression_model.pkl")
    env = Environment(
        conda_file="../model-1/environment/conda.yml",
        image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1",
    )

    blue_deployment = ManagedOnlineDeployment(
        name="blue",
        endpoint_name=online_endpoint_name,
        model=model,
        environment=env,
        code_configuration=CodeConfiguration(
            code="../model-1/onlinescoring", scoring_script="score.py"
        ),
        instance_type="Standard_F2s_v2",
        instance_count=1,
    )
    ```

1. Create the deployment:

    Using the `MLClient` created earlier, we'll now create the deployment in the workspace. This command starts the deployment creation and returns a confirmation response while the deployment creation continues.

    ```python
    ml_client.begin_create_or_update(blue_deployment)
    ```

    ```python
    # blue deployment takes 100% of the traffic
    endpoint.traffic = {"blue": 100}
    ml_client.begin_create_or_update(endpoint)
    ```
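The `traffic` attribute set above is a plain mapping from deployment name to the percentage of requests routed to it. A later blue/green rollout might split traffic across two deployments like the sketch below; the `green` deployment is hypothetical here, since this article creates only `blue`:

```python
# Sketch: traffic is a dict of deployment name -> percentage of requests.
# "green" is hypothetical; this article only creates the "blue" deployment.
traffic = {"blue": 90, "green": 10}

# Percentages across all deployments should sum to 100.
assert sum(traffic.values()) == 100
```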

## Test the endpoint with sample data

Using the `MLClient` created earlier, we'll get a handle to the endpoint. The endpoint can be invoked using the `invoke` command with the following parameters:

* `endpoint_name` - Name of the endpoint
* `request_file` - File with request data
* `deployment_name` - Name of the specific deployment to test in an endpoint

We'll send a sample request using a [JSON](https://github.com/Azure/azureml-examples/blob/main/sdk/endpoints/online/model-1/sample-request.json) file.

```python
# test the blue deployment with some sample data
ml_client.online_endpoints.invoke(
    endpoint_name=online_endpoint_name,
    deployment_name="blue",
    request_file="../model-1/sample-request.json",
)
```
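The `invoke` call returns the scoring response as a string. Assuming the example's scoring script returns a JSON array of predictions (an assumption about its `score.py`, not something this article states), you could parse it as shown below; `response` is a hypothetical captured return value, not real output:

```python
import json

# Hypothetical response string, standing in for the return value of
# ml_client.online_endpoints.invoke(...) for a regression model.
response = "[3726.99, 1408.16]"

# Parse the JSON string into a Python list of predictions.
predictions = json.loads(response)
```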

## Managing endpoints and deployments

1. Get details of the endpoint:

    ```python
    # Get the details for online endpoint
    endpoint = ml_client.online_endpoints.get(name=online_endpoint_name)

    # existing traffic details
    print(endpoint.traffic)

    # Get the scoring URI
    print(endpoint.scoring_uri)
    ```

1. Get the logs for the new deployment:

    Get the logs for the blue deployment and verify as needed:

    ```python
    ml_client.online_deployments.get_logs(
        name="blue", endpoint_name=online_endpoint_name, lines=50
    )
    ```

## Delete the endpoint

```python
ml_client.online_endpoints.begin_delete(name=online_endpoint_name)
```

## Next steps

Try these next steps to learn how to use the Azure Machine Learning SDK (v2) for Python:

* [Managed online endpoint safe rollout](how-to-safely-rollout-managed-endpoints-sdk-v2.md)
* Explore online endpoint samples: [https://github.com/Azure/azureml-examples/tree/main/sdk/endpoints](https://github.com/Azure/azureml-examples/tree/main/sdk/endpoints)
