
Commit cd8ce73

Merge pull request #209081 from shohei1029/main
Added SDKv2 examples
2 parents 7810287 + 189b3fe commit cd8ce73

3 files changed: 495 additions, 49 deletions

articles/machine-learning/how-to-deploy-automl-endpoint.md

Lines changed: 141 additions & 4 deletions
@@ -10,21 +10,28 @@ ms.reviewer: larryfr
author: dem108
ms.date: 05/11/2022
ms.topic: how-to
ms.custom: how-to, devplatv2, devx-track-azurecli, cliv2, event-tier1-build-2022, sdkv2
ms.devlang: azurecli
---

# How to deploy an AutoML model to an online endpoint

[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]

[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]

> [!IMPORTANT]
> SDK v2 is currently in public preview.
> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).

In this article, you'll learn how to deploy an AutoML-trained machine learning model to an online (real-time inference) endpoint. Automated machine learning, also referred to as automated ML or AutoML, is the process of automating the time-consuming, iterative tasks of developing a machine learning model. For more information, see [What is automated machine learning (AutoML)?](concept-automated-ml.md).

This article shows how to deploy the model by using:

- Azure Machine Learning studio
- Azure Machine Learning CLI v2
- Azure Machine Learning Python SDK v2

## Prerequisites

@@ -156,10 +163,140 @@ You'll need to modify this file to use the files you downloaded from the AutoML
az ml online-deployment create -f automl_deployment.yml
```

After you create a deployment, you can score it as described in [Invoke the endpoint to score data by using your model](how-to-deploy-managed-online-endpoints.md#invoke-the-endpoint-to-score-data-by-using-your-model).
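
For example, a minimal sketch of invoking the endpoint from the CLI (the endpoint name and request file below are placeholders, not values from this article):

```azurecli
# Assumed endpoint name and sample request file; replace with your own values.
az ml online-endpoint invoke --name my-automl-endpoint --request-file sample-request.json
```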
# [Python](#tab/python)
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]

## Configure the Python SDK

If you haven't installed the Python SDK v2 yet, install it with this command:

```azurecli
pip install --pre azure-ai-ml
```

For more information, see [Install the Azure Machine Learning SDK v2 for Python](/python/api/overview/azure/ml/installv2).

## Put the scoring file in its own directory

Create a directory called `src/` and place the scoring file you downloaded into it. This directory is uploaded to Azure and contains all the source code necessary to do inference. For an AutoML model, there's just the single scoring file.
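
For example, a minimal sketch of setting up that directory with Python (the file names are assumptions based on the versions referenced later in this article; use the names of the files you actually downloaded):

```python
# Sketch: create src/ and copy the downloaded AutoML artifacts into it.
# The file names below are assumptions; adjust them to your downloaded files.
import shutil
from pathlib import Path

src_dir = Path("src")
src_dir.mkdir(exist_ok=True)

shutil.copy("scoring_file_v_2_0_0.py", src_dir)  # scoring script
shutil.copy("model.pkl", src_dir)                # trained model
shutil.copy("conda_env_v_1_0_0.yml", src_dir)    # conda environment file
```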

## Connect to Azure Machine Learning workspace

1. Import the required libraries:

    ```python
    # import required libraries
    from azure.ai.ml import MLClient
    from azure.ai.ml.entities import (
        ManagedOnlineEndpoint,
        ManagedOnlineDeployment,
        Model,
        Environment,
        CodeConfiguration,
    )
    from azure.identity import DefaultAzureCredential
    ```

1. Configure workspace details and get a handle to the workspace:

    ```python
    # enter details of your AzureML workspace
    subscription_id = "<SUBSCRIPTION_ID>"
    resource_group = "<RESOURCE_GROUP>"
    workspace = "<AZUREML_WORKSPACE_NAME>"
    ```

    ```python
    # get a handle to the workspace
    ml_client = MLClient(
        DefaultAzureCredential(), subscription_id, resource_group, workspace
    )
    ```

## Create the endpoint and deployment

Next, we'll create the managed online endpoint and deployment.

1. Configure online endpoint:

    > [!TIP]
    > * `name`: The name of the endpoint. It must be unique in the Azure region. The name for an endpoint must start with an upper- or lowercase letter and only consist of '-'s and alphanumeric characters. For more information on the naming rules, see [managed online endpoint limits](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints).
    > * `auth_mode`: Use `key` for key-based authentication. Use `aml_token` for Azure Machine Learning token-based authentication. A `key` doesn't expire, but `aml_token` does expire. For more information on authenticating, see [Authenticate to an online endpoint](how-to-authenticate-online-endpoint.md).

    ```python
    # Creating a unique endpoint name with current datetime to avoid conflicts
    import datetime

    online_endpoint_name = "endpoint-" + datetime.datetime.now().strftime("%m%d%H%M%f")

    # create an online endpoint
    endpoint = ManagedOnlineEndpoint(
        name=online_endpoint_name,
        description="this is a sample online endpoint",
        auth_mode="key",
    )
    ```

1. Create the endpoint:

    Using the `MLClient` created earlier, we'll now create the endpoint in the workspace. This command will start the endpoint creation and return a confirmation response while the endpoint creation continues.

    ```python
    ml_client.begin_create_or_update(endpoint)
    ```

1. Configure online deployment:

    A deployment is a set of resources required for hosting the model that does the actual inferencing. We'll create a deployment for our endpoint using the `ManagedOnlineDeployment` class.

    ```python
    model = Model(path="./src/model.pkl")
    env = Environment(
        conda_file="./src/conda_env_v_1_0_0.yml",
        image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:20210727.v1",
    )

    blue_deployment = ManagedOnlineDeployment(
        name="blue",
        endpoint_name=online_endpoint_name,
        model=model,
        environment=env,
        code_configuration=CodeConfiguration(
            code="./src", scoring_script="scoring_file_v_2_0_0.py"
        ),
        instance_type="Standard_DS2_v2",
        instance_count=1,
    )
    ```

    In the above example, we assume the files you downloaded from the AutoML Models page are in the `src` directory. You can modify the parameters in the code to suit your situation.

    | Parameter | Change to |
    | --- | --- |
    | `model:path` | The path to the `model.pkl` file you downloaded. |
    | `code_configuration:code:path` | The directory in which you placed the scoring file. |
    | `code_configuration:scoring_script` | The name of the Python scoring file (`scoring_file_<VERSION>.py`). |
    | `environment:conda_file` | A file URL for the downloaded conda environment file (`conda_env_<VERSION>.yml`). |

1. Create the deployment:

    Using the `MLClient` created earlier, we'll now create the deployment in the workspace. This command will start the deployment creation and return a confirmation response while the deployment creation continues.

    ```python
    ml_client.begin_create_or_update(blue_deployment)
    ```

After you create a deployment, you can score it as described in [Test the endpoint with sample data](how-to-deploy-managed-online-endpoint-sdk-v2.md#test-the-endpoint-with-sample-data).
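
For example, a minimal sketch of sending a sample request with the SDK (the `sample-request.json` file name is an assumption; use a request file that matches your model's input schema):

```python
# Sketch: invoke the "blue" deployment with a sample JSON request and print the response.
response = ml_client.online_endpoints.invoke(
    endpoint_name=online_endpoint_name,
    deployment_name="blue",
    request_file="sample-request.json",
)
print(response)
```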

To learn more about deploying to managed online endpoints with the SDK, see [Deploy machine learning models to managed online endpoint using Python SDK v2](how-to-deploy-managed-online-endpoint-sdk-v2.md).

---

## Next steps
- [Troubleshooting online endpoints deployment](how-to-troubleshoot-managed-online-endpoints.md)
