> The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities.
> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
In this article, you'll learn how to deploy an AutoML-trained machine learning model to an online (real-time inference) endpoint. Automated machine learning, also referred to as automated ML or AutoML, is the process of automating the time-consuming, iterative tasks of developing a machine learning model. For more information, see [What is automated machine learning (AutoML)?](concept-automated-ml.md).
In this article, you'll learn how to deploy an AutoML-trained machine learning model to online endpoints using:
- Azure Machine Learning studio
- Azure Machine Learning CLI v2
- Azure Machine Learning Python SDK v2
## Prerequisites
You'll need to modify this file to use the files you downloaded from the AutoML Models page.
```azurecli
az ml online-deployment create -f automl_deployment.yml
```
---
After you create a deployment, you can score it as described in [Invoke the endpoint to score data by using your model](how-to-deploy-managed-online-endpoints.md#invoke-the-endpoint-to-score-data-by-using-your-model).
If you haven't installed the Python SDK v2 yet, install it with this command:
```azurecli
pip install --pre azure-ai-ml
```
For more information, see [Install the Azure Machine Learning SDK v2 for Python](/python/api/overview/azure/ml/installv2).
## Put the scoring file in its own directory
Create a directory called `src/` and place the scoring file you downloaded into it. This directory is uploaded to Azure and contains all the source code necessary to do inference. For an AutoML model, there's just the single scoring file.
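If you prefer to script this step, a minimal Python sketch could look like the following. The scoring file name used here is a hypothetical placeholder; substitute the name of the file you actually downloaded.

```python
from pathlib import Path
import shutil

# Hypothetical name of the scoring file downloaded from the AutoML Models page.
scoring_file = Path("scoring_file_v_1_0_0.py")

# Create the source directory and move the scoring file into it.
src = Path("src")
src.mkdir(exist_ok=True)
if scoring_file.exists():
    shutil.move(str(scoring_file), str(src / scoring_file.name))
```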
## Connect to Azure Machine Learning workspace
1. Import the required libraries:
```python
# import required libraries
from azure.ai.ml import MLClient
from azure.ai.ml.entities import (
    ManagedOnlineEndpoint,
    ManagedOnlineDeployment,
    Model,
    Environment,
    CodeConfiguration,
)
from azure.identity import DefaultAzureCredential
```
1. Configure workspace details and get a handle to the workspace:
Next, we'll create the managed online endpoints and deployments.
1. Configure online endpoint:
> [!TIP]
> * `name`: The name of the endpoint. It must be unique in the Azure region. The name must start with an uppercase or lowercase letter and consist only of hyphens and alphanumeric characters. For more information on the naming rules, see [managed online endpoint limits](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints).
> * `auth_mode`: Use `key` for key-based authentication. Use `aml_token` for Azure Machine Learning token-based authentication. A `key` doesn't expire, but an `aml_token` does expire. For more information on authentication, see [Authenticate to an online endpoint](how-to-authenticate-online-endpoint.md).
```python
# Creating a unique endpoint name with current datetime to avoid conflicts
import datetime

online_endpoint_name = "automl-endpoint-" + datetime.datetime.now().strftime(
    "%m%d%H%M%f"
)

endpoint = ManagedOnlineEndpoint(
    name=online_endpoint_name,
    description="Online endpoint for an AutoML-trained model",
    auth_mode="key",
)
```
Using the `MLClient` created earlier, we'll now create the endpoint in the workspace. This command starts the endpoint creation and returns a confirmation response while creation continues.
```python
ml_client.begin_create_or_update(endpoint)
```
1. Configure online deployment:
A deployment is a set of resources required for hosting the model that does the actual inferencing. We'll create a deployment for our endpoint using the `ManagedOnlineDeployment` class.
In the above example, we assume the files you downloaded from the AutoML Models page are in the `src` directory. You can modify the parameters in the code to suit your situation.
| Parameter | Change to |
|---|---|
|`model:path`| The path to the `model.pkl` file you downloaded. |
|`code_configuration:code:path`| The directory in which you placed the scoring file. |
|`code_configuration:scoring_script`| The name of the Python scoring file (`scoring_file_<VERSION>.py`). |
|`environment:conda_file`| A file URL for the downloaded conda environment file (`conda_env_<VERSION>.yml`). |
1. Create the deployment:
Using the `MLClient` created earlier, we'll now create the deployment in the workspace. This command starts the deployment creation and returns a confirmation response while creation continues.
```python
ml_client.begin_create_or_update(blue_deployment)
```
After you create a deployment, you can score it as described in [Test the endpoint with sample data](how-to-deploy-managed-online-endpoint-sdk-v2.md#test-the-endpoint-with-sample-data).
You can learn more about deploying to managed online endpoints with the SDK in [Deploy machine learning models to managed online endpoint using Python SDK v2](how-to-deploy-managed-online-endpoint-sdk-v2.md).