articles/machine-learning/v1/how-to-deploy-advanced-entry-script.md
---
title: Entry scripts for advanced scenarios
titleSuffix: Azure Machine Learning
description: See how to write Azure Machine Learning entry scripts for advanced scenarios like schema generation, accepting raw data, and loading registered models.
services: machine-learning
ms.service: azure-machine-learning
ms.subservice: mlops
ms.topic: how-to
ms.date: 02/03/2025
author: msakande
ms.author: mopeakande
ms.reviewer: sehan
ms.custom: UpdateFrequency5, deploy, sdkv1
# customer intent: As a developer, I want to see how to use advanced entry scripts in Azure Machine Learning so that I can implement schema generation, accept raw data, and load registered models.
---

This article explains how to write entry scripts for specialized use cases in Azure Machine Learning. An entry script, which is also called a scoring script, accepts requests, uses a model to score data, and returns a response.

## Prerequisites

* A trained machine learning model that you intend to deploy with Azure Machine Learning. For more information about model deployment, see [Deploy machine learning models to Azure](how-to-deploy-and-where.md).

## Automatically generate a Swagger schema

To automatically generate a schema for your web service, provide a sample of the input or output in the constructor for one of the defined type objects. The type and sample are used to automatically create the schema. Azure Machine Learning then creates an [OpenAPI specification](https://swagger.io/docs/specification/about/) (formerly, a Swagger specification) for the web service during deployment.

> [!WARNING]
> Don't use sensitive or private data for the sample input or output. In Azure Machine Learning, the Swagger page for inferencing exposes the sample data.

The following types are currently supported:

```python
#The preceding line is optional. "GlobalParameters" is case sensitive.

@output_schema(outputs)
def run(Inputs, GlobalParameters):
    #The parameters in the preceding line have to match those in the decorator. "Inputs" and
    #"GlobalParameters" are case sensitive.
    try:
        data = Inputs['input1']
        #The data gets converted to the target format.
        assert isinstance(data, np.ndarray)
        result = model.predict(data)
        return result.tolist()
```

```python
from azureml.contrib.services.aml_response import AMLResponse
from PIL import Image
import json


def init():
    print("This is init()")


@rawhttp
def run(request):
    print("This is run()")

    if request.method == 'GET':
        # For this example, return the URL for GET requests.
        respBody = str.encode(request.full_path)
        return AMLResponse(respBody, 200)
    elif request.method == 'POST':
        file_bytes = request.files["image"]
        image = Image.open(file_bytes).convert('RGB')
        # For a real-world solution, load the data from the request body
        # and send it to the model. Then return the response.

        # For demonstration purposes, this example returns the size of the image as the response.
        return AMLResponse(json.dumps(image.size), 200)
    else:
        return AMLResponse("bad request", 500)
```

> [!IMPORTANT]
> The `AMLRequest` class is in the `azureml.contrib` namespace. Entities in this namespace are in preview. They change frequently while the service undergoes improvements. Microsoft doesn't offer full support for these entities.
>
> If you need to test code that uses this class in your local development environment, you can install the components by using the following command:
>
> ```shell
> pip install azureml-contrib-services
> ```

> [!NOTE]
> We don't recommend using `500` as a custom status code. On the Azure Machine Learning inference router (`azureml-fe`) side, the status code is rewritten to `502`.
>
> * The status code is passed through `azureml-fe` and then sent to the client.
> * The `azureml-fe` code rewrites the `500` that's returned from the model side as `502`. The client receives a code of `502`.
> * If the `azureml-fe` code itself returns `500`, the client side still receives a code of `500`.

When you use the `AMLRequest` class, you can access only the raw posted data in the score.py file. There's no client-side component. From a client, you can post data as usual. For example, the following Python code reads an image file and posts the data:
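As a minimal client-side sketch, assuming the `requests` package; the scoring URI and image bytes below are placeholders. The request is prepared but not sent, so no live endpoint is needed:

```python
# A client-side sketch: build the multipart POST that a @rawhttp run()
# function would receive. The URI and image bytes are placeholders.
import requests

scoring_uri = "http://localhost:6789/score"  # placeholder for your service's scoring URI
image_bytes = b"placeholder image bytes"     # in practice: open("test.jpg", "rb").read()

prepared = requests.Request(
    "POST", scoring_uri, files={"image": image_bytes}
).prepare()

# The payload is multipart/form-data with the image under the "image" key.
print(prepared.headers["Content-Type"])
```

To actually call a deployed service, `requests.post(scoring_uri, files={"image": image_bytes})` sends the same payload.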

> [!IMPORTANT]
> The `AMLRequest` class is in the `azureml.contrib` namespace. Entities in this namespace are in preview. They change frequently while the service undergoes improvements. Microsoft doesn't offer full support for these entities.
>
> If you need to test code that uses this class in your local development environment, you can install the components by using the following command:
>
> ```shell
> pip install azureml-contrib-services
> ```

> [!WARNING]
> Azure Machine Learning only routes POST and GET requests to the containers that run the scoring service. Errors can result if browsers use OPTIONS requests to issue preflight requests.

## Load registered models

`AZUREML_MODEL_DIR` is an environment variable that's created during service deployment. You can use this environment variable to find the location of deployed models.

The following table describes possible values of `AZUREML_MODEL_DIR` for a varying number of deployed models:

| Deployment | Environment variable value |
| ----- | ----- |
| Single model | The path to the folder that contains the model. |
| Multiple models | The path to the folder that contains all models. Models are located by name and version in this folder in the format `<model-name>/<version>`. |

During model registration and deployment, models are placed in the `AZUREML_MODEL_DIR` path, and their original file names are preserved.

To get the path to a model file in your entry script, combine the environment variable with the file path you're looking for.
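For example, the combination can be sketched as follows; the model file name `my_model.pkl` is a placeholder, and the environment variable is set locally here only because Azure Machine Learning sets it for you in a real deployment:

```python
# Sketch: combine AZUREML_MODEL_DIR with a model file name.
# The directory and file name below are placeholders; in a real deployment,
# Azure Machine Learning sets the environment variable for you.
import os

os.environ.setdefault("AZUREML_MODEL_DIR", "/var/azureml-app/azureml-models/demo/1")

model_path = os.path.join(os.getenv("AZUREML_MODEL_DIR"), "my_model.pkl")
print(model_path)
```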

In this example, the path of the first model is `$AZUREML_MODEL_DIR/my_first_model/1/my_first_model.pkl`. The path of the second model is `$AZUREML_MODEL_DIR/my_second_model/2/my_second_model.pkl`.

```python
#In the following example, the model is a file, and the deployment contains multiple models.
```
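Under the multiple-model layout, the same approach yields the paths shown above. A sketch, again with a placeholder base directory set only to simulate the deployment environment:

```python
# Sketch: locate a model file when the deployment contains multiple models.
# The model name and version mirror the my_first_model example above; the base
# directory is a placeholder that Azure Machine Learning sets in a real deployment.
import os

os.environ.setdefault("AZUREML_MODEL_DIR", "/var/azureml-app/azureml-models")

first_model_path = os.path.join(
    os.getenv("AZUREML_MODEL_DIR"), "my_first_model", "1", "my_first_model.pkl"
)
print(first_model_path)
```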