This article explains how to write entry scripts for specialized use cases in Azure Machine Learning.

## Prerequisites

- A trained machine learning model that you intend to deploy with Azure Machine Learning. To learn more about model deployment, see [Deploy machine learning models to Azure](how-to-deploy-and-where.md).

## Automatically generate a Swagger schema

To automatically generate a schema for your web service, provide a sample of the input or output in the constructor for one of the defined type objects. The type and sample are used to automatically create the schema. Azure Machine Learning then creates an [OpenAPI specification](https://swagger.io/docs/specification/about/) (formerly, a Swagger specification) for the web service during deployment.

> [!WARNING]
> Don't use sensitive or private data for the sample input or output. The Swagger page for AML-hosted inferencing exposes the sample data.

The following types are currently supported:
* `pandas`
* `numpy`
* `pyspark`
* Standard Python object
To use schema generation, include the open-source `inference-schema` package version 1.1.0 or later in your dependencies file. For more information about this package, see [InferenceSchema on GitHub](https://github.com/Azure/InferenceSchema). In order to generate conforming Swagger for automated web service consumption, the `run` function in your scoring script must meet the following conditions:

* The first parameter must have the type `StandardPythonParameterType`, be named `Inputs`, and be nested.
* There must be an optional second parameter of type `StandardPythonParameterType` that's named `GlobalParameters`.
* The function must return a dictionary of type `StandardPythonParameterType` that's named `Results` and is nested.

Define the input and output sample formats in the `sample_input` and `sample_output` variables, which represent the request and response formats for the web service. Use these samples in the input and output function decorators on the `run` function. The `scikit-learn` example in the following section uses schema generation.
## Power BI-compatible endpoint

The following example demonstrates how to define the `run` function according to the instructions in the preceding section. You can use this script when you consume your deployed web service from Power BI.

```python
import os
import json
import pickle
import numpy as np
import pandas as pd
import azureml.train.automl
import joblib
from sklearn.linear_model import Ridge

from inference_schema.schema_decorators import input_schema, output_schema
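from inference_schema.parameter_types.standard_py_parameter_type import StandardPythonParameterType
from inference_schema.parameter_types.numpy_parameter_type import NumpyParameterType
from inference_schema.parameter_types.pandas_parameter_type import PandasParameterType


def init():
    global model
    # AZUREML_MODEL_DIR points to the folder that contains the deployed model.
    # The model file name here is an assumed example.
    model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_regression_model.pkl')
    # Deserialize the model file back into a scikit-learn model.
    model = joblib.load(model_path)


# Provide sample inputs and outputs for schema generation. Any item wrapped by
# a parameter type is described in the generated schema.
numpy_sample_input = NumpyParameterType(np.array([[1, 2, 3, 4, 5, 6, 7, 8, 9, 10], [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]], dtype='float64'))
pandas_sample_input = PandasParameterType(pd.DataFrame({'name': ['Sarah', 'John'], 'age': [25, 26]}))
standard_sample_input = StandardPythonParameterType(0.0)

# The first parameter is nested and must be named Inputs.
sample_input = StandardPythonParameterType({'input1': numpy_sample_input,
                                            'input2': pandas_sample_input,
                                            'input3': standard_sample_input})

sample_global_parameters = StandardPythonParameterType(1.0)  # The optional GlobalParameters parameter.
sample_output = StandardPythonParameterType([1.0, 1.0])
outputs = StandardPythonParameterType({'Results': sample_output})  # 'Results' is case sensitive.


@input_schema('Inputs', sample_input)  # 'Inputs' is case sensitive.
@input_schema('GlobalParameters', sample_global_parameters)
@output_schema(outputs)
def run(Inputs, GlobalParameters):
    # The parameter names must match the names that the decorators use.
    try:
        data = Inputs['input1']
        # The input data is converted to the target format, a NumPy array here.
        assert isinstance(data, np.ndarray)
        result = model.predict(data)
        return result.tolist()
    except Exception as e:
        return str(e)
```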
> [!NOTE]
> The return value from the script can be any Python object that's serializable to JSON. For example, if your model returns a Pandas dataframe that contains multiple columns, you might use an output decorator similar to the following code:
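>
> A minimal sketch, assuming a dataframe with two illustrative columns named `a1` and `a2`:
>
> ```python
> output_sample = pd.DataFrame(data=[{"a1": 5, "a2": 6}])
>
> @output_schema(PandasParameterType(output_sample))
> def run(input_sample):
>     ...
> ```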
## <a id="binary-data"></a> Binary (image) data

If your model accepts binary data, like an image, you must modify the *score.py* file that your deployment uses so that it accepts raw HTTP requests. To accept raw data, use the `AMLRequest` class in your entry script and add the `@rawhttp` decorator to the `run` function.

The following *score.py* script accepts binary data:

```python
from azureml.contrib.services.aml_request import AMLRequest, rawhttp
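# The rest of this script is a minimal sketch. It assumes that the client
# posts the image as multipart form data under a field named 'image'.
from azureml.contrib.services.aml_response import AMLResponse
from PIL import Image
import json


def init():
    print("This is init()")


@rawhttp
def run(request):
    print("This is run()")

    if request.method == 'GET':
        # For this example, just return the requested URL for GET requests.
        resp_body = str.encode(request.full_path)
        return AMLResponse(resp_body, 200)
    elif request.method == 'POST':
        file_bytes = request.files["image"]
        image = Image.open(file_bytes).convert('RGB')
        # In a real-world solution, you would pass the image to the model and
        # return the model's prediction. For demonstration purposes, this
        # sketch returns the image size.
        return AMLResponse(json.dumps(image.size), 200)
    else:
        # Avoid 500 as a custom status code. See the note that follows.
        return AMLResponse("bad request", 405)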
```
> [!IMPORTANT]
> The `AMLRequest` class is in the `azureml.contrib` namespace. Entities in this namespace are in preview. They change frequently as the service undergoes improvements. These entities aren't fully supported by Microsoft.
>
> If you need to test this code in your local development environment, you can install the components by using the following command:
>
> ```shell
> pip install azureml-contrib-services
> ```

> [!NOTE]
> We don't recommend using `500` as a custom status code. On the `azureml-fe` side, the status code is rewritten to `502`.
>
> * The status code is passed through `azureml-fe` and then sent to the client.
> * The `azureml-fe` code rewrites the `500` that's returned from the model side as `502`. The client receives a code of `502`.
> * If the `azureml-fe` code itself returns `500`, the client side still receives a code of `500`.
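
For example, the following minimal sketch returns custom status codes other than `500` by using the `AMLResponse` class:

```python
from azureml.contrib.services.aml_request import AMLRequest, rawhttp
from azureml.contrib.services.aml_response import AMLResponse


@rawhttp
def run(request):
    if request.method != 'POST':
        # Use a code like 405 rather than 500. A 500 from the model side
        # would reach the client as 502.
        return AMLResponse("method not allowed", 405)
    # Echo the posted data back with a 200 status code.
    return AMLResponse(request.get_data(False), 200)
```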
When you use the `AMLRequest` class, you can access only the raw posted data in the *score.py* file. There's no client-side component. From a client, you can post data as usual. For example, the following Python code reads an image file and posts the data:
```python
import requests
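# A minimal sketch. The scoring URI comes from the deployed service object,
# and test.jpg is an assumed example file.
uri = service.scoring_uri
image_path = 'test.jpg'

# Post the image as multipart form data under the 'image' field, which
# matches request.files["image"] in the entry script.
with open(image_path, 'rb') as data:
    response = requests.post(uri, files={'image': data})

print(response.json)
```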
<a id="cors"></a>

## Cross-origin resource sharing
Cross-origin resource sharing (CORS) provides a way for resources on a webpage to be requested from another domain. CORS works via HTTP headers that are sent with the client request and returned with the service response. For more information about CORS and valid headers, see [Cross-origin resource sharing](https://en.wikipedia.org/wiki/Cross-origin_resource_sharing).

To configure your model deployment to support CORS, use the `AMLResponse` class in your entry script. When you use this class, you can set the headers on the response object.

The following example sets the `Access-Control-Allow-Origin` header for the response from the entry script:
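```python
# A minimal sketch of a CORS-enabled entry script. The allowed origin,
# www.example.com, is an assumed example value.
from azureml.contrib.services.aml_request import AMLRequest, rawhttp
from azureml.contrib.services.aml_response import AMLResponse


def init():
    print("This is init()")


@rawhttp
def run(request):
    print("This is run()")
    if request.method == 'GET':
        # For this example, just return the requested URL for GET requests.
        resp = AMLResponse(str.encode(request.full_path), 200)
        resp.headers['Access-Control-Allow-Origin'] = "http://www.example.com"
        return resp
    elif request.method == 'POST':
        req_body = request.get_data(False)
        # In a real-world solution, you would send the data to the model and
        # return the model's response.
        resp = AMLResponse(req_body, 200)
        resp.headers['Access-Control-Allow-Origin'] = "http://www.example.com"
        return resp
    else:
        return AMLResponse("bad request", 405)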
```
> [!IMPORTANT]
> The `AMLResponse` class is in the `azureml.contrib` namespace. Entities in this namespace are in preview. They change frequently as the service undergoes improvements. These entities aren't fully supported by Microsoft.
>
> If you need to test this code in your local development environment, you can install the components by using the following command:
>
> ```shell
> pip install azureml-contrib-services
> ```

> [!WARNING]
> Azure Machine Learning only routes POST and GET requests to the containers that run the scoring service. Errors can result if browsers use OPTIONS requests to issue preflight requests.
## Load registered models

There are two ways to locate models in your entry script:

* `AZUREML_MODEL_DIR`: An environment variable that contains the path to the model location
* `Model.get_model_path`: An API that returns the path to the model file by using the registered model name
#### AZUREML_MODEL_DIR

`AZUREML_MODEL_DIR` is an environment variable that's created during service deployment. You can use this environment variable to find the location of deployed models.

The following table describes the value of `AZUREML_MODEL_DIR` when a varying number of models are deployed:

| Deployment | Environment variable value |
| ----- | ----- |
| Single model | The path to the folder that contains the model. |
| Multiple models | The path to the folder that contains all models. Models are located by name and version in this folder in the format `<model-name>/<version>`. |

During model registration and deployment, models are placed in the `AZUREML_MODEL_DIR` path, and their original filenames are preserved.

To get the path to a model file in your entry script, combine the environment variable with the file path you're looking for.
##### Single model

The following example shows you how to find the path when you have a single model:
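```python
# A minimal sketch. The model file name, sklearn_regression_model.pkl, is an
# assumed example; the file sits directly in the AZUREML_MODEL_DIR folder.
import os

model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_regression_model.pkl')
```

##### Multiple models

The following example shows you how to find the paths when the deployment contains multiple models. In this scenario, two models are registered with the workspace:

```python
# Assumes an existing workspace (ws), inference_config, and deployment_config.
from azureml.core.model import Model

first_model = Model(ws, name="my_first_model", version=1)
second_model = Model(ws, name="my_second_model", version=2)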
service = Model.deploy(ws, "myservice", [first_model, second_model], inference_config, deployment_config)
```
In the Docker image that hosts the service, the `AZUREML_MODEL_DIR` environment variable contains the directory where the models are located. In this directory, each model is located in a directory path of `<model-name>/<version>`. In this path, `<model-name>` is the name of the registered model, and `<version>` is the version of the model. The files that make up the registered model are stored in these directories.

In this example, the path of the first model is `$AZUREML_MODEL_DIR/my_first_model/1/my_first_model.pkl`. The path of the second model is `$AZUREML_MODEL_DIR/my_second_model/2/my_second_model.pkl`.
```python
# Example when the model is a file, and the deployment contains multiple models
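first_model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'my_first_model', '1', 'my_first_model.pkl')
second_model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'my_second_model', '2', 'my_second_model.pkl')
```

#### Model.get_model_path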
When you register a model, you provide a model name that's used for managing the model in the registry. You use this name with the [`Model.get_model_path`](/python/api/azureml-core/azureml.core.model.model#azureml-core-model-model-get-model-path) method to retrieve the path of the model file or files on the local file system. If you register a folder or a collection of files, this API returns the path of the directory that contains those files.

When you register a model, you give it a name. The name corresponds to where the model is placed, either locally or during service deployment.
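
The following minimal sketch retrieves the path of a registered model by name. The model name, `my_first_model`, reuses the assumed name from the earlier example:

```python
from azureml.core.model import Model

# In a deployed service, this call resolves to the model files that are
# copied under AZUREML_MODEL_DIR.
model_path = Model.get_model_path('my_first_model')
```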
## Framework-specific examples

For more entry script examples for specific machine learning use cases, see the following articles: