
Commit dc36b68

Update code and text
1 parent d2a8687 commit dc36b68

1 file changed: +60 -47 lines changed

articles/machine-learning/v1/how-to-deploy-advanced-entry-script.md

@@ -17,44 +17,46 @@ ms.custom: UpdateFrequency5, deploy, sdkv1
[!INCLUDE [sdk v1](../includes/machine-learning-sdk-v1.md)]

This article explains how to write entry scripts for specialized use cases in Azure Machine Learning.

## Prerequisites

- A trained machine learning model that you intend to deploy with Azure Machine Learning. To learn more about model deployment, see [Deploy machine learning models to Azure](how-to-deploy-and-where.md).

## Automatically generate a Swagger schema

To automatically generate a schema for your web service, provide a sample of the input or output in the constructor for one of the defined type objects. The type and sample are used to automatically create the schema. Azure Machine Learning then creates an [OpenAPI specification](https://swagger.io/docs/specification/about/) (formerly, a Swagger specification) for the web service during deployment.

> [!WARNING]
> Don't use sensitive or private data for the sample input or output. The Swagger page for AML-hosted inferencing exposes the sample data.

The following types are currently supported:

* `pandas`
* `numpy`
* `pyspark`
* Standard Python object

To use schema generation, include the open-source `inference-schema` package version 1.1.0 or later in your dependencies file. For more information about this package, see [InferenceSchema on GitHub](https://github.com/Azure/InferenceSchema). To generate conforming Swagger for automated web service consumption, the `run` function in your scoring script must meet the following conditions:

* The first parameter must have the type `StandardPythonParameterType`, be named `Inputs`, and be nested.
* There must be an optional second parameter of type `StandardPythonParameterType` that's named `GlobalParameters`.
* The function must return a dictionary of type `StandardPythonParameterType` that's named `Results` and is nested.

Define the input and output sample formats in the `input_sample` and `output_sample` variables, which represent the request and response formats for the web service. Use these samples in the input and output function decorators on the `run` function. The `scikit-learn` example in the following section uses schema generation.

## Power BI-compatible endpoint

The following example demonstrates how to define the `run` function according to the instructions in the preceding section. You can use this script when you consume your deployed web service from Power BI.

```python
import os
import json
import pickle
import numpy as np
import pandas as pd
import azureml.train.automl
import joblib
from sklearn.linear_model import Ridge

from inference_schema.schema_decorators import input_schema, output_schema
@@ -108,7 +110,7 @@ def run(Inputs, GlobalParameters):
```
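
The diff elides the body of this example. As a rough, hypothetical sketch of the shape that the preceding conditions describe, the decorated `run` function could look like the following code. The sample values, the model file name, and the variable names are illustrative assumptions, not the elided source:

```python
import os
import joblib
import pandas as pd
from inference_schema.schema_decorators import input_schema, output_schema
from inference_schema.parameter_types.standard_py_parameter_type import StandardPythonParameterType
from inference_schema.parameter_types.pandas_parameter_type import PandasParameterType

def init():
    global model
    # Assumed model file name; the real script loads its own registered model.
    model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_regression_model.pkl')
    model = joblib.load(model_path)

# Nested input sample named "Inputs", per the first condition.
input_sample = StandardPythonParameterType(
    {'data': PandasParameterType(pd.DataFrame({'a': [1.0], 'b': [2.0]}))})
# Optional "GlobalParameters" sample, per the second condition.
global_parameters_sample = StandardPythonParameterType({'method': 'predict'})
# Nested output dictionary named "Results", per the third condition.
output_sample = StandardPythonParameterType({'Results': [0.0]})

@input_schema('Inputs', input_sample)
@input_schema('GlobalParameters', global_parameters_sample)
@output_schema(output_sample)
def run(Inputs, GlobalParameters):
    data = Inputs['data']
    result = model.predict(data)
    return {'Results': result.tolist()}
```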

> [!TIP]
> The return value from the script can be any Python object that's serializable to JSON. For example, if your model returns a Pandas dataframe that contains multiple columns, you might use an output decorator similar to the following code:
>
> ```python
> output_sample = pd.DataFrame(data=[{"a1": 5, "a2": 6}])
@@ -118,11 +120,11 @@ def run(Inputs, GlobalParameters):
> return result
> ```

## <a id="binary-data"></a> Binary (image) data

If your model accepts binary data, like an image, you must modify the score.py file that your deployment uses so that it accepts raw HTTP requests. To accept raw data, use the `AMLRequest` class in your entry script and add the `@rawhttp` decorator to the `run` function.

The following score.py script accepts binary data:

```python
from azureml.contrib.services.aml_request import AMLRequest, rawhttp
@@ -156,21 +158,21 @@ def run(request):
```
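
The diff elides most of this script. The following is a minimal, hypothetical sketch of a raw-request handler; the echo behavior and status codes are illustrative assumptions, not the elided source:

```python
from azureml.contrib.services.aml_request import AMLRequest, rawhttp
from azureml.contrib.services.aml_response import AMLResponse

def init():
    # Load the model here if the handler needs one.
    pass

@rawhttp
def run(request):
    # request is the raw HTTP request object.
    if request.method == 'POST':
        file_bytes = request.get_data(False)
        # A real service would decode and score the image here.
        return AMLResponse("Received {0} bytes".format(len(file_bytes)), 200)
    if request.method == 'GET':
        return AMLResponse(str.encode(request.full_path), 200)
    return AMLResponse("Bad request", 400)
```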

> [!IMPORTANT]
> The `AMLRequest` class is in the `azureml.contrib` namespace. Entities in this namespace are in preview. They change frequently as the service undergoes improvements. These entities aren't fully supported by Microsoft.
>
> If you need to test this code in your local development environment, you can install the components by using the following command:
>
> ```shell
> pip install azureml-contrib-services
> ```

> [!NOTE]
> We don't recommend using `500` as a custom status code, because `azureml-fe` rewrites it:
> * The status code is passed through `azureml-fe` and then sent to the client.
> * `azureml-fe` rewrites a `500` that's returned from the model side to `502`, so the client receives `502`.
> * If `azureml-fe` itself returns `500`, the client still receives `500`.

When you use the `AMLRequest` class, you can access only the raw posted data in the score.py file. There's no client-side component. From a client, you can post data as usual. For example, the following Python code reads an image file and posts the data:

```python
import requests
@@ -185,11 +187,11 @@ print(response.json)
```
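
The middle of this client snippet is elided in the diff. Here's a minimal sketch under assumed values; the scoring URI placeholder and the test.jpg file name are hypothetical:

```python
import requests

# Hypothetical values: substitute your service's scoring URI and your image file.
scoring_uri = "http://<service-host>/score"
image_path = "test.jpg"

# Read the image file as raw bytes.
with open(image_path, "rb") as f:
    data = f.read()

# Post the raw bytes to the service.
headers = {"Content-Type": "application/octet-stream"}
response = requests.post(scoring_uri, data=data, headers=headers)
print(response.json)
```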
<a id="cors"></a>

## Cross-origin resource sharing

Cross-origin resource sharing (CORS) provides a way for resources on a webpage to be requested from another domain. CORS works via HTTP headers that are sent with the client request and returned with the service response. For more information about CORS and valid headers, see [Cross-origin resource sharing](https://en.wikipedia.org/wiki/Cross-origin_resource_sharing).

To configure your model deployment to support CORS, use the `AMLResponse` class in your entry script. When you use this class, you can set the headers on the response object.

The following example sets the `Access-Control-Allow-Origin` header for the response from the entry script:

```python
@@ -238,66 +240,77 @@ def run(request):
```
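
The body of this example is also elided in the diff. The following is a hypothetical sketch of a handler that sets the header; the origin value is a placeholder assumption:

```python
from azureml.contrib.services.aml_request import AMLRequest, rawhttp
from azureml.contrib.services.aml_response import AMLResponse

def init():
    pass

@rawhttp
def run(request):
    # Build the response, then set a CORS header on it.
    resp = AMLResponse("hello", 200)
    # Placeholder origin; use the domain that hosts your web client.
    resp.headers["Access-Control-Allow-Origin"] = "http://www.example.com"
    return resp
```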

> [!IMPORTANT]
> The `AMLResponse` class is in the `azureml.contrib` namespace. Entities in this namespace are in preview. They change frequently as the service undergoes improvements. These entities aren't fully supported by Microsoft.
>
> If you need to test this code in your local development environment, you can install the components by using the following command:
>
> ```shell
> pip install azureml-contrib-services
> ```

> [!WARNING]
> Azure Machine Learning routes only POST and GET requests to the containers that run the scoring service. Errors can result because browsers use OPTIONS requests to send preflight CORS requests.

## Load registered models

There are two ways to locate models in your entry script:

* `AZUREML_MODEL_DIR`: An environment variable that contains the path to the model location
* `Model.get_model_path`: An API that returns the path to the model file by using the registered model name

### AZUREML_MODEL_DIR

`AZUREML_MODEL_DIR` is an environment variable that's created during service deployment. You can use this environment variable to find the location of deployed models.

The following table describes the value of `AZUREML_MODEL_DIR` when a varying number of models are deployed:

| Deployment | Environment variable value |
| ----- | ----- |
| Single model | The path to the folder that contains the model. |
| Multiple models | The path to the folder that contains all models. Models are located by name and version in this folder in the format `<model-name>/<version>`. |

During model registration and deployment, models are placed in the `AZUREML_MODEL_DIR` path, and their original filenames are preserved.

To get the path to a model file in your entry script, combine the environment variable with the file path you're looking for.

#### Single model

The following example shows you how to find the path when you have a single model:

```python
import os

# Example when the model is a file
model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_regression_model.pkl')

# Example when the model is a folder containing a file
file_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'my_model_folder', 'sklearn_regression_model.pkl')
```
**Multiple model example**
291+
##### Multiple models
284292
285-
In this scenario, two models are registered with the workspace:
293+
The following example shows you how to find the path when you have multiple models. In this scenario, two models are registered with the workspace:
286294
287-
* `my_first_model`: Contains one file (`my_first_model.pkl`) and there's only one version, `1`
288-
* `my_second_model`: Contains one file (`my_second_model.pkl`) and there are two versions, `1` and `2`
295+
* `my_first_model`: This model contains one file, my_first_model.pkl, and has one version, `1`.
296+
* `my_second_model`: This model contains one file, my_second_model.pkl, and has two versions, `1` and `2`.
289297
290-
When the service was deployed, both models are provided in the deploy operation:
298+
When you deploy the service, you provide both models in the deploy operation:
291299
292300
```python
301+
from azureml.core import Workspace, Model
302+
303+
# Get a handle to the workspace.
304+
ws = Workspace.from_config()
305+
293306
first_model = Model(ws, name="my_first_model", version=1)
294307
second_model = Model(ws, name="my_second_model", version=2)
295308
service = Model.deploy(ws, "myservice", [first_model, second_model], inference_config, deployment_config)
296309
```
297310

In the Docker image that hosts the service, the `AZUREML_MODEL_DIR` environment variable contains the directory where the models are located. In this directory, each model is located in a directory path of `<model-name>/<version>`, where `<model-name>` is the name of the registered model and `<version>` is the version of the model. The files that make up the registered model are stored in these directories.

In this example, the path of the first model is `$AZUREML_MODEL_DIR/my_first_model/1/my_first_model.pkl`. The path of the second model is `$AZUREML_MODEL_DIR/my_second_model/2/my_second_model.pkl`.

```python
# Example when the model is a file, and the deployment contains multiple models
@@ -311,13 +324,13 @@ second_model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), second_model_na
```
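
The diff truncates this snippet. The following is a hypothetical reconstruction of the first model's path lookup; the variable names are assumptions that mirror the paths described above:

```python
import os

# Build the path from the model's registered name and version,
# following the <model-name>/<version> layout described above.
first_model_name = "my_first_model"
first_model_version = "1"
first_model_path = os.path.join(
    os.getenv('AZUREML_MODEL_DIR'), first_model_name, first_model_version, "my_first_model.pkl")
```
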
### get_model_path

When you register a model, you provide a model name that's used for managing the model in the registry. You use this name with the [`Model.get_model_path`](/python/api/azureml-core/azureml.core.model.model#azureml-core-model-model-get-model-path) method to retrieve the path of the model file or files on the local file system. If you register a folder or a collection of files, this API returns the path of the directory that contains those files.

The model name corresponds to where the model is placed, either locally or during service deployment.

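As a brief illustration, a lookup in an entry script's `init` function might look like the following sketch. The model name is a placeholder assumption:

```python
import joblib
from azureml.core.model import Model

def init():
    global model
    # "sklearn_regression_model" is a placeholder for your registered model's name.
    model_path = Model.get_model_path(model_name="sklearn_regression_model")
    model = joblib.load(model_path)
```
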
## Framework-specific examples

For more entry script examples for specific machine learning use cases, see the following articles:

* [PyTorch](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/ml-frameworks/pytorch)
* [TensorFlow](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/ml-frameworks/tensorflow)
