
Commit 2280031

Merge pull request #268848 from cdpark/azureml-entry-dem108
User Story 226300: Q&M: AzureML Freshness updates - Entry script
2 parents b1b95e6 + 2f84970 commit 2280031

File tree: 1 file changed, +39 −47 lines


articles/machine-learning/v1/how-to-deploy-advanced-entry-script.md

Lines changed: 39 additions & 47 deletions
@@ -1,12 +1,12 @@
 ---
-title: Author entry script for advanced scenarios
-titleSuffix: Azure Machine Learning entry script authoring
+title: Entry script authoring for advanced scenarios
+titleSuffix: Azure Machine Learning
 description: Learn how to write Azure Machine Learning entry scripts for pre- and post-processing during deployment.
 services: machine-learning
 ms.service: machine-learning
 ms.subservice: mlops
 ms.topic: how-to
-ms.date: 08/15/2022
+ms.date: 03/12/2024
 author: dem108
 ms.author: sehan
 ms.reviewer: mopeakande
@@ -17,18 +17,18 @@ ms.custom: UpdateFrequency5, deploy, sdkv1
 
 [!INCLUDE [sdk v1](../includes/machine-learning-sdk-v1.md)]
 
-This article shows how to write entry scripts for specialized use cases.
+This article explains how to write entry scripts for specialized use cases.
 
 ## Prerequisites
 
-This article assumes you already have a trained machine learning model that you intend to deploy with Azure Machine Learning. To learn more about model deployment, see [How to deploy and where](how-to-deploy-and-where.md).
+This article assumes you already have a trained machine learning model that you intend to deploy with Azure Machine Learning. To learn more about model deployment, see [Deploy machine learning models to Azure](how-to-deploy-and-where.md).
 
 ## Automatically generate a Swagger schema
 
-To automatically generate a schema for your web service, provide a sample of the input and/or output in the constructor for one of the defined type objects. The type and sample are used to automatically create the schema. Azure Machine Learning then creates an [OpenAPI](https://swagger.io/docs/specification/about/) (Swagger) specification for the web service during deployment.
+To automatically generate a schema for your web service, provide a sample of the input and/or output in the constructor for one of the defined type objects. The type and sample are used to automatically create the schema. Azure Machine Learning then creates an [OpenAPI specification](https://swagger.io/docs/specification/about/) (formerly, Swagger specification) for the web service during deployment.
 
 > [!WARNING]
-> You must not use sensitive or private data for sample input or output. The Swagger page for AML-hosted inferencing exposes the sample data.
+> You must not use sensitive or private data for sample input or output. The Swagger page for AML-hosted inferencing exposes the sample data.
 
 These types are currently supported:
@@ -37,18 +37,16 @@ These types are currently supported:
 * `pyspark`
 * Standard Python object
 
-To use schema generation, include the open-source `inference-schema` package version 1.1.0 or above in your dependencies file. For more information on this package, see [https://github.com/Azure/InferenceSchema](https://github.com/Azure/InferenceSchema). In order to generate conforming swagger for automated web service consumption, scoring script run() function must have API shape of:
-* A first parameter of type "StandardPythonParameterType", named **Inputs** and nested.
-* An optional second parameter of type "StandardPythonParameterType", named **GlobalParameters**.
-* Return a dictionary of type "StandardPythonParameterType" named **Results** and nested.
+To use schema generation, include the open-source `inference-schema` package version 1.1.0 or above in your dependencies file. For more information on this package, see [InferenceSchema on GitHub](https://github.com/Azure/InferenceSchema). To generate conforming Swagger for automated web service consumption, the scoring script `run()` function must have an API shape of:
+* A first parameter of type `StandardPythonParameterType`, named *Inputs* and nested
+* An optional second parameter of type `StandardPythonParameterType`, named *GlobalParameters*
+* A return value: a dictionary of type `StandardPythonParameterType`, named *Results* and nested
 
 Define the input and output sample formats in the `input_sample` and `output_sample` variables, which represent the request and response formats for the web service. Use these samples in the input and output function decorators on the `run()` function. The following scikit-learn example uses schema generation.
 
+## Power BI compatible endpoint
 
-
-## Power BI compatible endpoint
-
-The following example demonstrates how to define API shape according to above instruction. This method is supported for consuming the deployed web service from Power BI. ([Learn more about how to consume the web service from Power BI](/power-bi/service-machine-learning-integration).)
+The following example demonstrates how to define the API shape according to the preceding instructions. This method is supported for consuming the deployed web service from Power BI.
 
 ```python
 import json
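The nested *Inputs*/*GlobalParameters*/*Results* contract described above can be illustrated without the `inference-schema` decorators. The following is a hedged, stdlib-only sketch (not the article's scikit-learn example); the key names `data` and `x`, and the echoed predictions, are illustrative assumptions:

```python
# Hedged sketch (not from the commit): a plain-Python run() that follows the
# nested Inputs/GlobalParameters/Results shape, without the inference-schema
# decorators the article requires for Swagger generation.
import json

def run(Inputs, GlobalParameters=None):
    # "Inputs" is a nested dictionary: one key per named input sample.
    rows = Inputs["data"]  # e.g. a list of feature dicts (illustrative name)
    # A real scoring script would call model.predict(rows) here; this
    # stand-in just returns a dummy prediction per row.
    predictions = [0 for _ in rows]
    # The response must be a nested dictionary named "Results".
    return {"Results": predictions}

# Simulate the JSON body a client (or Power BI) would send:
body = json.loads('{"Inputs": {"data": [{"x": 1}, {"x": 2}]}, "GlobalParameters": {}}')
response = run(body["Inputs"], body["GlobalParameters"])
```

In the real deployment, the `@input_schema`/`@output_schema` decorators from `inference-schema` wrap this same shape so the Swagger schema can be generated from the samples.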
@@ -122,7 +120,7 @@ def run(Inputs, GlobalParameters):
 
 ## <a id="binary-data"></a> Binary (that is, image) data
 
-If your model accepts binary data, like an image, you must modify the `score.py` file used for your deployment to accept raw HTTP requests. To accept raw data, use the `AMLRequest` class in your entry script and add the `@rawhttp` decorator to the `run()` function.
+If your model accepts binary data, like an image, you must modify the *score.py* file used for your deployment to accept raw HTTP requests. To accept raw data, use the `AMLRequest` class in your entry script and add the `@rawhttp` decorator to the `run()` function.
 
 Here's an example of a `score.py` that accepts binary data:
 
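The diff truncates the article's *score.py* example, so the following is a hedged stand-in that mimics the raw-request pattern with a stub request object. The real script uses `AMLRequest`/`AMLResponse` from the AzureML runtime and the `@rawhttp` decorator; the `StubRequest` class and its attributes here are assumptions made only so the sketch is self-contained:

```python
# Hedged stand-in (not the article's code): mimics the @rawhttp pattern with a
# stub request object, since AMLRequest/AMLResponse need the AzureML runtime.
class StubRequest:
    """Minimal stand-in exposing the parts of a raw request the script reads."""
    def __init__(self, method, data=b""):
        self.method = method
        self.data = data  # raw posted bytes

    def get_data(self, as_text=False):
        return self.data.decode() if as_text else self.data

def run(request):
    # The real entry script is decorated with @rawhttp and receives AMLRequest.
    if request.method == "GET":
        return {"status": 200, "body": "healthy"}
    if request.method == "POST":
        raw = request.get_data()  # raw bytes, e.g. image content
        return {"status": 200, "body": f"received {len(raw)} bytes"}
    return {"status": 500, "body": "bad request"}

result = run(StubRequest("POST", b"\x89PNG fake image bytes"))
```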
@@ -157,7 +155,6 @@ def run(request):
         return AMLResponse("bad request", 500)
 ```
 
-
 > [!IMPORTANT]
 > The `AMLRequest` class is in the `azureml.contrib` namespace. Entities in this namespace change frequently as we work to improve the service. Anything in this namespace should be considered a preview that's not fully supported by Microsoft.
 >
@@ -168,13 +165,12 @@ def run(request):
 > ```
 
 > [!NOTE]
-> 500 is not recommended as a customed status code, as at azureml-fe side, the status code will be rewritten to 502.
-> * The status code will be passed through the azureml-fe then sent to client.
-> * The azureml-fe will only rewrite the 500 returned from the model side to be 502, the client will receive 502.
-> * But if the azureml-fe itself returns 500, client side will still receive 500.
+> *500* is not recommended as a custom status code, because on the azureml-fe side the status code is rewritten to *502*.
+> * The status code is passed through the azureml-fe, then sent to the client.
+> * The azureml-fe only rewrites a 500 returned from the model side to a 502; the client receives 502.
+> * But if the azureml-fe itself returns 500, the client side still receives 500.
 
-
-The `AMLRequest` class only allows you to access the raw posted data in the score.py, there's no client-side component. From a client, you post data as normal. For example, the following Python code reads an image file and posts the data:
+The `AMLRequest` class only allows you to access the raw posted data in the *score.py* file; there's no client-side component. From a client, you post data as normal. For example, the following Python code reads an image file and posts the data:
 
 ```python
 import requests
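The article's client snippet uses the `requests` package. As a hedged, stdlib-only sketch of the same idea, the POST can be built with `urllib.request`; the scoring URI is a placeholder and the request is constructed but deliberately not sent:

```python
# Hedged sketch: build the binary POST a client would send to the scoring URI,
# using only the standard library. The URI and bytes are placeholders.
import urllib.request

scoring_uri = "http://localhost:6789/score"    # placeholder endpoint
image_bytes = b"\x89PNG\r\n\x1a\n fake image"  # stands in for open(path, 'rb').read()

req = urllib.request.Request(
    scoring_uri,
    data=image_bytes,  # raw body, which the @rawhttp run() reads via get_data()
    headers={"Content-Type": "application/octet-stream"},
    method="POST",
)
# urllib.request.urlopen(req) would actually send it to the deployed service.
```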
@@ -250,23 +246,21 @@ def run(request):
 > pip install azureml-contrib-services
 > ```
 
-
 > [!WARNING]
-> Azure Machine Learning will route only POST and GET requests to the containers running the scoring service. This can cause errors due to browsers using OPTIONS requests to pre-flight CORS requests.
->
-
+> Azure Machine Learning only routes POST and GET requests to the containers running the scoring service. This can cause errors because browsers use OPTIONS requests to preflight CORS requests.
+>
 
 ## Load registered models
 
 There are two ways to locate models in your entry script:
-* `AZUREML_MODEL_DIR`: An environment variable containing the path to the model location.
-* `Model.get_model_path`: An API that returns the path to model file using the registered model name.
+* `AZUREML_MODEL_DIR`: An environment variable containing the path to the model location
+* `Model.get_model_path`: An API that returns the path to the model file using the registered model name
 
 #### AZUREML_MODEL_DIR
 
-AZUREML_MODEL_DIR is an environment variable created during service deployment. You can use this environment variable to find the location of the deployed model(s).
+`AZUREML_MODEL_DIR` is an environment variable created during service deployment. You can use this environment variable to find the location of the deployed model(s).
 
-The following table describes the value of AZUREML_MODEL_DIR depending on the number of models deployed:
+The following table describes the value of `AZUREML_MODEL_DIR` depending on the number of models deployed:
 
 | Deployment | Environment variable value |
 | ----- | ----- |
@@ -290,8 +284,8 @@ file_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'my_model_folder', 'skl
 
 In this scenario, two models are registered with the workspace:
 
-* `my_first_model`: Contains one file (`my_first_model.pkl`) and there's only one version (`1`).
-* `my_second_model`: Contains one file (`my_second_model.pkl`) and there are two versions; `1` and `2`.
+* `my_first_model`: Contains one file (`my_first_model.pkl`) and there's only one version, `1`
+* `my_second_model`: Contains one file (`my_second_model.pkl`) and there are two versions, `1` and `2`
 
 When the service was deployed, both models are provided in the deploy operation:
@@ -301,12 +295,10 @@ second_model = Model(ws, name="my_second_model", version=2)
 service = Model.deploy(ws, "myservice", [first_model, second_model], inference_config, deployment_config)
 ```
 
-In the Docker image that hosts the service, the `AZUREML_MODEL_DIR` environment variable contains the directory where the models are located.
-In this directory, each of the models is located in a directory path of `MODEL_NAME/VERSION`. Where `MODEL_NAME` is the name of the registered model, and `VERSION` is the version of the model. The files that make up the registered model are stored in these directories.
+In the Docker image that hosts the service, the `AZUREML_MODEL_DIR` environment variable contains the directory where the models are located. In this directory, each of the models is located in a directory path of `MODEL_NAME/VERSION`, where `MODEL_NAME` is the name of the registered model and `VERSION` is the version of the model. The files that make up the registered model are stored in these directories.
 
 In this example, the paths would be `$AZUREML_MODEL_DIR/my_first_model/1/my_first_model.pkl` and `$AZUREML_MODEL_DIR/my_second_model/2/my_second_model.pkl`.
 
-
 ```python
 # Example when the model is a file, and the deployment contains multiple models
 first_model_name = 'my_first_model'
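The `MODEL_NAME/VERSION` layout described above can be exercised outside the container. The following is a hedged sketch in which the `AZUREML_MODEL_DIR` value is faked (inside the deployed container the service sets it); only the join logic mirrors the article:

```python
# Hedged sketch: simulate how the MODEL_NAME/VERSION layout under
# AZUREML_MODEL_DIR resolves to file paths. The directory value below is
# illustrative, not the real value the service sets.
import os

os.environ["AZUREML_MODEL_DIR"] = "/var/azureml-app/azureml-models"  # fake value

def model_file_path(model_name, version, file_name):
    # $AZUREML_MODEL_DIR/MODEL_NAME/VERSION/<registered files>
    return os.path.join(os.getenv("AZUREML_MODEL_DIR"), model_name, str(version), file_name)

first = model_file_path("my_first_model", 1, "my_first_model.pkl")
second = model_file_path("my_second_model", 2, "my_second_model.pkl")
```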
@@ -319,28 +311,28 @@ second_model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), second_model_na
 
 ### get_model_path
 
-When you register a model, you provide a model name that's used for managing the model in the registry. You use this name with the [Model.get_model_path()](/python/api/azureml-core/azureml.core.model.model#get-model-path-model-name--version-none---workspace-none-) method to retrieve the path of the model file or files on the local file system. If you register a folder or a collection of files, this API returns the path of the directory that contains those files.
+When you register a model, you provide a model name that's used for managing the model in the registry. You use this name with the [Model.get_model_path()](/python/api/azureml-core/azureml.core.model.model#azureml-core-model-model-get-model-path) method to retrieve the path of the model file or files on the local file system. If you register a folder or a collection of files, this API returns the path of the directory that contains those files.
 
 When you register a model, you give it a name. The name corresponds to where the model is placed, either locally or during service deployment.
 
 ## Framework-specific examples
 
-More entry script examples for specific machine learning use cases can be found below:
+See the following articles for more entry script examples for specific machine learning use cases:
 
 * [PyTorch](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/ml-frameworks/pytorch)
 * [TensorFlow](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/ml-frameworks/tensorflow)
 * [Keras](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/ml-frameworks/keras/train-hyperparameter-tune-deploy-with-keras/train-hyperparameter-tune-deploy-with-keras.ipynb)
 * [AutoML](https://github.com/Azure/MachineLearningNotebooks/tree/master/how-to-use-azureml/automated-machine-learning/classification-bank-marketing-all-features)
 * [ONNX](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/onnx/)
 
-## Next steps
+## Related content
 
-* [Troubleshoot a failed deployment](how-to-troubleshoot-deployment.md)
-* [Deploy to Azure Kubernetes Service](how-to-deploy-azure-kubernetes-service.md)
-* [Create client applications to consume web services](how-to-consume-web-service.md)
-* [Update web service](how-to-deploy-update-web-service.md)
-* [How to deploy a model using a custom Docker image](../how-to-deploy-custom-container.md)
+* [Troubleshooting remote model deployment](how-to-troubleshoot-deployment.md)
+* [Deploy a model to an Azure Kubernetes Service cluster with v1](how-to-deploy-azure-kubernetes-service.md)
+* [Consume an Azure Machine Learning model deployed as a web service](how-to-consume-web-service.md)
+* [Update a deployed web service (v1)](how-to-deploy-update-web-service.md)
+* [Use a custom container to deploy a model to an online endpoint](../how-to-deploy-custom-container.md)
 * [Use TLS to secure a web service through Azure Machine Learning](how-to-secure-web-service.md)
-* [Monitor your Azure Machine Learning models with Application Insights](how-to-enable-app-insights.md)
-* [Collect data for models in production](how-to-enable-data-collection.md)
-* [Create event alerts and triggers for model deployments](../how-to-use-event-grid.md)
+* [Monitor and collect data from ML web service endpoints](how-to-enable-app-insights.md)
+* [Collect data from models in production](how-to-enable-data-collection.md)
+* [Trigger applications, processes, or CI/CD workflows based on Azure Machine Learning events](../how-to-use-event-grid.md)
