This article explains how to write entry scripts for specialized use cases.
## Prerequisites
This article assumes you already have a trained machine learning model that you intend to deploy with Azure Machine Learning. To learn more about model deployment, see [Deploy machine learning models to Azure](how-to-deploy-and-where.md).
## Automatically generate a Swagger schema
To automatically generate a schema for your web service, provide a sample of the input and/or output in the constructor for one of the defined type objects. The type and sample are used to automatically create the schema. Azure Machine Learning then creates an [OpenAPI specification](https://swagger.io/docs/specification/about/) (formerly, Swagger specification) for the web service during deployment.
> [!WARNING]
> You must not use sensitive or private data for sample input or output. The Swagger page for AML-hosted inferencing exposes the sample data.
These types are currently supported:
* `pandas`
* `numpy`
* `pyspark`
* Standard Python object
To use schema generation, include the open-source `inference-schema` package version 1.1.0 or above in your dependencies file. For more information on this package, see [InferenceSchema on GitHub](https://github.com/Azure/InferenceSchema). To generate conforming Swagger for automated web service consumption, your scoring script's `run()` function must have an API shape of:
* A first parameter of type `StandardPythonParameterType`, named *Inputs* and nested
* An optional second parameter of type `StandardPythonParameterType`, named *GlobalParameters*
* Return a dictionary of type `StandardPythonParameterType`, named *Results* and nested
Define the input and output sample formats in the `input_sample` and `output_sample` variables, which represent the request and response formats for the web service. Use these samples in the input and output function decorators on the `run()` function. The following scikit-learn example uses schema generation.
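The following is a minimal sketch of such an entry script, assuming a registered scikit-learn model file named `sklearn_regression_model.pkl`; the file name, sample values, and input key `input1` are illustrative:

```python
import os
import joblib
import numpy as np
from inference_schema.schema_decorators import input_schema, output_schema
from inference_schema.parameter_types.numpy_parameter_type import NumpyParameterType
from inference_schema.parameter_types.standard_py_parameter_type import StandardPythonParameterType


def init():
    global model
    # Illustrative model file name; use your registered model's file name
    model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_regression_model.pkl')
    model = joblib.load(model_path)


# Sample objects drive schema generation; anything wrapped in a
# ParameterType is described in the generated Swagger specification.
numpy_sample_input = NumpyParameterType(np.array([[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]], dtype='float64'))
sample_input = StandardPythonParameterType({'input1': numpy_sample_input})  # nested, as required
sample_global_parameters = StandardPythonParameterType(1.0)  # optional
sample_output = StandardPythonParameterType([1.0])
outputs = StandardPythonParameterType({'Results': sample_output})  # 'Results' is case sensitive


@input_schema('Inputs', sample_input)  # 'Inputs' is case sensitive
@input_schema('GlobalParameters', sample_global_parameters)  # optional
@output_schema(outputs)
def run(Inputs, GlobalParameters):
    # The parameter names must match the decorator names, including case
    try:
        data = Inputs['input1']
        result = model.predict(data)
        return result.tolist()
    except Exception as e:
        return str(e)
```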
## Power BI compatible endpoint
The following example demonstrates how to define the API shape according to the preceding instructions. This method is supported for consuming the deployed web service from Power BI. To learn more, see [how to consume the web service from Power BI](/power-bi/service-machine-learning-integration).
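Here's a minimal sketch of a Power BI-compatible entry script; the model file name `model.pkl` and the column names `age` and `income` are illustrative assumptions:

```python
import os
import joblib
import pandas as pd
from inference_schema.schema_decorators import input_schema, output_schema
from inference_schema.parameter_types.pandas_parameter_type import PandasParameterType
from inference_schema.parameter_types.standard_py_parameter_type import StandardPythonParameterType


def init():
    global model
    # Illustrative model file name; use your registered model's file name
    model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'model.pkl')
    model = joblib.load(model_path)


# A pandas DataFrame sample lets Power BI understand the tabular input schema.
# The column names and values here are illustrative.
sample_input = PandasParameterType(pd.DataFrame({'age': [25], 'income': [55000.0]}))
inputs = StandardPythonParameterType({'data': sample_input})
outputs = StandardPythonParameterType({'Results': StandardPythonParameterType([0.0])})


@input_schema('Inputs', inputs)
@output_schema(outputs)
def run(Inputs):
    data = Inputs['data']
    result = model.predict(data)
    return {'Results': result.tolist()}
```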
## <a id="binary-data"></a> Binary (that is, image) data
If your model accepts binary data, like an image, you must modify the *score.py* file used for your deployment to accept raw HTTP requests. To accept raw data, use the `AMLRequest` class in your entry script and add the `@rawhttp` decorator to the `run()` function.
Here's an example of a *score.py* file that accepts binary data:
```python
from azureml.contrib.services.aml_request import AMLRequest, rawhttp
from azureml.contrib.services.aml_response import AMLResponse


def init():
    print("This is init()")


@rawhttp
def run(request):
    print("This is run()")
    if request.method == 'GET':
        # For this example, just return the URL for GET requests.
        # For a real-world solution, you would load the data from the request
        # body, send it to the model, and return the response.
        respBody = str.encode(request.full_path)
        return AMLResponse(respBody, 200)
    elif request.method == 'POST':
        reqBody = request.get_data(False)
        # For a real-world solution, you would load the data from reqBody,
        # send it to the model, and return the response.
        return AMLResponse(reqBody, 200)
    else:
        return AMLResponse("bad request", 500)
```
> [!IMPORTANT]
> The `AMLRequest` class is in the `azureml.contrib` namespace. Entities in this namespace change frequently as we work to improve the service. Anything in this namespace should be considered a preview that's not fully supported by Microsoft.
>
> To test this in your local development environment, you can install the components in this namespace by using the following command:
>
>```shell
> pip install azureml-contrib-services
>```
> [!NOTE]
> *500* isn't recommended as a custom status code because, on the azureml-fe side, the status code is rewritten to *502*:
> * The status code is passed through azureml-fe and then sent to the client.
> * azureml-fe rewrites only a 500 returned from the model side to 502; the client then receives 502.
> * If azureml-fe itself returns 500, the client still receives 500.
The `AMLRequest` class only allows you to access the raw posted data in the *score.py* file; there's no client-side component. From a client, you post data as usual. For example, the following Python code reads an image file and posts the data:
```python
import requests

# Load the image data (the file name is illustrative; use your own)
data = open('example.jpg', 'rb').read()
# Post the raw data to the scoring URI
res = requests.post(url='<scoring-uri>', data=data, headers={'Content-Type': 'application/octet-stream'})
print(res.text)
```

## Cross-origin resource sharing (CORS)

Cross-origin resource sharing is a way to allow resources on a webpage to be requested from another domain. CORS works via HTTP headers sent with the client request and returned with the service response. For more information on CORS and valid headers, see [Cross-origin resource sharing](https://en.wikipedia.org/wiki/Cross-origin_resource_sharing) on Wikipedia. To configure your model deployment to support CORS, use the `AMLResponse` class in your entry script. This class allows you to set the headers on the response object.

> [!IMPORTANT]
> The `AMLResponse` class is in the `azureml.contrib` namespace. Entities in this namespace change frequently as we work to improve the service. Anything in this namespace should be considered a preview that's not fully supported by Microsoft.
>
> To test this in your local development environment, you can install the components in this namespace by using the following command:
>
>```shell
> pip install azureml-contrib-services
>```
> [!WARNING]
> Azure Machine Learning routes only POST and GET requests to the containers that run the scoring service. This can cause errors because browsers use OPTIONS requests to preflight CORS requests.
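The following is a minimal sketch of an entry script that sets a CORS header on its responses; the allowed origin `http://www.example.com` is illustrative:

```python
from azureml.contrib.services.aml_request import AMLRequest, rawhttp
from azureml.contrib.services.aml_response import AMLResponse


def init():
    print("This is init()")


@rawhttp
def run(request):
    print("This is run()")
    if request.method == 'GET':
        # Echo the request URL back; a real service would score data here.
        resp = AMLResponse(str.encode(request.full_path), 200)
        # Illustrative origin; replace with the domain you want to allow.
        resp.headers['Access-Control-Allow-Origin'] = "http://www.example.com"
        return resp
    elif request.method == 'POST':
        reqBody = request.get_data(False)
        resp = AMLResponse(reqBody, 200)
        resp.headers['Access-Control-Allow-Origin'] = "http://www.example.com"
        return resp
    else:
        return AMLResponse("bad request", 400)
```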
## Load registered models
There are two ways to locate models in your entry script:
* `AZUREML_MODEL_DIR`: An environment variable that contains the path to the model location
* `Model.get_model_path`: An API that returns the path to the model file, using the registered model name
#### AZUREML_MODEL_DIR
`AZUREML_MODEL_DIR` is an environment variable created during service deployment. You can use this environment variable to find the location of the deployed model(s).
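For example, here's a minimal sketch for a deployment that contains a single registered model file; the file name `sklearn_regression_model.pkl` is illustrative:

```python
import os

# Illustrative file name; use the file name of your registered model
model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'sklearn_regression_model.pkl')
```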
The following table describes the value of `AZUREML_MODEL_DIR` depending on the number of models deployed:

| Deployment | Environment variable value |
| ----- | ----- |
| Single model | The path to the folder that contains the model. |
| Multiple models | The path to the folder that contains all models. Models are located by name and version in this folder (`$MODEL_NAME/$VERSION`). |

During model registration and deployment, models are placed in the `AZUREML_MODEL_DIR` path, and their original file names are preserved. For example, the following deployment registers two models:

```python
first_model = Model(ws, name="my_first_model", version=1)
second_model = Model(ws, name="my_second_model", version=2)
service = Model.deploy(ws, "myservice", [first_model, second_model], inference_config, deployment_config)
```
In the Docker image that hosts the service, the `AZUREML_MODEL_DIR` environment variable contains the directory where the models are located. In this directory, each model is located in a directory path of `MODEL_NAME/VERSION`, where `MODEL_NAME` is the name of the registered model and `VERSION` is the version of the model. The files that make up the registered model are stored in these directories.
In this example, the paths would be `$AZUREML_MODEL_DIR/my_first_model/1/my_first_model.pkl` and `$AZUREML_MODEL_DIR/my_second_model/2/my_second_model.pkl`.
```python
# Example when the model is a file, and the deployment contains multiple models
first_model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'my_first_model/1/my_first_model.pkl')
second_model_path = os.path.join(os.getenv('AZUREML_MODEL_DIR'), 'my_second_model/2/my_second_model.pkl')
```

#### get_model_path
When you register a model, you provide a model name that's used for managing the model in the registry. You use this name with the [Model.get_model_path()](/python/api/azureml-core/azureml.core.model.model#azureml-core-model-model-get-model-path) method to retrieve the path of the model file or files on the local file system. If you register a folder or a collection of files, this API returns the path of the directory that contains those files.
When you register a model, you give it a name. The name corresponds to where the model is placed, either locally or during service deployment.
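For example, here's a minimal sketch that resolves the model path by registered name; the name `my_first_model` is illustrative:

```python
from azureml.core.model import Model


def init():
    global model_path
    # 'my_first_model' is an illustrative registered model name
    model_path = Model.get_model_path(model_name='my_first_model')
    print(f"Model path: {model_path}")
```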
## Framework-specific examples
See the following articles for more entry script examples for specific machine learning use cases: