ms.custom: inference server, local development, local debugging, devplatv2
ms.date: 08/21/2024
#customer intent: As a developer, I want to work with the Azure Machine Learning inference HTTP server so I can debug scoring scripts or endpoints before deployment.
---
# Debug scoring scripts with Azure Machine Learning inference HTTP server
The Azure Machine Learning inference HTTP server is a Python package that exposes your scoring function as an HTTP endpoint and wraps the Flask server code and dependencies into a single package. The server is included in the [prebuilt Docker images for inference](concept-prebuilt-docker-images-inference.md) that are used when deploying a model with Azure Machine Learning. Using the package on its own, you can deploy the model locally for production and easily validate your scoring (entry) script in a local development environment. If there's a problem with the scoring script, the server returns an error and the location of the error.
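
For example, the server expects the scoring script to define an `init` function, which runs once when the server starts, and a `run` function, which handles each scoring request. The following sketch uses a placeholder model so that it runs without any model files; a real script would load an actual model in `init`:

```python
# score.py: a minimal scoring script sketch; the model logic is a placeholder.
import json

model = None

def init():
    # Runs once at server startup. A real script would load a model here,
    # typically from the path in the AZUREML_MODEL_DIR environment variable.
    global model
    model = lambda rows: [sum(row) for row in rows]  # placeholder model

def run(raw_data):
    # Runs for each scoring request; raw_data is the JSON request body as a string.
    data = json.loads(raw_data)["data"]
    return {"predictions": model(data)}
```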
The server can also be used to create validation gates in a continuous integration and deployment pipeline. For example, you can start the server with the candidate script and run the test suite against the local endpoint.
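
As a sketch of that pattern, assuming the candidate scoring script is named `score.py` and using the server's default port and route, a pipeline step might look like the following; the payload and timing values are illustrative:

```bash
# Sketch of a CI validation gate; script name and payload are assumptions.
# Start the inference server with the candidate scoring script in the background.
azmlinfsrv --entry_script score.py &
SERVER_PID=$!
sleep 10  # allow the server time to start

# Smoke-test the local endpoint (the server listens on port 5001 by default).
curl --fail --silent http://127.0.0.1:5001/score \
  --header 'Content-Type: application/json' \
  --data '{"data": [[1, 2, 3]]}'
TEST_RESULT=$?

# Stop the server and report the test result to the pipeline.
kill $SERVER_PID
exit $TEST_RESULT
```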
This article supports developers who want to use the inference server to debug locally, and describes how to use the server with online endpoints on Windows.
## Prerequisites
To use the Azure Machine Learning inference HTTP server for local debugging, your configuration must include the following components:
- Python 3.8 or later
- Anaconda
The Azure Machine Learning inference HTTP server runs on Windows and Linux-based operating systems.
## Explore local debugging options for online endpoints
By debugging endpoints locally before you deploy to the cloud, you can catch errors in your code and configuration earlier. To debug endpoints locally, you have several options, including:
The following table provides an overview of scenarios to help you choose the best option:
When you run the inference HTTP server locally, you can focus on debugging your scoring script without concern for deployment container configurations.
## Install azureml-inference-server-http package
To install the `azureml-inference-server-http` package, run the following command:
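
```bash
python -m pip install azureml-inference-server-http
```

To avoid conflicts with other packages, consider running the command inside a virtual environment.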