
Commit 8fa955a
edits per reviewer
1 parent e989129

File tree

1 file changed: +11 −11 lines changed


articles/machine-learning/how-to-inference-server-http.md

Lines changed: 11 additions & 11 deletions
@@ -10,19 +10,28 @@ ms.service: azure-machine-learning
 ms.subservice: inferencing
 ms.topic: how-to
 ms.custom: inference server, local development, local debugging, devplatv2
-ms.date: 08/07/2024
+ms.date: 08/21/2024
 
 #customer intent: As a developer, I want to work with the Azure Machine Learning inference HTTP server so I can debug scoring scripts or endpoints before deployment.
 ---
 
 # Debug scoring scripts with Azure Machine Learning inference HTTP server
 
-The Azure Machine Learning inference HTTP server is a Python package that exposes your scoring function as an HTTP endpoint and wraps the Flask server code and dependencies into a singular package. The serve is included in the [prebuilt Docker images for inference](concept-prebuilt-docker-images-inference.md) that are used when deploying a model with Azure Machine Learning. Using the package alone, you can deploy the model locally for production, and easily validate your scoring (entry) script in a local development environment. If there's a problem with the scoring script, the server returns an error and the location of the error.
+The Azure Machine Learning inference HTTP server is a Python package that exposes your scoring function as an HTTP endpoint and wraps the Flask server code and dependencies into a singular package. The server is included in the [prebuilt Docker images for inference](concept-prebuilt-docker-images-inference.md) that are used when deploying a model with Azure Machine Learning. Using the package alone, you can deploy the model locally for production, and easily validate your scoring (entry) script in a local development environment. If there's a problem with the scoring script, the server returns an error and the location of the error.
 
 The server can also be used to create validation gates in a continuous integration and deployment pipeline. For example, you can start the server with the candidate script and run the test suite against the local endpoint.
 
 This article supports developers who want to use the inference server to debug locally and describes how to use the inference server with online endpoints on Windows.
 
+## Prerequisites
+
+To use the Azure Machine Learning inference HTTP server for local debugging, your configuration must include the following components:
+
+- Python 3.8 or later
+- Anaconda
+
+The Azure Machine Learning inference HTTP server runs on Windows and Linux based operating systems.
+
 ## Explore local debugging options for online endpoints
 
 By debugging endpoints locally before you deploy to the cloud, you can catch errors in your code and configuration earlier. To debug endpoints locally, you have several options, including:
@@ -43,15 +52,6 @@ The following table provides an overview of scenarios to help you choose the bes
 
 When you run the inference HTTP server locally, you can focus on debugging your scoring script without concern for deployment container configurations.
 
-## Prerequisites
-
-To use the Azure Machine Learning inference HTTP server for local debugging, your configuration must include the following components:
-
-- Python 3.8 or later
-- Anaconda
-
-The Azure Machine Learning inference HTTP server runs on Windows and Linux based operating systems.
-
 ## Install azureml-inference-server-http package
 
 To install the `azureml-inference-server-http` package, run the following command:
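The diff is cut off before the command itself. As a sketch only (the exact command in the article is not shown here), the package named in the section heading is published on PyPI and can be installed with a standard pip invocation, preferably inside a virtual environment:

```shell
# Install the inference server package into the active Python environment.
# Package name taken from the section heading above; use of a virtual
# environment is an assumption, not stated in the visible diff.
python -m pip install azureml-inference-server-http
```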

0 commit comments

Comments
 (0)
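The article being edited centers on validating a scoring (entry) script locally. As a hypothetical sketch of the contract the inference server expects, an entry script defines `init()`, called once at server startup, and `run()`, called once per scoring request; the doubling "model" below is a placeholder for illustration, not an example from the article:

```python
# Minimal scoring (entry) script sketch for the Azure ML inference HTTP server.
# The server imports this module, calls init() once, then run() per request.
import json

model = None  # stands in for a real deserialized model object


def init():
    # Called once when the server starts; load the model here.
    global model
    model = lambda values: [2 * v for v in values]  # placeholder "model"


def run(raw_data):
    # Called for each scoring request; raw_data is the request body as a string.
    data = json.loads(raw_data)["data"]
    return {"predictions": model(data)}  # must be JSON-serializable
```

With a script like this saved as `score.py`, the server can be pointed at it locally, and a malformed script surfaces as an error with its location, as the article's introduction describes.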