Commit dce259b

Merge pull request #258792 from msakande/remove-preview-from-inference-server
remove preview note and rearrange section
2 parents b788c49 + 6a4a26c commit dce259b

1 file changed: +7 −9 lines


articles/machine-learning/concept-endpoints-online.md

Lines changed: 7 additions & 9 deletions
@@ -146,6 +146,13 @@ The following table highlights key aspects about the online deployment options:
 
 Azure Machine Learning provides various ways to debug online endpoints locally and by using container logs.
 
+#### Local debugging with the Azure Machine Learning inference HTTP server
+
+You can debug your scoring script locally by using the Azure Machine Learning inference HTTP server. The HTTP server is a Python package that exposes your scoring function as an HTTP endpoint and wraps the Flask server code and dependencies into a singular package. It's included in the [prebuilt Docker images for inference](concept-prebuilt-docker-images-inference.md) that are used when deploying a model with Azure Machine Learning. Using the package alone, you can deploy the model locally for production, and you can also easily validate your scoring (entry) script in a local development environment. If there's a problem with the scoring script, the server will return an error and the location where the error occurred.
+You can also use Visual Studio Code to debug with the Azure Machine Learning inference HTTP server.
+
+To learn more about debugging with the HTTP server, see [Debugging scoring script with Azure Machine Learning inference HTTP server](how-to-inference-server-http.md).
+
 #### Local debugging
 
 For **local debugging**, you need a local deployment; that is, a model that is deployed to a local Docker environment. You can use this local deployment for testing and debugging before deployment to the cloud. To deploy locally, you'll need to have the [Docker Engine](https://docs.docker.com/engine/install/) installed and running. Azure Machine Learning then creates a local Docker image that mimics the Azure Machine Learning image. Azure Machine Learning will build and run deployments for you locally and cache the image for rapid iterations.
@@ -166,15 +173,6 @@ As with local debugging, you first need to have the [Docker Engine](https://docs
 
 To learn more about interactively debugging online endpoints in VS Code, see [Debug online endpoints locally in Visual Studio Code](/azure/machine-learning/how-to-debug-managed-online-endpoints-visual-studio-code).
 
-#### Local debugging with the Azure Machine Learning inference HTTP server (preview)
-
-[!INCLUDE [machine-learning-preview-generic-disclaimer](includes/machine-learning-preview-generic-disclaimer.md)]
-
-You can debug your scoring script locally by using the Azure Machine Learning inference HTTP server. The HTTP server is a Python package that exposes your scoring function as an HTTP endpoint and wraps the Flask server code and dependencies into a singular package. It's included in the [prebuilt Docker images for inference](concept-prebuilt-docker-images-inference.md) that are used when deploying a model with Azure Machine Learning. Using the package alone, you can deploy the model locally for production, and you can also easily validate your scoring (entry) script in a local development environment. If there's a problem with the scoring script, the server will return an error and the location where the error occurred.
-You can also use Visual Studio Code to debug with the Azure Machine Learning inference HTTP server.
-
-To learn more about debugging with the HTTP server, see [Debugging scoring script with Azure Machine Learning inference HTTP server (preview)](how-to-inference-server-http.md).
-
 #### Debugging with container logs
 
 For a deployment, you can't get direct access to the VM where the model is deployed. However, you can get logs from some of the containers that are running on the VM.
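
The section this commit promotes out of preview describes hosting a scoring (entry) script with the inference HTTP server. As a minimal sketch, not part of this commit: a scoring script following the documented `init()`/`run()` contract, where the JSON input shape and the stand-in model are illustrative assumptions.

```python
# score.py -- minimal scoring (entry) script following the documented
# init()/run() contract. The input format and stand-in model below are
# illustrative assumptions, not values from this commit.
import json

model = None

def init():
    # Runs once when the server starts; load the real model here, e.g.
    # from the AZUREML_MODEL_DIR directory (hypothetical model file):
    # model = joblib.load(os.path.join(os.environ["AZUREML_MODEL_DIR"], "model.pkl"))
    global model
    model = lambda rows: [sum(row) for row in rows]  # stand-in so the sketch runs

def run(raw_data):
    # Runs per request; raw_data is the request body as a JSON string.
    data = json.loads(raw_data)["data"]
    return {"predictions": model(data)}
```

With the package installed (`pip install azureml-inference-server-http`), running `azmlinfsrv --entry_script score.py` hosts the script locally and exposes a `/score` route for test requests; errors in the scoring script surface in the server output, which is the debugging loop the new section describes.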

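The local-debugging and container-logs sections shown in the diff describe deploying to a local Docker environment and pulling container logs. A sketch of how that flow might look with the Python SDK v2 (`azure-ai-ml`); the endpoint and deployment names, paths, and environment details are illustrative assumptions.

```python
# Local deployment sketch using the Python SDK v2 (azure-ai-ml).
# Names, paths, and the environment below are illustrative assumptions.
from azure.ai.ml import MLClient
from azure.ai.ml.entities import (
    CodeConfiguration,
    Environment,
    ManagedOnlineDeployment,
    ManagedOnlineEndpoint,
    Model,
)
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace>",
)

# local=True targets the local Docker Engine instead of the cloud.
endpoint = ManagedOnlineEndpoint(name="debug-endpoint")
ml_client.online_endpoints.begin_create_or_update(endpoint, local=True)

deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name="debug-endpoint",
    model=Model(path="./model"),  # hypothetical model folder
    code_configuration=CodeConfiguration(code="./src", scoring_script="score.py"),
    environment=Environment(
        image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest",
        conda_file="./env/conda.yaml",  # hypothetical conda spec
    ),
    instance_type="Standard_DS3_v2",
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(deployment, local=True)

# Pull recent container logs from the local deployment for debugging.
print(ml_client.online_deployments.get_logs(
    name="blue", endpoint_name="debug-endpoint", local=True, lines=50
))
```

Because everything runs against the local Docker Engine, iterations are fast, and `get_logs` surfaces the same container output described under "Debugging with container logs."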