We *highly recommend* that you test-run your endpoint locally to validate and debug your code and configuration before you deploy to Azure. The Azure CLI and the Python SDK support local endpoints and deployments, while Azure Machine Learning studio and ARM templates don't.

Azure Machine Learning provides various ways to debug online endpoints locally and by using container logs:

- [Local debugging with Azure Machine Learning inference HTTP server](#local-debugging-with-azure-machine-learning-inference-http-server)
- [Local debugging with local endpoint](#local-debugging-with-local-endpoint)
- [Local debugging with local endpoint and Visual Studio Code](#local-debugging-with-local-endpoint-and-visual-studio-code-preview)
- [Debugging with container logs](#debugging-with-container-logs)

#### Local debugging with Azure Machine Learning inference HTTP server

You can debug your scoring script locally by using the Azure Machine Learning inference HTTP server. The HTTP server is a Python package that exposes your scoring function as an HTTP endpoint and wraps the Flask server code and dependencies into a single package. It's included in the [prebuilt Docker images for inference](concept-prebuilt-docker-images-inference.md) that are used when deploying a model with Azure Machine Learning. Using the package alone, you can deploy the model locally for production, and you can also easily validate your scoring (entry) script in a local development environment. If there's a problem with the scoring script, the server returns an error along with the location where the error occurred.

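To make this concrete, here's a minimal sketch of a scoring (entry) script that the server can host. The file name `score.py`, the stand-in model, and the request shape are assumptions for illustration, not requirements of the server.

```python
# score.py -- minimal scoring script sketch (file name is an assumption).
# The inference HTTP server calls init() once at startup and run() once per request.
import json

model = None

def init():
    # Load your real model here; this stand-in just doubles its inputs.
    global model
    model = lambda values: [v * 2 for v in values]

def run(raw_data):
    # raw_data arrives as the raw JSON request body.
    data = json.loads(raw_data)["data"]
    return {"predictions": model(data)}
```

After `pip install azureml-inference-server-http`, you can start the server with `azmlinfsrv --entry_script score.py` and send test requests to `http://localhost:5001/score` (the server's default port).
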
You can also use Visual Studio Code to debug with the Azure Machine Learning inference HTTP server.

To learn more about debugging with the HTTP server, see [Debugging scoring script with Azure Machine Learning inference HTTP server](how-to-inference-server-http.md).

#### Local debugging with local endpoint

For **local debugging**, you need a local deployment, that is, a model deployed to a local Docker environment. You can use this local deployment for testing and debugging before you deploy to the cloud. To deploy locally, you need the [Docker Engine](https://docs.docker.com/engine/install/) installed and running. Azure Machine Learning then creates a local Docker image that mimics the Azure Machine Learning image, building and running deployments for you locally and caching the image for rapid iteration.

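As a rough sketch, a local deployment with the Python SDK v2 might look like the following; the workspace values, endpoint and deployment names, paths, instance type, and base image are placeholders, not prescribed values.

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import (
    CodeConfiguration,
    Environment,
    ManagedOnlineDeployment,
    ManagedOnlineEndpoint,
    Model,
)
from azure.identity import DefaultAzureCredential

# Placeholder workspace details for the example.
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<WORKSPACE>",
)

endpoint = ManagedOnlineEndpoint(name="my-local-endpoint")
deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name="my-local-endpoint",
    model=Model(path="./model"),  # local endpoints accept local model files only
    code_configuration=CodeConfiguration(code="./onlinescoring", scoring_script="score.py"),
    environment=Environment(
        conda_file="./environment/conda.yaml",  # local conda file only
        image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest",
    ),
    instance_type="Standard_DS3_v2",
    instance_count=1,
)

# local=True targets your local Docker Engine instead of Azure.
ml_client.online_endpoints.begin_create_or_update(endpoint, local=True)
ml_client.online_deployments.begin_create_or_update(deployment, local=True)
```
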
> [!TIP]
> Docker Engine typically starts when the computer starts. If it doesn't, you can [troubleshoot Docker Engine](https://docs.docker.com/config/daemon/#start-the-daemon-manually).
> You can use client-side tools such as [Docker Desktop](https://www.docker.com/blog/getting-started-with-docker-desktop/) to debug what happens in the container.

The steps for local debugging typically include:

- Checking that the local deployment succeeded
- Invoking the local endpoint for inferencing
- Reviewing the logs for output of the invoke operation (see the sketch after this list)

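Continuing the hypothetical sketch above, those three steps might look like this with the Python SDK v2 (names reused from the earlier example):

```python
# 1. Check that the local deployment succeeded.
endpoint = ml_client.online_endpoints.get(name="my-local-endpoint", local=True)
print(endpoint.provisioning_state)  # expect "Succeeded"

# 2. Invoke the local endpoint for inferencing.
response = ml_client.online_endpoints.invoke(
    endpoint_name="my-local-endpoint",
    request_file="./sample-request.json",  # placeholder request payload
    local=True,
)
print(response)

# 3. Review the logs for output of the invoke operation.
logs = ml_client.online_deployments.get_logs(
    name="blue", endpoint_name="my-local-endpoint", local=True, lines=50
)
print(logs)
```
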
> [!NOTE]
> Local endpoints have the following limitations:
> - They do *not* support traffic rules, authentication, or probe settings.
> - They support only one deployment per endpoint.
> - They support only local model files and an environment defined by a local conda file. To test registered models, first download them using the [CLI](/cli/azure/ml/model#az-ml-model-download) or [SDK](/python/api/azure-ai-ml/azure.ai.ml.operations.modeloperations#azure-ai-ml-operations-modeloperations-download), then use `path` in the deployment definition to refer to the parent folder. To test registered environments, check the context of the environment in Azure Machine Learning studio and prepare a local conda file to use. A download sketch follows this note.

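For the registered-model case, a hedged sketch of the download step with the Python SDK v2 (model name and version are placeholders):

```python
# Download a registered model so a local deployment can reference it by path.
ml_client.models.download(
    name="my-registered-model",  # placeholder
    version="1",
    download_path="./downloaded",
)

# The deployment definition can then point `path` at the downloaded parent folder,
# for example: model=Model(path="./downloaded/my-registered-model")
```
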
To learn more about local debugging, see [Deploy and debug locally by using a local endpoint](how-to-deploy-online-endpoints.md#deploy-and-debug-locally-by-using-a-local-endpoint).

#### Local debugging with local endpoint and Visual Studio Code (preview)