
Commit 386b9bb

Merge pull request #228435 from dem108/patch-17
Clarify local endpoint limitation
2 parents: cf6bb63 + 756b006

1 file changed
articles/machine-learning/how-to-deploy-online-endpoints.md

Lines changed: 2 additions & 1 deletion
@@ -8,7 +8,7 @@ ms.subservice: mlops
 author: dem108
 ms.author: sehan
 ms.reviewer: mopeakande
-ms.date: 11/03/2022
+ms.date: 02/23/2023
 ms.topic: how-to
 ms.custom: how-to, devplatv2, ignite-fall-2021, cliv2, event-tier1-build-2022, sdkv2
 ---
@@ -426,6 +426,7 @@ To save time debugging, we *highly recommend* that you test-run your endpoint locally
 > The goal of a local endpoint deployment is to validate and debug your code and configuration before you deploy to Azure. Local deployment has the following limitations:
 > - Local endpoints do *not* support traffic rules, authentication, or probe settings.
 > - Local endpoints support only one deployment per endpoint.
+> - Local endpoints do *not* support registered models. To use models that are already registered, download them using the [CLI](/cli/azure/ml/model#az-ml-model-download) or [SDK](/python/api/azure-ai-ml/azure.ai.ml.operations.modeloperations#azure-ai-ml-operations-modeloperations-download) and refer to them in the deployment definition.

 > [!TIP]
 > You can use the [Azure Machine Learning inference HTTP server Python package](how-to-inference-server-http.md) to debug your scoring script locally **without Docker Engine**. Debugging with the inference server helps you debug the scoring script before deploying to local endpoints, so that you can debug without being affected by the deployment container configurations.
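The bullet added by this commit describes a download-then-reference workaround. The following is a minimal sketch of that workaround using the Python SDK v2 (azure-ai-ml); the workspace placeholders, the model name `my-model`, the endpoint name `my-endpoint`, and the `./src` scoring assets are all hypothetical, not part of the commit.

```python
# A minimal sketch of the download-then-reference workaround, using the
# Python SDK v2 (azure-ai-ml). The workspace placeholders, "my-model",
# "my-endpoint", and the ./src scoring assets are hypothetical.
from azure.ai.ml import MLClient
from azure.ai.ml.entities import (
    CodeConfiguration,
    Environment,
    ManagedOnlineDeployment,
    ManagedOnlineEndpoint,
    Model,
)
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<WORKSPACE>",
)

# Local endpoints can't reference a registered model directly, so download
# the registered model's files to a local folder first.
ml_client.models.download(name="my-model", version="1", download_path="./models")

# Create the local endpoint, then a deployment whose model is a *local path*
# rather than an azureml:my-model:1 reference. local=True targets Docker on
# this machine instead of Azure.
ml_client.online_endpoints.begin_create_or_update(
    ManagedOnlineEndpoint(name="my-endpoint"), local=True
)

deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name="my-endpoint",
    model=Model(path="./models/my-model"),  # adjust to the downloaded layout
    code_configuration=CodeConfiguration(code="./src", scoring_script="score.py"),
    environment=Environment(
        conda_file="./env/conda.yaml",
        image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest",
    ),
    instance_type="Standard_DS3_v2",
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(deployment, local=True)
```

The CLI equivalent of the download step would be along the lines of `az ml model download --name my-model --version 1 --download-path ./models`; in either case the key point is that the local deployment definition uses a local `path`, not a registered-model reference.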

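The tip above refers to the inference HTTP server, which loads a scoring script directly, with no container involved. Here is a minimal sketch of such a script; the echo logic is a hypothetical placeholder, not a real model.

```python
# score.py -- a minimal scoring-script sketch that the inference HTTP server
# can load for local debugging without Docker Engine. Install and run with:
#   pip install azureml-inference-server-http
#   azmlinfsrv --entry_script score.py
# (The echo logic below is a placeholder, not a real model.)
import json


def init():
    # Runs once when the server starts; load your model here in a real script.
    pass


def run(raw_data):
    # Runs per scoring request; raw_data is the request body as a string.
    data = json.loads(raw_data)
    return {"echo": data}
```

The server listens on port 5001 by default, so you can exercise `run()` with a plain HTTP POST to `http://127.0.0.1:5001/score` before moving on to a local endpoint deployment.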