Commit 78c60f0

Merge pull request #319 from J-Silvestre/patch-7

Update how-to-deploy-online-endpoints.md

2 parents 0483b97 + e2acc83

1 file changed: 3 additions, 0 deletions

articles/machine-learning/how-to-deploy-online-endpoints.md

Lines changed: 3 additions & 0 deletions
@@ -760,6 +760,9 @@ For more information on creating an environment in the studio, see [Create an en
---
> [!IMPORTANT]
> When defining a custom environment for your deployment, ensure that the `azureml-inference-server-http` package is included in the conda file. This package is required for the inference server to function. If you're unfamiliar with creating your own custom environment, consider using one of the curated environments instead, such as `minimal-py-inference` (for custom models that don't use MLflow) or `mlflow-py-inference` (for models that use MLflow). You can find these curated environments on the **Environments** tab of Azure Machine Learning studio.
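To illustrate the note above, a minimal conda file that satisfies this requirement might look like the following sketch. The package name `azureml-inference-server-http` comes from the note; the environment name, channel, and version pins are illustrative assumptions, not values from the source:

```yaml
# Hypothetical conda file for a custom deployment environment.
# Only the azureml-inference-server-http requirement is from the note above;
# the environment name and versions are illustrative.
name: custom-inference-env
channels:
  - conda-forge
dependencies:
  - python=3.10
  - pip
  - pip:
      # Required so the inference server can serve the scoring script.
      - azureml-inference-server-http
```

Because the package is distributed via pip, it goes under the `pip:` subsection rather than in the conda dependency list.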
### Configure a deployment that uses registered assets

Your deployment configuration uses the registered model that you wish to deploy and your registered environment.
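A deployment configuration that references registered assets might be sketched as the following YAML, assuming a managed online deployment. The asset names (`my-model`, `my-custom-env`), versions, and instance settings are hypothetical placeholders:

```yaml
# Illustrative managed online deployment referencing registered assets.
# Asset names and versions below are placeholders, not values from the source.
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
name: blue
endpoint_name: my-endpoint
model: azureml:my-model:1          # registered model, referenced by name:version
environment: azureml:my-custom-env:1  # registered environment with azureml-inference-server-http
instance_type: Standard_DS3_v2
instance_count: 1
```

The `azureml:<name>:<version>` reference syntax lets the deployment resolve assets from the workspace registry instead of re-uploading local files.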
