Commit 295407f

update how to deploy custom docker image instructions. Users cannot bring their own custom inference stack; we add our own stack on top of the base image provided by the user.
1 parent e403393 commit 295407f

File tree

1 file changed: +1 −2 lines

articles/machine-learning/how-to-deploy-custom-docker-image.md

Lines changed: 1 addition & 2 deletions
````diff
@@ -229,13 +229,12 @@ myenv.inferencing_stack_version = "latest" # This will install the inference sp
 # Define the packages needed by the model and scripts
 from azureml.core.conda_dependencies import CondaDependencies
 conda_dep = CondaDependencies()
-# Unless you are using your own custom inference stack,
 # you must list azureml-defaults as a pip dependency
 conda_dep.add_pip_package("azureml-defaults")
 myenv.python.conda_dependencies=conda_dep
 ```
 
-Please note that unless you are also using your own custom inference stack, you must add azureml-defaults with version >= 1.0.45 as a pip dependency. This package contains the functionality needed to host the model as a web service.
+You must add azureml-defaults with version >= 1.0.45 as a pip dependency. This package contains the functionality needed to host the model as a web service. You must also set the inferencing_stack_version property on the environment to "latest"; this installs the specific apt packages needed by the web service.
 
 After defining the environment, use it with an [InferenceConfig](https://docs.microsoft.com/python/api/azureml-core/azureml.core.model.inferenceconfig?view=azure-ml-py) object to define the inference environment in which the model and web service will run.
````

0 commit comments

Comments
 (0)