
Commit 704ebaa (parent: c016f21)

Update articles/machine-learning/how-to-deploy-custom-container.md

Authored by: dhanviadem108
Co-authored-by: SeokJin Han <[email protected]>

File tree: 1 file changed (+1, −1 lines)


articles/machine-learning/how-to-deploy-custom-container.md

Lines changed: 1 addition & 1 deletion
@@ -337,7 +337,7 @@ In this case, when you create a deployment, your model is located under the foll
 You can optionally configure your `model_mount_path` value. By adjusting this setting, you can change the path where the model is mounted.

 > [!IMPORTANT]
-> The `model_mount_path` value must be a valid absolute path in Linux (the OS of the container image).
+> The `model_mount_path` value must be a valid absolute path in Linux (in the guest OS of the container image).

 > [!IMPORTANT]
 > For BYOC scenarios, where a custom `model_mount_path` is to be configured on an online deployment, you must set the [`inference_config` parameter](#the-inference_config-parameter) in the custom environment created for the model, in order for the environment to be recognized as a custom environment. For such scenarios, use the Azure CLI or Python SDK to set the parameter. Do not try to set the parameter when creating the custom environment in the Azure portal, as there are some known limitations with configuring the `inference_config` parameter this way.
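The diff concerns `model_mount_path` on an online deployment and the `inference_config` parameter on a custom (BYOC) environment. As a rough illustration only (not part of the commit), the scenario might be configured with the azure-ai-ml v2 Python SDK along these lines; all names, the image, ports, and routes below are hypothetical, and the exact signatures should be checked against the SDK reference:

```python
# Configuration sketch, not part of this commit. Assumes the azure-ai-ml
# (v2) Python SDK; all resource names, the image, and routes are hypothetical.
from azure.ai.ml.entities import Environment, ManagedOnlineDeployment, Model

# Per the docs being edited, setting inference_config on the environment is
# what lets it be recognized as a custom (BYOC) environment.
environment = Environment(
    name="byoc-inference-env",                       # hypothetical name
    image="myregistry.azurecr.io/inference:latest",  # hypothetical image
    inference_config={
        "liveness_route": {"port": 5001, "path": "/"},
        "readiness_route": {"port": 5001, "path": "/"},
        "scoring_route": {"port": 5001, "path": "/score"},
    },
)

deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name="my-endpoint",   # hypothetical endpoint
    model=Model(path="./model"),   # local model folder
    environment=environment,
    instance_type="Standard_DS3_v2",
    instance_count=1,
    # Per the corrected note: must be a valid absolute path in Linux
    # (the guest OS of the container image).
    model_mount_path="/var/custom-mount",
)
```

Creating the deployment would then go through an `MLClient` as usual; the point of the sketch is only where `model_mount_path` and `inference_config` sit relative to each other.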
