
Commit f5eeb14

dhanviadem108 and SeokJin Han authored
Update articles/machine-learning/how-to-deploy-custom-container.md
Co-authored-by: SeokJin Han <[email protected]>
1 parent 704ebaa commit f5eeb14

File tree

1 file changed (+1, −1 lines changed)


articles/machine-learning/how-to-deploy-custom-container.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -340,7 +340,7 @@ You can optionally configure your `model_mount_path` value. By adjusting this se
 > The `model_mount_path` value must be a valid absolute path in Linux (in the guest OS of the container image).

 > [!IMPORTANT]
-> For BYOC scenarios, where a custom `model_mount_path` is to be configured on an online deployment, you must set the [`inference_config` parameter](#the-inference_config-parameter) in the custom environment created for the model, in order for the environment to be recognized as a custom environment. For such scenarios, use the Azure CLI or Python SDK to set the parameter. Do not try to set the parameter when creating the custom environment in the Azure portal, as there are some known limitations with configuring the `inference_config` parameter this way.
+> `model_mount_path` is usable only in the BYOC (bring your own container) scenario. In the BYOC scenario, the environment that the online deployment uses must have the [`inference_config` parameter](#the-inference_config-parameter) configured. You can use the Azure ML CLI or Python SDK to specify the `inference_config` parameter when creating the environment. The studio UI currently doesn't support specifying this parameter.

 When you change the value of `model_mount_path`, you also need to update the `MODEL_BASE_PATH` environment variable. Set `MODEL_BASE_PATH` to the same value as `model_mount_path` to avoid a failed deployment due to an error about the base path not being found.
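The note being edited says a BYOC deployment needs `inference_config` set on its environment (via the Azure ML CLI or Python SDK, not the studio UI), and that `MODEL_BASE_PATH` must match `model_mount_path`. A minimal sketch of what the two pieces might look like, assuming the Azure ML v2 YAML schemas; the environment name, image, ports, routes, and mount path below are all hypothetical placeholders:

```yaml
# environment.yml — hypothetical BYOC environment with inference_config set,
# so it is treated as a custom (BYOC) environment.
$schema: https://azuremlschemas.azureedge.net/latest/environment.schema.json
name: my-byoc-env                                    # hypothetical name
image: myregistry.azurecr.io/my-custom-image:latest  # hypothetical image
inference_config:
  liveness_route:
    port: 8080        # illustrative port/paths; use your container's routes
    path: /health
  readiness_route:
    port: 8080
    path: /health
  scoring_route:
    port: 8080
    path: /score
---
# deployment.yml fragment — keep MODEL_BASE_PATH identical to
# model_mount_path to avoid the "base path not found" failure.
model_mount_path: /var/custom-model-path             # hypothetical path
environment_variables:
  MODEL_BASE_PATH: /var/custom-model-path            # must match the line above
```

With files along these lines, the deployment would be created with the Azure ML CLI (for example, `az ml online-deployment create -f deployment.yml`), keeping the two path values in lockstep as the note requires.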