articles/machine-learning/v1/how-to-deploy-advanced-entry-script.md
This article explains how to write entry scripts for specialized use cases in Azure Machine Learning.
## Prerequisites
* A trained machine learning model that you intend to deploy with Azure Machine Learning. For more information about model deployment, see [Deploy machine learning models to Azure](how-to-deploy-and-where.md).
> The return value from the script can be any Python object that's serializable to JSON. For example, if your model returns a Pandas dataframe that contains multiple columns, you can use an output decorator that's similar to the following code:
> Azure Machine Learning routes only POST and GET requests to the containers that run the scoring service. Errors can result if browsers use OPTIONS requests to issue preflight requests.
```python
service = Model.deploy(ws, "myservice", [first_model, second_model], inference_config, deployment_config)
```
In the Docker image that hosts the service, the `AZUREML_MODEL_DIR` environment variable contains the folder where the models are located. In this folder, each model is located in a folder path of `<model-name>/<version>`. In this path, `<model-name>` is the name of the registered model, and `<version>` is the version of the model. The files that make up the registered model are stored in these folders.
In this example, the path of the first model is `$AZUREML_MODEL_DIR/my_first_model/1/my_first_model.pkl`. The path of the second model is `$AZUREML_MODEL_DIR/my_second_model/2/my_second_model.pkl`.
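As a sketch, an entry script can build these paths with standard library calls. The model and file names repeat the example above; the fallback default for the environment variable is only so the snippet runs outside a deployment container:

```python
import os

# AZUREML_MODEL_DIR is set by Azure Machine Learning inside the service container.
# The "." fallback exists only so this sketch runs outside a deployment.
model_dir = os.getenv("AZUREML_MODEL_DIR", ".")

# Each registered model is placed under <model-name>/<version> in that folder.
first_model_path = os.path.join(model_dir, "my_first_model", "1", "my_first_model.pkl")
second_model_path = os.path.join(model_dir, "my_second_model", "2", "my_second_model.pkl")
```

Building the paths with `os.path.join` rather than string concatenation keeps the script portable across the local and container file systems.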
When you register a model, you provide a model name that's used for managing the model in the registry. You use this name with the [`Model.get_model_path`](/python/api/azureml-core/azureml.core.model.model#azureml-core-model-model-get-model-path) method to retrieve the path of the model file or files on the local file system. If you register a folder or a collection of files, this API returns the path of the folder that contains those files.
When you register a model, you give it a name. The name corresponds to where the model is placed, either locally or during service deployment.