Commit b8b4e7a

Merge pull request #221113 from shohei1029/patch-7
embed score.py instead of just linking it
2 parents a0eade4 + c0f93e7 commit b8b4e7a

File tree

1 file changed: +11 −7 lines changed

articles/machine-learning/how-to-deploy-online-endpoints.md

Lines changed: 11 additions & 7 deletions
@@ -62,7 +62,7 @@ The main example in this doc uses managed online endpoints for deployment. To us
 # [ARM template](#tab/arm)

 > [!NOTE]
-> While the Azure CLI and CLI extension for machine learning are used in these steps, they are not the main focus. They are used more as utilities, passing templates to Azure and checking the status of template deployments.
+> While the Azure CLI and CLI extension for machine learning are used in these steps, they're not the main focus. they're used more as utilities, passing templates to Azure and checking the status of template deployments.

 [!INCLUDE [basic prereqs cli](../../includes/machine-learning-cli-prereqs.md)]
@@ -354,7 +354,7 @@ For more information on creating an environment, see

     :::code language="azurecli" source="~/azureml-examples-main/deploy-arm-templates-az-cli.sh" id="create_model":::

-1. Part of the environment is a conda file that specifies the model dependencies needed to host the model. The following example demonstrates how to read the contents of the conda file into an environment variables:
+1. Part of the environment is a conda file that specifies the model dependencies needed to host the model. The following example demonstrates how to read the contents of the conda file into environment variables:

     :::code language="azurecli" source="~/azureml-examples-main/deploy-arm-templates-az-cli.sh" id="read_condafile":::
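The step above loads the conda file's contents into a variable via the Azure CLI script referenced by `read_condafile`. As a rough illustration of the same idea in Python (the file name and contents below are hypothetical, not taken from the example repository):

```python
from pathlib import Path

# Hypothetical conda file, written here only so the sketch is
# self-contained; the real example reads its conda file from the
# azureml-examples repository.
conda_file = Path("conda.yaml")
conda_file.write_text(
    "name: model-env\n"
    "dependencies:\n"
    "  - python=3.10\n"
    "  - scikit-learn\n"
)

# Read the entire file into one variable, analogous to the shell step
# that captures the conda file's contents in an environment variable.
conda_contents = conda_file.read_text()
print("scikit-learn" in conda_contents)  # → True
```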

@@ -375,35 +375,39 @@ For supported general-purpose and GPU instance types, see [Managed online endpoi

 ### Use more than one model

-Currently, you can specify only one model per deployment in the YAML. If you've more than one model, when you register the model, copy all the models as files or subdirectories into a folder that you use for registration. In your scoring script, use the environment variable `AZUREML_MODEL_DIR` to get the path to the model root folder. The underlying directory structure is retained. For an example of deploying multiple models to one deployment, see [Deploy multiple models to one deployment](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/custom-container/minimal/multimodel/README.md).
+Currently, you can specify only one model per deployment in the YAML. If you've more than one model, when you register the model, copy all the models as files or subdirectories into a folder that you use for registration. In your scoring script, use the environment variable `AZUREML_MODEL_DIR` to get the path to the model root folder. The underlying directory structure is retained. For an example of deploying multiple models to one deployment, see [Deploy multiple models to one deployment](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/custom-container/minimal/multimodel).
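The `AZUREML_MODEL_DIR` pattern described in that paragraph can be sketched as follows. This is a minimal illustration: in a real deployment the platform sets the environment variable for you, and the subfolder and file names below are hypothetical.

```python
import os

# The platform sets AZUREML_MODEL_DIR in a real deployment; we provide a
# default here only so the sketch is self-contained.
os.environ.setdefault("AZUREML_MODEL_DIR", "/var/azureml-app/azureml-models")

model_root = os.environ["AZUREML_MODEL_DIR"]

# Because the registered folder's directory structure is retained, each
# model sits in its own subdirectory (names below are hypothetical).
model_a_path = os.path.join(model_root, "model_a", "model_a.pkl")
model_b_path = os.path.join(model_root, "model_b", "model_b.pkl")
```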

 ## Understand the scoring script

 > [!TIP]
 > The format of the scoring script for online endpoints is the same format that's used in the preceding version of the CLI and in the Python SDK.

 # [Azure CLI](#tab/azure-cli)

-As noted earlier, the script specified in `code_configuration.scoring_script` must have an `init()` function and a `run()` function. This example uses the [score.py file](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/model-1/onlinescoring/score.py).
+As noted earlier, the script specified in `code_configuration.scoring_script` must have an `init()` function and a `run()` function.

 # [Python](#tab/python)

-As noted earlier, the script specified in `CodeConfiguration(scoring_script="score.py")` must have an `init()` function and a `run()` function. This example uses the [score.py file](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/online/model-1/onlinescoring/score.py).
+As noted earlier, the script specified in `CodeConfiguration(scoring_script="score.py")` must have an `init()` function and a `run()` function.

 # [ARM template](#tab/arm)

 As noted earlier, the script specified in `code_configuration.scoring_script` must have an `init()` function and a `run()` function. This example uses the [score.py file](https://github.com/Azure/azureml-examples/blob/main/cli/endpoints/online/model-1/onlinescoring/score.py).

-When using a template for deployment, you must first upload the scoring file(s) to an Azure Blob store and then register it:
+When using a template for deployment, you must first upload the scoring file(s) to an Azure Blob store, and then register it:

 1. The following example uses the Azure CLI command `az storage blob upload-batch` to upload the scoring file(s):

     :::code language="azurecli" source="~/azureml-examples-main/deploy-arm-templates-az-cli.sh" id="upload_code":::

-1. The following example demonstrates hwo to register the code using a template:
+1. The following example demonstrates how to register the code using a template:

     :::code language="azurecli" source="~/azureml-examples-main/deploy-arm-templates-az-cli.sh" id="create_code":::

 ---

+This example uses the [score.py file](https://github.com/Azure/azureml-examples/blob/main/sdk/python/endpoints/online/model-1/onlinescoring/score.py):
+
+__score.py__
+
+:::code language="python" source="~/azureml-examples-main/cli/endpoints/online/model-1/onlinescoring/score.py" :::

 The `init()` function is called when the container is initialized or started. Initialization typically occurs shortly after the deployment is created or updated. Write logic here for global initialization operations like caching the model in memory (as we do in this example). The `run()` function is called for every invocation of the endpoint and should do the actual scoring and prediction. In the example, we extract the data from the JSON input, call the scikit-learn model's `predict()` method, and then return the result.
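The `init()`/`run()` contract described above can be sketched as a minimal scoring script. This is not the embedded score.py itself: the stand-in model, file name, and input shape below are assumptions for illustration only.

```python
import json
import os

model = None  # cached globally by init()

def init():
    """Called once when the container starts; cache the model in memory."""
    global model
    # In a real deployment AZUREML_MODEL_DIR points to the registered
    # model's root folder; the file name below is an assumption.
    model_path = os.path.join(os.getenv("AZUREML_MODEL_DIR", "."), "model.pkl")
    # A real script would load a scikit-learn model here, for example:
    #   import joblib; model = joblib.load(model_path)
    model = lambda rows: [sum(row) for row in rows]  # stand-in "model"

def run(raw_data):
    """Called for every invocation; extract the JSON input, score, return."""
    data = json.loads(raw_data)["data"]
    # With a scikit-learn model this call would be model.predict(data).
    return model(data)

init()
print(run(json.dumps({"data": [[1, 2], [3, 4]]})))  # → [3, 7]
```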
## Deploy and debug locally by using local endpoints
