Commit 32e54b0

Update how-to-use-batch-endpoint.md
1 parent 68bdfd9 commit 32e54b0

File tree

1 file changed: +55, -50 lines changed


articles/machine-learning/how-to-use-batch-endpoint.md

Lines changed: 55 additions & 50 deletions
@@ -37,11 +37,11 @@ In this article, you'll learn how to use batch endpoints to do batch scoring.

In this example, we're going to deploy a model to solve the classic MNIST ("Modified National Institute of Standards and Technology") digit recognition problem to perform batch inferencing over large amounts of data (image files). In the first section of this tutorial, we're going to create a batch deployment with a model created using Torch. Such deployment will become our default one in the endpoint. In the second half, [we're going to see how we can create a second deployment](#adding-deployments-to-an-endpoint) using a model created with TensorFlow (Keras), test it out, and then switch the endpoint to start using the new deployment as default.

-The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, first clone the repo. Then, change directories to either `cli/endpoints/batch` if you're using the Azure CLI or `sdk/endpoints/batch` if you're using the Python SDK.
+The information in this article is based on code samples contained in the [azureml-examples](https://github.com/azure/azureml-examples) repository. To run the commands locally without having to copy/paste YAML and other files, first clone the repo. Then, change directories to either `cli/endpoints/batch/deploy-models/mnist-classifier` if you're using the Azure CLI or `sdk/python/endpoints/batch/deploy-models/mnist-classifier` if you're using the Python SDK.

```azurecli
git clone https://github.com/Azure/azureml-examples --depth 1
-cd azureml-examples/cli/endpoints/batch
+cd azureml-examples/cli/endpoints/batch/deploy-models/mnist-classifier
```

### Follow along in Jupyter Notebooks
@@ -120,41 +120,6 @@ ml_client.begin_create_or_update(compute_cluster)
> You are not charged for compute at this point as the cluster will remain at 0 nodes until a batch endpoint is invoked and a batch scoring job is submitted. Learn more about [manage and optimize cost for AmlCompute](./how-to-manage-optimize-cost.md#use-azure-machine-learning-compute-cluster-amlcompute).

-### Registering the model
-
-Batch Deployments can only deploy models registered in the workspace. You can skip this step if the model you're trying to deploy is already registered. In this case, we're registering a Torch model for the popular digit recognition problem (MNIST).
-
-> [!TIP]
-> Models are associated with the deployment rather than with the endpoint. This means that a single endpoint can serve different models or different model versions under the same endpoint as long as they are deployed in different deployments.
-
-
-# [Azure CLI](#tab/azure-cli)
-
-```azurecli
-MODEL_NAME='mnist'
-az ml model create --name $MODEL_NAME --type "custom_model" --path "./mnist/model/"
-```
-
-# [Python](#tab/python)
-
-```python
-model_name = 'mnist'
-model = ml_client.models.create_or_update(
-    Model(name=model_name, path='./mnist/model/', type=AssetTypes.CUSTOM_MODEL)
-)
-```
-
-# [Studio](#tab/azure-studio)
-
-1. Navigate to the __Models__ tab on the side menu.
-1. Select __Register__ > __From local files__.
-1. In the wizard, leave the option *Model type* as __Unspecified type__.
-1. Select __Browse__ > __Browse folder__ > Select the folder `./mnist/model/` > __Next__.
-1. Configure the name of the model: `mnist`. You can leave the rest of the fields as they are.
-1. Select __Register__.
-
----

## Create a batch endpoint

A batch endpoint is an HTTPS endpoint that clients can call to trigger a batch scoring job. A batch scoring job is a job that scores multiple inputs (for more, see [What are batch endpoints?](./concept-endpoints.md#what-are-batch-endpoints)). A batch deployment is a set of compute resources hosting the model that does the actual batch scoring. One batch endpoint can have multiple batch deployments.
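To make that relationship concrete, here is a toy, purely illustrative Python sketch. This is NOT the Azure Machine Learning SDK; the class and deployment names (`BatchEndpointSketch`, `mnist-torch-dpl`, and so on) are invented for illustration. It only models the rule above: one endpoint fronts several deployments, and exactly one of them serves invocations by default.

```python
# Toy sketch (NOT the Azure ML SDK): one endpoint, many deployments, one default.
from dataclasses import dataclass, field
from typing import Dict, Optional


@dataclass
class DeploymentSketch:
    name: str
    model: str  # name of a model registered in the workspace


@dataclass
class BatchEndpointSketch:
    name: str
    deployments: Dict[str, DeploymentSketch] = field(default_factory=dict)
    default_deployment: Optional[str] = None

    def add_deployment(self, dpl: DeploymentSketch) -> None:
        self.deployments[dpl.name] = dpl
        if self.default_deployment is None:
            self.default_deployment = dpl.name  # first deployment becomes the default

    def invoke(self, deployment_name: Optional[str] = None) -> str:
        # Clients call the single endpoint; the default deployment serves the
        # job unless a specific deployment is named in the invocation.
        target = deployment_name or self.default_deployment
        return self.deployments[target].model


endpoint = BatchEndpointSketch(name="mnist-batch")
endpoint.add_deployment(DeploymentSketch("mnist-torch-dpl", "mnist-classifier-torch"))
endpoint.add_deployment(DeploymentSketch("mnist-keras-dpl", "mnist-classifier-keras"))

assert endpoint.invoke() == "mnist-classifier-torch"  # default deployment answers
endpoint.default_deployment = "mnist-keras-dpl"       # switch the default
assert endpoint.invoke() == "mnist-classifier-keras"
```

Because models hang off deployments rather than the endpoint, switching the default (as the second half of this article does with the Keras deployment) changes which model answers without changing the endpoint clients call.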
@@ -252,17 +217,53 @@ A deployment is a set of resources required for hosting the model that does the
* The environment in which the model runs.
* The pre-created compute and resource settings.

-1. Batch deployments require a scoring script that indicates how a given model should be executed and how input data must be processed. Batch Endpoints support scripts created in Python. In this case, we're deploying a model that reads image files representing digits and outputs the corresponding digit. The scoring script is as follows:
+1. Let's start by registering the model we want to deploy. Batch Deployments can only deploy models registered in the workspace. You can skip this step if the model you're trying to deploy is already registered. In this case, we're registering a Torch model for the popular digit recognition problem (MNIST).

-    > [!NOTE]
-    > For MLflow models, Azure Machine Learning automatically generates the scoring script, so you're not required to provide one. If your model is an MLflow model, you can skip this step. For more information about how batch endpoints work with MLflow models, see the dedicated tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
+    > [!TIP]
+    > Models are associated with the deployment rather than with the endpoint. This means that a single endpoint can serve different models or different model versions under the same endpoint as long as they are deployed in different deployments.

-    > [!WARNING]
-    > If you're deploying an Automated ML model under a batch endpoint, notice that the scoring script that Automated ML provides only works for online endpoints and is not designed for batch execution. Please see [Author scoring scripts for batch deployments](how-to-batch-scoring-script.md) to learn how to create one depending on what your model does.
+
+    # [Azure CLI](#tab/azure-cli)

-    __deployment-torch/code/batch_driver.py__
+    ```azurecli
+    MODEL_NAME='mnist-classifier-torch'
+    az ml model create --name $MODEL_NAME --type "custom_model" --path "deployment-torch/model"
+    ```
+
+    # [Python](#tab/python)
+
+    ```python
+    model_name = 'mnist-classifier-torch'
+    model = ml_client.models.create_or_update(
+        Model(name=model_name, path='deployment-torch/model/', type=AssetTypes.CUSTOM_MODEL)
+    )
+    ```
+
+    # [Studio](#tab/azure-studio)
+
+    1. Navigate to the __Models__ tab on the side menu.
+
+    1. Select __Register__ > __From local files__.
+
+    1. In the wizard, leave the option *Model type* as __Unspecified type__.

-    :::code language="python" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deployment-torch/code/batch_driver.py" :::
+    1. Select __Browse__ > __Browse folder__ > Select the folder `deployment-torch/model` > __Next__.
+
+    1. Configure the name of the model: `mnist-classifier-torch`. You can leave the rest of the fields as they are.
+
+    1. Select __Register__.
+
+1. Now it's time to create a scoring script. Batch deployments require a scoring script that indicates how a given model should be executed and how input data must be processed. Batch Endpoints support scripts created in Python. In this case, we're deploying a model that reads image files representing digits and outputs the corresponding digit. The scoring script is as follows:
+
+    > [!NOTE]
+    > For MLflow models, Azure Machine Learning automatically generates the scoring script, so you're not required to provide one. If your model is an MLflow model, you can skip this step. For more information about how batch endpoints work with MLflow models, see the dedicated tutorial [Using MLflow models in batch deployments](how-to-mlflow-batch.md).
+
+    > [!WARNING]
+    > If you're deploying an Automated ML model under a batch endpoint, notice that the scoring script that Automated ML provides only works for online endpoints and is not designed for batch execution. Please see [Author scoring scripts for batch deployments](how-to-batch-scoring-script.md) to learn how to create one depending on what your model does.
+
+    __deployment-torch/code/batch_driver.py__
+
+    :::code language="python" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deployment-torch/code/batch_driver.py" :::

1. Create an environment where your batch deployment will run. Such environment needs to include the packages `azureml-core` and `azureml-dataset-runtime[fuse]`, which are required by batch endpoints, plus any dependency your code requires for running. In this case, the dependencies have been captured in a `conda.yml`:
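The scoring script itself lives in the repository (`deployment-torch/code/batch_driver.py`), but the contract it follows can be sketched in a few lines: batch deployments call `init()` once per worker to load the model, then call `run(mini_batch)` repeatedly with a list of file paths, expecting one result per processed file. The stub "model" below is hypothetical (it classifies nothing and always answers 7); the real driver loads the registered Torch model and runs actual inference.

```python
# Hedged sketch of the init()/run() contract a batch scoring script follows.
# The stub model is an assumption for illustration only.
import os

model = None


def init():
    # A real scoring script loads the model here, once per worker, typically
    # from the path Azure ML exposes via the AZUREML_MODEL_DIR variable.
    global model
    model = lambda image_path: 7  # stub: pretend every image is a "7"


def run(mini_batch):
    # mini_batch is a list of file paths. Return one entry per processed file
    # so every output row can be traced back to its input.
    results = []
    for image_path in mini_batch:
        digit = model(image_path)
        results.append(f"{os.path.basename(image_path)}: {digit}")
    return results


init()
print(run(["/data/mnist/sample_0.png", "/data/mnist/sample_1.png"]))
# -> ['sample_0.png: 7', 'sample_1.png: 7']
```

The key design point is the one the NOTE and WARNING above turn on: this init/run pair is what you must author yourself for custom models, what Azure ML generates for you for MLflow models, and what Automated ML's online-endpoint script does not provide.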
@@ -287,6 +288,7 @@ A deployment is a set of resources required for hosting the model that does the

```python
env = Environment(
+    name="batch-torch-py38",
    conda_file="deployment-torch/environment/conda.yml",
    image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest",
)
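The `conda.yml` this environment points to isn't shown in the diff. As a rough sketch only, a file meeting the requirements stated earlier (the two `azureml` packages batch endpoints need, plus the model's own dependencies) might look like the following; the Python version and the Torch packages are assumptions, and the actual file in the repository is authoritative:

```yaml
# Hypothetical sketch of deployment-torch/environment/conda.yml
name: mnist-batch-env
channels:
  - conda-forge
dependencies:
  - python=3.8
  - pip
  - pip:
      - torch           # assumed: the deployed model was created with Torch
      - torchvision     # assumed: for image decoding/transforms
      - azureml-core
      - azureml-dataset-runtime[fuse]
```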
@@ -321,7 +323,7 @@ A deployment is a set of resources required for hosting the model that does the

# [Azure CLI](#tab/azure-cli)

-__mnist-torch-deployment.yml__
+__deployment-torch/deployment.yml__

:::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deployment-torch/deployment.yml":::
@@ -748,16 +750,19 @@ In this example, you'll learn how to add a second deployment __that solves the s

# [Azure CLI](#tab/azure-cli)

-*No extra step is required for the Azure Machine Learning CLI. The environment definition will be included in the deployment file as an anonymous environment.*
+The environment definition will be included in the deployment definition itself as an anonymous environment. You'll see it in the following lines of the deployment:
+
+:::code language="yaml" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deployment-keras/deployment.yml" range="11-14":::

# [Python](#tab/python)

Let's get a reference to the environment:

```python
env = Environment(
-    conda_file="deployment-kera/environment/conda.yml",
-    image="mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04:latest",
+    name="batch-tensorflow-py38",
+    conda_file="deployment-keras/environment/conda.yml",
+    image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest",
)
```

@@ -771,9 +776,9 @@ In this example, you'll learn how to add a second deployment __that solves the s

1. On __Select environment type__ select __Use existing docker image with conda__.

-1. On __Container registry image path__, enter `mcr.microsoft.com/azureml/openmpi3.1.2-ubuntu18.04`.
+1. On __Container registry image path__, enter `mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04`.

-1. On __Customize__ section copy the content of the file `./mnist-keras/environment/conda.yml` included in the repository into the portal.
+1. On the __Customize__ section, copy the content of the file `deployment-keras/environment/conda.yml` included in the repository into the portal.

1. Select __Next__ and then __Create__.

0 commit comments
