
Commit a045c87

Merge pull request #215416 from santiagxf/santiagxf/aml-batch-defaults
Update how-to-mlflow-batch.md
2 parents: ec1376a + 0dcae9f

4 files changed: +72 additions, −7 deletions

articles/machine-learning/batch-inference/how-to-deploy-model-custom-output.md

Lines changed: 8 additions & 2 deletions
````diff
@@ -179,7 +179,8 @@ Follow the next steps to create a deployment using the previous scoring script:
     Then, create the deployment with the following command:
 
     ```azurecli
-    az ml batch-endpoint create -f endpoint.yml
+    DEPLOYMENT_NAME="classifier-xgboost-parquet"
+    az ml batch-deployment create -f endpoint.yml
     ```
 
     # [Azure ML SDK for Python](#tab/sdk)
@@ -205,6 +206,11 @@ Follow the next steps to create a deployment using the previous scoring script:
         retry_settings=BatchRetrySettings(max_retries=3, timeout=300),
         logging_level="info",
     )
+    ```
+
+    Then, create the deployment with the following command:
+
+    ```python
     ml_client.batch_deployments.begin_create_or_update(deployment)
     ```
     ---
@@ -259,7 +265,7 @@ For testing our endpoint, we are going to use a sample of unlabeled data located
     # [Azure ML CLI](#tab/cli)
 
     ```azurecli
-    JOB_NAME = $(az ml batch-endpoint invoke --name $ENDPOINT_NAME --input azureml:heart-dataset-unlabeled@latest | jq -r '.name')
+    JOB_NAME = $(az ml batch-endpoint invoke --name $ENDPOINT_NAME --deployment-name $DEPLOYMENT_NAME --input azureml:heart-dataset-unlabeled@latest | jq -r '.name')
     ```
 
     > [!NOTE]
````
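The updated CLI invocation pipes the JSON that `az ml batch-endpoint invoke` prints through `jq -r '.name'` to capture the created job's name. As a minimal sketch of that extraction in plain Python (the sample payload below is hypothetical and shows only the field used; the real response contains many more fields):

```python
import json

# Hypothetical sample of what `az ml batch-endpoint invoke` prints:
# a JSON document describing the job that was created.
sample_response = '{"name": "batchjob-1234", "status": "Running"}'

# Equivalent of `| jq -r '.name'`: parse the JSON and pull out the job name.
job_name = json.loads(sample_response)["name"]
print(job_name)  # batchjob-1234
```

The captured name is what lets later commands (such as polling job status) refer back to the run.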

articles/machine-learning/batch-inference/how-to-image-processing-batch.md

Lines changed: 24 additions & 2 deletions
````diff
@@ -204,7 +204,8 @@ One the scoring script is created, it's time to create a batch deployment for it
     Then, create the deployment with the following command:
 
     ```azurecli
-    az ml batch-endpoint create -f endpoint.yml
+    DEPLOYMENT_NAME="imagenet-classifier-resnetv2"
+    az ml batch-deployment create -f deployment.yml
     ```
 
     # [Azure ML SDK for Python](#tab/sdk)
@@ -232,8 +233,29 @@ One the scoring script is created, it's time to create a batch deployment for it
         logging_level="info",
     )
     ```
+
+    Then, create the deployment with the following command:
+
+    ```python
+    ml_client.batch_deployments.begin_create_or_update(deployment)
+    ```
+
+1. Although you can invoke a specific deployment inside of an endpoint, you will usually want to invoke the endpoint itself and let the endpoint decide which deployment to use. Such deployment is named the "default" deployment. This gives you the possibility of changing the default deployment and hence changing the model serving the deployment without changing the contract with the user invoking the endpoint. Use the following instruction to update the default deployment:
+
+    # [Azure ML CLI](#tab/cli)
+
+    ```bash
+    az ml batch-endpoint update --name $ENDPOINT_NAME --set defaults.deployment_name=$DEPLOYMENT_NAME
+    ```
+
+    # [Azure ML SDK for Python](#tab/sdk)
+
+    ```python
+    endpoint.defaults.deployment_name = deployment.name
+    ml_client.batch_endpoints.begin_create_or_update(endpoint)
+    ```
 
-1. At this point, our batch endpoint is ready to be used.
+1. At this point, our batch endpoint is ready to be used.
 
 ## Testing out the deployment
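The default-deployment contract this commit documents can be sketched in plain Python (a toy model, not the Azure ML SDK: `BatchEndpoint`, `add_deployment`, and `invoke` here are hypothetical names): callers invoke the endpoint, the endpoint routes to whichever deployment is currently the default, so swapping the default changes the serving model without changing the callers.

```python
# Toy sketch of endpoint -> deployment routing; not the real Azure ML API.
class BatchEndpoint:
    def __init__(self):
        self.deployments = {}          # deployment name -> model identifier
        self.default_deployment = None

    def add_deployment(self, name, model):
        self.deployments[name] = model

    def set_default(self, name):
        self.default_deployment = name

    def invoke(self, deployment_name=None):
        # An explicit deployment name wins; otherwise route to the default.
        target = deployment_name or self.default_deployment
        return self.deployments[target]

endpoint = BatchEndpoint()
endpoint.add_deployment("imagenet-classifier-resnetv2", "resnetv2-model")
endpoint.set_default("imagenet-classifier-resnetv2")
print(endpoint.invoke())  # resnetv2-model
```

Registering a second deployment and calling `set_default` with its name would change what `invoke()` returns while every caller keeps using the same endpoint.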

articles/machine-learning/batch-inference/how-to-mlflow-batch.md

Lines changed: 17 additions & 1 deletion
````diff
@@ -196,6 +196,7 @@ Follow these steps to deploy an MLflow model to a batch endpoint for running bat
     Then, create the deployment with the following command:
 
     ```bash
+    DEPLOYMENT_NAME="classifier-xgboost-mlflow"
     az ml batch-endpoint create -f endpoint.yml
     ```
 
@@ -225,7 +226,22 @@ Follow these steps to deploy an MLflow model to a batch endpoint for running bat
     > [!NOTE]
     > `scoring_script` and `environment` auto generation only supports `pyfunc` model flavor. To use a different flavor, see [Using MLflow models with a scoring script](#using-mlflow-models-with-a-scoring-script).
 
-6. At this point, our batch endpoint is ready to be used.
+6. Although you can invoke a specific deployment inside of an endpoint, you will usually want to invoke the endpoint itself and let the endpoint decide which deployment to use. Such deployment is named the "default" deployment. This gives you the possibility of changing the default deployment and hence changing the model serving the deployment without changing the contract with the user invoking the endpoint. Use the following instruction to update the default deployment:
+
+    # [Azure ML CLI](#tab/cli)
+
+    ```bash
+    az ml batch-endpoint update --name $ENDPOINT_NAME --set defaults.deployment_name=$DEPLOYMENT_NAME
+    ```
+
+    # [Azure ML SDK for Python](#tab/sdk)
+
+    ```python
+    endpoint.defaults.deployment_name = deployment.name
+    ml_client.batch_endpoints.begin_create_or_update(endpoint)
+    ```
+
+7. At this point, our batch endpoint is ready to be used.
 
 ## Testing out the deployment
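The CLI form `--set defaults.deployment_name=$DEPLOYMENT_NAME` applies a dotted-path assignment to the endpoint resource. A rough sketch of that mechanic in plain Python (`apply_set` is a hypothetical helper, not the real `az` implementation):

```python
# Hypothetical re-implementation of `--set path.to.key=value` on a dict.
def apply_set(resource: dict, expression: str) -> dict:
    path, value = expression.split("=", 1)
    keys = path.split(".")
    node = resource
    for key in keys[:-1]:
        # Walk (and create, if needed) intermediate objects along the path.
        node = node.setdefault(key, {})
    node[keys[-1]] = value
    return resource

endpoint = {"name": "heart-classifier-batch", "defaults": {}}
apply_set(endpoint, "defaults.deployment_name=classifier-xgboost-mlflow")
print(endpoint["defaults"]["deployment_name"])  # classifier-xgboost-mlflow
```

This is why the SDK equivalent is simply `endpoint.defaults.deployment_name = deployment.name` followed by an update call: both write the same nested property on the endpoint resource.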
articles/machine-learning/batch-inference/how-to-nlp-processing-batch.md

Lines changed: 23 additions & 2 deletions
````diff
@@ -181,7 +181,8 @@ One the scoring script is created, it's time to create a batch deployment for it
     Then, create the deployment with the following command:
 
     ```bash
-    az ml batch-endpoint create -f endpoint.yml
+    DEPLOYMENT_NAME="text-summarization-hfbart"
+    az ml batch-deployment create -f endpoint.yml
     ```
 
     # [Azure ML SDK for Python](#tab/sdk)
@@ -208,15 +209,35 @@ One the scoring script is created, it's time to create a batch deployment for it
         retry_settings=BatchRetrySettings(max_retries=3, timeout=3000),
         logging_level="info",
     )
+    ```
+
+    Then, create the deployment with the following command:
 
+    ```python
     ml_client.batch_deployments.begin_create_or_update(deployment)
     ```
     ---
 
     > [!IMPORTANT]
     > You will notice in this deployment a high value in `timeout` in the parameter `retry_settings`. The reason for it is due to the nature of the model we are running. This is a very expensive model and inference on a single row may take up to 60 seconds. The `timeout` parameters controls how much time the Batch Deployment should wait for the scoring script to finish processing each mini-batch. Since our model runs predictions row by row, processing a long file may take time. Also notice that the number of files per batch is set to 1 (`mini_batch_size=1`). This is again related to the nature of the work we are doing. Processing one file at a time per batch is expensive enough to justify it. You will notice this being a pattern in NLP processing.
 
-3. At this point, our batch endpoint is ready to be used.
+3. Although you can invoke a specific deployment inside of an endpoint, you will usually want to invoke the endpoint itself and let the endpoint decide which deployment to use. Such deployment is named the "default" deployment. This gives you the possibility of changing the default deployment and hence changing the model serving the deployment without changing the contract with the user invoking the endpoint. Use the following instruction to update the default deployment:
+
+    # [Azure ML CLI](#tab/cli)
+
+    ```bash
+    az ml batch-endpoint update --name $ENDPOINT_NAME --set defaults.deployment_name=$DEPLOYMENT_NAME
+    ```
+
+    # [Azure ML SDK for Python](#tab/sdk)
+
+    ```python
+    endpoint.defaults.deployment_name = deployment.name
+    ml_client.batch_endpoints.begin_create_or_update(endpoint)
+    ```
+
+4. At this point, our batch endpoint is ready to be used.
 
 ## Considerations when deploying models that process text
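The IMPORTANT note in the NLP diff justifies `timeout=3000` with `mini_batch_size=1`: inference can take up to ~60 seconds per row, so one long file must fit inside the per-mini-batch timeout. A quick sanity check of that budget (the `rows_per_file` figure below is an assumed, illustrative number; only the 60 s/row worst case and the two deployment settings come from the note):

```python
# Back-of-envelope check of the timeout budget described in the note.
seconds_per_row = 60    # worst-case per-row inference time stated in the note
rows_per_file = 40      # ASSUMED size of one input file, for illustration
mini_batch_size = 1     # one file per mini-batch, as configured
timeout = 3000          # seconds the deployment waits per mini-batch

worst_case = seconds_per_row * rows_per_file * mini_batch_size
print(worst_case, worst_case <= timeout)  # 2400 True
```

With a larger `mini_batch_size` or longer files the worst case would exceed the timeout and the mini-batch would be retried and eventually fail, which is why expensive row-by-row models pair a small mini-batch size with a generous timeout.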