articles/machine-learning/how-to-use-batch-model-deployments.md
41 additions, 64 deletions
@@ -183,7 +183,7 @@ A __batch endpoint__ is an HTTPS endpoint that clients can call to trigger a _ba
# [Studio](#tab/azure-studio)
-*You'll create the endpoint in the same step you are creating the deployment later.*
+You create the endpoint later, at the point when you create the deployment.
## Create a batch deployment
@@ -258,26 +258,20 @@ A model deployment is a set of resources required for hosting the model that doe
# [Studio](#tab/azure-studio)
-On [Azure Machine Learning studio portal](https://ml.azure.com), follow these steps:
+In the [Azure Machine Learning studio](https://ml.azure.com), follow these steps:
1. Navigate to the __Environments__ tab on the side menu.
1. Select the tab __Custom environments__ > __Create__.
1. Enter the name of the environment, in this case `torch-batch-env`.
-1. On __Select environment type__ select __Use existing docker image with conda__.
-1. On __Container registry image path__, enter `mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04`.
-1. On the __Customize__ section, copy the content of the file `deployment-torch/environment/conda.yaml` included in the repository into the portal.
-1. Select __Next__ and then __Create__.
-1. The environment is ready to be used.
+1. For __Select environment source__, select __Use existing docker image with optional conda file__.
+1. For __Container registry image path__, enter `mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04`.
+1. Select __Next__ to go to the "Customize" section.
+1. Copy the content of the file _deployment-torch/environment/conda.yaml_ from the GitHub repo into the portal.
+1. Select __Next__ until you get to the "Review" page.
+1. Select __Create__ and wait until the environment is ready for use.
---
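For readers following the CLI tab instead of the studio steps, the same environment can be described declaratively. A minimal sketch of an environment YAML, assuming you run the command from the _deployment-torch/environment_ folder so the conda file sits next to it:

```yaml
# environment.yml — sketch of the torch-batch-env environment (CLI-tab equivalent).
# The name and base image come from the steps above; the conda_file path is an
# assumption about where you keep the file.
$schema: https://azuremlschemas.azureedge.net/latest/environment.schema.json
name: torch-batch-env
image: mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04
conda_file: conda.yaml
```

You would register it with `az ml environment create --file environment.yml`, assuming the Azure ML CLI extension is installed and a workspace is configured.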
> [!WARNING]
> Curated environments are not supported in batch deployments. You need to specify your own environment. You can always use the base image of a curated environment as your own to simplify the process.
@@ -339,48 +333,31 @@ A model deployment is a set of resources required for hosting the model that doe
|`settings.environment_variables`| Dictionary of environment variable name-value pairs to set for each batch scoring job. |
# [Studio](#tab/azure-studio)
-On [Azure Machine Learning studio portal](https://ml.azure.com), follow these steps:
+In the studio, follow these steps:
1. Navigate to the __Endpoints__ tab on the side menu.
1. Select the tab __Batch endpoints__ > __Create__.
1. Give the endpoint a name, in this case `mnist-batch`. You can configure the rest of the fields or leave them blank.
-1. Select __Next__.
-1. On the model list, select the model `mnist` and select __Next__.
-1. On the deployment configuration page, give the deployment a name.
-1. On __Output action__, ensure __Append row__ is selected.
-1. On __Output file name__, ensure the batch scoring output file is the one you need. Default is `predictions.csv`.
-1. On __Mini batch size__, adjust the size of the files that will be included on each mini-batch. This will control the amount of data your scoring script receives per each batch.
-1. On __Scoring timeout (seconds)__, ensure you're giving enough time for your deployment to score a given batch of files. If you increase the number of files, you usually have to increase the timeout value too. More expensive models (like those based on deep learning) may require high values in this field.
-1. On __Max concurrency per instance__, configure the number of executors you want to have per each compute instance you get in the deployment. A higher number here guarantees a higher degree of parallelization but it also increases the memory pressure on the compute instance. Tune this value altogether with __Mini batch size__.
-1. Once done, select __Next__.
-1. On environment, go to __Select scoring file and dependencies__ and select __Browse__.
-1. Select the scoring script file on `deployment-torch/code/batch_driver.py`.
-1. On the section __Choose an environment__, select the environment you created in a previous step.
-1. Select __Next__.
-1. On the section __Compute__, select the compute cluster you created in a previous step.
+1. Select __Next__ to go to the "Model" section.
+1. Select the model __mnist-classifier-torch__.
+1. Select __Next__ to go to the "Deployment" page.
+1. Give the deployment a name.
+1. For __Output action__, ensure __Append row__ is selected.
+1. For __Output file name__, ensure the batch scoring output file is the one you need. The default is `predictions.csv`.
+1. For __Mini batch size__, adjust the number of files included in each mini-batch. This setting controls the amount of data your scoring script receives per batch.
+1. For __Scoring timeout (seconds)__, ensure you're giving enough time for your deployment to score a given batch of files. If you increase the number of files, you usually have to increase the timeout value too. More expensive models (like those based on deep learning) may require high values in this field.
+1. For __Max concurrency per instance__, configure the number of executors you want for each compute instance in the deployment. A higher number here guarantees a higher degree of parallelization, but it also increases the memory pressure on the compute instance. Tune this value together with __Mini batch size__.
+1. Once done, select __Next__ to go to the "Code + environment" page.
+1. For "Select a scoring script for inferencing", browse to find and select the scoring script file *deployment-torch/code/batch_driver.py*.
+1. In the "Select environment" section, select the environment you created previously, _torch-batch-env_.
+1. Select __Next__ to go to the "Compute" page.
+1. Select the compute cluster you created in a previous step.
> [!WARNING]
> Azure Kubernetes clusters are supported in batch deployments, but only when created using the Azure Machine Learning CLI or Python SDK.
-1. On __Instance count__, enter the number of compute instances you want for the deployment. In this case, we'll use 2.
+1. For __Instance count__, enter the number of compute instances you want for the deployment. In this case, use 2.
1. Select __Next__.
1. Create the deployment:
@@ -408,7 +385,7 @@ A model deployment is a set of resources required for hosting the model that doe
In the wizard, select __Create__ to start the deployment process.
-:::image type="content" source="./media/how-to-use-batch-endpoints-studio/review-batch-wizard.png" alt-text="Screenshot of batch endpoints/deployment review screen.":::
+:::image type="content" source="./media/how-to-use-batch-model-deployments/review-batch-wizard.png" alt-text="Screenshot of batch endpoints/deployment review screen.":::
---
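The wizard settings above map onto the batch deployment YAML used in the CLI tab. A minimal sketch, in which the deployment name, the compute cluster name `batch-cluster`, the registered model name, and the numeric values are assumptions to adjust to your workspace:

```yaml
# deployment.yml — sketch of the deployment configured in the wizard above.
# Names and values are placeholders; mini_batch_size, timeout, and
# max_concurrency_per_instance should be tuned together, as discussed above.
$schema: https://azuremlschemas.azureedge.net/latest/batchDeployment.schema.json
name: mnist-torch-dpl
endpoint_name: mnist-batch
model: azureml:mnist-classifier-torch@latest
code_configuration:
  code: deployment-torch/code
  scoring_script: batch_driver.py
environment: azureml:torch-batch-env@latest
compute: azureml:batch-cluster
resources:
  instance_count: 2
max_concurrency_per_instance: 2
mini_batch_size: 10
output_action: append_row
output_file_name: predictions.csv
retry_settings:
  max_retries: 3
  timeout: 300
```

Such a file would be submitted with `az ml batch-deployment create --file deployment.yml`.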
@@ -436,7 +413,7 @@ A model deployment is a set of resources required for hosting the model that doe
1. On the endpoint page, you can see all the details of the endpoint along with all the available deployments.
-:::image type="content" source="./media/how-to-use-batch-endpoints-studio/batch-endpoint-details.png" alt-text="Screenshot of the check batch endpoints and deployment details.":::
+:::image type="content" source="./media/how-to-use-batch-model-deployments/batch-endpoint-details.png" alt-text="Screenshot of the batch endpoint and deployment details.":::
## Run batch endpoints and access results
@@ -470,17 +447,17 @@ You can run and invoke a batch endpoint using Azure CLI, Azure Machine Learning
1. Select __Create job__.
-:::image type="content" source="./media/how-to-use-batch-endpoints-studio/create-batch-job.png" alt-text="Screenshot of the create job option to start batch scoring.":::
+:::image type="content" source="./media/how-to-use-batch-model-deployments/create-batch-job.png" alt-text="Screenshot of the create job option to start batch scoring.":::
1. On __Deployment__, select the deployment you want to execute.
-:::image type="content" source="./media/how-to-use-batch-endpoints-studio/job-setting-batch-scoring.png" alt-text="Screenshot of using the deployment to submit a batch job.":::
+:::image type="content" source="./media/how-to-use-batch-model-deployments/job-setting-batch-scoring.png" alt-text="Screenshot of using the deployment to submit a batch job.":::
1. Select __Next__.
1. On __Select data source__, select the data input you want to use. For this example, select __Datastore__ and in the __Path__ section enter the full URL `https://azuremlexampledata.blob.core.windows.net/data/mnist/sample`. Notice that this works only because the given path has public access enabled. In general, you need to register the data source as a __Datastore__. See [Accessing data from batch endpoints jobs](how-to-access-data-batch-endpoints-jobs.md) for details.
-:::image type="content" source="./media/how-to-use-batch-endpoints-studio/select-datastore-job.png" alt-text="Screenshot of selecting datastore as an input option.":::
+:::image type="content" source="./media/how-to-use-batch-model-deployments/select-datastore-job.png" alt-text="Screenshot of selecting datastore as an input option.":::
1. Start the job.
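If the input path were not publicly accessible, you would register the storage as a datastore first, as noted above. A sketch of what that registration could look like in YAML, where the datastore name is hypothetical and the account and container names are read off the sample URL (credentials are omitted for brevity):

```yaml
# datastore.yml — hypothetical registration of the blob container holding the
# sample data. Real scenarios usually also need credentials or identity access.
$schema: https://azuremlschemas.azureedge.net/latest/azureBlob.schema.json
name: mnist_sample_store
type: azure_blob
account_name: azuremlexampledata
container_name: data
```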
@@ -514,7 +491,7 @@ The following code checks the job status and outputs a link to the Azure Machine
1. Select the tab __Jobs__.
-:::image type="content" source="media/how-to-use-batch-endpoints-studio/summary-jobs.png" alt-text="Screenshot of summary of jobs submitted to a batch endpoint.":::
+:::image type="content" source="media/how-to-use-batch-model-deployments/summary-jobs.png" alt-text="Screenshot of summary of jobs submitted to a batch endpoint.":::
1. You'll see a list of the jobs created for the selected endpoint.
@@ -574,23 +551,23 @@ Once you've identified the data store you want to use, configure the output as f
1. Select __Create job__.
-:::image type="content" source="./media/how-to-use-batch-endpoints-studio/create-batch-job.png" alt-text="Screenshot of the create job option to start batch scoring.":::
+:::image type="content" source="./media/how-to-use-batch-model-deployments/create-batch-job.png" alt-text="Screenshot of the create job option to start batch scoring.":::
1. On __Deployment__, select the deployment you want to execute.
1. Select __Next__.
1. Check the option __Override deployment settings__.
-:::image type="content" source="./media/how-to-use-batch-endpoints-studio/overwrite-setting.png" alt-text="Screenshot of the overwrite setting when starting a batch job.":::
+:::image type="content" source="./media/how-to-use-batch-model-deployments/overwrite-setting.png" alt-text="Screenshot of the overwrite setting when starting a batch job.":::
1. You can now configure __Output file name__ and some extra properties of the deployment execution. Only this execution will be affected.
1. On __Select data source__, select the data input you want to use.
1. On __Configure output location__, check the option __Enable output configuration__.
-:::image type="content" source="./media/how-to-use-batch-endpoints-studio/configure-output-location.png" alt-text="Screenshot of optionally configuring output location.":::
+:::image type="content" source="./media/how-to-use-batch-model-deployments/configure-output-location.png" alt-text="Screenshot of optionally configuring output location.":::
1. Configure the __Blob datastore__ where the outputs should be placed.
@@ -628,15 +605,15 @@ When you invoke a batch endpoint, some settings can be overwritten to make best
1. Select __Create job__.
-:::image type="content" source="./media/how-to-use-batch-endpoints-studio/create-batch-job.png" alt-text="Screenshot of the create job option to start batch scoring.":::
+:::image type="content" source="./media/how-to-use-batch-model-deployments/create-batch-job.png" alt-text="Screenshot of the create job option to start batch scoring.":::
1. On __Deployment__, select the deployment you want to execute.
1. Select __Next__.
1. Check the option __Override deployment settings__.
-:::image type="content" source="./media/how-to-use-batch-endpoints-studio/overwrite-setting.png" alt-text="Screenshot of the overwrite setting when starting a batch job.":::
+:::image type="content" source="./media/how-to-use-batch-model-deployments/overwrite-setting.png" alt-text="Screenshot of the overwrite setting when starting a batch job.":::
1. Configure the job parameters. Only the current job execution will be affected by this configuration.
@@ -718,7 +695,7 @@ In this example, you add a second deployment that uses a __model built with Kera
1. Select __Add deployment__.
-:::image type="content" source="./media/how-to-use-batch-endpoints-studio/add-deployment-option.png" alt-text="Screenshot of add new deployment option.":::
+:::image type="content" source="./media/how-to-use-batch-model-deployments/add-deployment-option.png" alt-text="Screenshot of add new deployment option.":::
1. On the model list, select the model `mnist` and select __Next__.
@@ -825,7 +802,7 @@ Although you can invoke a specific deployment inside an endpoint, you'll typical
1. Select __Update default deployment__.
-:::image type="content" source="./media/how-to-use-batch-endpoints-studio/update-default-deployment.png" alt-text="Screenshot of updating default deployment.":::
+:::image type="content" source="./media/how-to-use-batch-model-deployments/update-default-deployment.png" alt-text="Screenshot of updating default deployment.":::
1. On __Select default deployment__, select the name of the deployment you want to be the default.
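The same change can be sketched declaratively for the CLI tab by updating the endpoint definition; the deployment name `mnist-keras-dpl` below is a hypothetical placeholder for the deployment you want to promote:

```yaml
# endpoint.yml — sketch of switching the default deployment (CLI-tab equivalent).
$schema: https://azuremlschemas.azureedge.net/latest/batchEndpoint.schema.json
name: mnist-batch
defaults:
  deployment_name: mnist-keras-dpl
```

You would apply it with `az ml batch-endpoint update`, assuming the endpoint already exists.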