
Commit e719399

more edits to Studio tab

1 parent e46461c commit e719399

File tree

1 file changed (+41 / -64 lines)


articles/machine-learning/how-to-use-batch-model-deployments.md

Lines changed: 41 additions & 64 deletions
@@ -183,7 +183,7 @@ A __batch endpoint__ is an HTTPS endpoint that clients can call to trigger a _ba
 
 # [Studio](#tab/azure-studio)
 
-*You'll create the endpoint in the same step you are creating the deployment later.*
+You create the endpoint later, when you create the deployment.
 
 ## Create a batch deployment
 
@@ -258,26 +258,20 @@ A model deployment is a set of resources required for hosting the model that doe
 
 # [Studio](#tab/azure-studio)
 
-On [Azure Machine Learning studio portal](https://ml.azure.com), follow these steps:
+In the [Azure Machine Learning studio](https://ml.azure.com), follow these steps:
 
 1. Navigate to the __Environments__ tab on the side menu.
-
 1. Select the tab __Custom environments__ > __Create__.
-
 1. Enter the name of the environment, in this case `torch-batch-env`.
-
-1. On __Select environment type__ select __Use existing docker image with conda__.
-
-1. On __Container registry image path__, enter `mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04`.
-
-1. On __Customize__ section copy the content of the file `deployment-torch/environment/conda.yaml` included in the repository into the portal.
-
-1. Select __Next__ and then on __Create__.
-
-1. The environment is ready to be used.
-
+1. For __Select environment source__, select __Use existing docker image with optional conda file__.
+1. For __Container registry image path__, enter `mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04`.
+1. Select __Next__ to go to the "Customize" section.
+1. Copy the content of the file _deployment-torch/environment/conda.yaml_ from the GitHub repo into the portal.
+1. Select __Next__ until you get to the "Review" page.
+1. Select __Create__ and wait until the environment is ready for use.
+
 ---
-
+
 > [!WARNING]
 > Curated environments are not supported in batch deployments. You need to specify your own environment. You can always use the base image of a curated environment as yours to simplify the process.
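The environment steps above paste a conda file into the portal. As a sketch of what such a file can look like, assuming typical PyTorch dependencies (the authoritative contents live in the repo's _deployment-torch/environment/conda.yaml_; the package list below is illustrative, not that file's actual contents):

```yaml
# Illustrative only: a conda file pairing the openmpi/ubuntu base image above
# with PyTorch dependencies. The real file is deployment-torch/environment/conda.yaml.
name: torch-batch-env
channels:
  - conda-forge
dependencies:
  - python=3.9
  - pip
  - pip:
      - torch
      - torchvision
```

The studio combines this conda file with the Docker image you entered in __Container registry image path__ to build the environment.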

@@ -339,48 +333,31 @@ A model deployment is a set of resources required for hosting the model that doe
 
 | `settings.environment_variables` | Dictionary of environment variable name-value pairs to set for each batch scoring job. |
 
 # [Studio](#tab/azure-studio)
-
-On [Azure Machine Learning studio portal](https://ml.azure.com), follow these steps:
-
+
+In the studio, follow these steps:
+
 1. Navigate to the __Endpoints__ tab on the side menu.
-
 1. Select the tab __Batch endpoints__ > __Create__.
-
 1. Give the endpoint a name, in this case `mnist-batch`. You can configure the rest of the fields or leave them blank.
-
-1. Select __Next__.
-
-1. On the model list, select the model `mnist` and select __Next__.
-
-1. On the deployment configuration page, give the deployment a name.
-
-1. On __Output action__, ensure __Append row__ is selected.
-
-1. On __Output file name__, ensure the batch scoring output file is the one you need. Default is `predictions.csv`.
-
-1. On __Mini batch size__, adjust the size of the files that will be included on each mini-batch. This will control the amount of data your scoring script receives per each batch.
-
-1. On __Scoring timeout (seconds)__, ensure you're giving enough time for your deployment to score a given batch of files. If you increase the number of files, you usually have to increase the timeout value too. More expensive models (like those based on deep learning), may require high values in this field.
-
-1. On __Max concurrency per instance__, configure the number of executors you want to have per each compute instance you get in the deployment. A higher number here guarantees a higher degree of parallelization but it also increases the memory pressure on the compute instance. Tune this value altogether with __Mini batch size__.
-
-1. Once done, select __Next__.
-
-1. On environment, go to __Select scoring file and dependencies__ and select __Browse__.
-
-1. Select the scoring script file on `deployment-torch/code/batch_driver.py`.
-
-1. On the section __Choose an environment__, select the environment you created a previous step.
-
-1. Select __Next__.
-
-1. On the section __Compute__, select the compute cluster you created in a previous step.
+1. Select __Next__ to go to the "Model" section.
+1. Select the model __mnist-classifier-torch__.
+1. Select __Next__ to go to the "Deployment" page.
+1. Give the deployment a name.
+1. For __Output action__, ensure __Append row__ is selected.
+1. For __Output file name__, ensure the batch scoring output file is the one you need. The default is `predictions.csv`.
+1. For __Mini batch size__, adjust the number of files to include in each mini-batch. This value controls the amount of data your scoring script receives in each batch.
+1. For __Scoring timeout (seconds)__, ensure you're giving your deployment enough time to score a given batch of files. If you increase the number of files, you usually have to increase the timeout value too. More expensive models (like those based on deep learning) may require high values in this field.
+1. For __Max concurrency per instance__, configure the number of executors you want for each compute instance in the deployment. A higher number here guarantees a higher degree of parallelization, but it also increases the memory pressure on the compute instance. Tune this value together with __Mini batch size__.
+1. When you're done, select __Next__ to go to the "Code + environment" page.
+1. For "Select a scoring script for inferencing", browse to find and select the scoring script file *deployment-torch/code/batch_driver.py*.
+1. In the "Select environment" section, select the environment you created earlier, _torch-batch-env_.
+1. Select __Next__ to go to the "Compute" page.
+1. Select the compute cluster you created in a previous step.
 
 > [!WARNING]
 > Azure Kubernetes clusters are supported in batch deployments, but only when created using the Azure Machine Learning CLI or Python SDK.
 
-1. On __Instance count__, enter the number of compute instances you want for the deployment. In this case, we'll use 2.
-
+1. For __Instance count__, enter the number of compute instances you want for the deployment. In this case, use 2.
 1. Select __Next__.
 
 1. Create the deployment:
@@ -408,7 +385,7 @@ A model deployment is a set of resources required for hosting the model that doe
 
 In the wizard, select __Create__ to start the deployment process.
 
-:::image type="content" source="./media/how-to-use-batch-endpoints-studio/review-batch-wizard.png" alt-text="Screenshot of batch endpoints/deployment review screen.":::
+:::image type="content" source="./media/how-to-use-batch-model-deployments/review-batch-wizard.png" alt-text="Screenshot of batch endpoints/deployment review screen.":::
 
---
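The Studio wizard fields described in this section correspond to the keys of a batch deployment YAML like the one the CLI tab of this article uses. As a hedged sketch (the deployment name, compute name, and the `mini_batch_size` and `retry_settings` values below are illustrative assumptions, not values taken from the commit):

```yaml
# Sketch of a batch deployment definition mirroring the Studio wizard settings.
# Names and values marked in the lead-in are assumptions; adjust for your workspace.
$schema: https://azuremlschemas.azureedge.net/latest/modelBatchDeployment.schema.json
name: mnist-torch-dpl
endpoint_name: mnist-batch
type: model
model: azureml:mnist-classifier-torch@latest
code_configuration:
  code: deployment-torch/code
  scoring_script: batch_driver.py
environment: azureml:torch-batch-env@latest
compute: azureml:batch-cluster
resources:
  instance_count: 2
settings:
  max_concurrency_per_instance: 2
  mini_batch_size: 10
  output_action: append_row
  output_file_name: predictions.csv
  retry_settings:
    max_retries: 3
    timeout: 30
```

Here `settings.mini_batch_size`, `settings.output_action`, and `settings.max_concurrency_per_instance` play the same roles as the __Mini batch size__, __Output action__, and __Max concurrency per instance__ fields in the wizard.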

@@ -436,7 +413,7 @@ A model deployment is a set of resources required for hosting the model that doe
 
 1. In the endpoint page, you'll see all the details of the endpoint along with all the deployments available.
 
-:::image type="content" source="./media/how-to-use-batch-endpoints-studio/batch-endpoint-details.png" alt-text="Screenshot of the check batch endpoints and deployment details.":::
+:::image type="content" source="./media/how-to-use-batch-model-deployments/batch-endpoint-details.png" alt-text="Screenshot of the check batch endpoints and deployment details.":::
 
 ## Run batch endpoints and access results

@@ -470,17 +447,17 @@ You can run and invoke a batch endpoint using Azure CLI, Azure Machine Learning
 
 1. Select __Create job__.
 
-:::image type="content" source="./media/how-to-use-batch-endpoints-studio/create-batch-job.png" alt-text="Screenshot of the create job option to start batch scoring.":::
+:::image type="content" source="./media/how-to-use-batch-model-deployments/create-batch-job.png" alt-text="Screenshot of the create job option to start batch scoring.":::
 
 1. On __Deployment__, select the deployment you want to execute.
 
-:::image type="content" source="./media/how-to-use-batch-endpoints-studio/job-setting-batch-scoring.png" alt-text="Screenshot of using the deployment to submit a batch job.":::
+:::image type="content" source="./media/how-to-use-batch-model-deployments/job-setting-batch-scoring.png" alt-text="Screenshot of using the deployment to submit a batch job.":::
 
 1. Select __Next__.
 
 1. On __Select data source__, select the data input you want to use. For this example, select __Datastore__ and in the section __Path__ enter the full URL `https://azuremlexampledata.blob.core.windows.net/data/mnist/sample`. Notice that this only works because the given path has public access enabled. In general, you'll need to register the data source as a __Datastore__. See [Accessing data from batch endpoints jobs](how-to-access-data-batch-endpoints-jobs.md) for details.
 
-:::image type="content" source="./media/how-to-use-batch-endpoints-studio/select-datastore-job.png" alt-text="Screenshot of selecting datastore as an input option.":::
+:::image type="content" source="./media/how-to-use-batch-model-deployments/select-datastore-job.png" alt-text="Screenshot of selecting datastore as an input option.":::
 
 1. Start the job.
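The same job can be started from the CLI tab of this article. A hedged one-line equivalent of the Studio steps above, assuming the Azure CLI with the `ml` extension is installed and a default workspace is configured (the endpoint name and input URL come from this example):

```shell
# Invoke the batch endpoint with the public MNIST sample data as input.
# Assumes: az CLI + ml extension installed, default resource group/workspace set.
az ml batch-endpoint invoke --name mnist-batch \
    --input https://azuremlexampledata.blob.core.windows.net/data/mnist/sample
```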

@@ -514,7 +491,7 @@ The following code checks the job status and outputs a link to the Azure Machine
 
 1. Select the tab __Jobs__.
 
-:::image type="content" source="media/how-to-use-batch-endpoints-studio/summary-jobs.png" alt-text="Screenshot of summary of jobs submitted to a batch endpoint.":::
+:::image type="content" source="media/how-to-use-batch-model-deployments/summary-jobs.png" alt-text="Screenshot of summary of jobs submitted to a batch endpoint.":::
 
 1. You'll see a list of the jobs created for the selected endpoint.

@@ -574,23 +551,23 @@ Once you've identified the data store you want to use, configure the output as f
 
 1. Select __Create job__.
 
-:::image type="content" source="./media/how-to-use-batch-endpoints-studio/create-batch-job.png" alt-text="Screenshot of the create job option to start batch scoring.":::
+:::image type="content" source="./media/how-to-use-batch-model-deployments/create-batch-job.png" alt-text="Screenshot of the create job option to start batch scoring.":::
 
 1. On __Deployment__, select the deployment you want to execute.
 
 1. Select __Next__.
 
 1. Check the option __Override deployment settings__.
 
-:::image type="content" source="./media/how-to-use-batch-endpoints-studio/overwrite-setting.png" alt-text="Screenshot of the overwrite setting when starting a batch job.":::
+:::image type="content" source="./media/how-to-use-batch-model-deployments/overwrite-setting.png" alt-text="Screenshot of the overwrite setting when starting a batch job.":::
 
 1. You can now configure __Output file name__ and some extra properties of the deployment execution. Only this execution is affected.
 
 1. On __Select data source__, select the data input you want to use.
 
 1. On __Configure output location__, check the option __Enable output configuration__.
 
-:::image type="content" source="./media/how-to-use-batch-endpoints-studio/configure-output-location.png" alt-text="Screenshot of optionally configuring output location.":::
+:::image type="content" source="./media/how-to-use-batch-model-deployments/configure-output-location.png" alt-text="Screenshot of optionally configuring output location.":::
 
 1. Configure the __Blob datastore__ where the outputs should be placed.
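From the CLI, the equivalent output redirection can be sketched as follows. The datastore name `workspaceblobstore` and the output path are illustrative assumptions, not values from this commit; substitute your own:

```shell
# Send job outputs to a specific datastore path and rename the output file.
# Datastore name and path below are placeholders; replace them with yours.
az ml batch-endpoint invoke --name mnist-batch \
    --input https://azuremlexampledata.blob.core.windows.net/data/mnist/sample \
    --output-path azureml://datastores/workspaceblobstore/paths/mnist-predictions \
    --set output_file_name=custom-predictions.csv
```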

@@ -628,15 +605,15 @@ When you invoke a batch endpoint, some settings can be overwritten to make best
 
 1. Select __Create job__.
 
-:::image type="content" source="./media/how-to-use-batch-endpoints-studio/create-batch-job.png" alt-text="Screenshot of the create job option to start batch scoring.":::
+:::image type="content" source="./media/how-to-use-batch-model-deployments/create-batch-job.png" alt-text="Screenshot of the create job option to start batch scoring.":::
 
 1. On __Deployment__, select the deployment you want to execute.
 
 1. Select __Next__.
 
 1. Check the option __Override deployment settings__.
 
-:::image type="content" source="./media/how-to-use-batch-endpoints-studio/overwrite-setting.png" alt-text="Screenshot of the overwrite setting when starting a batch job.":::
+:::image type="content" source="./media/how-to-use-batch-model-deployments/overwrite-setting.png" alt-text="Screenshot of the overwrite setting when starting a batch job.":::
 
 1. Configure the job parameters. Only the current job execution will be affected by this configuration.

@@ -718,7 +695,7 @@ In this example, you add a second deployment that uses a __model built with Kera
 
 1. Select __Add deployment__.
 
-:::image type="content" source="./media/how-to-use-batch-endpoints-studio/add-deployment-option.png" alt-text="Screenshot of add new deployment option.":::
+:::image type="content" source="./media/how-to-use-batch-model-deployments/add-deployment-option.png" alt-text="Screenshot of add new deployment option.":::
 
 1. On the model list, select the model `mnist` and select __Next__.

@@ -825,7 +802,7 @@ Although you can invoke a specific deployment inside an endpoint, you'll typical
 
 1. Select __Update default deployment__.
 
-:::image type="content" source="./media/how-to-use-batch-endpoints-studio/update-default-deployment.png" alt-text="Screenshot of updating default deployment.":::
+:::image type="content" source="./media/how-to-use-batch-model-deployments/update-default-deployment.png" alt-text="Screenshot of updating default deployment.":::
 
 1. On __Select default deployment__, select the name of the deployment you want to be the default one.
