
Commit 3551a5b

Update how-to-use-batch-endpoint.md
1 parent 4e6d7d8 commit 3551a5b

File tree: 1 file changed, +63 −59 lines


articles/machine-learning/how-to-use-batch-endpoint.md

Lines changed: 63 additions & 59 deletions
@@ -503,7 +503,7 @@ A deployment is a set of resources required for hosting the model that does the
 :::image type="content" source="./media/how-to-use-batch-endpoints-studio/batch-endpoint-details.png" alt-text="Screenshot of the check batch endpoints and deployment details.":::
 
-## Run endpoint and configure inputs and outputs
+## Run batch endpoints and access results
 
 Invoking a batch endpoint triggers a batch scoring job. The invoke response returns a job `name` that you can use to track the scoring progress. The batch scoring job runs for some time: it splits the input data into multiple `mini_batch`es and processes them in parallel on the compute cluster. The job outputs are stored in cloud storage, either in the workspace's default blob storage or in the storage you specified.
@@ -546,6 +546,68 @@ job = ml_client.batch_endpoints.invoke(
 
 ---
 
+### Monitor batch job execution progress
+
+Batch scoring jobs usually take some time to process the entire set of inputs.
+
+# [Azure CLI](#tab/azure-cli)
+
+You can use `az ml job show` to view the job. Run the following code to check the status of the job from the previous endpoint invocation. To learn more about job commands, run `az ml job -h`.
+
+:::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deploy-and-run.sh" ID="check_job_status" :::
+
+# [Python](#tab/python)
+
+The following code checks the job status and outputs a link to Azure Machine Learning studio for further details.
+
+```python
+ml_client.jobs.get(job.name)
+```
+
+# [Studio](#tab/azure-studio)
+
+1. Navigate to the __Endpoints__ tab in the side menu.
+
+1. Select the __Batch endpoints__ tab.
+
+1. Select the batch endpoint you want to monitor.
+
+1. Select the __Jobs__ tab.
+
+:::image type="content" source="media/how-to-use-batch-endpoints-studio/summary-jobs.png" alt-text="Screenshot of summary of jobs submitted to a batch endpoint.":::
+
+1. You'll see a list of the jobs created for the selected endpoint.
+
+1. Select the last job that is running.
+
+1. You'll be redirected to the job monitoring page.
+
+---
+
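Checking the status once returns a snapshot. If you want to block until the job reaches a terminal state, a simple polling loop is enough. The sketch below is illustrative and stdlib-only: the `get_status` callable and the set of state names are assumptions, and in practice `get_status` would wrap something like `ml_client.jobs.get(job.name).status`.

```python
import time

# Illustrative set of terminal states; the actual state names your
# service reports may differ.
TERMINAL_STATES = {"Completed", "Failed", "Canceled"}

def wait_for_job(get_status, interval=5.0, timeout=3600.0):
    """Poll get_status() until it returns a terminal state or timeout expires.

    get_status: zero-argument callable returning the current status string.
    Returns the last observed status.
    """
    deadline = time.monotonic() + timeout
    status = get_status()
    while status not in TERMINAL_STATES and time.monotonic() < deadline:
        time.sleep(interval)
        status = get_status()
    return status
```

A short interval keeps the loop responsive; the timeout guards against a job that never reaches a terminal state.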
+### Check batch scoring results
+
+Follow these steps to view the scoring results in Azure Storage Explorer when the job is completed:
+
+1. Run the following code to open the batch scoring job in Azure Machine Learning studio. The job's studio link is also included in the response of `invoke`, as the value of `interactionEndpoints.Studio.endpoint`.
+
+:::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deploy-and-run.sh" ID="show_job_in_studio" :::
+
+1. In the graph of the job, select the `batchscoring` step.
+
+1. Select the __Outputs + logs__ tab and then select __Show data outputs__.
+
+1. From __Data outputs__, select the icon to open __Storage Explorer__.
+
+:::image type="content" source="media/how-to-use-batch-endpoint/view-data-outputs.png" alt-text="Studio screenshot showing view data outputs location." lightbox="media/how-to-use-batch-endpoint/view-data-outputs.png":::
+
+The scoring results in Storage Explorer are similar to the following sample page:
+
+:::image type="content" source="media/how-to-use-batch-endpoint/scoring-view.png" alt-text="Screenshot of the scoring output." lightbox="media/how-to-use-batch-endpoint/scoring-view.png":::
+
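Once the results land in storage, you typically read them back for downstream use. The exact output layout depends on your scoring script; as an assumed example, if it writes a two-column `predictions.csv` (input file name, predicted label), a minimal parser could look like this:

```python
import csv
import io

def load_predictions(csv_text):
    """Parse scoring output where each row is: input file name, predicted label.

    The two-column layout is an assumption for illustration; adapt it to
    whatever your scoring script actually writes.
    """
    reader = csv.reader(io.StringIO(csv_text))
    return {row[0]: row[1] for row in reader if row}
```

For example, `load_predictions("img_001.png,7\n")` maps `img_001.png` to the label string `7`.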
+## Configure job's inputs, outputs, and execution
+
+Batch endpoints require only one data input: the data you want to score. However, you can also indicate the outputs and some other parameters of the execution.
+
 ### Configure job's inputs
 
 Batch endpoints support reading files or folders that are located in different locations. To learn more about the supported types and how to specify them, read [Accessing data from batch endpoints jobs](how-to-access-data-batch-endpoints-jobs.md).
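Conceptually, an input is just a type (file or folder) plus a location. The stdlib-only sketch below illustrates that idea; the type names and dict shape are hypothetical stand-ins, not the SDK's actual input objects.

```python
# Hypothetical type names for illustration only.
SUPPORTED_INPUT_TYPES = {"uri_file", "uri_folder"}

def make_job_input(input_type, path):
    """Build a minimal input spec: a file or folder at some location."""
    if input_type not in SUPPORTED_INPUT_TYPES:
        raise ValueError(f"unsupported input type: {input_type!r}")
    return {"type": input_type, "path": path}
```

The validation step mirrors the constraint in the text: only file and folder inputs are supported, wherever they live.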
@@ -680,64 +742,6 @@ job = ml_client.batch_endpoints.invoke(
 
 ---
 
-### Monitor batch scoring job execution progress
-
-Batch scoring jobs usually take some time to process the entire set of inputs.
-
-# [Azure CLI](#tab/azure-cli)
-
-You can use CLI `job show` to view the job. Run the following code to check job status from the previous endpoint invoke. To learn more about job commands, run `az ml job -h`.
-
-:::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deploy-and-run.sh" ID="check_job_status" :::
-
-# [Python](#tab/python)
-
-The following code checks the job status and outputs a link to the Azure Machine Learning studio for further details.
-
-```python
-ml_client.jobs.get(job.name)
-```
-
-# [Studio](#tab/azure-studio)
-
-1. Navigate to the __Endpoints__ tab on the side menu.
-
-1. Select the tab __Batch endpoints__.
-
-1. Select the batch endpoint you want to monitor.
-
-1. Select the tab __Jobs__.
-
-:::image type="content" source="media/how-to-use-batch-endpoints-studio/summary-jobs.png" alt-text="Screenshot of summary of jobs submitted to a batch endpoint.":::
-
-1. You'll see a list of the jobs created for the selected endpoint.
-
-1. Select the last job that is running.
-
-1. You'll be redirected to the job monitoring page.
-
----
-
-### Check batch scoring results
-
-Follow the following steps to view the scoring results in Azure Storage Explorer when the job is completed:
-
-1. Run the following code to open batch scoring job in Azure Machine Learning studio. The job studio link is also included in the response of `invoke`, as the value of `interactionEndpoints.Studio.endpoint`.
-
-:::code language="azurecli" source="~/azureml-examples-main/cli/endpoints/batch/deploy-models/mnist-classifier/deploy-and-run.sh" ID="show_job_in_studio" :::
-
-1. In the graph of the job, select the `batchscoring` step.
-
-1. Select the __Outputs + logs__ tab and then select **Show data outputs**.
-
-1. From __Data outputs__, select the icon to open __Storage Explorer__.
-
-:::image type="content" source="media/how-to-use-batch-endpoint/view-data-outputs.png" alt-text="Studio screenshot showing view data outputs location." lightbox="media/how-to-use-batch-endpoint/view-data-outputs.png":::
-
-The scoring results in Storage Explorer are similar to the following sample page:
-
-:::image type="content" source="media/how-to-use-batch-endpoint/scoring-view.png" alt-text="Screenshot of the scoring output." lightbox="media/how-to-use-batch-endpoint/scoring-view.png":::
-
 ## Adding deployments to an endpoint
 
 Once you have a batch endpoint with a deployment, you can continue to refine your model and add new deployments. Batch endpoints keep serving the default deployment while you develop and deploy new models under the same endpoint, and deployments can't affect one another.
