Commit d2f9b9a

Update how-to-use-batch-endpoint.md
1 parent 93c8873 · commit d2f9b9a

articles/machine-learning/how-to-use-batch-endpoint.md

Lines changed: 4 additions & 4 deletions
@@ -493,7 +493,7 @@ Invoking a batch endpoint triggers a batch scoring job. A job `name` will be ret
 ```python
 job = ml_client.batch_endpoints.invoke(
     endpoint_name=endpoint_name,
-    inputs=Input(path="https://pipelinedata.blob.core.windows.net/sampledata/mnist", type=AssetTypes.URI_FOLDER)
+    inputs=Input(path="https://azuremlexampledata.blob.core.windows.net/data/mnist/sample/", type=AssetTypes.URI_FOLDER)
 )
 ```
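
As a companion to the hunk above (not part of the diff itself): once `invoke` returns, the scoring job can typically be tracked by name with the same `azure-ai-ml` client. A minimal sketch, assuming an `MLClient` for the workspace; the subscription, resource group, workspace, and job name below are hypothetical placeholders.

```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Placeholder workspace details (not from the diff); replace with your own.
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<WORKSPACE>",
)

# `job` stands in for the object returned by ml_client.batch_endpoints.invoke(...) above.
job = ml_client.jobs.get(name="<JOB_NAME>")

print(job.name, job.status)      # e.g. "batchjob-...", "Running"
ml_client.jobs.stream(job.name)  # block until the scoring job finishes, streaming its logs
```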

@@ -511,7 +511,7 @@ job = ml_client.batch_endpoints.invoke(
 :::image type="content" source="./media/how-to-use-batch-endpoints-studio/job-setting-batch-scoring.png" alt-text="Screenshot of using the deployment to submit a batch job.":::

 1. Select __Next__.
-1. On __Select data source__, select the data input you want to use. For this example, select __Datastore__ and in the section __Path__ enter the full URL `https://pipelinedata.blob.core.windows.net/sampledata/mnist`. Notice that this only works because the given path has public access enabled. In general, you'll need to register the data source as a __Datastore__. See [Accessing data from batch endpoints jobs](how-to-access-data-batch-endpoints-jobs.md) for details.
+1. On __Select data source__, select the data input you want to use. For this example, select __Datastore__ and in the section __Path__ enter the full URL `https://azuremlexampledata.blob.core.windows.net/data/mnist/sample`. Notice that this only works because the given path has public access enabled. In general, you'll need to register the data source as a __Datastore__. See [Accessing data from batch endpoints jobs](how-to-access-data-batch-endpoints-jobs.md) for details.

 :::image type="content" source="./media/how-to-use-batch-endpoints-studio/select-datastore-job.png" alt-text="Screenshot of selecting datastore as an input option.":::
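
As the changed step above notes, the public URL works only because anonymous access is enabled on that container; private data normally has to be registered as a datastore first. Not part of this diff, but a minimal sketch of how that registration might look with the v2 Python SDK; the datastore name, storage account, container, and key are hypothetical placeholders.

```python
from azure.ai.ml.entities import AzureBlobDatastore, AccountKeyConfiguration

# Hypothetical storage details; substitute your own account, container, and key.
blob_datastore = AzureBlobDatastore(
    name="batch_input_datastore",
    description="Blob container holding the input data for batch scoring.",
    account_name="<STORAGE_ACCOUNT_NAME>",
    container_name="<CONTAINER_NAME>",
    credentials=AccountKeyConfiguration(account_key="<ACCOUNT_KEY>"),
)

# Register (or update) the datastore in the workspace, reusing the ml_client from earlier.
ml_client.create_or_update(blob_datastore)
```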

@@ -553,7 +553,7 @@ Once you identified the data store you want to use, configure the output as foll
 job = ml_client.batch_endpoints.invoke(
     endpoint_name=endpoint_name,
     inputs={
-        "input": Input(path="https://pipelinedata.blob.core.windows.net/sampledata/mnist", type=AssetTypes.URI_FOLDER)
+        "input": Input(path="https://azuremlexampledata.blob.core.windows.net/data/mnist/sample/", type=AssetTypes.URI_FOLDER)
     },
     params_override=[
         { "output_dataset.datastore_id": f"azureml:{batch_ds.id}" },
@@ -618,7 +618,7 @@ Some settings can be overwritten when invoke to make best use of the compute res
 ```python
 job = ml_client.batch_endpoints.invoke(
     endpoint_name=endpoint_name,
-    input=Input(path="https://pipelinedata.blob.core.windows.net/sampledata/mnist"),
+    input=Input(path="https://azuremlexampledata.blob.core.windows.net/data/mnist/sample/"),
     params_override=[
         { "mini_batch_size": "20" },
         { "compute.instance_count": "5" }
