Commit dd9aac4

Update how-to-access-data-batch-endpoints-jobs.md
1 parent c4b9a7c commit dd9aac4

File tree

1 file changed (+65, -1 lines changed)

articles/machine-learning/how-to-access-data-batch-endpoints-jobs.md

Lines changed: 65 additions & 1 deletion
@@ -80,6 +80,8 @@ To successfully invoke a batch endpoint and create jobs, ensure you have the fol

> [!TIP]
> If you're using a credential-less datastore or an external Azure Storage account as data input, ensure that you [configure compute clusters for data access](how-to-authenticate-batch-endpoint.md#configure-compute-clusters-for-data-access). **The managed identity of the compute cluster** is used **for mounting** the storage account. The identity of the job (invoker) is still used to read the underlying data, which allows you to achieve granular access control.

## Understanding inputs and outputs

Batch endpoints provide a durable API that consumers can use to create batch jobs. The same interface can be used to specify the inputs and the outputs your deployment expects. Use inputs to pass any information your endpoint needs to perform the job.
@@ -135,7 +137,7 @@ Literal inputs are only supported in pipeline component deployments. See [Create

Data outputs refer to the location where the results of a batch job should be placed. Outputs are identified by name, and Azure Machine Learning automatically assigns a unique path to each named output. However, you can specify another path if required.

> [!IMPORTANT]
> Batch endpoints only support writing outputs to Azure Blob Storage datastores. If you need to write to a storage account with hierarchical namespaces enabled (also known as Azure Data Lake Storage Gen2, or ADLS Gen2), you can register that storage service as an Azure Blob Storage datastore, since the two services are fully compatible. In this way, you can write outputs from batch endpoints to ADLS Gen2.

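Because custom output locations are addressed with the standard `azureml://datastores/<name>/paths/<path>` datastore URI scheme, composing one is plain string formatting. The helper below is a hypothetical illustration (the URI scheme is the documented datastore path format, but the function itself is not part of any SDK):

```python
def datastore_output_path(datastore_name: str, path: str) -> str:
    """Compose an azureml:// datastore URI for a custom batch job output.

    Hypothetical helper: the scheme azureml://datastores/<name>/paths/<path>
    is the standard Azure ML datastore path format; the function is only an
    illustration, not part of the Azure ML SDK.
    """
    return f"azureml://datastores/{datastore_name}/paths/{path.lstrip('/')}"


# For example, to direct a named output into a folder of a registered
# Blob (or ADLS Gen2) datastore:
custom_path = datastore_output_path(
    "workspaceblobstore", "batch-jobs/heart-disease/predictions"
)
print(custom_path)
# azureml://datastores/workspaceblobstore/paths/batch-jobs/heart-disease/predictions
```

A URI like this can then be passed wherever the invocation accepts a custom output path.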
## Create jobs with data inputs
@@ -821,6 +823,68 @@ azureml-model-deployment: DEPLOYMENT_NAME
```
---

## Configure job properties

You can configure some of the properties of the created job at invocation time.

### Configure experiment name

# [Azure CLI](#tab/cli)

Use the argument `--experiment-name` to specify the name of the experiment:

```azurecli
az ml batch-endpoint invoke --name $ENDPOINT_NAME --experiment-name "my-batch-job-experiment" --input $INPUT_DATA
```

# [Python](#tab/sdk)

Use the parameter `experiment_name` to specify the name of the experiment:

```python
job = ml_client.batch_endpoints.invoke(
    endpoint_name=endpoint.name,
    experiment_name="my-batch-job-experiment",
    inputs={
        "heart_dataset": input,
    }
)
```

# [REST](#tab/rest)

Use the key `experimentName` in the `properties` section to indicate the experiment name:

__Body__

```json
{
    "properties": {
        "InputData": {
            "heart_dataset": {
                "JobInputType": "UriFolder",
                "Uri": "https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data"
            }
        },
        "properties": {
            "experimentName": "my-batch-job-experiment"
        }
    }
}
```

__Request__

```http
POST jobs HTTP/1.1
Host: <ENDPOINT_URI>
Authorization: Bearer <TOKEN>
Content-Type: application/json
```
---

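The REST body above can also be assembled programmatically. The following is a minimal sketch using only the Python standard library (the token is a placeholder and nothing is sent; it only shows how the job-level `properties` object wraps both the input data and a nested `properties` bag holding custom keys such as `experimentName`):

```python
import json

# Build the invoke body shown above: the outer "properties" holds the job
# definition, while the nested "properties" dictionary carries custom
# key-value pairs such as the experiment name.
body = {
    "properties": {
        "InputData": {
            "heart_dataset": {
                "JobInputType": "UriFolder",
                "Uri": "https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data",
            }
        },
        "properties": {
            "experimentName": "my-batch-job-experiment",
        },
    }
}

payload = json.dumps(body)

# Headers for the POST request (token acquisition not shown):
headers = {
    "Authorization": "Bearer <TOKEN>",
    "Content-Type": "application/json",
}
```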
## Next steps

* [Troubleshooting batch endpoints](how-to-troubleshoot-batch-endpoints.md).