@@ -51,7 +51,7 @@ Use the Azure CLI to sign in with **interactive** or **device code** authentication
az login
```

- # [Python](#tab/sdk)
+ # [Python SDK](#tab/sdk)

Use the Azure Machine Learning SDK for Python to sign in:

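The hunk ends before the sign-in snippet itself. For context, a minimal sketch of what SDK sign-in typically looks like; the subscription, resource group, and workspace values are placeholders, not taken from this page:

```python
# Minimal sketch (not the page's exact snippet): authenticate and build an MLClient.
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    credential=DefaultAzureCredential(),     # interactive, device code, or managed identity
    subscription_id="<subscription-id>",     # placeholder
    resource_group_name="<resource-group>",  # placeholder
    workspace_name="<workspace-name>",       # placeholder
)
```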
@@ -105,13 +105,12 @@ az ml batch-endpoint invoke --name $ENDPOINT_NAME \
    --input https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data
```

- # [Python](#tab/sdk)
+ # [Python SDK](#tab/sdk)

Use the `MLClient.batch_endpoints.invoke()` method to invoke a batch endpoint. In the following code, `endpoint` is an endpoint object.

```python
- from azure.ai.ml import MLClient, Input
- from azure.identity import DefaultAzureCredential
+ from azure.ai.ml import Input

job = ml_client.batch_endpoints.invoke(
    endpoint_name = endpoint.name,
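The hunk truncates the call. A minimal sketch of how a complete invocation could look, assuming `ml_client` and `endpoint` already exist and reusing the public heart-disease folder shown in the CLI tab; the input name `heart_dataset` is a placeholder:

```python
# Sketch only: assumes ml_client and endpoint are already defined in scope.
from azure.ai.ml import Input
from azure.ai.ml.constants import AssetTypes

heart_input = Input(
    type=AssetTypes.URI_FOLDER,
    path="https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data",
)
job = ml_client.batch_endpoints.invoke(
    endpoint_name=endpoint.name,
    inputs={"heart_dataset": heart_input},  # "heart_dataset" is a placeholder name
)
```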
@@ -168,14 +167,11 @@ az ml batch-endpoint invoke --name $ENDPOINT_NAME \
    --input https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data
```

- # [Python](#tab/sdk)
+ # [Python SDK](#tab/sdk)

Use the parameter `deployment_name` to specify the name of the deployment. In the following code, `deployment` is a deployment object.

```python
- from azure.ai.ml import MLClient, Input
- from azure.identity import DefaultAzureCredential
-
job = ml_client.batch_endpoints.invoke(
    endpoint_name = endpoint.name,
    deployment_name = deployment.name,
@@ -238,14 +234,11 @@ az ml batch-endpoint invoke --name $ENDPOINT_NAME \
    --input https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data
```

- # [Python](#tab/sdk)
+ # [Python SDK](#tab/sdk)

Use the parameter `experiment_name` to specify the name of the experiment:

```python
- from azure.ai.ml import MLClient, Input
- from azure.identity import DefaultAzureCredential
-
job = ml_client.batch_endpoints.invoke(
    endpoint_name = endpoint.name,
    experiment_name = "my-batch-job-experiment",
@@ -379,13 +372,11 @@ Azure Machine Learning data assets (formerly known as datasets) are supported as
az ml data create -f heart-data.yml
```

- # [Python](#tab/sdk)
+ # [Python SDK](#tab/sdk)

1. Create a data asset definition:

```python
- from azure.ai.ml import MLClient, Input
- from azure.identity import DefaultAzureCredential
from azure.ai.ml.constants import AssetTypes
from azure.ai.ml.entities import Data

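The hunk stops before the data asset definition itself. A minimal sketch of how the definition and registration could continue, with the asset name `heart-data` taken from the CLI tab above and the path left as a placeholder:

```python
# Sketch only: the path value is a placeholder, not taken from this page.
from azure.ai.ml.constants import AssetTypes
from azure.ai.ml.entities import Data

heart_data = Data(
    name="heart-data",             # matches the name used in the CLI tab
    type=AssetTypes.URI_FOLDER,
    path="<path-to-data-folder>",  # placeholder: local folder or cloud URI
    description="Unlabeled heart disease data for batch scoring.",
)
ml_client.data.create_or_update(heart_data)
```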
@@ -426,7 +417,7 @@ Azure Machine Learning data assets (formerly known as datasets) are supported as
DATA_ASSET_ID=$(az ml data show -n heart-data --label latest | jq -r .id)
```

- # [Python](#tab/sdk)
+ # [Python SDK](#tab/sdk)

```python
input = Input(path=heart_data_asset.id)
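For `heart_data_asset` to be available, the asset has to be fetched first. A minimal sketch mirroring the CLI tab's `az ml data show -n heart-data --label latest`:

```python
# Sketch only: retrieve the latest registered version of the data asset.
heart_data_asset = ml_client.data.get("heart-data", label="latest")
input = Input(path=heart_data_asset.id)
```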
@@ -478,7 +469,7 @@ Azure Machine Learning data assets (formerly known as datasets) are supported as
az ml batch-endpoint invoke --name $ENDPOINT_NAME --file inputs.yml
```

- # [Python](#tab/sdk)
+ # [Python SDK](#tab/sdk)

Call the `invoke` method, and use the `inputs` parameter to specify the required inputs:

@@ -556,15 +547,11 @@ This example uses the default data store, but you can use a different data store
INPUT_PATH="azureml://datastores/workspaceblobstore/paths/$DATA_PATH"
```

- # [Python](#tab/sdk)
+ # [Python SDK](#tab/sdk)

Place the file path in the `input` variable:

```python
- from azure.ai.ml import MLClient, Input
- from azure.identity import DefaultAzureCredential
- from azure.ai.ml.constants import AssetTypes
-
data_path = "heart-disease-uci-unlabeled"
input = Input(type=AssetTypes.URI_FOLDER, path=f"azureml://datastores/workspaceblobstore/paths/{data_path}")
```
@@ -629,7 +616,7 @@ This example uses the default data store, but you can use a different data store
az ml batch-endpoint invoke --name $ENDPOINT_NAME --file inputs.yml
```

- # [Python](#tab/sdk)
+ # [Python SDK](#tab/sdk)

Call the `invoke` method by using the `inputs` parameter to specify the required inputs:

@@ -692,15 +679,11 @@ For more information about extra required configurations for reading data from s
INPUT_DATA="https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data/heart.csv"
```

- # [Python](#tab/sdk)
+ # [Python SDK](#tab/sdk)

Set the `input` variable:

```python
- from azure.ai.ml import MLClient, Input
- from azure.identity import DefaultAzureCredential
- from azure.ai.ml.constants import AssetTypes
-
input = Input(
    type=AssetTypes.URI_FOLDER,
    path="https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data"
@@ -784,7 +767,7 @@ For more information about extra required configurations for reading data from s

If your data is in a file, use the `uri_file` type in the inputs.yml file for the data input.

- # [Python](#tab/sdk)
+ # [Python SDK](#tab/sdk)

Call the `invoke` method by using the `inputs` parameter to specify the required inputs:

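The prose mentions the `uri_file` type for single-file inputs, but the Python snippet is cut off by the hunk. A minimal sketch of the file-based analog, reusing the heart.csv URL from the CLI tab above; the input name is a placeholder:

```python
# Sketch only: a single-file input via uri_file; "heart_dataset" is a placeholder name.
from azure.ai.ml import Input
from azure.ai.ml.constants import AssetTypes

file_input = Input(
    type=AssetTypes.URI_FILE,
    path="https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data/heart.csv",
)
job = ml_client.batch_endpoints.invoke(
    endpoint_name=endpoint.name,
    inputs={"heart_dataset": file_input},
)
```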
@@ -855,14 +838,11 @@ az ml batch-endpoint invoke --name $ENDPOINT_NAME \
    --set inputs.score_mode.type="string" inputs.score_mode.default="append"
```

- # [Python](#tab/sdk)
+ # [Python SDK](#tab/sdk)

Use the `inputs` parameter to supply information about the literal input.

```python
- from azure.ai.ml import MLClient, Input
- from azure.identity import DefaultAzureCredential
-
job = ml_client.batch_endpoints.invoke(
    endpoint_name=endpoint.name,
    inputs = {
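The hunk ends at the opening of the `inputs` dictionary. One plausible completion, mirroring the `score_mode` string default of `append` from the CLI tab; the exact `Input` construction for a literal value is an assumption here:

```python
# Sketch only: pass the literal score_mode value mirrored from the CLI example.
from azure.ai.ml import Input

job = ml_client.batch_endpoints.invoke(
    endpoint_name=endpoint.name,
    inputs={
        "score_mode": Input(type="string", default="append"),
    },
)
```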
@@ -916,12 +896,10 @@ This example uses the default data store, **workspaceblobstore**. But you can us
DATA_STORE_ID=$(az ml datastore show -n workspaceblobstore | jq -r '.id')
```

- # [Python](#tab/sdk)
+ # [Python SDK](#tab/sdk)

```python
- from azure.ai.ml import MLClient, Input, Output
- from azure.identity import DefaultAzureCredential
- from azure.ai.ml.constants import AssetTypes
+ from azure.ai.ml import Output

default_ds = ml_client.datastores.get_default()
```
@@ -955,7 +933,7 @@ This example uses the default data store, **workspaceblobstore**. But you can us
  path: <data-store-ID>/paths/batch-jobs/my-unique-path
```

- # [Python](#tab/sdk)
+ # [Python SDK](#tab/sdk)

Set the `output` path variable:

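The hunk ends before the Python snippet. A minimal sketch of one way to build the output path, mirroring the CLI/YAML form `<data-store-ID>/paths/batch-jobs/my-unique-path` and reusing `default_ds` from the earlier snippet; this exact path construction is an assumption:

```python
# Sketch only: compose an Output location on the default data store.
from azure.ai.ml import Output
from azure.ai.ml.constants import AssetTypes

output = Output(
    type=AssetTypes.URI_FOLDER,
    path=f"{default_ds.id}/paths/batch-jobs/my-unique-path",  # mirrors the CLI/YAML form
)
```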
@@ -1011,7 +989,7 @@ This example uses the default data store, **workspaceblobstore**. But you can us
az ml batch-endpoint invoke --name $ENDPOINT_NAME --file inputs-and-outputs.yml
```

- # [Python](#tab/sdk)
+ # [Python SDK](#tab/sdk)

Use the `outputs` parameter to supply information about the output.

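The diff ends here, before the final snippet. A minimal sketch of an invocation that passes both an input and the output defined above; the input and output names are placeholders:

```python
# Sketch only: "heart_dataset" and "score" are placeholder names, not from this page.
job = ml_client.batch_endpoints.invoke(
    endpoint_name=endpoint.name,
    inputs={"heart_dataset": input},
    outputs={"score": output},
)
```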