@@ -51,7 +51,7 @@ Use the Azure CLI to sign in with **interactive** or **device code** authenticat
 az login
 ```
 
-# [Python](#tab/sdk)
+# [Python SDK](#tab/sdk)
 
 Use the Azure Machine Learning SDK for Python to sign in:
 
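The SDK snippets further down call methods on an `ml_client` object without re-creating it. A minimal sketch of how that client is typically constructed after signing in, with placeholder workspace values:

```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

# Placeholder identifiers; replace them with your own workspace details.
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)
```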
@@ -105,13 +105,12 @@ az ml batch-endpoint invoke --name $ENDPOINT_NAME \
     --input https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data
 ```
 
-# [Python](#tab/sdk)
+# [Python SDK](#tab/sdk)
 
 Use the `MLClient.batch_endpoints.invoke()` method to invoke a batch endpoint. In the following code, `endpoint` is an endpoint object.
 
 ```python
-from azure.ai.ml import MLClient, Input
-from azure.identity import DefaultAzureCredential
+from azure.ai.ml import Input
 
 job = ml_client.batch_endpoints.invoke(
     endpoint_name = endpoint.name,
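For reference, a hedged sketch of what the complete invocation typically looks like, pointing the endpoint at the same public data folder as the CLI example above (`ml_client` and `endpoint` are assumed to exist already):

```python
from azure.ai.ml import Input
from azure.ai.ml.constants import AssetTypes

# Invoke the endpoint with a folder of input data hosted in public blob storage.
job = ml_client.batch_endpoints.invoke(
    endpoint_name=endpoint.name,
    input=Input(
        type=AssetTypes.URI_FOLDER,
        path="https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data",
    ),
)

# Optionally wait for the batch scoring job and stream its logs.
ml_client.jobs.stream(job.name)
```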
@@ -168,14 +167,11 @@ az ml batch-endpoint invoke --name $ENDPOINT_NAME \
     --input https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data
 ```
 
-# [Python](#tab/sdk)
+# [Python SDK](#tab/sdk)
 
 Use the parameter `deployment_name` to specify the name of the deployment. In the following code, `deployment` is a deployment object.
 
 ```python
-from azure.ai.ml import MLClient, Input
-from azure.identity import DefaultAzureCredential
-
 job = ml_client.batch_endpoints.invoke(
     endpoint_name = endpoint.name,
     deployment_name = deployment.name,
@@ -238,14 +234,11 @@ az ml batch-endpoint invoke --name $ENDPOINT_NAME \
     --input https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data
 ```
 
-# [Python](#tab/sdk)
+# [Python SDK](#tab/sdk)
 
 Use the parameter `experiment_name` to specify the name of the experiment:
 
 ```python
-from azure.ai.ml import MLClient, Input
-from azure.identity import DefaultAzureCredential
-
 job = ml_client.batch_endpoints.invoke(
     endpoint_name = endpoint.name,
     experiment_name = "my-batch-job-experiment",
@@ -379,13 +372,11 @@ Azure Machine Learning data assets (formerly known as datasets) are supported as
     az ml data create -f heart-data.yml
     ```
 
-# [Python](#tab/sdk)
+# [Python SDK](#tab/sdk)
 
 1. Create a data asset definition:
 
     ```python
-    from azure.ai.ml import MLClient, Input
-    from azure.identity import DefaultAzureCredential
     from azure.ai.ml.constants import AssetTypes
     from azure.ai.ml.entities import Data
 
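The asset definition that follows those imports is not shown above; a hedged sketch of defining and registering a `heart-data` asset (the local `path` value is illustrative):

```python
from azure.ai.ml.constants import AssetTypes
from azure.ai.ml.entities import Data

# Illustrative definition: a folder-type data asset named to match the CLI example.
heart_data_asset = Data(
    name="heart-data",
    type=AssetTypes.URI_FOLDER,
    path="data/",  # assumed local folder (or cloud URI) that holds the unlabeled files
    description="An unlabeled dataset for heart disease classification.",
)

# Register the asset in the workspace so it can be referenced by name or ID later.
heart_data_asset = ml_client.data.create_or_update(heart_data_asset)
```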
@@ -426,7 +417,7 @@ Azure Machine Learning data assets (formerly known as datasets) are supported as
 DATA_ASSET_ID=$(az ml data show -n heart-data --label latest | jq -r .id)
 ```
 
-# [Python](#tab/sdk)
+# [Python SDK](#tab/sdk)
 
 ```python
 input = Input(path=heart_data_asset.id)
@@ -478,7 +469,7 @@ Azure Machine Learning data assets (formerly known as datasets) are supported as
 az ml batch-endpoint invoke --name $ENDPOINT_NAME --file inputs.yml
 ```
 
-# [Python](#tab/sdk)
+# [Python SDK](#tab/sdk)
 
 Call the `invoke` method, and use the `inputs` parameter to specify the required inputs:
 
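A hedged sketch of that call, reusing the `input` built from the data asset ID above; the `heart_dataset` key is a hypothetical input name and must match what the deployment expects:

```python
# "heart_dataset" is a hypothetical input name; use the name your deployment defines.
job = ml_client.batch_endpoints.invoke(
    endpoint_name=endpoint.name,
    inputs={"heart_dataset": input},
)
```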
@@ -556,15 +547,11 @@ This example uses the default data store, but you can use a different data store
 INPUT_PATH="azureml://datastores/workspaceblobstore/paths/$DATA_PATH"
 ```
 
-# [Python](#tab/sdk)
+# [Python SDK](#tab/sdk)
 
 Place the file path in the `input` variable:
 
 ```python
-from azure.ai.ml import MLClient, Input
-from azure.identity import DefaultAzureCredential
-from azure.ai.ml.constants import AssetTypes
-
 data_path = "heart-disease-uci-unlabeled"
 input = Input(type=AssetTypes.URI_FOLDER, path=f"azureml://datastores/workspaceblobstore/paths/{data_path}")
 ```
@@ -629,7 +616,7 @@ This example uses the default data store, but you can use a different data store
 az ml batch-endpoint invoke --name $ENDPOINT_NAME --file inputs.yml
 ```
 
-# [Python](#tab/sdk)
+# [Python SDK](#tab/sdk)
 
 Call the `invoke` method by using the `inputs` parameter to specify the required inputs:
 
@@ -692,15 +679,11 @@ For more information about extra required configurations for reading data from s
 INPUT_DATA="https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data/heart.csv"
 ```
 
-# [Python](#tab/sdk)
+# [Python SDK](#tab/sdk)
 
 Set the `input` variable:
 
 ```python
-from azure.ai.ml import MLClient, Input
-from azure.identity import DefaultAzureCredential
-from azure.ai.ml.constants import AssetTypes
-
 input = Input(
     type=AssetTypes.URI_FOLDER,
     path="https://azuremlexampledata.blob.core.windows.net/data/heart-disease-uci/data"
@@ -784,7 +767,7 @@ For more information about extra required configurations for reading data from s
 
 If your data is in a file, use the `uri_file` type in the inputs.yml file for the data input.
 
-# [Python](#tab/sdk)
+# [Python SDK](#tab/sdk)
 
 Call the `invoke` method by using the `inputs` parameter to specify the required inputs:
 
@@ -855,14 +838,11 @@ az ml batch-endpoint invoke --name $ENDPOINT_NAME \
     --set inputs.score_mode.type="string" inputs.score_mode.default="append"
 ```
 
-# [Python](#tab/sdk)
+# [Python SDK](#tab/sdk)
 
 Use the `inputs` parameter to supply information about the literal input.
 
 ```python
-from azure.ai.ml import MLClient, Input
-from azure.identity import DefaultAzureCredential
-
 job = ml_client.batch_endpoints.invoke(
     endpoint_name=endpoint.name,
     inputs = {
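A hedged sketch of the full call, mirroring the `score_mode` literal from the CLI example above:

```python
from azure.ai.ml import Input

# Pass a literal (string) input by name; "score_mode" matches the CLI example above.
job = ml_client.batch_endpoints.invoke(
    endpoint_name=endpoint.name,
    inputs={
        "score_mode": Input(type="string", default="append"),
    },
)
```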
@@ -916,12 +896,10 @@ This example uses the default data store, **workspaceblobstore**. But you can us
 DATA_STORE_ID=$(az ml datastore show -n workspaceblobstore | jq -r '.id')
 ```
 
-# [Python](#tab/sdk)
+# [Python SDK](#tab/sdk)
 
 ```python
-from azure.ai.ml import MLClient, Input, Output
-from azure.identity import DefaultAzureCredential
-from azure.ai.ml.constants import AssetTypes
+from azure.ai.ml import Output
 
 default_ds = ml_client.datastores.get_default()
 ```
@@ -955,7 +933,7 @@ This example uses the default data store, **workspaceblobstore**. But you can us
     path: <data-store-ID>/paths/batch-jobs/my-unique-path
 ```
 
-# [Python](#tab/sdk)
+# [Python SDK](#tab/sdk)
 
 Set the `output` path variable:
 
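A minimal sketch of setting that variable, assuming the default data store retrieved earlier and the same `batch-jobs/my-unique-path` location as the YAML; the `uri_file` type is an assumption based on the scoring output being a single file:

```python
from azure.ai.ml import Output
from azure.ai.ml.constants import AssetTypes

# Assumed output location: a unique path on the default data store retrieved earlier.
output = Output(
    type=AssetTypes.URI_FILE,  # assumption: the deployment writes its predictions to a single file
    path=f"{default_ds.id}/paths/batch-jobs/my-unique-path",
)
```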
@@ -1011,7 +989,7 @@ This example uses the default data store, **workspaceblobstore**. But you can us
 az ml batch-endpoint invoke --name $ENDPOINT_NAME --file inputs-and-outputs.yml
 ```
 
-# [Python](#tab/sdk)
+# [Python SDK](#tab/sdk)
 
 Use the `outputs` parameter to supply information about the output.
 
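A hedged sketch of the corresponding SDK call, passing the input and output objects from the earlier snippets; the dictionary keys are illustrative, with `score` being a commonly used name for a batch deployment's output:

```python
# Invoke with both inputs and outputs; the dictionary keys are illustrative names.
job = ml_client.batch_endpoints.invoke(
    endpoint_name=endpoint.name,
    inputs={"heart_dataset": input},
    outputs={"score": output},
)

# Track the scoring job until it completes.
ml_client.jobs.stream(job.name)
```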