description: SDK and CLI v2 use expressions when a value may not be known when authoring a job or component.
services: machine-learning
ms.service: machine-learning
ms.subservice: core
ms.topic: conceptual
ms.author: zhanxia
author: xiaoharper
ms.reviewer: larryfr
ms.date: 07/26/2023
ms.custom: cliv2, sdkv2
---

# Expressions in Azure Machine Learning SDK and CLI v2
With Azure Machine Learning SDK and CLI v2, you can use _expressions_ when a value may not be known when you're authoring a job or component. When you submit a job or call a component, the expression is evaluated and the value is substituted.
The format for an expression is `${{ <expression> }}`. Some expressions are evaluated on the _client_, when you submit the job or component. Other expressions are evaluated on the _server_ (the compute where the job or component runs).
## Client expressions
> [!NOTE]
> The "client" that evaluates the expression is where the job is submitted or component is ran. For example, your local machine or a compute instance.
| Expression | Description | Scope |
| ---- | ---- | ---- |
| `${{inputs.<input_name>}}` | References an input data asset or model. | Works for all jobs. |
| `${{outputs.<output_name>}}` | References an output data asset or model. | Works for all jobs. |
| `${{search_space.<hyperparameter>}}` | References the hyperparameters to use in a sweep job. The hyperparameter values for each trial are selected based on the `search_space`. | Sweep jobs only. |
| `${{parent.inputs.<input_name>}}` | Binds the inputs of a child job (pipeline step) in a pipeline to the inputs of the top-level parent pipeline job. | Pipeline jobs only. |
| `${{parent.outputs.<output_name>}}` | Binds the outputs of a child job (pipeline step) in a pipeline to the outputs of the top-level parent pipeline job. | Pipeline jobs only. |
| `${{parent.jobs.<step-name>.inputs.<input-name>}}` | Binds to the inputs of another step in the pipeline. | Pipeline jobs only. |
| `${{parent.jobs.<step-name>.outputs.<output-name>}}` | Binds to the outputs of another step in the pipeline. | Pipeline jobs only. |
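For example, `${{inputs.<input_name>}}` and `${{outputs.<output_name>}}` let a command refer to data whose concrete paths aren't known until submission. A minimal Python SDK v2 sketch, where the source folder, data asset, and input/output names are hypothetical:

```python
from azure.ai.ml import command, Input, Output

# ${{inputs.training_data}} and ${{outputs.model_dir}} are client expressions:
# they're substituted with the resolved input and output paths when the job is submitted.
job = command(
    code="./src",  # hypothetical folder containing train.py
    command=(
        "python train.py "
        "--data ${{inputs.training_data}} "
        "--model-dir ${{outputs.model_dir}}"
    ),
    inputs={"training_data": Input(type="uri_folder", path="azureml:my-data:1")},  # hypothetical data asset
    outputs={"model_dir": Output(type="uri_folder")},
    environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu@latest",
    compute="cpu-cluster",  # hypothetical compute target
)
```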
articles/machine-learning/how-to-train-model.md (+3 −3)
@@ -180,7 +180,7 @@ curl -X PUT \
# [Python SDK](#tab/python)
To run this script, you use a `command` that executes the main.py Python script located under `./sdk/python/jobs/single-step/lightgbm/iris/src/`. The command is run by submitting it as a `job` to Azure Machine Learning.
> [!NOTE]
> To use [serverless compute (preview)](./how-to-use-serverless-compute.md), delete `compute="cpu-cluster"` in this code.
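A hedged sketch of what submitting such a command job might look like with the Python SDK — the workspace identifiers and input path are placeholders, and the article's full sample defines the exact code:

```python
from azure.ai.ml import MLClient, command, Input
from azure.identity import DefaultAzureCredential

# Connect to the workspace (placeholder identifiers).
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<WORKSPACE_NAME>",
)

# Hypothetical sketch of the command job described above.
job = command(
    code="./sdk/python/jobs/single-step/lightgbm/iris/src",
    command="python main.py --iris-csv ${{inputs.iris_csv}}",
    inputs={"iris_csv": Input(type="uri_file", path="<PATH_OR_URL_TO_IRIS_CSV>")},
    environment="AzureML-lightgbm-3.2-ubuntu18.04-py37-cpu@latest",
    compute="cpu-cluster",  # delete this line to use serverless compute (preview)
    display_name="lightgbm-iris-example",
)

returned_job = ml_client.create_or_update(job)  # submits the job
print(returned_job.studio_url)  # link to monitor the run in studio
```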
@@ -193,7 +193,7 @@ In the above examples, you configured:
- `code` - path where the code to run the command is located
- `command` - the command that needs to be run
- `environment` - the environment needed to run the training script. In this example, we use a curated (ready-made) environment provided by Azure Machine Learning called `AzureML-lightgbm-3.2-ubuntu18.04-py37-cpu`, and use the latest version of it with the `@latest` directive. You can also use custom environments by specifying a base Docker image and a conda YAML on top of it (see the sketch after this list).
- `inputs` - dictionary of inputs using name-value pairs to the command. The key is a name for the input within the context of the job, and the value is the input value. Inputs are referenced in the `command` using the `${{inputs.<input_name>}}` expression. To use files or folders as inputs, you can use the `Input` class. For more information, see [SDK and CLI v2 expressions](concept-expressions.md).
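As mentioned in the `environment` item, you can also build a custom environment instead of using a curated one. A hedged sketch — the environment name, base image, and conda file path are assumptions, not the article's own values:

```python
from azure.ai.ml.entities import Environment

# Custom environment: a base Docker image plus a conda specification layered on top.
custom_env = Environment(
    name="my-lightgbm-env",                                       # hypothetical name
    image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04",   # assumed base image
    conda_file="./environment/conda.yaml",                        # hypothetical conda spec
    description="Custom environment for the training script",
)
# Register it, then reference it as environment="my-lightgbm-env@latest" in the command.
# ml_client.environments.create_or_update(custom_env)
```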
For more information, see the [reference documentation](/python/api/azure-ai-ml/azure.ai.ml#azure-ai-ml-command).
@@ -212,7 +212,7 @@ The `az ml job create` command used in this example requires a YAML job definiti
In the above, you configured:
- `code` - path where the code to run the command is located
- `command` - the command that needs to be run
- `inputs` - dictionary of inputs using name-value pairs to the command. The key is a name for the input within the context of the job, and the value is the input value. Inputs are referenced in the `command` using the `${{inputs.<input_name>}}` expression. For more information, see [SDK and CLI v2 expressions](concept-expressions.md).
- `environment` - the environment needed to run the training script. In this example, we use a curated (ready-made) environment provided by Azure Machine Learning called `AzureML-sklearn-0.24-ubuntu18.04-py37-cpu`, and use the latest version of it with the `@latest` directive. You can also use custom environments by specifying a base Docker image and a conda YAML on top of it.
To submit the job, use the following command. The run ID (name) of the training job is stored in the `$run_id` variable:
## Server expressions

> [!NOTE]
> The following expressions are resolved on the _server_ side, not the _client_ side. For scheduled jobs, where the job _creation time_ and job _submission time_ differ, the expressions are resolved when the job is submitted. Because these expressions are resolved on the server side, they use the _current_ state of the workspace, not the state of the workspace when the scheduled job was created. For example, if you change the default datastore of the workspace after you create a scheduled job, the expression `${{default_datastore}}` resolves to the new default datastore, not the default datastore when the scheduled job was created.
| Expression | Description | Scope |
| --- | --- | --- |
| `${{default_datastore}}` | If a pipeline default datastore is configured, it resolves to the pipeline default datastore name; otherwise it resolves to the workspace default datastore name. <br><br> The pipeline default datastore can be controlled using `pipeline_job.settings.default_datastore`. | Works for all jobs. <br><br> Pipeline jobs have a configurable pipeline default datastore. |
| `${{name}}` | The job name. For pipelines, it's the step job name, not the pipeline job name. | Works for all jobs. |
| `${{output_name}}` | The job output name. | Works for all jobs. |

For example, if `azureml://datastores/${{default_datastore}}/paths/${{name}}/${{output_name}}` is used as the output path, at runtime it's resolved as a path such as `azureml://datastores/workspaceblobstore/paths/<job-name>/model_path`.
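As a hedged illustration, such a path with server expressions can be set on a job output with the Python SDK; the output name `model_path` and the datastore layout below are assumptions:

```python
from azure.ai.ml import Output

# The server resolves ${{default_datastore}}, ${{name}}, and ${{output_name}} at runtime,
# e.g. to azureml://datastores/workspaceblobstore/paths/<job-name>/model_path.
job_outputs = {
    "model_path": Output(
        type="uri_folder",
        path="azureml://datastores/${{default_datastore}}/paths/${{name}}/${{output_name}}",
    )
}
# Pass job_outputs as the `outputs` argument of a command job or pipeline step.
```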
articles/machine-learning/reference-yaml-core-syntax.md (+7 −1)
@@ -213,7 +213,7 @@ Similar to the `command` for a job, the `command` for a component can also be pa
#### Define optional inputs in command line
When an input is set as `optional = true`, you need to use `$[[]]` to enclose the part of the command line that uses the input, for example `$[[--input1 ${{inputs.input1}}]]`. The command line at runtime may then contain different inputs.
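The same syntax appears when a command component is defined with the SDK v2. A hedged sketch follows — the component name, source folder, environment, and the `input1` name are hypothetical, and your component's actual command may differ:

```python
from azure.ai.ml import command, Input, Output

# `input1` is optional, so its command-line section is wrapped in $[[ ]]:
# it's included only when a value is provided for input1.
train_component = command(
    name="train_with_optional_input",  # hypothetical component name
    code="./src",                      # hypothetical source folder
    command=(
        "python train.py "
        "--training_data ${{inputs.training_data}} "
        "$[[--input1 ${{inputs.input1}}]] "
        "--model_output ${{outputs.model_output}}"
    ),
    inputs={
        "training_data": Input(type="uri_folder"),
        "input1": Input(type="string", optional=True),
    },
    outputs={"model_output": Output(type="uri_folder")},
    environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu@latest",
)
```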
- If you specify only the required `training_data` and `model_output` parameters, the command line looks like:
articles/machine-learning/tutorial-pipeline-python-sdk.md (+1 −1)
@@ -326,7 +326,7 @@ if __name__ == "__main__":
Now that you have a script that can perform the desired task, create an Azure Machine Learning Component from it.
Use the general-purpose `CommandComponent`, which can run command line actions. The command line action can directly call system commands or run a script. Inputs and outputs are specified on the command line via the `${{ ... }}` expression notation. For more information, see [SDK and CLI v2 expressions](concept-expressions.md).
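For instance, here's a hedged sketch of such a component built with the SDK v2 `command()` builder; the names, paths, and environment are illustrative, and the tutorial's own code defines the real ones:

```python
from azure.ai.ml import command, Input, Output

# Inputs and outputs are wired into the command line with ${{ ... }} expressions;
# they're resolved when the component runs as a pipeline step.
data_prep_component = command(
    name="data_prep",                            # hypothetical component name
    display_name="Data preparation for training",
    code="./components/data_prep/",              # hypothetical path to the script above
    command=(
        "python data_prep.py "
        "--data ${{inputs.data}} --test_train_ratio ${{inputs.test_train_ratio}} "
        "--train_data ${{outputs.train_data}} --test_data ${{outputs.test_data}}"
    ),
    inputs={
        "data": Input(type="uri_folder"),
        "test_train_ratio": Input(type="number"),
    },
    outputs={
        "train_data": Output(type="uri_folder"),
        "test_data": Output(type="uri_folder"),
    },
    environment="AzureML-sklearn-0.24-ubuntu18.04-py37-cpu@latest",
)
```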