When you edit a pipeline in the studio Designer, pipeline inputs and outputs are in the **Pipeline interface** panel, and component inputs and outputs are in the component panel.
:::image type="content" source="./media/how-to-manage-pipeline-input-output/pipeline-interface.png" alt-text="Screenshot highlighting the pipeline interface in Designer.":::
You can promote a component's input to a pipeline-level input on the studio Designer authoring page.
1. Open the component's settings panel by double-clicking the component.
1. Select **...** next to the input you want to promote.
1. Select **Add to pipeline input**.
:::image type="content" source="./media/how-to-manage-pipeline-input-output/promote-pipeline-input.png" alt-text="Screenshot highlighting how to promote to pipeline input in Designer.":::
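If you author pipelines in YAML instead, the same promotion corresponds to binding a component input to a pipeline-level input with the `${{parent.inputs.<name>}}` expression. A minimal sketch (file, job, and input names here are hypothetical):

```yaml
# pipeline.yml (sketch): a component input promoted to pipeline level
inputs:
  pipeline_training_data:   # hypothetical pipeline-level input
    type: uri_folder
jobs:
  train_job:
    component: ./train.yml
    inputs:
      # the component input now takes its value from the pipeline input
      training_data: ${{parent.inputs.pipeline_training_data}}
```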
## Define optional inputs
By default, all inputs are required and must either have a default value or be assigned a value each time you submit a pipeline job. However, you can define an optional input.
> [!NOTE]
> Optional outputs aren't supported.
Setting optional inputs can be useful in two scenarios:
- If you define an optional data/model type input and don't assign a value to it when you submit the pipeline job, the pipeline component lacks a preceding data dependency. If the component's input port isn't linked to any component or data/model node, the pipeline invokes the component directly instead of waiting for the preceding dependency.
- If you set `continue_on_step_failure = True` for the pipeline and `node2` uses optional input from `node1`, `node2` executes even if `node1` fails. If `node1` input is required, `node2` doesn't execute if `node1` fails. The following example demonstrates this scenario.
:::image type="content" source="./media/how-to-manage-pipeline-input-output/continue-on-failure-optional-input.png" alt-text="Screenshot showing the orchestration logic of optional input and continue on failure.":::
# [Azure CLI / Python SDK](#tab/cli+python)
The following code example shows how to define optional input. When the input is set as `optional = true`, use `$[[]]` to embrace the command line with the input.
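As a sketch, a component YAML can mark an input as optional and wrap the corresponding command-line fragment in `$[[]]` so it's emitted only when a value is supplied (the input and script names here are hypothetical):

```yaml
# component.yml (sketch): one required and one optional input
inputs:
  training_data:
    type: uri_folder
  max_epochs:          # hypothetical optional primitive input
    type: integer
    optional: true
command: >-
  python train.py
  --training_data ${{inputs.training_data}}
  $[[--max_epochs ${{inputs.max_epochs}}]]
```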
# [Studio UI](#tab/ui)
In a pipeline graph, dotted circles represent optional inputs of data or model types. Optional inputs of primitive types are in the **Settings** tab. Unlike required inputs, optional inputs don't have an asterisk next to them, indicating that they aren't mandatory.
:::image type="content" source="./media/how-to-manage-pipeline-input-output/optional-input.png" lightbox="./media/how-to-manage-pipeline-input-output/optional-input.png" alt-text="Screenshot highlighting the optional input.":::
By default, component output is stored in the `{default_datastore}` you set for the pipeline, `azureml://datastores/${{default_datastore}}/paths/${{name}}/${{output_name}}`. If not set, the default is the workspace blob storage.
Job `{name}` is resolved at job execution time, and `{output_name}` is the name you defined in the component YAML. You can customize where to store the output by defining an output path.
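As an illustration, the default output path template resolves like this (the resolved job name below is hypothetical; Azure ML assigns it at execution time):

```python
# Sketch: how the default output path template resolves
default_datastore = "workspaceblobstore"
job_name = "joyful_lion_12345"              # hypothetical resolved job name
output_name = "pipeline_job_trained_model"  # name defined in the component YAML

path = f"azureml://datastores/{default_datastore}/paths/{job_name}/{output_name}"
print(path)
# azureml://datastores/workspaceblobstore/paths/joyful_lion_12345/pipeline_job_trained_model
```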
# [Azure CLI](#tab/cli)
The [pipeline.yml](https://github.com/Azure/azureml-examples/blob/main/cli/jobs/pipelines-with-components/basics/1b_e2e_registered_components/pipeline.yml) file in the [train-score-eval pipeline with registered components example](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/pipelines-with-components/basics/1b_e2e_registered_components) defines a pipeline that has three pipeline-level outputs. You can use the following command to set custom output paths for the `pipeline_job_trained_model` output.
```azurecli
# define the custom output path using datastore uri
output_path="azureml://datastores/<datastore_name>/paths/<path>"
# create the job and override the output path
az ml job create -f ./pipeline.yml --set outputs.pipeline_job_trained_model.path=$output_path
```
The following code that demonstrates how to customize output paths is from the [Build pipeline with command_component decorated python function](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/pipelines/1b_pipeline_with_python_function_components/pipeline_with_python_function_components.ipynb) notebook.
In the Designer **Pipeline interface** for a pipeline, or the component panel for a component, expand **Outputs** in the **Settings** tab to specify a custom **Path**.
On the **Outputs + logs** tab of the job details page:
- To download all outputs, select **Download all** in the top menu.
- To download a specific output, select **...** next to a file and select **Download** from the context menu.
:::image type="content" source="./media/how-to-manage-pipeline-input-output/download.png" lightbox="./media/how-to-manage-pipeline-input-output/download.png" alt-text="Screenshot showing how to download an output file or all outputs from a pipeline job.":::
# [Azure CLI](#tab/cli)
To download the outputs of a child component, first list all child jobs of a pipeline job and then use similar code to download the outputs.
```azurecli
# List all child jobs in the job and print job details in table format
az ml job list --parent-job-name <pipeline-job-name> --output table
```

# [Python SDK](#tab/python)

```python
# Connect to the workspace (assumes DefaultAzureCredential from azure.identity)
ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",
    resource_group_name="<RESOURCE_GROUP>",
    workspace_name="<WORKSPACE_NAME>",
)
```
To download the outputs of a child component, first list all child jobs of a pipeline job and then use similar code to download the outputs.
```python
# List all child jobs in the job
child_jobs = ml_client.jobs.list(parent_job_name=pipeline_job_name)
# Download the outputs of each child job
for child_job in child_jobs:
    ml_client.jobs.download(name=child_job.name, all=True)
```
# [Studio UI](#tab/ui)
On the **Outputs + logs** tab of the component panel for a component, select **Download all**.
:::image type="content" source="./media/how-to-manage-pipeline-input-output/download-component.png" alt-text="Screenshot showing how to download outputs from a pipeline component.":::