
Commit 83b8d44

touchups

1 parent f092b57 commit 83b8d44

File tree

2 files changed: +19 -19 lines changed


articles/machine-learning/how-to-manage-inputs-outputs-pipeline.md

Lines changed: 19 additions & 19 deletions
@@ -104,7 +104,7 @@ The following screenshot shows the **Settings** tab of a pipeline job, which you
 
 :::image type="content" source="./media/how-to-manage-pipeline-input-output/job-overview-setting.png" lightbox="./media/how-to-manage-pipeline-input-output/job-overview-setting.png" alt-text="Screenshot highlighting the job overview setting panel.":::
 
-When you edit a pipeline in the Designer, pipeline inputs and outputs are in the **Pipeline interface** panel, and component inputs and outputs are in the component panel.
+When you edit a pipeline in the studio Designer, pipeline inputs and outputs are in the **Pipeline interface** panel, and component inputs and outputs are in the component panel.
 
 :::image type="content" source="./media/how-to-manage-pipeline-input-output/pipeline-interface.png" alt-text="Screenshot highlighting the pipeline interface in Designer.":::
 
@@ -203,7 +203,7 @@ pipeline_job.settings.default_datastore = "workspaceblobstore"
 You can promote a component's input to pipeline level input on the studio Designer authoring page.
 
 1. Open the component's settings panel by double-clicking the component.
-1. Find the input you want to promote and select the three dots on the right.
+1. Select **...** next to the input you want to promote.
 1. Select **Add to pipeline input**.
 
 :::image type="content" source="./media/how-to-manage-pipeline-input-output/promote-pipeline-input.png" alt-text="Screenshot highlighting how to promote to pipeline input in Designer.":::
@@ -212,16 +212,18 @@ You can promote a component's input to pipeline level input on the studio Design
 
 ## Define optional inputs
 
-By default, all inputs are required and must either have a default value or be assigned a value each time you submit a pipeline job. However, you can define an optional input and not assign a value to the input when you submit a pipeline job.
+By default, all inputs are required and must either have a default value or be assigned a value each time you submit a pipeline job. However, you can define an optional input.
 
 > [!NOTE]
 > Optional outputs aren't supported.
 
-If you have an optional data/model type input and don't assign a value to it when submitting the pipeline job, a component in the pipeline lacks a preceding data dependency. The component's input port isn't linked to any component or data/model node. The pipeline service invokes the component directly, instead of waiting for the preceding dependency to be ready.
+Setting optional inputs can be useful in two scenarios:
 
-If you set `continue_on_step_failure = True` for the pipeline and a second node uses required output from the first node, the second node doesn't execute if the first node fails. But if the second node uses optional input from the first node, it executes even if the first node fails. The following screenshot illustrates this scenario.
+- If you define an optional data/model type input and don't assign a value to it when you submit the pipeline job, the pipeline component lacks a preceding data dependency. If the component's input port isn't linked to any component or data/model node, the pipeline invokes the component directly instead of waiting for the preceding dependency.
 
-:::image type="content" source="./media/how-to-manage-pipeline-input-output/continue-on-failure-optional-input.png" alt-text="Screenshot showing the orchestration logic of optional input and continue on failure.":::
+- If you set `continue_on_step_failure = True` for the pipeline and `node2` uses optional input from `node1`, `node2` executes even if `node1` fails. If `node1` input is required, `node2` doesn't execute if `node1` fails. The following example demonstrates this scenario.
+
+:::image type="content" source="./media/how-to-manage-pipeline-input-output/continue-on-failure-optional-input.png" alt-text="Screenshot showing the orchestration logic of optional input and continue on failure.":::
 
 # [Azure CLI / Python SDK](#tab/cli+python)
 
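The `continue_on_step_failure` behavior described in the diff above can be sketched as a toy decision rule. This is plain Python, not Azure ML SDK code; `should_execute` and the node names are hypothetical, used only to illustrate the orchestration rule:

```python
# Toy model of the orchestration rule under continue_on_step_failure = True.
# NOT Azure ML SDK code; should_execute is a hypothetical illustration.
def should_execute(upstream_failed: bool, uses_optional_input: bool) -> bool:
    """Return True if the downstream node (node2) runs given node1's outcome."""
    if not upstream_failed:
        return True  # node1 succeeded: node2 always runs
    # node1 failed: node2 runs only if its link to node1 is an optional input
    return uses_optional_input

# node2 with an optional input from node1 runs even when node1 fails
print(should_execute(upstream_failed=True, uses_optional_input=True))   # True
# node2 with a required input from node1 is skipped when node1 fails
print(should_execute(upstream_failed=True, uses_optional_input=False))  # False
```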
@@ -231,7 +233,7 @@ The following code example shows how to define optional input. When the input is
 
 # [Studio UI](#tab/ui)
 
-In a pipeline graph, optional inputs of data/model types are represented by dotted circles. Optional inputs of primitive types are in the **Settings** tab. Unlike required inputs, optional inputs don't have an asterisk next to them, indicating that they aren't mandatory.
+In a pipeline graph, dotted circles represent optional inputs of data or model types. Optional inputs of primitive types are in the **Settings** tab. Unlike required inputs, optional inputs don't have an asterisk next to them, indicating that they aren't mandatory.
 
 :::image type="content" source="./media/how-to-manage-pipeline-input-output/optional-input.png" lightbox="./media/how-to-manage-pipeline-input-output/optional-input.png" alt-text="Screenshot highlighting the optional input.":::
 
@@ -241,11 +243,11 @@ In a pipeline graph, optional inputs of data/model types are represented by dott
 
 By default, component output is stored in the `{default_datastore}` you set for the pipeline, `azureml://datastores/${{default_datastore}}/paths/${{name}}/${{output_name}}`. If not set, the default is the workspace blob storage.
 
-The job `{name}` is resolved at job execution time, and `{output_name}` is the name you defined in the component YAML. You can customize where to store the output by defining the path of an output.
+Job `{name}` is resolved at job execution time, and `{output_name}` is the name you defined in the component YAML. You can customize where to store the output by defining an output path.
 
 # [Azure CLI](#tab/cli)
 
-The *pipeline.yml* file defines a pipeline that has three pipeline level outputs. You can use the following command to set custom output paths for the `pipeline_job_trained_model` output.
+The [pipeline.yml](https://github.com/Azure/azureml-examples/blob/main/cli/jobs/pipelines-with-components/basics/1b_e2e_registered_components/pipeline.yml) file at [train-score-eval pipeline with registered components example](https://github.com/Azure/azureml-examples/tree/main/cli/jobs/pipelines-with-components/basics/1b_e2e_registered_components) defines a pipeline that has three pipeline level outputs. You can use the following command to set custom output paths for the `pipeline_job_trained_model` output.
 
 ```azurecli
 # define the custom output path using datastore uri
@@ -256,17 +258,15 @@ output_path="azureml://datastores/{datastore_name}/paths/{relative_path_of_conta
 az ml job create -f ./pipeline.yml --set outputs.pipeline_job_trained_model.path=$output_path
 ```
 
-You can find the full YAML file at [train-score-eval pipeline with registered components example](https://github.com/Azure/azureml-examples/blob/main/cli/jobs/pipelines-with-components/basics/1b_e2e_registered_components/pipeline.yml).
-
 # [Python SDK](#tab/python)
 
-[!Notebook-python[] (~/azureml-examples-main/sdk/python/jobs/pipelines/1b_pipeline_with_python_function_components/pipeline_with_python_function_components.ipynb?name=custom-output-path)]
+The following code that demonstrates how to customize output paths is from the [Build pipeline with command_component decorated python function](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/pipelines/1b_pipeline_with_python_function_components/pipeline_with_python_function_components.ipynb) notebook.
 
-You can find the end-to-end notebook at [Build pipeline with command_component decorated python function](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/pipelines/1b_pipeline_with_python_function_components/pipeline_with_python_function_components.ipynb).
+[!Notebook-python[] (~/azureml-examples-main/sdk/python/jobs/pipelines/1b_pipeline_with_python_function_components/pipeline_with_python_function_components.ipynb?name=custom-output-path)]
 
 # [Studio UI](#tab/ui)
 
-In the Designer **Pipeline interface** for a pipeline, or the component panel for a component, expand **Outputs** in the **Settings** tab to specify a custom path.
+In the Designer **Pipeline interface** for a pipeline, or the component panel for a component, expand **Outputs** in the **Settings** tab to specify a custom **Path**.
 
 :::image type="content" source="./media/how-to-manage-pipeline-input-output/custom-output.png" lightbox="./media/how-to-manage-pipeline-input-output/custom-output.png" alt-text="Screenshot showing custom output.":::
 
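To make the output path template in the hunks above concrete, here is a small plain-Python sketch. The `resolve_output_path` helper and the sample values are hypothetical illustrations; only the `azureml://datastores/.../paths/...` template itself comes from the article:

```python
# Plain-Python illustration of the default output path template described above.
# resolve_output_path and the sample values are hypothetical, not SDK code.
def resolve_output_path(default_datastore: str, job_name: str, output_name: str) -> str:
    """Fill the azureml:// template used for component outputs by default."""
    return (
        f"azureml://datastores/{default_datastore}"
        f"/paths/{job_name}/{output_name}"
    )

# {name} is resolved at job execution time; {output_name} comes from component YAML
print(resolve_output_path("workspaceblobstore", "placid_arm_123", "trained_model"))
# azureml://datastores/workspaceblobstore/paths/placid_arm_123/trained_model
```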
@@ -318,10 +318,10 @@ output = client.jobs.download(name=job.name, download_path=tmp_path, output_name
 
 # [Studio UI](#tab/ui)
 
-In the **Outputs + logs** tab of the job details page:
+On the **Outputs + logs** tab of the job details page:
 
 - To download all outputs, select **Download all** in the top menu.
-- To download a specific output, select **...** next to a file in the file tree and select **Download** from the context menu.
+- To download a specific output, select **...** next to a file and select **Download** from the context menu.
 
 :::image type="content" source="./media/how-to-manage-pipeline-input-output/download.png" lightbox="./media/how-to-manage-pipeline-input-output/download.png" alt-text="Screenshot showing how to download an output file or all outputs from a pipeline job.":::
 

@@ -331,7 +331,7 @@ In the **Outputs + logs** tab of the job details page:
 
 # [Azure CLI](#tab/cli)
 
-To download the outputs of a child component that isn't promoted to pipeline level, first list all child job entities of a pipeline job and then use similar code to download the outputs.
+To download the outputs of a child component, first list all child jobs of a pipeline job and then use similar code to download the outputs.
 
 ```azurecli
 # List all child jobs in the job and print job details in table format
@@ -357,7 +357,7 @@ ml_client = MLClient(
 )
 ```
 
-To download the outputs of a child component that isn't promoted to pipeline level, first list all child job entities of a pipeline job and then use similar code to download the outputs.
+To download the outputs of a child component, first list all child jobs of a pipeline job and then use similar code to download the outputs.
 
 ```python
 # List all child jobs in the job
@@ -370,7 +370,7 @@ for child_job in child_jobs:
 
 # [Studio UI](#tab/ui)
 
-In the **Outputs + logs** tab of the component panel for a component, select **Download all**.
+On the **Outputs + logs** tab of the component panel for a component, select **Download all**.
 
 :::image type="content" source="./media/how-to-manage-pipeline-input-output/download-component.png" alt-text="Screenshot showing how to download outputs from a pipeline component.":::
 
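The child-job download loop touched by the hunks above can be mimicked locally with a small sketch. The dict-shaped job records and `plan_child_downloads` are hypothetical stand-ins for the job objects the azure-ai-ml SDK returns from `ml_client.jobs.list(parent_job_name=...)` and later passes to `ml_client.jobs.download(...)`; no Azure calls are made here:

```python
# Local stand-in (no Azure calls) for the child-job download loop in the diff.
# The record shape and this helper are hypothetical illustrations only.
def plan_child_downloads(jobs, parent_name, download_root="./pipeline_outputs"):
    """Return (child_job_name, target_dir) pairs for each child of parent_name."""
    return [
        (job["name"], f"{download_root}/{job['name']}")
        for job in jobs
        if job["parent"] == parent_name
    ]

jobs = [
    {"name": "train_job", "parent": "pipeline_job_1"},
    {"name": "score_job", "parent": "pipeline_job_1"},
    {"name": "unrelated", "parent": "pipeline_job_2"},
]
for name, target in plan_child_downloads(jobs, "pipeline_job_1"):
    print(name, "->", target)  # one line per child job whose outputs to fetch
```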