Commit a570531

Author: Larry Franks
Commit message: writing
1 parent b95ae45 commit a570531

File tree

1 file changed: +14 −2 lines changed


articles/machine-learning/service/reference-pipeline-yaml.md

Lines changed: 14 additions & 2 deletions
@@ -114,8 +114,7 @@ Steps define a computational environment, along with the files to run on the env
 | YAML key | Description |
 | ----- | ----- |
 | `script_name` | The name of the U-SQL script (relative to the `source_directory`). |
-| `name` | TBD |
-| `compute_target` | TBD |
+| `compute_target` | The Azure Data Lake compute target to use for this step. |
 | `parameters` | [Parameters](#parameters) to the pipeline. |
 | `inputs` | TBD |
 | `outputs` | TBD |
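For orientation, the `AdlaStep` keys documented in this hunk might combine in a pipeline YAML roughly like this. A hypothetical sketch: the pipeline, step, target, and file names are invented, and the exact nesting under `pipeline:`/`steps:` is an assumption.

```yaml
pipeline:
  name: "sample-adla-pipeline"           # hypothetical pipeline name
  steps:
    ProcessLogs:                         # hypothetical step name
      type: "AdlaStep"
      script_name: "process_logs.usql"   # relative to source_directory
      source_directory: "./usql"
      compute_target: "my_adla_compute"  # Azure Data Lake compute target
```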
@@ -130,6 +129,7 @@ Steps define a computational environment, along with the files to run on the env
 
 | YAML key | Description |
 | ----- | ----- |
+| `compute_target` | The Azure Batch compute target to use for this step. |
 | `source_directory` | Directory that contains the module binaries, executable, assemblies, etc. |
 | `executable` | Name of the command/executable that will be run as part of this job. |
 | `create_pool` | Boolean flag to indicate whether to create the pool before running the job. |
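A hypothetical sketch of how the `AzureBatchStep` keys in this hunk might appear together; the step, target, and executable names are invented, and the nesting is an assumption.

```yaml
pipeline:
  steps:
    RunBinary:                             # hypothetical step name
      type: "AzureBatchStep"
      compute_target: "my_batch_compute"   # Azure Batch compute target
      source_directory: "./batch_bin"      # binaries, executable, assemblies
      executable: "run_job.exe"            # command run as part of this job
      create_pool: true                    # create the pool before running the job
```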
@@ -144,6 +144,7 @@ Steps define a computational environment, along with the files to run on the env
 
 | YAML key | Description |
 | ----- | ----- |
+| `compute_target` | The Azure Databricks compute target to use for this step. |
 | `run_name` | The name in Databricks for this run. |
 | `source_directory` | Directory that contains the script and other files. |
 | `num_workers` | The static number of workers for the Databricks run cluster. |
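A hypothetical sketch combining the `DatabricksStep` keys from this hunk; all names and values are invented, and the nesting is an assumption.

```yaml
pipeline:
  steps:
    Transform:                           # hypothetical step name
      type: "DatabricksStep"
      compute_target: "my_databricks"    # Azure Databricks compute target
      run_name: "transform-run"          # name in Databricks for this run
      source_directory: "./scripts"
      num_workers: 4                     # static worker count for the run cluster
```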
@@ -154,6 +155,7 @@ Steps define a computational environment, along with the files to run on the env
 
 | YAML key | Description |
 | ----- | ----- |
+| `compute_target` | The Azure Data Factory compute target to use for this step. |
 | `source_data_reference` | Input connection that serves as the source of data transfer operations. Supported values are TBD. |
 | `destination_data_reference` | Input connection that serves as the destination of data transfer operations. Supported values are TBD. |
 | `allow_reuse` | Determines whether the step should reuse previous results when re-run with the same settings. |
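A hypothetical sketch of a data-transfer step using the keys from this hunk; the names are invented, and the shape of the data-reference values (plain strings here) is an assumption.

```yaml
pipeline:
  steps:
    CopyData:                                 # hypothetical step name
      type: "DataTransferStep"
      compute_target: "my_adf_compute"        # Azure Data Factory compute target
      source_data_reference: "raw_data"       # source of the transfer (shape is an assumption)
      destination_data_reference: "curated"   # destination of the transfer
      allow_reuse: true                       # reuse previous results when settings match
```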
@@ -162,7 +164,17 @@ Steps define a computational environment, along with the files to run on the env
 
 | YAML key | Description |
 | ----- | ----- |
+| `compute_target` | The compute target to use for this step. The compute target can be an Azure Machine Learning Compute, Virtual Machine (such as the Data Science VM), or HDInsight. |
 | `script_name` | The name of the Python script (relative to `source_directory`). |
 | `source_directory` | Directory that contains the script, Conda environment, etc. |
 | `runconfig` | The path to a `.runconfig` file. This file is a YAML representation of the [RunConfiguration](https://docs.microsoft.com/python/api/azureml-core/azureml.core.runconfiguration?view=azure-ml-py) class. For more information on the structure of this file, see [TBD]. |
 | `allow_reuse` | Determines whether the step should reuse previous results when re-run with the same settings. |
+
+## Inputs
+
+| YAML key | Description |
+| ----- | ----- |
+| `type` | The type of input. Valid values are `mount` and `download`. |
+| `path_on_compute` | For `download` mode, the local path the step will read the data from. |
+| `overwrite` | For `download` mode, indicates whether to overwrite existing data. |
+| `source` | The data source. This can refer to [Parameters](#parameters)
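A hypothetical sketch tying together the `PythonScriptStep` keys and the new Inputs keys from this hunk; every name and path is invented, and the nesting of `inputs` under the step is an assumption.

```yaml
pipeline:
  steps:
    Train:                                   # hypothetical step name
      type: "PythonScriptStep"
      compute_target: "my_aml_compute"       # AML Compute, VM, or HDInsight target
      script_name: "train.py"                # relative to source_directory
      source_directory: "./train"
      runconfig: "./train.runconfig"         # YAML form of the RunConfiguration class
      allow_reuse: true
      inputs:
        TrainingData:                        # hypothetical input name
          source: "my_data_reference"        # data source (see the Inputs table)
          type: "mount"                      # valid values: mount, download
```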

0 commit comments
