@@ -40,7 +40,7 @@ The source JSON schema can be found at https://azuremlschemas.azureedge.net/late
|`environment`| string or object |**Required (if not using `component` field).** The environment to use for the job. This can be either a reference to an existing versioned environment in the workspace or an inline environment specification. <br><br> To reference an existing environment use the `azureml:<environment_name>:<environment_version>` syntax or `azureml:<environment_name>@latest` (to reference the latest version of an environment). <br><br> To define an environment inline please follow the [Environment schema](reference-yaml-environment.md#yaml-syntax). Exclude the `name` and `version` properties as they are not supported for inline environments. |||
|`environment_variables`| object | Dictionary of environment variable key-value pairs to set on the process where the command is executed. |||
|`distribution`| object | The distribution configuration for distributed training scenarios. One of [MpiConfiguration](#mpiconfiguration), [PyTorchConfiguration](#pytorchconfiguration), or [TensorFlowConfiguration](#tensorflowconfiguration). |||
-|`compute`| string | Name of the compute target to execute the job on. This can be either a reference to an existing compute in the workspace (using the `azureml:<compute_name>` syntax) or `local` to designate local execution. ||`local`|
+|`compute`| string | Name of the compute target to execute the job on. This can be either a reference to an existing compute in the workspace (using the `azureml:<compute_name>` syntax) or `local` to designate local execution. **Note:** Jobs in a pipeline don't support `local` as `compute`. ||`local`|
|`resources.instance_count`| integer | The number of nodes to use for the job. ||`1`|
|`resources.instance_type`| string | The instance type to use for the job. Applicable for jobs running on Azure Arc-enabled Kubernetes compute (where the compute target specified in the `compute` field is of `type: kubernetes`). If omitted, this will default to the default instance type for the Kubernetes cluster. For more information, see [Create and select Kubernetes instance types](how-to-attach-kubernetes-anywhere.md). |||
|`limits.timeout`| integer | The maximum time in seconds the job is allowed to run. Once this limit is reached the system will cancel the job. |||
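As an illustrative sketch of how these keys fit together in a job YAML file (the environment name `my-env`, compute name `cpu-cluster`, and the `command`/`code` lines are placeholder assumptions, not values taken from this reference):

```yaml
# Hypothetical command job sketch; names below are placeholders.
command: python train.py
code: ./src
environment: azureml:my-env@latest      # reference the latest version of an existing environment
environment_variables:
  LOG_LEVEL: debug
compute: azureml:cpu-cluster            # or `local` (not supported for jobs inside a pipeline)
distribution:
  type: pytorch
  process_count_per_instance: 2
resources:
  instance_count: 2                     # number of nodes
limits:
  timeout: 3600                         # seconds
```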
-|`type`| string | The type of job input. Specify `uri_file` for input data that points to a single file source, or `uri_folder` for input data that points to a folder source. |`uri_file`, `uri_folder`|`uri_folder`|
+|`type`| string | The type of job input. Specify `uri_file` for input data that points to a single file source, or `uri_folder` for input data that points to a folder source. [Learn more about data access.](concept-data.md) |`uri_file`, `uri_folder`, `mltable`, `mlflow_model`|`uri_folder`|
|`path`| string | The path to the data to use as input. This can be specified in a few ways: <br><br> - A local path to the data source file or folder, e.g. `path: ./iris.csv`. The data will get uploaded during job submission. <br><br> - A URI of a cloud path to the file or folder to use as the input. Supported URI types are `azureml`, `https`, `wasbs`, `abfss`, `adl`. See [Core yaml syntax](reference-yaml-core-syntax.md) for more information on how to use the `azureml://` URI format. <br><br> - An existing registered Azure ML data asset to use as the input. To reference a registered data asset use the `azureml:<data_name>:<data_version>` syntax or `azureml:<data_name>@latest` (to reference the latest version of that data asset), e.g. `path: azureml:cifar10-data:1` or `path: azureml:cifar10-data@latest`. |||
|`mode`| string | Mode of how the data should be delivered to the compute target. <br><br> For read-only mount (`ro_mount`), the data will be consumed as a mount path. A folder will be mounted as a folder and a file will be mounted as a file. Azure ML will resolve the input to the mount path. <br><br> For `download` mode the data will be downloaded to the compute target. Azure ML will resolve the input to the downloaded path. <br><br> If you only want the URL of the storage location of the data artifact(s) rather than mounting or downloading the data itself, you can use the `direct` mode. This will pass in the URL of the storage location as the job input. Note that in this case you are fully responsible for handling credentials to access the storage. |`ro_mount`, `download`, `direct`|`ro_mount`|
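A minimal sketch of an `inputs` section that combines these keys (the input names are placeholders; the paths reuse the examples from the rows above):

```yaml
# Hypothetical `inputs` section; input names are placeholders.
inputs:
  iris_file:
    type: uri_file
    path: ./iris.csv                    # local file, uploaded at job submission
    mode: download
  cifar10:
    type: uri_folder
    path: azureml:cifar10-data@latest   # registered data asset, latest version
    mode: ro_mount
```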
### Job outputs
| Key | Type | Description | Allowed values | Default value |
-|`type`| string | The type of job output. For the default `uri_folder` type, the output will correspond to a folder. |`uri_folder`|`uri_folder`|
+|`type`| string | The type of job output. For the default `uri_folder` type, the output will correspond to a folder. |`uri_file`, `uri_folder`, `mltable`, `mlflow_model`|`uri_folder`|
|`mode`| string | Mode of how output file(s) will get delivered to the destination storage. For read-write mount mode (`rw_mount`) the output directory will be a mounted directory. For upload mode the file(s) written will get uploaded at the end of the job. |`rw_mount`, `upload`|`rw_mount`|
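A corresponding sketch of an `outputs` section, with a placeholder output name:

```yaml
# Hypothetical `outputs` section; `model_dir` is a placeholder name.
outputs:
  model_dir:
    type: uri_folder   # default type
    mode: rw_mount     # mounted read-write; files appear in storage as they are written
```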
## Remarks
@@ -98,3 +97,4 @@ Examples are available in the [examples GitHub repository](https://github.com/Az
## Next steps
- [Install and use the CLI (v2)](how-to-configure-cli.md)
+- [Create ML pipelines using components](how-to-create-component-pipelines-cli.md)
-|`type`| string | The type of job input. Specify `uri_file` for input data that points to a single file source, or `uri_folder` for input data that points to a folder source. |`uri_file`, `uri_folder`|`uri_folder`|
+|`type`| string | The type of job input. Specify `uri_file` for input data that points to a single file source, or `uri_folder` for input data that points to a folder source. [Learn more about data access.](concept-data.md) |`uri_file`, `uri_folder`, `mltable`, `mlflow_model`|`uri_folder`|
|`path`| string | The path to the data to use as input. This can be specified in a few ways: <br><br> - A local path to the data source file or folder, e.g. `path: ./iris.csv`. The data will get uploaded during job submission. <br><br> - A URI of a cloud path to the file or folder to use as the input. Supported URI types are `azureml`, `https`, `wasbs`, `abfss`, `adl`. See [Core yaml syntax](reference-yaml-core-syntax.md) for more information on how to use the `azureml://` URI format. <br><br> - An existing registered Azure ML data asset to use as the input. To reference a registered data asset use the `azureml:<data_name>:<data_version>` syntax or `azureml:<data_name>@latest` (to reference the latest version of that data asset), e.g. `path: azureml:cifar10-data:1` or `path: azureml:cifar10-data@latest`. |||
|`mode`| string | Mode of how the data should be delivered to the compute target. <br><br> For read-only mount (`ro_mount`), the data will be consumed as a mount path. A folder will be mounted as a folder and a file will be mounted as a file. Azure ML will resolve the input to the mount path. <br><br> For `download` mode the data will be downloaded to the compute target. Azure ML will resolve the input to the downloaded path. <br><br> If you only want the URL of the storage location of the data artifact(s) rather than mounting or downloading the data itself, you can use the `direct` mode. This will pass in the URL of the storage location as the job input. Note that in this case you are fully responsible for handling credentials to access the storage. |`ro_mount`, `download`, `direct`|`ro_mount`|
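A hedged sketch of an input that passes the storage URL straight through using `direct` mode (the input name and URL are placeholders):

```yaml
# Hypothetical input in `direct` mode; the URL below is a placeholder.
inputs:
  raw_blob:
    type: uri_file
    path: https://<account>.blob.core.windows.net/<container>/data.csv
    mode: direct   # the job receives the URL; you handle storage credentials yourself
```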
### Job outputs
| Key | Type | Description | Allowed values | Default value |
-|`type`| string | The type of job output. For the default `uri_folder` type, the output will correspond to a folder. |`uri_folder`|`uri_folder`|
+|`type`| string | The type of job output. For the default `uri_folder` type, the output will correspond to a folder. |`uri_file`, `uri_folder`, `mltable`, `mlflow_model`|`uri_folder`|
|`mode`| string | Mode of how output file(s) will get delivered to the destination storage. For read-write mount mode (`rw_mount`) the output directory will be a mounted directory. For upload mode the file(s) written will get uploaded at the end of the job. |`rw_mount`, `upload`|`rw_mount`|
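And a sketch of an output delivered with `upload` mode (the output name is a placeholder):

```yaml
# Hypothetical output in `upload` mode; `eval_results` is a placeholder name.
outputs:
  eval_results:
    type: uri_folder
    mode: upload   # files written locally are uploaded to storage when the job completes
```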