* `name` - A user-friendly name for this pipeline. The name can be used to identify pipeline jobs in the UI.
* `storage` - A location on DBFS or cloud storage where output data and metadata required for pipeline execution are stored. By default, tables are stored in a subdirectory of this location.
* `configuration` - An optional list of values to apply to the entire pipeline. Elements must be formatted as `key: value` pairs.
* `library` block - An array of notebooks containing the pipeline code and required artifacts. The syntax resembles the [library](cluster.md#library-configuration-block) configuration block, with the addition of a special `notebook` library type.
* `cluster` block - An array of specifications for the [clusters](cluster.md) that run the pipeline. If not specified, a default cluster configuration is selected automatically for the pipeline.
* `continuous` - A flag indicating whether to run the pipeline continuously. The default value is `false`.
* `target` - The name of a database for persisting pipeline output data. Configuring the `target` setting allows you to view and query the pipeline output data from the Databricks UI.
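Putting the arguments above together, a minimal sketch of a pipeline resource might look like the following. The resource name, notebook path, storage location, and configuration values are placeholders, not values from this document:

```hcl
resource "databricks_pipeline" "this" {
  name    = "Example Pipeline"           # user-friendly name shown in the UI
  storage = "/test/first-pipeline"       # hypothetical DBFS location for output data and metadata

  # optional key: value pairs applied to the entire pipeline
  configuration = {
    key1 = "value1"
    key2 = "value2"
  }

  # notebook containing the pipeline code (hypothetical path)
  library {
    notebook {
      path = "/Users/someone@example.com/pipeline-notebook"
    }
  }

  # cluster specification; omit to let the pipeline pick a default configuration
  cluster {
    label       = "default"
    num_workers = 2
  }

  continuous = false
  target     = "pipeline_output_db"      # database for persisting pipeline output
}
```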
## Import
The resource pipeline can be imported using the ID of the pipeline:
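For example, with a placeholder resource name and pipeline ID:

```bash
terraform import databricks_pipeline.this <pipeline-id>
```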