CHANGELOG.md: 7 additions & 0 deletions
@@ -6,6 +6,13 @@
* Added support for preloading of Docker images into instance pools ([#663](https://github.com/databrickslabs/terraform-provider-databricks/issues/663))
* Added the `databricks_user` data source ([#648](https://github.com/databrickslabs/terraform-provider-databricks/pull/648))
* Fixed support for `spot_instance_policy` in SQLA Endpoints ([#665](https://github.com/databrickslabs/terraform-provider-databricks/issues/665))
+ * Added documentation for `databricks_pipeline` resource ([#673](https://github.com/databrickslabs/terraform-provider-databricks/pull/673))
+ * Made preview environment tests run on a release basis
+
+ Updated dependency versions:
+
+ * Bump github.com/zclconf/go-cty from 1.8.2 to 1.8.3
+ * Bump github.com/aws/aws-sdk-go from 1.38.30 to 1.38.51
docs/resources/pipeline.md: 2 additions & 2 deletions
@@ -54,8 +54,8 @@ The following arguments are required:
* `name` - A user-friendly name for this pipeline. The name can be used to identify pipeline jobs in the UI.
* `storage` - A location on DBFS or cloud storage where output data and metadata required for pipeline execution are stored. By default, tables are stored in a subdirectory of this location.
* `configuration` - An optional list of values to apply to the entire pipeline. Elements must be formatted as key:value pairs.
- * `library` block - An array of notebooks containing the pipeline code and required artifacts. Syntax resembles [library](cluster.md#library-configuration-block) configuration block with the addition of special `notebook` type of library.
- * `cluster` block - An array of specifications for the [clusters](cluster.md) to run the pipeline. If this is not specified, pipelines will automatically select a default cluster configuration for the pipeline.
+ * `library` blocks - Specifies pipeline code and required artifacts. Syntax resembles [library](cluster.md#library-configuration-block) configuration block with the addition of a special `notebook` type of library.
+ * `cluster` blocks - [Clusters](cluster.md) to run the pipeline. If none is specified, pipelines will automatically select a default cluster configuration for the pipeline.
* `continuous` - A flag indicating whether to run the pipeline continuously. The default value is `false`.
* `target` - The name of a database for persisting pipeline output data. Configuring the target setting allows you to view and query the pipeline output data from the Databricks UI.
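For context, the arguments documented above fit together roughly as in the minimal `databricks_pipeline` sketch below. The notebook path, cluster sizing, configuration key, and target database name are illustrative placeholders, not taken from the PR, and the attributes inside the `cluster` block are assumed to follow the [clusters](cluster.md) resource syntax.

```hcl
resource "databricks_pipeline" "this" {
  name    = "Example DLT Pipeline"      # user-friendly name shown in the UI
  storage = "/mnt/dlt/example-pipeline" # DBFS/cloud location for output data and metadata
  target  = "example_pipeline_db"       # database for persisting pipeline output

  # Optional key:value pairs applied to the entire pipeline (illustrative key).
  configuration = {
    "pipelines.trigger.interval" = "1 hour"
  }

  # Cluster block; omit to let the pipeline pick a default cluster configuration.
  cluster {
    label       = "default"
    num_workers = 2
  }

  # Library block using the special notebook type to hold the pipeline code.
  library {
    notebook {
      path = "/Shared/dlt/example_notebook"
    }
  }

  continuous = false # run triggered rather than continuously
}
```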