# [Fix] Populate partitions when reading databricks_sql_table (#4486)
## Changes

- UC returns `partition_index` in the column struct, so we can calculate the `partitions` value from it.
- Resolves #3980.
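
Unity Catalog reports a `partition_index` on each partition column, and this change rebuilds the `partitions` list from those indices when the resource is read. As a minimal sketch of the effect (all identifiers invented), a table like the following no longer shows spurious drift on `partitions` after `terraform import`, since the read path previously left the attribute empty:

```hcl
resource "databricks_sql_table" "events" {
  catalog_name       = "main"
  schema_name        = "default"
  name               = "events"
  table_type         = "EXTERNAL"
  data_source_format = "DELTA"
  storage_location   = "s3://example-bucket/events" # hypothetical location

  # Read now derives this list from the partition_index values
  # Unity Catalog returns in each column struct.
  partitions = ["event_date"]

  column {
    name = "event_id"
    type = "string"
  }
  column {
    name = "event_date"
    type = "date"
  }
}
```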
## Tests
- [x] `make test` run locally
- [x] relevant change in `docs/` folder
- [x] using Go SDK
---------
Co-authored-by: Alex Ott <[email protected]>
### NEXT_CHANGELOG.md (1 addition, 0 deletions)

```diff
@@ -10,6 +10,7 @@
 
 ### Bug Fixes
 
+* Populate `partitions` when reading `databricks_sql_table` ([#4486](https://github.com/databricks/terraform-provider-databricks/pull/4486)).
 * Fix configuration drift when configuring `databricks_connection` to builtin Hive Metastore ([#4505](https://github.com/databricks/terraform-provider-databricks/pull/4505)).
 * Only allow `authorized_paths` to be updated in the `options` field of `databricks_catalog` ([#4517](https://github.com/databricks/terraform-provider-databricks/pull/4517)).
 * Mark `default_catalog_name` attribute in `databricks_metastore_assignment` as deprecated ([#4522](https://github.com/databricks/terraform-provider-databricks/pull/4522))
```
### docs/resources/sql_table.md (15 additions, 12 deletions)

```diff
@@ -9,7 +9,7 @@ A `databricks_sql_table` is contained within [databricks_schema](schema.md), and
 
 This resource creates and updates the Unity Catalog table/view by executing the necessary SQL queries on a special auto-terminating cluster it would create for this operation. You could also specify a SQL warehouse or cluster for the queries to be executed on.
 
-~> This resource doesn't handle complex cases of schema evolution due to the limitations of Terraform itself. If you need to implement schema evolution it's recommended to use specialized tools, such as, [Luquibase](https://medium.com/dbsql-sme-engineering/advanced-schema-management-on-databricks-with-liquibase-1900e9f7b9c0) and [Flyway](https://medium.com/dbsql-sme-engineering/databricks-schema-management-with-flyway-527c4a9f5d67).
+~> This resource doesn't handle complex cases of schema evolution due to the limitations of Terraform itself. If you need to implement schema evolution it's recommended to use specialized tools, such as [Liquibase](https://medium.com/dbsql-sme-engineering/advanced-schema-management-on-databricks-with-liquibase-1900e9f7b9c0) and [Flyway](https://medium.com/dbsql-sme-engineering/databricks-schema-management-with-flyway-527c4a9f5d67).
 
 ## Example Usage
 
```
```diff
@@ -162,15 +162,15 @@ The following arguments are supported:
 * `storage_location` - (Optional) URL of storage location for Table data (required for EXTERNAL Tables). Not supported for `VIEW` or `MANAGED` table_type.
 * `data_source_format` - (Optional) External tables are supported in multiple data source formats. The string constants identifying these formats are `DELTA`, `CSV`, `JSON`, `AVRO`, `PARQUET`, `ORC`, and `TEXT`. Change forces the creation of a new resource. Not supported for `MANAGED` tables or `VIEW`.
 * `view_definition` - (Optional) SQL text defining the view (for `table_type == "VIEW"`). Not supported for `MANAGED` or `EXTERNAL` table_type.
-* `cluster_id` - (Optional) All table CRUD operations must be executed on a running cluster or SQL warehouse. If a cluster_id is specified, it will be used to execute SQL commands to manage this table. If empty, a cluster will be created automatically with the name `terraform-sql-table`.
+* `cluster_id` - (Optional) All table CRUD operations must be executed on a running cluster or SQL warehouse. If a cluster_id is specified, it will be used to execute SQL commands to manage this table. If empty, a cluster will be created automatically with the name `terraform-sql-table`. Conflicts with `warehouse_id`.
 * `warehouse_id` - (Optional) All table CRUD operations must be executed on a running cluster or SQL warehouse. If a `warehouse_id` is specified, that SQL warehouse will be used to execute SQL commands to manage this table. Conflicts with `cluster_id`.
 * `cluster_keys` - (Optional) a subset of columns to liquid cluster the table by. Conflicts with `partitions`.
+* `partitions` - (Optional) a subset of columns to partition the table by. Change forces the creation of a new resource. Conflicts with `cluster_keys`.
 * `storage_credential_name` - (Optional) For EXTERNAL Tables only: the name of storage credential to use. Change forces the creation of a new resource.
-* `owner` - (Optional) User name/group name/sp application_id of the schema owner.
+* `owner` - (Optional) User name/group name/sp application_id of the table owner.
 * `comment` - (Optional) User-supplied free-form text. Changing the comment is not currently supported on the `VIEW` table type.
 * `options` - (Optional) Map of user defined table options. Change forces creation of a new resource.
 * `properties` - (Optional) A map of table properties.
-* `partitions` - (Optional) a subset of columns to partition the table by. Change forces the creation of a new resource. Conflicts with `cluster_keys`. Change forces creation of a new resource.
 
 ### `column` configuration block
 
```
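
For illustration, a minimal configuration exercising the arguments above (identifiers are hypothetical). Note the two mutually exclusive pairs: `cluster_id`/`warehouse_id` and `cluster_keys`/`partitions`:

```hcl
resource "databricks_sql_table" "sales" {
  catalog_name = "main"
  schema_name  = "default"
  name         = "sales"
  table_type   = "MANAGED"
  owner        = "data-engineers" # group name owning the table

  # Run the SQL statements on a SQL warehouse; conflicts with cluster_id.
  warehouse_id = databricks_sql_endpoint.this.id

  # Partition by region; conflicts with cluster_keys, and changing it
  # forces the creation of a new resource.
  partitions = ["region"]

  column {
    name = "region"
    type = "string"
  }
  column {
    name = "amount"
    type = "decimal(10,2)"
  }
}
```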
```diff
@@ -191,7 +191,7 @@ In addition to all the arguments above, the following attributes are exported:
 The `databricks_table` resource has been deprecated in favor of `databricks_sql_table`. To migrate from `databricks_table` to `databricks_sql_table`:
+
 1. Define a `databricks_sql_table` resource with arguments corresponding to `databricks_table`.
 2. Add a `removed` block to remove the `databricks_table` resource without deleting the existing table by using the `lifecycle` block. If you're using a Terraform version below v1.7.0, you will need to use the `terraform state rm` command instead.
 3. Add an `import` block to add the `databricks_sql_table` resource, corresponding to the existing table. If you're using a Terraform version below v1.5.0, you will need to use the `terraform import` command instead.
 
 For example, suppose we have the following `databricks_table` resource:
```
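
A sketch of the three migration steps for a hypothetical managed table `main.default.quickstart_table` (all identifiers invented; the `import` block needs Terraform v1.5.0+ and the `removed` block v1.7.0+):

```hcl
# Step 1: define the replacement resource, mirroring the existing table.
resource "databricks_sql_table" "quickstart" {
  catalog_name       = "main"
  schema_name        = "default"
  name               = "quickstart_table"
  table_type         = "MANAGED"
  data_source_format = "DELTA"

  column {
    name = "id"
    type = "bigint"
  }
}

# Step 2: drop databricks_table from state without deleting the actual table.
removed {
  from = databricks_table.quickstart

  lifecycle {
    destroy = false
  }
}

# Step 3: import the existing table into the new resource.
import {
  to = databricks_sql_table.quickstart
  id = "main.default.quickstart_table"
}
```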