Commit 01efc82

Document enable_serverless_compute API changes in databricks_sql_endpoint resource (#2137)
* Fix "sql endpoint" in English descriptions (not the code itself) to say "SQL warehouse", which is Databricks' current name for them.
* Change to lowercase "serverless", per the Marketing team's guidance for docs usage in these types of cases.
* Remove GlobalConfig configuring of the workspace config for serverless.
* Make the enablement info accurate for AWS only (not "enable for workspace", since we don't do that as of GA). Azure doesn't need special terms-of-use flags at the account level:

  ```
  If your account was created before October 1, 2021, your organization's owner or account administrator must [accept applicable terms of use](https://docs.databricks.com/sql/admin/serverless.html#accept-terms) before workspaces are enabled for serverless compute. A workspace must meet the [requirements](https://docs.databricks.com/sql/admin/serverless.html#requirements) and might require an update to its instance profile role to [add a trust relationship](https://docs.databricks.com/sql/admin/serverless.html#aws-instance-profile-setup). For Azure, you must [enable your workspace for serverless SQL warehouse](https://learn.microsoft.com/azure/databricks/sql/admin/serverless).
  ```

* Update AWS-specific instructions now that "auto enablement at account level" is part of GA:

  ```
  For AWS, Databricks strongly recommends that you always explicitly set this field. If omitted, the default is false for most workspaces. However, if this workspace used the SQL Warehouses API to create a warehouse between September 1, 2022 and April 6, 2023, the default remains the previous behavior, which is true if the workspace is enabled for serverless and fits the requirements for serverless SQL warehouses. To avoid ambiguity, especially for organizations with many workspaces, Databricks recommends that you always set this field.
  ```

* Update some headings to be consistent with Databricks heading-style capitalization (though I wasn't sure whether "Data Source" needs to remain as-is, so I left it be).
* For Azure, specify how the default value works: if serverless SQL warehouses are disabled for the workspace, the default is `false`; if they are enabled, the default is `true`.
* In `exporter_test.go`, remove EnableServerlessCompute from sql.GlobalConfigForRead.
1 parent c951108 commit 01efc82

File tree

5 files changed: +33 −25 lines

docs/data-sources/sql_warehouse.md

Lines changed: 11 additions & 6 deletions

````diff
@@ -7,7 +7,7 @@ subcategory: "Databricks SQL"
 
 Retrieves information about a [databricks_sql_warehouse](../resources/sql_warehouse.md) using its id. This could be retrieved programmatically using [databricks_sql_warehouses](../data-sources/sql_warehouses.md) data source.
 
-## Example Usage
+## Example usage
 
 Retrieve attributes of each SQL warehouses in a workspace
 
@@ -22,11 +22,11 @@ data "databricks_sql_warehouse" "all" {
 }
 ```
 
-## Argument Reference
+## Argument reference
 
-* `id` - (Required) The id of the SQL warehouse
+* `id` - (Required) The ID of the SQL warehouse
 
-## Attribute Reference
+## Attribute reference
 
 This data source exports the following attributes:
@@ -38,14 +38,19 @@ This data source exports the following attributes:
 * `tags` - Databricks tags all warehouse resources with these tags.
 * `spot_instance_policy` - The spot policy to use for allocating instances to clusters: `COST_OPTIMIZED` or `RELIABILITY_OPTIMIZED`.
 * `enable_photon` - Whether to enable [Photon](https://databricks.com/product/delta-engine).
-* `enable_serverless_compute` - Whether this SQL warehouse is a Serverless warehouse. To use a Serverless SQL warehouse, you must enable Serverless SQL warehouses for the workspace.
+* `enable_serverless_compute` - Whether this SQL warehouse is a serverless SQL warehouse. If this value is true, either explicitly or through the default, you **must** also set the `warehouse_type` field to `pro`.
+
+  - **For AWS**: If your account needs updated [terms of use](https://docs.databricks.com/sql/admin/serverless.html#accept-terms), workspace admins are prompted in the Databricks SQL UI. A workspace must meet the [requirements](https://docs.databricks.com/sql/admin/serverless.html#requirements) and might require an update to its instance profile role to [add a trust relationship](https://docs.databricks.com/sql/admin/serverless.html#aws-instance-profile-setup).
+
+  - **For Azure**, you must [enable your workspace for serverless SQL warehouse](https://learn.microsoft.com/azure/databricks/sql/admin/serverless).
+* `warehouse_type` - SQL warehouse type. See the documentation for [AWS](https://docs.databricks.com/sql/index.html#warehouse-types) or [Azure](https://learn.microsoft.com/azure/databricks/sql/#warehouse-types). Set to `PRO` or `CLASSIC` (default). If you want to use serverless compute, you must set this to `PRO` and **also** set the field `enable_serverless_compute` to `true`.
 * `channel` block, consisting of following fields:
 * `name` - Name of the Databricks SQL release channel. Possible values are: `CHANNEL_NAME_PREVIEW` and `CHANNEL_NAME_CURRENT`. Default is `CHANNEL_NAME_CURRENT`.
 * `jdbc_url` - JDBC connection string.
 * `odbc_params` - ODBC connection params: `odbc_params.hostname`, `odbc_params.path`, `odbc_params.protocol`, and `odbc_params.port`.
 * `data_source_id` - ID of the data source for this warehouse. This is used to bind an Databricks SQL query to an warehouse.
 
-## Related Resources
+## Related resources
 
 The following resources are often used in the same context:
 
````

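As a quick illustration of the data-source attributes documented above, a lookup might look like the following sketch (the warehouse ID is a placeholder):

```hcl
# Hypothetical lookup: resolve an existing SQL warehouse by ID and
# surface whether it runs on serverless compute.
data "databricks_sql_warehouse" "example" {
  id = "1234567890abcdef" # placeholder warehouse ID
}

output "is_serverless" {
  # Reflects the enable_serverless_compute attribute of the warehouse.
  value = data.databricks_sql_warehouse.example.enable_serverless_compute
}

output "warehouse_type" {
  # Per the docs above, a serverless warehouse reports the PRO type.
  value = data.databricks_sql_warehouse.example.warehouse_type
}
```

This pairs the two attributes the change keeps emphasizing: serverless compute implies the `PRO` warehouse type.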
docs/index.md

Lines changed: 1 addition & 1 deletion

````diff
@@ -49,7 +49,7 @@ Databricks SQL
 * Create [databricks_sql_endpoint](resources/sql_endpoint.md) controlled by [databricks_permissions](resources/permissions.md).
 * Manage [queries](resources/sql_query.md) and their [visualizations](resources/sql_visualization.md).
 * Manage [dashboards](resources/sql_dashboard.md) and their [widgets](resources/sql_widget.md).
-* Provide [global configuration for all SQL Endpoints](docs/resources/sql_global_config.md)
+* Provide [global configuration for all SQL warehouses](docs/resources/sql_global_config.md)
 
 MLFlow
 
````

docs/resources/permissions.md

Lines changed: 3 additions & 3 deletions

````diff
@@ -527,9 +527,9 @@ resource "databricks_permissions" "token_usage" {
 }
 ```
 
-## SQL Endpoint Usage
+## SQL warehouse usage
 
-[SQL endpoints](https://docs.databricks.com/sql/user/security/access-control/sql-endpoint-acl.html) have two possible permissions: `CAN_USE` and `CAN_MANAGE`:
+[SQL warehouses](https://docs.databricks.com/sql/user/security/access-control/sql-endpoint-acl.html) have two possible permissions: `CAN_USE` and `CAN_MANAGE`:
 
 ```hcl
 data "databricks_current_user" "me" {}
@@ -693,7 +693,7 @@ Exactly one of the following arguments is required:
 - `experiment_id` - [MLflow experiment](mlflow_experiment.md) id
 - `registered_model_id` - [MLflow registered model](mlflow_model.md) id
 - `authorization` - either [`tokens`](https://docs.databricks.com/administration-guide/access-control/tokens.html) or [`passwords`](https://docs.databricks.com/administration-guide/users-groups/single-sign-on/index.html#configure-password-permission).
-- `sql_endpoint_id` - [SQL endpoint](sql_endpoint.md) id
+- `sql_endpoint_id` - [SQL warehouse](sql_endpoint.md) id
 - `sql_dashboard_id` - [SQL dashboard](sql_dashboard.md) id
 - `sql_query_id` - [SQL query](sql_query.md) id
 - `sql_alert_id` - [SQL alert](https://docs.databricks.com/sql/user/security/access-control/alert-acl.html) id
````

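A minimal sketch of the `CAN_USE` grant this section documents (the group name and warehouse reference are illustrative):

```hcl
# Hypothetical grant: let the "analysts" group run queries on a warehouse
# without managing it; CAN_MANAGE would additionally allow edits and deletion.
resource "databricks_permissions" "warehouse_usage" {
  sql_endpoint_id = databricks_sql_endpoint.this.id

  access_control {
    group_name       = "analysts"
    permission_level = "CAN_USE"
  }
}
```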
docs/resources/sql_endpoint.md

Lines changed: 18 additions & 14 deletions

````diff
@@ -3,7 +3,7 @@ subcategory: "Databricks SQL"
 ---
 # databricks_sql_endpoint Resource
 
-This resource is used to manage [Databricks SQL Endpoints](https://docs.databricks.com/sql/admin/sql-endpoints.html). To create [SQL endpoints](https://docs.databricks.com/sql/get-started/concepts.html) you must have `databricks_sql_access` on your [databricks_group](group.md#databricks_sql_access) or [databricks_user](user.md#databricks_sql_access).
+This resource is used to manage [Databricks SQL warehouses](https://docs.databricks.com/sql/admin/sql-endpoints.html). To create [SQL warehouses](https://docs.databricks.com/sql/get-started/concepts.html) you must have `databricks_sql_access` on your [databricks_group](group.md#databricks_sql_access) or [databricks_user](user.md#databricks_sql_access).
 
 ## Example usage
 
@@ -24,39 +24,43 @@ resource "databricks_sql_endpoint" "this" {
 }
 ```
 
-## Argument Reference
+## Argument reference
 
 The following arguments are supported:
 
-* `name` - (Required) Name of the SQL endpoint. Must be unique.
+* `name` - (Required) Name of the SQL warehouse. Must be unique.
 * `cluster_size` - (Required) The size of the clusters allocated to the endpoint: "2X-Small", "X-Small", "Small", "Medium", "Large", "X-Large", "2X-Large", "3X-Large", "4X-Large".
-* `min_num_clusters` - Minimum number of clusters available when a SQL endpoint is running. The default is `1`.
-* `max_num_clusters` - Maximum number of clusters available when a SQL endpoint is running. This field is required. If multi-cluster load balancing is not enabled, this is default to `1`.
-* `auto_stop_mins` - Time in minutes until an idle SQL endpoint terminates all clusters and stops. This field is optional. The default is 120, set to 0 to disable the auto stop.
+* `min_num_clusters` - Minimum number of clusters available when a SQL warehouse is running. The default is `1`.
+* `max_num_clusters` - Maximum number of clusters available when a SQL warehouse is running. This field is required. If multi-cluster load balancing is not enabled, this defaults to `1`.
+* `auto_stop_mins` - Time in minutes until an idle SQL warehouse terminates all clusters and stops. This field is optional. The default is 120; set to 0 to disable auto stop.
 * `tags` - Databricks tags all endpoint resources with these tags.
 * `spot_instance_policy` - The spot policy to use for allocating instances to clusters: `COST_OPTIMIZED` or `RELIABILITY_OPTIMIZED`. This field is optional. Default is `COST_OPTIMIZED`.
 * `enable_photon` - Whether to enable [Photon](https://databricks.com/product/delta-engine). This field is optional and is enabled by default.
-* `enable_serverless_compute` - Whether this SQL endpoint is a Serverless endpoint. To use a Serverless SQL endpoint, you must enable Serverless SQL endpoints for the workspace.
+* `enable_serverless_compute` - Whether this SQL warehouse is a serverless SQL warehouse. If this value is true, either explicitly or through the default, you **must** also set the `warehouse_type` field to `pro`.
+
+  - **For AWS**, Databricks strongly recommends that you always explicitly set this field. If omitted, the default is `false` for most workspaces. However, if this workspace used the SQL Warehouses API to create a warehouse between September 1, 2022 and April 30, 2023, the default remains the previous behavior, which is `true` if the workspace is enabled for serverless and fits the requirements for serverless SQL warehouses. To avoid ambiguity, especially for organizations with many workspaces, Databricks recommends that you always set this field. If your account needs updated [terms of use](https://docs.databricks.com/sql/admin/serverless.html#accept-terms), workspace admins are prompted in the Databricks SQL UI. A workspace must meet the [requirements](https://docs.databricks.com/sql/admin/serverless.html#requirements) and might require an update to its instance profile role to [add a trust relationship](https://docs.databricks.com/sql/admin/serverless.html#aws-instance-profile-setup).
+
+  - **For Azure**, you must [enable your workspace for serverless SQL warehouse](https://learn.microsoft.com/azure/databricks/sql/admin/serverless). If serverless SQL warehouses are disabled for the workspace, the default is `false`; if they are enabled, the default is `true`.
 * `channel` block, consisting of following fields:
 * `name` - Name of the Databricks SQL release channel. Possible values are: `CHANNEL_NAME_PREVIEW` and `CHANNEL_NAME_CURRENT`. Default is `CHANNEL_NAME_CURRENT`.
-* `warehouse_type` - [SQL Warehouse Type](https://docs.databricks.com/sql/admin/sql-endpoints.html#switch-the-sql-warehouse-type-pro-classic-or-serverless): `PRO` or `CLASSIC` (default). If Serverless SQL is enabled, you can only specify `PRO`.
-
-## Attribute Reference
+* `warehouse_type` - SQL warehouse type. See the documentation for [AWS](https://docs.databricks.com/sql/admin/sql-endpoints.html#switch-the-sql-warehouse-type-pro-classic-or-serverless) or [Azure](https://docs.databricks.com/sql/admin/sql-endpoints.html#switch-the-sql-warehouse-type-pro-classic-or-serverless). Set to `PRO` or `CLASSIC` (default). If you want to use serverless compute, you must set this to `PRO` and **also** set the field `enable_serverless_compute` to `true`.
+
+## Attribute reference
 
 In addition to all arguments above, the following attributes are exported:
 
 * `jdbc_url` - JDBC connection string.
 * `odbc_params` - ODBC connection params: `odbc_params.hostname`, `odbc_params.path`, `odbc_params.protocol`, and `odbc_params.port`.
 * `data_source_id` - ID of the data source for this endpoint. This is used to bind an Databricks SQL query to an endpoint.
 
-## Access Control
+## Access control
 
-* [databricks_permissions](permissions.md#Job-Endpoint-usage) can control which groups or individual users can *Can Use* or *Can Manage* SQL endpoints.
+* [databricks_permissions](permissions.md#Job-Endpoint-usage) can control which groups or individual users can *Can Use* or *Can Manage* SQL warehouses.
 * `databricks_sql_access` on [databricks_group](group.md#databricks_sql_access) or [databricks_user](user.md#databricks_sql_access).
 
 ## Timeouts
 
-The `timeouts` block allows you to specify `create` timeouts. It usually takes 10-20 minutes to provision a Databricks SQL endpoint.
+The `timeouts` block allows you to specify `create` timeouts. It usually takes 10-20 minutes to provision a Databricks SQL warehouse.
 
 ```hcl
 timeouts {
@@ -72,7 +76,7 @@ You can import a `databricks_sql_endpoint` resource with ID like the following:
 $ terraform import databricks_sql_endpoint.this <endpoint-id>
 ```
 
-## Related Resources
+## Related resources
 
 The following resources are often used in the same context:
 
````
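The pairing the new text insists on — serverless compute requires the `PRO` warehouse type — can be sketched as follows (name and sizing are placeholders):

```hcl
# Hypothetical serverless warehouse: enable_serverless_compute = true
# is only valid together with warehouse_type = "PRO".
resource "databricks_sql_endpoint" "serverless" {
  name             = "serverless-example"
  cluster_size     = "Small"
  max_num_clusters = 1
  auto_stop_mins   = 20

  warehouse_type            = "PRO"
  enable_serverless_compute = true
}
```

Setting `enable_serverless_compute = true` with the default `CLASSIC` warehouse type would be rejected, which is why the docs recommend always setting both fields explicitly.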
docs/resources/sql_global_config.md

Lines changed: 0 additions & 1 deletion

````diff
@@ -46,7 +46,6 @@ The following arguments are supported (see [documentation](https://docs.databric
 
 * `security_policy` (Optional, String) - The policy for controlling access to datasets. Default value: `DATA_ACCESS_CONTROL`, consult documentation for list of possible values
 * `data_access_config` (Optional, Map) - Data access configuration for [databricks_sql_endpoint](sql_endpoint.md), such as configuration for an external Hive metastore, Hadoop Filesystem configuration, etc. Please note that the list of supported configuration properties is limited, so refer to the [documentation](https://docs.databricks.com/sql/admin/data-access-configuration.html#supported-properties) for a full list. Apply will fail if you're specifying not permitted configuration.
-* `enable_serverless_compute` (optional, Boolean) - Allows the possibility to create Serverless SQL warehouses. Default value: false.
+* `instance_profile_arn` (Optional, String) - [databricks_instance_profile](instance_profile.md) used to access storage from [databricks_sql_endpoint](sql_endpoint.md). Please note that this parameter is only for AWS, and will generate an error if used on other clouds.
 * `sql_config_params` (Optional, Map) - SQL Configuration Parameters let you override the default behavior for all sessions with all endpoints.
 
````

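Since `enable_serverless_compute` is removed from the global config, the flag now lives on each warehouse. A migration sketch under that assumption (resource names and values are illustrative):

```hcl
# Global config no longer carries the serverless flag.
resource "databricks_sql_global_config" "this" {
  security_policy = "DATA_ACCESS_CONTROL"
  # enable_serverless_compute = true  # removed: set per-warehouse instead
}

# The flag is now declared on the individual warehouse, paired with PRO.
resource "databricks_sql_endpoint" "this" {
  name                      = "example"
  cluster_size              = "Small"
  warehouse_type            = "PRO"
  enable_serverless_compute = true
}
```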