Document enable_serverless_compute API changes in databricks_sql_endpoint resource (#2137)
* fix "sql endpoint" in English descriptions (not the code itself) to say "SQL warehouse" which is Databricks current name for them.
* change to lowercase serverless, per Marketing team for docs usage in these types of cases
* Remove GlobalConfig configuring of workspace config for serverless.
* Make more accurate enablement info for AWS only (not "enable for workspace" since we don't do that as of GA). Azure doesn't need special terms of use flags at account level :
```
If your account was created before October 1, 2021, your organization's owner or account administrator must [accept applicable terms of use](https://docs.databricks.com/sql/admin/serverless.html#accept-terms) before workspaces are enabled for serverless compute. A workspace must meet the [requirements](https://docs.databricks.com/sql/admin/serverless.html#requirements) and might require an update its instance profile role to [add a trust relationship](https://docs.databricks.com/sql/admin/serverless.html#aws-instance-profile-setup). For Azure, you must [enable your workspace for serverless SQL warehouse](https://learn.microsoft.com/azure/databricks/sql/admin/serverless).
```
* Update AWS specific instructions when we enable "auto enablement at account level" as part of GA
```
For AWS, Databricks strongly recommends that you always explicitly set this field. If omitted, the default is false for most workspaces. However, if this workspace used the SQL Warehouses API to create a warehouse between September 1, 2022 and April 6, 2023, the default remains the previous behavior which is default to true if the workspace is enabled for serverless and fits the requirements for serverless SQL warehouses. To avoid ambiguity, especially for organizations with many workspaces, Databricks recommends that you always set this field.
```
* update some headings to be consistent with Databricks heading style capitalization (though I wasn't sure about article "Data Source" whether that needs to remain, so I left it be)
* For Azure, specify how default value works: For Azure, if serverless SQL warehouses are disabled for the workspace, the default is `false`. If serverless SQL warehouses are enabled for the workspace, the default is `true`.
* In `exporter_test.go` remove EnableServerlessCompute from sql.GlobalConfigForRead
* fix "sql endpoint" in English descriptions (not the code itself) to say "SQL warehouse" which is Databricks current name for them.
* change to lowercase serverless, per Marketing team for docs usage in these types of cases
* Remove GlobalConfig configuring of workspace config for serverless.
* Make more accurate enablement info for AWS only (not "enable for workspace" since we don't do that as of GA). Azure doesn't need special terms of use flags at account level :
```
If your account was created before October 1, 2021, your organization's owner or account administrator must [accept applicable terms of use](https://docs.databricks.com/sql/admin/serverless.html#accept-terms) before workspaces are enabled for serverless compute. A workspace must meet the [requirements](https://docs.databricks.com/sql/admin/serverless.html#requirements) and might require an update its instance profile role to [add a trust relationship](https://docs.databricks.com/sql/admin/serverless.html#aws-instance-profile-setup). For Azure, you must [enable your workspace for serverless SQL warehouse](https://learn.microsoft.com/azure/databricks/sql/admin/serverless).
```
* Update AWS specific instructions when we enable "auto enablement at account level" as part of GA
```
For AWS, Databricks strongly recommends that you always explicitly set this field. If omitted, the default is false for most workspaces. However, if this workspace used the SQL Warehouses API to create a warehouse between September 1, 2022 and April 6, 2023, the default remains the previous behavior which is default to true if the workspace is enabled for serverless and fits the requirements for serverless SQL warehouses. To avoid ambiguity, especially for organizations with many workspaces, Databricks recommends that you always set this field.
```
* update some headings to be consistent with Databricks heading style capitalization (though I wasn't sure about article "Data Source" whether that needs to remain, so I left it be)
* For Azure, specify how default value works: For Azure, if serverless SQL warehouses are disabled for the workspace, the default is `false`. If serverless SQL warehouses are enabled for the workspace, the default is `true`.
* In `exporter_test.go` remove EnableServerlessCompute from sql.GlobalConfigForRead
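Taken together, the changes above mean serverless is now configured per warehouse rather than via the workspace-level global config. A minimal sketch of the recommended pattern (the resource name and sizing below are illustrative, not from this PR): set `enable_serverless_compute` explicitly and pair it with `warehouse_type = "PRO"`, which serverless compute requires.

```hcl
# Illustrative only: name, cluster_size, and max_num_clusters are placeholders.
resource "databricks_sql_endpoint" "this" {
  name             = "example-serverless-warehouse"
  cluster_size     = "Small"
  max_num_clusters = 1

  # Set explicitly to avoid depending on workspace-specific defaults,
  # and pair with PRO, which serverless compute requires.
  enable_serverless_compute = true
  warehouse_type            = "PRO"
}
```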
**docs/data-sources/sql_warehouse.md** (+11 −6)
````diff
@@ -7,7 +7,7 @@ subcategory: "Databricks SQL"
 
 Retrieves information about a [databricks_sql_warehouse](../resources/sql_warehouse.md) using its ID. This could be retrieved programmatically using the [databricks_sql_warehouses](../data-sources/sql_warehouses.md) data source.
 
-## Example Usage
+## Example usage
 
 Retrieve attributes of each SQL warehouse in a workspace:
 
@@ -22,11 +22,11 @@ data "databricks_sql_warehouse" "all" {
 ```
 
-## Argument Reference
+## Argument reference
 
-* `id` - (Required) The id of the SQL warehouse
+* `id` - (Required) The ID of the SQL warehouse
 
-## Attribute Reference
+## Attribute reference
 
 This data source exports the following attributes:
 
@@ -38,14 +38,19 @@ This data source exports the following attributes:
 * `tags` - Databricks tags all warehouse resources with these tags.
 * `spot_instance_policy` - The spot policy to use for allocating instances to clusters: `COST_OPTIMIZED` or `RELIABILITY_OPTIMIZED`.
 * `enable_photon` - Whether to enable [Photon](https://databricks.com/product/delta-engine).
-* `enable_serverless_compute` - Whether this SQL warehouse is a Serverless warehouse. To use a Serverless SQL warehouse, you must enable Serverless SQL warehouses for the workspace.
+* `enable_serverless_compute` - Whether this SQL warehouse is a serverless SQL warehouse. If this value is true, explicitly or through the default, you **must** also set the `warehouse_type` field to `pro`.
+
+  - **For AWS**: If your account needs updated [terms of use](https://docs.databricks.com/sql/admin/serverless.html#accept-terms), workspace admins are prompted in the Databricks SQL UI. A workspace must meet the [requirements](https://docs.databricks.com/sql/admin/serverless.html#requirements) and might require an update to its instance profile role to [add a trust relationship](https://docs.databricks.com/sql/admin/serverless.html#aws-instance-profile-setup).
+
+  - **For Azure**, you must [enable your workspace for serverless SQL warehouse](https://learn.microsoft.com/azure/databricks/sql/admin/serverless).
+* `warehouse_type` - SQL warehouse type. See the documentation for [AWS](https://docs.databricks.com/sql/index.html#warehouse-types) or [Azure](https://learn.microsoft.com/azure/databricks/sql/#warehouse-types). Set to `PRO` or `CLASSIC` (default). If you want to use serverless compute, you must set this to `PRO` and **also** set the field `enable_serverless_compute` to `true`.
 * `channel` block, consisting of the following fields:
   * `name` - Name of the Databricks SQL release channel. Possible values are: `CHANNEL_NAME_PREVIEW` and `CHANNEL_NAME_CURRENT`. Default is `CHANNEL_NAME_CURRENT`.
 * `jdbc_url` - JDBC connection string.
 * `odbc_params` - ODBC connection params: `odbc_params.hostname`, `odbc_params.path`, `odbc_params.protocol`, and `odbc_params.port`.
 * `data_source_id` - ID of the data source for this warehouse. This is used to bind a Databricks SQL query to a warehouse.
 
-## Related Resources
+## Related resources
 
 The following resources are often used in the same context:
````
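As a usage sketch of the data source (the warehouse ID below is a placeholder), reading the new attribute makes it easy to check whether an existing warehouse runs on serverless compute:

```hcl
# Placeholder warehouse ID; real IDs can be listed with the
# databricks_sql_warehouses data source.
data "databricks_sql_warehouse" "example" {
  id = "1234567890abcdef"
}

output "example_is_serverless" {
  value = data.databricks_sql_warehouse.example.enable_serverless_compute
}
```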
**docs/resources/permissions.md**

````diff
@@ -532,4 +532,4 @@
-[SQL endpoints](https://docs.databricks.com/sql/user/security/access-control/sql-endpoint-acl.html) have two possible permissions: `CAN_USE` and `CAN_MANAGE`:
+[SQL warehouses](https://docs.databricks.com/sql/user/security/access-control/sql-endpoint-acl.html) have two possible permissions: `CAN_USE` and `CAN_MANAGE`:
 
 ```hcl
 data "databricks_current_user" "me" {}
@@ -693,7 +693,7 @@ Exactly one of the following arguments is required:
 * `experiment_id` - [MLflow experiment](mlflow_experiment.md) id
 * `registered_model_id` - [MLflow registered model](mlflow_model.md) id
 * `authorization` - either [`tokens`](https://docs.databricks.com/administration-guide/access-control/tokens.html) or [`passwords`](https://docs.databricks.com/administration-guide/users-groups/single-sign-on/index.html#configure-password-permission).
-* `sql_endpoint_id` - [SQL endpoint](sql_endpoint.md) id
+* `sql_endpoint_id` - [SQL warehouse](sql_endpoint.md) id
 * `sql_dashboard_id` - [SQL dashboard](sql_dashboard.md) id
 * `sql_query_id` - [SQL query](sql_query.md) id
 * `sql_alert_id` - [SQL alert](https://docs.databricks.com/sql/user/security/access-control/alert-acl.html) id
````
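To illustrate the ACL above, here is a hedged sketch (warehouse name and sizing are invented for this example) that grants the built-in `users` group `CAN_USE` on a warehouse via `sql_endpoint_id`:

```hcl
data "databricks_current_user" "me" {}

resource "databricks_sql_endpoint" "shared" {
  name             = "Warehouse of ${data.databricks_current_user.me.alias}" # illustrative
  cluster_size     = "Small"
  max_num_clusters = 1
}

# CAN_USE lets members run queries against the warehouse;
# CAN_MANAGE would additionally allow changing its configuration.
resource "databricks_permissions" "warehouse_usage" {
  sql_endpoint_id = databricks_sql_endpoint.shared.id

  access_control {
    group_name       = "users"
    permission_level = "CAN_USE"
  }
}
```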
**docs/resources/sql_endpoint.md** (+18 −14)
````diff
@@ -3,7 +3,7 @@ subcategory: "Databricks SQL"
 ---
 # databricks_sql_endpoint Resource
 
-This resource is used to manage [Databricks SQL Endpoints](https://docs.databricks.com/sql/admin/sql-endpoints.html). To create [SQL endpoints](https://docs.databricks.com/sql/get-started/concepts.html) you must have `databricks_sql_access` on your [databricks_group](group.md#databricks_sql_access) or [databricks_user](user.md#databricks_sql_access).
+This resource is used to manage [Databricks SQL warehouses](https://docs.databricks.com/sql/admin/sql-endpoints.html). To create [SQL warehouses](https://docs.databricks.com/sql/get-started/concepts.html) you must have `databricks_sql_access` on your [databricks_group](group.md#databricks_sql_access) or [databricks_user](user.md#databricks_sql_access).
@@ -31,32 +31,36 @@
-* `name` - (Required) Name of the SQL endpoint. Must be unique.
+* `name` - (Required) Name of the SQL warehouse. Must be unique.
 * `cluster_size` - (Required) The size of the clusters allocated to the endpoint: "2X-Small", "X-Small", "Small", "Medium", "Large", "X-Large", "2X-Large", "3X-Large", "4X-Large".
-* `min_num_clusters` - Minimum number of clusters available when a SQL endpoint is running. The default is `1`.
-* `max_num_clusters` - Maximum number of clusters available when a SQL endpoint is running. This field is required. If multi-cluster load balancing is not enabled, this defaults to `1`.
-* `auto_stop_mins` - Time in minutes until an idle SQL endpoint terminates all clusters and stops. This field is optional. The default is 120; set to 0 to disable auto stop.
+* `min_num_clusters` - Minimum number of clusters available when a SQL warehouse is running. The default is `1`.
+* `max_num_clusters` - Maximum number of clusters available when a SQL warehouse is running. This field is required. If multi-cluster load balancing is not enabled, this defaults to `1`.
+* `auto_stop_mins` - Time in minutes until an idle SQL warehouse terminates all clusters and stops. This field is optional. The default is 120; set to 0 to disable auto stop.
 * `tags` - Databricks tags all endpoint resources with these tags.
 * `spot_instance_policy` - The spot policy to use for allocating instances to clusters: `COST_OPTIMIZED` or `RELIABILITY_OPTIMIZED`. This field is optional. Default is `COST_OPTIMIZED`.
 * `enable_photon` - Whether to enable [Photon](https://databricks.com/product/delta-engine). This field is optional and is enabled by default.
-* `enable_serverless_compute` - Whether this SQL endpoint is a Serverless endpoint. To use a Serverless SQL endpoint, you must enable Serverless SQL endpoints for the workspace.
+* `enable_serverless_compute` - Whether this SQL warehouse is a serverless endpoint. If this value is true, explicitly or through the default, you **must** also set the `warehouse_type` field to `pro`.
+
+  - **For AWS**, Databricks strongly recommends that you always explicitly set this field. If omitted, the default is false for most workspaces. However, if this workspace used the SQL Warehouses API to create a warehouse between September 1, 2022 and April 30, 2023, the default remains the previous behavior, which is to default to true if the workspace is enabled for serverless and meets the requirements for serverless SQL warehouses. To avoid ambiguity, especially for organizations with many workspaces, Databricks recommends that you always set this field. If your account needs updated [terms of use](https://docs.databricks.com/sql/admin/serverless.html#accept-terms), workspace admins are prompted in the Databricks SQL UI. A workspace must meet the [requirements](https://docs.databricks.com/sql/admin/serverless.html#requirements) and might require an update to its instance profile role to [add a trust relationship](https://docs.databricks.com/sql/admin/serverless.html#aws-instance-profile-setup).
+
+  - **For Azure**, you must [enable your workspace for serverless SQL warehouse](https://learn.microsoft.com/azure/databricks/sql/admin/serverless). If serverless SQL warehouses are disabled for the workspace, the default is `false`; if they are enabled, the default is `true`.
 * `channel` block, consisting of the following fields:
   * `name` - Name of the Databricks SQL release channel. Possible values are: `CHANNEL_NAME_PREVIEW` and `CHANNEL_NAME_CURRENT`. Default is `CHANNEL_NAME_CURRENT`.
-* `warehouse_type` - [SQL Warehouse Type](https://docs.databricks.com/sql/admin/sql-endpoints.html#switch-the-sql-warehouse-type-pro-classic-or-serverless): `PRO` or `CLASSIC` (default). If Serverless SQL is enabled, you can only specify `PRO`.
-
-## Attribute Reference
+* `warehouse_type` - SQL warehouse type. See the documentation for [AWS](https://docs.databricks.com/sql/admin/sql-endpoints.html#switch-the-sql-warehouse-type-pro-classic-or-serverless) or [Azure](https://docs.databricks.com/sql/admin/sql-endpoints.html#switch-the-sql-warehouse-type-pro-classic-or-serverless). Set to `PRO` or `CLASSIC` (default). If you want to use serverless compute, you must set this to `PRO` and **also** set the field `enable_serverless_compute` to `true`.
+
+## Attribute reference
 
 In addition to all arguments above, the following attributes are exported:
 
 * `jdbc_url` - JDBC connection string.
 * `odbc_params` - ODBC connection params: `odbc_params.hostname`, `odbc_params.path`, `odbc_params.protocol`, and `odbc_params.port`.
 * `data_source_id` - ID of the data source for this endpoint. This is used to bind a Databricks SQL query to an endpoint.
 
-## Access Control
+## Access control
 
-* [databricks_permissions](permissions.md#Job-Endpoint-usage) can control which groups or individual users have *Can Use* or *Can Manage* permissions on SQL endpoints.
+* [databricks_permissions](permissions.md#Job-Endpoint-usage) can control which groups or individual users have *Can Use* or *Can Manage* permissions on SQL warehouses.
 * `databricks_sql_access` on [databricks_group](group.md#databricks_sql_access) or [databricks_user](user.md#databricks_sql_access).
 
 ## Timeouts
 
-The `timeouts` block allows you to specify `create` timeouts. It usually takes 10-20 minutes to provision a Databricks SQL endpoint.
+The `timeouts` block allows you to specify `create` timeouts. It usually takes 10-20 minutes to provision a Databricks SQL warehouse.
 
 ```hcl
 timeouts {
@@ -72,7 +76,7 @@ You can import a `databricks_sql_endpoint` resource with ID like the following:
````
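Given the note that provisioning usually takes 10-20 minutes, here is a sketch of extending the documented `create` timeout (the resource name, sizing, and timeout value are illustrative):

```hcl
resource "databricks_sql_endpoint" "slow" {
  name         = "analytics-warehouse" # illustrative
  cluster_size = "Medium"

  # Allow headroom beyond the typical 10-20 minute provisioning time.
  timeouts {
    create = "30m"
  }
}
```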
**docs/resources/sql_global_config.md** (−1)
```diff
@@ -46,7 +46,6 @@ The following arguments are supported (see [documentation](https://docs.databric
 * `security_policy` (Optional, String) - The policy for controlling access to datasets. Default value: `DATA_ACCESS_CONTROL`; consult the documentation for the list of possible values.
 * `data_access_config` (Optional, Map) - Data access configuration for [databricks_sql_endpoint](sql_endpoint.md), such as configuration for an external Hive metastore, Hadoop Filesystem configuration, etc. Please note that the list of supported configuration properties is limited, so refer to the [documentation](https://docs.databricks.com/sql/admin/data-access-configuration.html#supported-properties) for a full list. Apply will fail if you specify a configuration that is not permitted.
-* `enable_serverless_compute` (optional, Boolean) - Allows the possibility to create Serverless SQL warehouses. Default value: false.
 * `instance_profile_arn` (Optional, String) - [databricks_instance_profile](instance_profile.md) used to access storage from [databricks_sql_endpoint](sql_endpoint.md). Please note that this parameter is only for AWS, and will generate an error if used on other clouds.
 * `sql_config_params` (Optional, Map) - SQL Configuration Parameters let you override the default behavior for all sessions with all endpoints.
```
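After this removal, a `databricks_sql_global_config` block no longer carries `enable_serverless_compute`. A hedged sketch of what remains (the ARN and parameter values below are placeholders):

```hcl
resource "databricks_sql_global_config" "this" {
  security_policy = "DATA_ACCESS_CONTROL"

  # AWS only: instance profile used by warehouses to access storage.
  instance_profile_arn = "arn:aws:iam::123456789012:instance-profile/sql-access" # placeholder

  sql_config_params = {
    "ANSI_MODE" = "true"
  }

  # enable_serverless_compute is intentionally absent: serverless is now
  # configured per warehouse on databricks_sql_endpoint.
}
```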