Support serverless workspaces in databricks_mws_workspaces (#4670)
## Changes
A new field `compute_mode` has been added to
`databricks_mws_workspaces`. This can be set to `SERVERLESS` to indicate
that a workspace should be serverless. Serverless workspaces are
documented at
https://docs.databricks.com/aws/en/admin/workspace/serverless-workspaces.
These workspaces need neither `credentials_id` nor
`storage_configuration_id`, so setup should be much easier.
## Tests
Added an integration test for creating serverless workspaces in AWS.
## Todos
- [x] Need to verify if serverless workspaces work in GCP. EDIT: not yet
supported.
- [x] Need to verify which regions users can specify for serverless
workspaces. EDIT: this will be documented shortly. I'll include a link.
- [x] The test does not work right now, I think our E2 account still
needs to be onboarded. EDIT: our account was onboarded and we verified
that this works.
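For illustration, a serverless workspace configuration using the new field might look like the following. This is a minimal sketch based on the documented behavior; the resource name, variable, region, and provider alias are placeholders, not taken from the PR:

```hcl
# Minimal serverless workspace: compute_mode = "SERVERLESS" and
# no credentials_id or storage_configuration_id.
resource "databricks_mws_workspaces" "serverless" {
  provider       = databricks.mws            # account-level provider alias (illustrative)
  account_id     = var.databricks_account_id # placeholder
  workspace_name = "serverless-demo"         # placeholder
  aws_region     = "us-east-1"               # check the serverless workspaces docs for supported regions
  compute_mode   = "SERVERLESS"
}
```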
## Files changed

### NEXT_CHANGELOG.md (+1 −0)

```diff
@@ -6,6 +6,7 @@
 * Add support for `power_bi_task` in jobs ([#4647](https://github.com/databricks/terraform-provider-databricks/pull/4647))
 * Add support for `dashboard_task` in jobs ([#4646](https://github.com/databricks/terraform-provider-databricks/pull/4646))
+* Add `compute_mode` to `databricks_mws_workspaces` to support creating serverless workspaces ([#4670](https://github.com/databricks/terraform-provider-databricks/pull/4670)).
```
### docs/resources/mws_workspaces.md (+25 −7)

```diff
@@ -18,7 +18,22 @@ This resource allows you to set up [workspaces on AWS](https://docs.databricks.c
 ## Example Usage
 
-### Creating a Databricks on AWS workspace
+### Creating a serverless workspace in AWS
+
+Creating a serverless workspace does not require any prerequisite resources. Simply specify `compute_mode = "SERVERLESS"` when creating the workspace. Serverless workspaces must not include `credentials_id` or `storage_configuration_id`.
+
+To use serverless workspaces, you must enroll in the [Default Storage preview](https://docs.databricks.com/aws/en/storage/express-storage).
@@ ... @@
-By default, Databricks creates a VPC in your AWS account for each workspace. Databricks uses it for running clusters in the workspace. Optionally, you can use your VPC for the workspace, using the feature customer-managed VPC. Databricks recommends that you provide your VPC with [databricks_mws_networks](mws_networks.md) so that you can configure it according to your organization’s enterprise cloud standards while still conforming to Databricks requirements. You cannot migrate an existing workspace to your VPC. Please see the difference described through IAM policy actions [on this page](https://docs.databricks.com/administration-guide/account-api/iam-role.html).
+By default, Databricks creates a VPC in your AWS account for each workspace. Databricks uses it for running clusters in the workspace. Optionally, you can use your VPC for the workspace, using the feature customer-managed VPC. Databricks recommends that you provide your VPC with [databricks_mws_networks](mws_networks.md) so that you can configure it according to your organization's enterprise cloud standards while still conforming to Databricks requirements. You cannot migrate an existing workspace to your VPC. Please see the difference described through IAM policy actions [on this page](https://docs.databricks.com/administration-guide/account-api/iam-role.html).
```
```diff
@@ -209,7 +224,7 @@ output "databricks_token" {
 In order to create a [Databricks Workspace that leverages AWS PrivateLink](https://docs.databricks.com/administration-guide/cloud-configurations/aws/privatelink.html) please ensure that you have read and understood the [Enable Private Link](https://docs.databricks.com/administration-guide/cloud-configurations/aws/privatelink.html) documentation and then customise the example above with the relevant examples from [mws_vpc_endpoint](mws_vpc_endpoint.md), [mws_private_access_settings](mws_private_access_settings.md) and [mws_networks](mws_networks.md).
 
-### Creating a Databricks on GCP workspace
+### Creating a workspace on GCP
 
 To get workspace running, you have to configure a network object:
```
```diff
@@ -270,11 +285,11 @@ output "databricks_token" {
 In order to create a [Databricks Workspace that leverages GCP Private Service Connect](https://docs.gcp.databricks.com/administration-guide/cloud-configurations/gcp/private-service-connect.html) please ensure that you have read and understood the [Enable Private Service Connect](https://docs.gcp.databricks.com/administration-guide/cloud-configurations/gcp/private-service-connect.html) documentation and then customise the example above with the relevant examples from [mws_vpc_endpoint](mws_vpc_endpoint.md), [mws_private_access_settings](mws_private_access_settings.md) and [mws_networks](mws_networks.md).
 
-#### Creating a Databricks on GCP workspace with Databricks-Managed VPC
+#### Creating a workspace on GCP with Databricks-Managed VPC
 
-By default, Databricks creates a VPC in your GCP project for each workspace. Databricks uses it for running clusters in the workspace. Optionally, you can use your VPC for the workspace, using the feature customer-managed VPC. Databricks recommends that you provide your VPC with [databricks_mws_networks](mws_networks.md) so that you can configure it according to your organization’s enterprise cloud standards while still conforming to Databricks requirements. You cannot migrate an existing workspace to your VPC.
+By default, Databricks creates a VPC in your GCP project for each workspace. Databricks uses it for running clusters in the workspace. Optionally, you can use your VPC for the workspace, using the feature customer-managed VPC. Databricks recommends that you provide your VPC with [databricks_mws_networks](mws_networks.md) so that you can configure it according to your organization's enterprise cloud standards while still conforming to Databricks requirements. You cannot migrate an existing workspace to your VPC.
```
```diff
@@ -324,7 +339,8 @@ The following arguments are available:
 * `workspace_name` - name of the workspace, will appear on UI.
 * `network_id` - (Optional) `network_id` from [networks](mws_networks.md).
 * `aws_region` - (AWS only) region of VPC.
-* `storage_configuration_id` - (AWS only) `storage_configuration_id` from [storage configuration](mws_storage_configurations.md).
+* `storage_configuration_id` - (AWS only, Optional) `storage_configuration_id` from [storage configuration](mws_storage_configurations.md). This must not be specified when `compute_mode` is set to `SERVERLESS`.
+* `credentials_id` - (AWS only, Optional) `credentials_id` from [credentials](mws_credentials.md). This must not be specified when `compute_mode` is set to `SERVERLESS`.
 * `managed_services_customer_managed_key_id` - (Optional) `customer_managed_key_id` from [customer managed keys](mws_customer_managed_keys.md) with `use_cases` set to `MANAGED_SERVICES`. This is used to encrypt the workspace's notebook and secret data in the control plane.
 * `storage_customer_managed_key_id` - (Optional) `customer_managed_key_id` from [customer managed keys](mws_customer_managed_keys.md) with `use_cases` set to `STORAGE`. This is used to encrypt the DBFS Storage & Cluster Volumes.
 * `location` - (GCP only) region of the subnet.
```
```diff
@@ -337,6 +353,7 @@ The following arguments are available:
 * `private_access_settings_id` - (Optional) Canonical unique identifier of [databricks_mws_private_access_settings](mws_private_access_settings.md) in Databricks Account.
 * `custom_tags` - (Optional / AWS only) - The custom tags key-value pairing that is attached to this workspace. These tags will be applied to clusters automatically in addition to any `default_tags` or `custom_tags` on a cluster level. Please note it can take up to an hour for custom_tags to be set due to scheduling on Control Plane. After custom tags are applied, they can be modified however they can never be completely removed.
 * `pricing_tier` - (Optional) - The pricing tier of the workspace.
+* `compute_mode` - (Optional) - The compute mode for the workspace. When unset, a classic workspace is created, and both `credentials_id` and `storage_configuration_id` must be specified. When set to `SERVERLESS`, the resulting workspace is a serverless workspace, and `credentials_id` and `storage_configuration_id` must not be set. The only allowed value for this is `SERVERLESS`. Changing this field requires recreation of the workspace.
 
 ### token block
```
```diff
@@ -369,6 +386,7 @@ In addition to all arguments above, the following attributes are exported:
 * `workspace_url` - (String) URL of the workspace
 * `custom_tags` - (Map) Custom Tags (if present) added to workspace
 * `gcp_workspace_sa` - (String, GCP only) identifier of a service account created for the workspace in form of `db-<workspace-id>@prod-gcp-<region>.iam.gserviceaccount.com`
+* `effective_compute_mode` - (String) The effective compute mode for the workspace. This is either `SERVERLESS` for serverless workspaces or `HYBRID` for classic workspaces.
```
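The new exported attribute can be read back to confirm which mode a workspace ended up in. A small sketch; the resource address and output name are hypothetical:

```hcl
# Surfaces the effective compute mode ("SERVERLESS" or "HYBRID")
# of a databricks_mws_workspaces resource defined elsewhere.
output "workspace_compute_mode" {
  value = databricks_mws_workspaces.this.effective_compute_mode
}
```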