# [Doc] Removing GKE-specific fields in `databricks_mws_workspaces` and `databricks_mws_networks` (#4761)
## Changes
- Removing references to GKE-specific fields in
`databricks_mws_workspaces` and `databricks_mws_networks`
## Tests
<!--
How is this tested? Please see the checklist below and also describe any
other relevant tests
-->
- [x] relevant change in `docs/` folder
---------
Co-authored-by: Alex Ott <[email protected]>
### `NEXT_CHANGELOG.md` (+3, -0)

```diff
@@ -16,6 +16,9 @@
 ### Documentation
 
+
+* Mark GKE-related fields for `databricks_mws_workspaces` and `databricks_mws_networks` as deprecated ([#4752](https://github.com/databricks/terraform-provider-databricks/pull/4752)).
```
### `docs/guides/gcp-private-service-connect-workspace.md` (+1, -11)

```diff
@@ -10,7 +10,7 @@ Secure a workspace with private connectivity and mitigate data exfiltration risk
 
 ## Creating a GCP service account for Databricks Provisioning and Authenticate with Databricks account API
 
-To work with Databricks in GCP in an automated way, please create a service account and manually add it in the [Accounts Console](https://accounts.gcp.databricks.com/users) as an account admin. Databricks account-level APIs can only be called by account owners and account admins, and can only be authenticated using Google-issued OIDC tokens. The simplest way to do this would be via [Google Cloud CLI](https://cloud.google.com/sdk/gcloud). For details, please refer to [Provisioning Databricks workspaces on GCP](gcp_workspace.md).
+To work with Databricks in GCP in an automated way, please create a service account and manually add it in the [Accounts Console](https://accounts.gcp.databricks.com/users) as an account admin. Databricks account-level APIs can only be called by account owners and account admins, and can only be authenticated using Google-issued OIDC tokens. The simplest way to do this would be via [Google Cloud CLI](https://cloud.google.com/sdk/gcloud). For details, please refer to [Provisioning Databricks workspaces on GCP](gcp-workspace.md).
```
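The guide text above describes authenticating to the account-level API with Google-issued OIDC tokens. As an illustration only (not part of this PR; the service account email and variable are hypothetical placeholders), an account-level provider block for that flow might look like this minimal sketch:

```hcl
variable "databricks_account_id" {
  type = string
}

# Hypothetical account-level provider configuration: the provider
# impersonates the provisioning service account to obtain the
# Google-issued OIDC tokens that the account API requires.
provider "databricks" {
  host                   = "https://accounts.gcp.databricks.com"
  account_id             = var.databricks_account_id
  google_service_account = "databricks-provisioning@my-project.iam.gserviceaccount.com" # placeholder
}
```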
### `docs/resources/mws_networks.md` (+3, -16)

```diff
@@ -9,6 +9,8 @@ Use this resource to [configure VPC](https://docs.databricks.com/administration-
 
 -> This resource can only be used with an account-level provider!
 
+~> The `gke_cluster_service_ip_range` and `gke_pod_service_ip_range` arguments in `gcp_managed_network_config` are now deprecated and no longer supported. If you have already created a workspace using these fields, it is safe to remove them from your Terraform template.
+
 * Databricks must have access to at least two subnets for each workspace, with each subnet in a different Availability Zone. You cannot specify more than one Databricks workspace subnet per Availability Zone in the Create network configuration API call. You can have more than one subnet per Availability Zone as part of your network setup, but you can choose only one subnet per Availability Zone for the Databricks workspace.
 * Databricks assigns two IP addresses per node, one for management traffic and one for Spark applications. The total number of instances for each subnet is equal to half of the available IP addresses.
 * Each subnet must have a netmask between /17 and /25.
```

```diff
@@ -26,9 +28,8 @@ Please follow this [complete runnable example](../guides/aws-workspace.md) with
 
 Use this resource to [configure VPC](https://docs.gcp.databricks.com/administration-guide/cloud-configurations/gcp/customer-managed-vpc.html) & subnet for new workspaces within GCP. It is essential to understand that this will require you to configure your provider separately for the multiple workspaces resources.
 
-* Databricks must have access to a subnet in the same region as the workspace, of which IP range will be used to allocate your workspace's GKE cluster nodes.
+* Databricks must have access to a subnet in the same region as the workspace, whose IP range will be used to allocate your workspace's GCE cluster nodes.
 * The subnet must have a netmask between /29 and /9.
-* Databricks must have access to 2 secondary IP ranges, one between /21 to /9 for workspace's GKE cluster pods, and one between /27 to /16 for workspace's GKE cluster services.
 * Subnet must have outbound access to the public network using a [gcp_compute_router_nat](https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_router_nat) or other similar customer-managed appliance infrastructure.
 
 Please follow this [complete runnable example](../guides/gcp-workspace.md) with new VPC and new workspace setup. Please pay special attention to the fact that there you have two different instances of a databricks provider - one for deploying workspaces (with `host="https://accounts.gcp.databricks.com/"`) and another for the workspace you've created with `databricks_mws_workspaces` resource. If you want both creations of workspaces & clusters within the same Terraform module (essentially the same directory), you should use the provider aliasing feature of Terraform. We strongly recommend having one terraform module to create workspace + PAT token and the rest in different modules.
```
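The provider-aliasing pattern that context paragraph refers to might look like the following minimal sketch (illustrative only; it assumes a `databricks_mws_workspaces.this` resource defined in the same module):

```hcl
# Account-level provider, used to create the workspace and network objects.
provider "databricks" {
  alias      = "accounts"
  host       = "https://accounts.gcp.databricks.com/"
  account_id = var.databricks_account_id
}

# Workspace-level provider, pointed at the workspace created in this module.
provider "databricks" {
  alias = "workspace"
  host  = databricks_mws_workspaces.this.workspace_url
}

# Workspace-level resources then select the aliased provider explicitly:
# resource "databricks_cluster" "shared" {
#   provider = databricks.workspace
#   ...
# }
```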
```diff
@@ -201,8 +190,6 @@ The following arguments are available:
 * `vpc_id` - The ID of the VPC associated with this network. VPC IDs can be used in multiple network configurations.
 * `subnet_id` - The ID of the subnet associated with this network.
 * `subnet_region` - The Google Cloud region of the workspace data plane. For example, `us-east4`.
-* `pod_ip_range_name` - The name of the secondary IP range for pods. A Databricks-managed GKE cluster uses this IP range for its pods. This secondary IP range can only be used by one workspace.
-* `service_ip_range_name` - The name of the secondary IP range for services. A Databricks-managed GKE cluster uses this IP range for its services. This secondary IP range can only be used by one workspace.
```
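After this change, a GCP network configuration reduces to something like the following minimal sketch (all names and IDs are placeholders; note the absence of the deprecated `pod_ip_range_name` and `service_ip_range_name` arguments):

```hcl
# A GCP network configuration without the deprecated secondary IP range
# arguments; uses the account-level provider alias shown earlier.
resource "databricks_mws_networks" "this" {
  provider     = databricks.accounts
  account_id   = var.databricks_account_id
  network_name = "demo-network"

  gcp_network_info {
    network_project_id = "my-gcp-project" # placeholder
    vpc_id             = "demo-vpc"       # placeholder
    subnet_id          = "demo-subnet"    # placeholder
    subnet_region      = "us-east4"
  }
}
```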
### `docs/resources/mws_workspaces.md` (+3, -13)

```diff
@@ -7,7 +7,9 @@ This resource allows you to set up [workspaces on AWS](https://docs.databricks.c
 
 -> This resource can only be used with an account-level provider!
 
--> On Azure you need to use [azurerm_databricks_workspace](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/databricks_workspace) resource to create Azure Databricks workspaces.
+~> The `gke_config` argument is now deprecated and no longer supported. If you have already created a workspace using this argument, it is safe to remove it from your Terraform template.
+
+~> On Azure you need to use [azurerm_databricks_workspace](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/databricks_workspace) resource to create Azure Databricks workspaces.
```
```diff
@@ -340,9 +333,6 @@ The following arguments are available:
 * `cloud_resource_container` - (GCP only) A block that specifies GCP workspace configurations, consisting of the following blocks:
   * `gcp` - A block that consists of the following field:
     * `project_id` - The Google Cloud project ID, which the workspace uses to instantiate cloud resources for your workspace.
-* `gke_config` - (GCP only) A block that specifies GKE configuration for the Databricks workspace:
-  * `connectivity_type`: Specifies the network connectivity types for the GKE nodes and the GKE master network. Possible values are: `PRIVATE_NODE_PUBLIC_MASTER`, `PUBLIC_NODE_PUBLIC_MASTER`.
-  * `master_ip_range`: The IP range from which to allocate GKE cluster master resources. This field will be ignored if GKE private cluster is not enabled. It must be exactly as big as `/28`.
 * `private_access_settings_id` - (Optional) Canonical unique identifier of [databricks_mws_private_access_settings](mws_private_access_settings.md) in Databricks Account.
 * `custom_tags` - (Optional / AWS only) - The custom tags key-value pairing that is attached to this workspace. These tags will be applied to clusters automatically in addition to any `default_tags` or `custom_tags` on a cluster level. Please note it can take up to an hour for custom_tags to be set due to scheduling on Control Plane. After custom tags are applied, they can be modified, however they can never be completely removed.
 * `pricing_tier` - (Optional) - The pricing tier of the workspace.
```
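With `gke_config` removed, a GCP workspace definition reduces to something like this minimal sketch (all identifiers are placeholders; it assumes the `databricks_mws_networks` resource and provider alias shown earlier):

```hcl
# A GCP workspace without the deprecated gke_config block.
resource "databricks_mws_workspaces" "this" {
  provider       = databricks.accounts
  account_id     = var.databricks_account_id
  workspace_name = "demo-workspace" # placeholder
  location       = "us-east4"

  cloud_resource_container {
    gcp {
      project_id = "my-gcp-project" # placeholder
    }
  }

  network_id = databricks_mws_networks.this.network_id
}
```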