
Commit a35c020

nkvuong and alexott authored

[Doc] Removing GKE-specific fields in databricks_mws_workspaces and databricks_mws_networks (#4761)

## Changes

- Removing references to GKE-specific fields in `databricks_mws_workspaces` and `databricks_mws_networks`

## Tests

- [x] relevant change in `docs/` folder

Co-authored-by: Alex Ott <[email protected]>

1 parent be1d6f3 commit a35c020

6 files changed: +11 additions, -56 deletions

NEXT_CHANGELOG.md
Lines changed: 3 additions & 0 deletions

````diff
@@ -16,6 +16,9 @@
 
 ### Documentation
 
+* Mark GKE-related fields for `databricks_mws_workspaces` and `databricks_mws_networks` as deprecated ([#4752](https://github.com/databricks/terraform-provider-databricks/pull/4752)).
+
+
 ### Exporter
 
 ### Internal Changes
````

docs/guides/gcp-private-service-connect-workspace.md
Lines changed: 1 addition & 11 deletions

````diff
@@ -10,7 +10,7 @@ Secure a workspace with private connectivity and mitigate data exfiltration risk
 
 ## Creating a GCP service account for Databricks Provisioning and Authenticate with Databricks account API
 
-To work with Databricks in GCP in an automated way, please create a service account and manually add it in the [Accounts Console](https://accounts.gcp.databricks.com/users) as an account admin. Databricks account-level APIs can only be called by account owners and account admins, and can only be authenticated using Google-issued OIDC tokens. The simplest way to do this would be via [Google Cloud CLI](https://cloud.google.com/sdk/gcloud). For details, please refer to [Provisioning Databricks workspaces on GCP](gcp_workspace.md).
+To work with Databricks in GCP in an automated way, please create a service account and manually add it in the [Accounts Console](https://accounts.gcp.databricks.com/users) as an account admin. Databricks account-level APIs can only be called by account owners and account admins, and can only be authenticated using Google-issued OIDC tokens. The simplest way to do this would be via [Google Cloud CLI](https://cloud.google.com/sdk/gcloud). For details, please refer to [Provisioning Databricks workspaces on GCP](gcp-workspace.md).
 
 ## Creating a VPC network
 
@@ -35,14 +35,6 @@ resource "google_compute_subnetwork" "network-with-private-secondary-ip-ranges"
   ip_cidr_range = "10.0.0.0/16"
   region        = "us-central1"
   network       = google_compute_network.dbx_private_vpc.id
-  secondary_ip_range {
-    range_name    = "pods"
-    ip_cidr_range = "10.1.0.0/16"
-  }
-  secondary_ip_range {
-    range_name    = "svc"
-    ip_cidr_range = "10.2.0.0/20"
-  }
   private_ip_google_access = true
 }
 
@@ -89,8 +81,6 @@ resource "databricks_mws_networks" "this" {
     vpc_id        = google_compute_network.dbx_private_vpc.name
     subnet_id     = google_compute_subnetwork.network-with-private-secondary-ip-ranges.name
     subnet_region = google_compute_subnetwork.network-with-private-secondary-ip-ranges.region
-    pod_ip_range_name     = "pods"
-    service_ip_range_name = "svc"
   }
   vpc_endpoints {
     dataplane_relay = [databricks_mws_vpc_endpoint.relay_vpce.vpc_endpoint_id]
````
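With the pod and service secondary ranges removed, the guide's subnet and network registration collapse to a primary range plus private Google access. Below is a minimal sketch of the resulting resources; resource names, the `psc-demo-network` label, variables (`var.databricks_account_id`, `var.google_project`), and the second VPC endpoint name are assumptions carried over from the surrounding guide, and the `gcp_network_info` block name follows the provider's GCP schema.

```hcl
# Sketch of the post-change network resources for a PSC workspace.
resource "google_compute_subnetwork" "network-with-private-secondary-ip-ranges" {
  name          = "dbx-subnet" # hypothetical name
  ip_cidr_range = "10.0.0.0/16"
  region        = "us-central1"
  network       = google_compute_network.dbx_private_vpc.id
  # secondary_ip_range blocks for "pods" and "svc" are no longer needed
  private_ip_google_access = true
}

resource "databricks_mws_networks" "this" {
  account_id   = var.databricks_account_id # assumed variable
  network_name = "psc-demo-network"        # hypothetical
  gcp_network_info {
    network_project_id = var.google_project # assumed variable
    vpc_id             = google_compute_network.dbx_private_vpc.name
    subnet_id          = google_compute_subnetwork.network-with-private-secondary-ip-ranges.name
    subnet_region      = google_compute_subnetwork.network-with-private-secondary-ip-ranges.region
    # pod_ip_range_name / service_ip_range_name removed per this commit
  }
  vpc_endpoints {
    dataplane_relay = [databricks_mws_vpc_endpoint.relay_vpce.vpc_endpoint_id]
    rest_api        = [databricks_mws_vpc_endpoint.backend_rest_vpce.vpc_endpoint_id] # assumed endpoint name
  }
}
```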

docs/guides/gcp-workspace.md
Lines changed: 0 additions & 12 deletions

````diff
@@ -171,14 +171,6 @@ resource "google_compute_subnetwork" "network-with-private-secondary-ip-ranges"
   ip_cidr_range = "10.0.0.0/16"
   region        = "us-central1"
   network       = google_compute_network.dbx_private_vpc.id
-  secondary_ip_range {
-    range_name    = "pods"
-    ip_cidr_range = "10.1.0.0/16"
-  }
-  secondary_ip_range {
-    range_name    = "svc"
-    ip_cidr_range = "10.2.0.0/20"
-  }
   private_ip_google_access = true
 }
 
@@ -232,10 +224,6 @@ resource "databricks_mws_workspaces" "this" {
   }
 
   network_id = databricks_mws_networks.this.network_id
-  gke_config {
-    connectivity_type = "PRIVATE_NODE_PUBLIC_MASTER"
-    master_ip_range   = "10.3.0.0/28"
-  }
 
   token {
     comment = "Terraform"
````
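With `gke_config` gone, the guide's workspace resource reduces to the sketch below. Variable and resource names (`var.databricks_account_id`, `var.google_project`, `databricks_mws_networks.this`) are assumptions carried over from the guide's earlier steps.

```hcl
resource "databricks_mws_workspaces" "this" {
  account_id     = var.databricks_account_id # assumed variable
  workspace_name = "demo-workspace"          # hypothetical
  location       = "us-central1"

  cloud_resource_container {
    gcp {
      project_id = var.google_project # assumed variable
    }
  }

  # gke_config { ... } is no longer set; per the updated docs, workspace
  # compute on GCP now runs on GCE rather than a customer-visible GKE cluster.
  network_id = databricks_mws_networks.this.network_id

  token {
    comment = "Terraform"
  }
}
```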

docs/resources/mws_networks.md
Lines changed: 3 additions & 16 deletions

````diff
@@ -9,6 +9,8 @@ Use this resource to [configure VPC](https://docs.databricks.com/administration-
 
 -> This resource can only be used with an account-level provider!
 
+~> The `gke_cluster_service_ip_range` and `gke_pod_service_ip_range` arguments in `gcp_managed_network_config` are now deprecated and no longer supported. If you have already created a workspace using these fields, it is safe to remove them from your Terraform template.
+
 * Databricks must have access to at least two subnets for each workspace, with each subnet in a different Availability Zone. You cannot specify more than one Databricks workspace subnet per Availability Zone in the Create network configuration API call. You can have more than one subnet per Availability Zone as part of your network setup, but you can choose only one subnet per Availability Zone for the Databricks workspace.
 * Databricks assigns two IP addresses per node, one for management traffic and one for Spark applications. The total number of instances for each subnet is equal to half of the available IP addresses.
 * Each subnet must have a netmask between /17 and /25.
@@ -26,9 +28,8 @@ Please follow this [complete runnable example](../guides/aws-workspace.md) with
 
 Use this resource to [configure VPC](https://docs.gcp.databricks.com/administration-guide/cloud-configurations/gcp/customer-managed-vpc.html) & subnet for new workspaces within GCP. It is essential to understand that this will require you to configure your provider separately for the multiple workspaces resources.
 
-* Databricks must have access to a subnet in the same region as the workspace, of which IP range will be used to allocate your workspace's GKE cluster nodes.
+* Databricks must have access to a subnet in the same region as the workspace, of which IP range will be used to allocate your workspace's GCE cluster nodes.
 * The subnet must have a netmask between /29 and /9.
-* Databricks must have access to 2 secondary IP ranges, one between /21 to /9 for workspace's GKE cluster pods, and one between /27 to /16 for workspace's GKE cluster services.
 * Subnet must have outbound access to the public network using a [gcp_compute_router_nat](https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/compute_router_nat) or other similar customer-managed appliance infrastructure.
 
 Please follow this [complete runnable example](../guides/gcp-workspace.md) with new VPC and new workspace setup. Please pay special attention to the fact that there you have two different instances of a databricks provider - one for deploying workspaces (with `host="https://accounts.gcp.databricks.com/"`) and another for the workspace you've created with `databricks_mws_workspaces` resource. If you want both creations of workspaces & clusters within the same Terraform module (essentially the same directory), you should use the provider aliasing feature of Terraform. We strongly recommend having one terraform module to create workspace + PAT token and the rest in different modules.
@@ -118,14 +119,6 @@ resource "google_compute_subnetwork" "network-with-private-secondary-ip-ranges"
   ip_cidr_range = "10.0.0.0/16"
   region        = "us-central1"
   network       = google_compute_network.dbx_private_vpc.id
-  secondary_ip_range {
-    range_name    = "pods"
-    ip_cidr_range = "10.1.0.0/16"
-  }
-  secondary_ip_range {
-    range_name    = "svc"
-    ip_cidr_range = "10.2.0.0/20"
-  }
   private_ip_google_access = true
 }
 
@@ -151,8 +144,6 @@ resource "databricks_mws_networks" "this" {
     vpc_id        = google_compute_network.dbx_private_vpc.name
     subnet_id     = google_compute_subnetwork.network_with_private_secondary_ip_ranges.name
     subnet_region = google_compute_subnetwork.network_with_private_secondary_ip_ranges.region
-    pod_ip_range_name     = "pods"
-    service_ip_range_name = "svc"
   }
 }
 ```
@@ -168,8 +159,6 @@ resource "databricks_mws_networks" "this" {
     vpc_id        = google_compute_network.dbx_private_vpc.name
     subnet_id     = google_compute_subnetwork.network_with_private_secondary_ip_ranges.name
     subnet_region = google_compute_subnetwork.network_with_private_secondary_ip_ranges.region
-    pod_ip_range_name     = "pods"
-    service_ip_range_name = "svc"
   }
   vpc_endpoints {
     dataplane_relay = [databricks_mws_vpc_endpoint.relay.vpc_endpoint_id]
@@ -201,8 +190,6 @@ The following arguments are available:
 * `vpc_id` - The ID of the VPC associated with this network. VPC IDs can be used in multiple network configurations.
 * `subnet_id` - The ID of the subnet associated with this network.
 * `subnet_region` - The Google Cloud region of the workspace data plane. For example, `us-east4`.
-* `pod_ip_range_name` - The name of the secondary IP range for pods. A Databricks-managed GKE cluster uses this IP range for its pods. This secondary IP range can only be used by one workspace.
-* `service_ip_range_name` - The name of the secondary IP range for services. A Databricks-managed GKE cluster uses this IP range for its services. This secondary IP range can only be used by one workspace.
 
 ## Attribute Reference
 
docs/resources/mws_private_access_settings.md

Lines changed: 1 addition & 4 deletions
Original file line numberDiff line numberDiff line change
@@ -56,10 +56,7 @@ resource "databricks_mws_workspaces" "this" {
5656
project_id = var.google_project
5757
}
5858
}
59-
gke_config {
60-
connectivity_type = "PRIVATE_NODE_PUBLIC_MASTER"
61-
master_ip_range = "10.3.0.0/28"
62-
}
59+
6360
network_id = databricks_mws_networks.this.network_id
6461
private_access_settings_id = databricks_mws_private_access_settings.pas.private_access_settings_id
6562
pricing_tier = "PREMIUM"

docs/resources/mws_workspaces.md
Lines changed: 3 additions & 13 deletions

````diff
@@ -7,7 +7,9 @@ This resource allows you to set up [workspaces on AWS](https://docs.databricks.c
 
 -> This resource can only be used with an account-level provider!
 
--> On Azure you need to use [azurerm_databricks_workspace](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/databricks_workspace) resource to create Azure Databricks workspaces.
+~> The `gke_config` argument is now deprecated and no longer supported. If you have already created a workspace using these fields, it is safe to remove them from your Terraform template.
+
+~> On Azure you need to use [azurerm_databricks_workspace](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/databricks_workspace) resource to create Azure Databricks workspaces.
 
 ## Example Usage
 
@@ -262,10 +264,6 @@ resource "databricks_mws_workspaces" "this" {
   }
 
   network_id = databricks_mws_networks.this.network_id
-  gke_config {
-    connectivity_type = "PRIVATE_NODE_PUBLIC_MASTER"
-    master_ip_range   = "10.3.0.0/28"
-  }
 
   token {}
 }
@@ -307,11 +305,6 @@ resource "databricks_mws_workspaces" "this" {
     }
   }
 
-  gke_config {
-    connectivity_type = "PRIVATE_NODE_PUBLIC_MASTER"
-    master_ip_range   = "10.3.0.0/28"
-  }
-
   token {}
 }
 
@@ -340,9 +333,6 @@ The following arguments are available:
 * `cloud_resource_container` - (GCP only) A block that specifies GCP workspace configurations, consisting of following blocks:
   * `gcp` - A block that consists of the following field:
     * `project_id` - The Google Cloud project ID, which the workspace uses to instantiate cloud resources for your workspace.
-* `gke_config` - (GCP only) A block that specifies GKE configuration for the Databricks workspace:
-  * `connectivity_type`: Specifies the network connectivity types for the GKE nodes and the GKE master network. Possible values are: `PRIVATE_NODE_PUBLIC_MASTER`, `PUBLIC_NODE_PUBLIC_MASTER`.
-  * `master_ip_range`: The IP range from which to allocate GKE cluster master resources. This field will be ignored if GKE private cluster is not enabled. It must be exactly as big as `/28`.
 * `private_access_settings_id` - (Optional) Canonical unique identifier of [databricks_mws_private_access_settings](mws_private_access_settings.md) in Databricks Account.
 * `custom_tags` - (Optional / AWS only) - The custom tags key-value pairing that is attached to this workspace. These tags will be applied to clusters automatically in addition to any `default_tags` or `custom_tags` on a cluster level. Please note it can take up to an hour for custom_tags to be set due to scheduling on Control Plane. After custom tags are applied, they can be modified however they can never be completely removed.
 * `pricing_tier` - (Optional) - The pricing tier of the workspace.
````
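Both deprecation notes state that existing templates can simply drop the fields. For `databricks_mws_workspaces`, the upgrade amounts to deleting one block; a sketch, with all other arguments elided since nothing else changes:

```hcl
resource "databricks_mws_workspaces" "this" {
  # ... all other arguments stay exactly as before ...

  # Delete this entire block when upgrading. Per the deprecation note it is
  # safe to remove even for workspaces that were originally created with it:
  # gke_config {
  #   connectivity_type = "PRIVATE_NODE_PUBLIC_MASTER"
  #   master_ip_range   = "10.3.0.0/28"
  # }

  network_id = databricks_mws_networks.this.network_id
  token {}
}
```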
