Commit e6270d3
direct: Fixed handling clusters with instance pools (#3832)
## Changes

Fixed handling of clusters that are part of instance pools.

## Why

Terraform applies a number of config modifications to clusters that belong to instance pools; this change does the same in direct deployment mode. See https://github.com/databricks/terraform-provider-databricks/blob/main/clusters/resource_cluster.go#L213.

## Tests

Added an acceptance test.
1 parent: c4a66fb
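For context, a minimal Go sketch of the kind of normalization this implies, assuming the request is a `compute.CreateCluster` from the Go SDK (`github.com/databricks/databricks-sdk-go/service/compute`); the function name is hypothetical and the field handling is simplified relative to the Terraform code linked above:

```go
package clusters

import "github.com/databricks/databricks-sdk-go/service/compute"

// normalizeForInstancePool (hypothetical name) mirrors the behavior the
// acceptance tests below record: when a cluster is backed by an instance
// pool, the pool decides the hardware, so per-cloud attributes, the node
// type, and elastic-disk settings from the bundle config are cleared
// before the create request is sent.
func normalizeForInstancePool(c *compute.CreateCluster) {
	if c.InstancePoolId == "" {
		return // not pool-backed; leave the config as-is
	}
	if c.AwsAttributes != nil {
		c.AwsAttributes = &compute.AwsAttributes{} // sent as "aws_attributes": {}
	}
	if c.AzureAttributes != nil {
		c.AzureAttributes = &compute.AzureAttributes{} // sent as "azure_attributes": {}
	}
	if c.GcpAttributes != nil {
		c.GcpAttributes = &compute.GcpAttributes{} // sent as "gcp_attributes": {}
	}
	c.EnableElasticDisk = false // the pool manages disks
	c.NodeTypeId = ""           // the pool picks the node type
	c.DriverNodeTypeId = ""
}
```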

File tree: 12 files changed (+196, −0 lines)
acceptance/bundle/resources/clusters/deploy/instance_pool/databricks.yml

Lines changed: 42 additions & 0 deletions

```yaml
bundle:
  name: test-deploy-cluster-instance-pool

resources:
  clusters:
    # expecting aws_attributes to be empty since instance_pool_id is specified
    # expecting enable_elastic_disk to be false since instance_pool_id is specified
    cluster1:
      cluster_name: test-cluster-1
      spark_version: 13.3.x-scala2.12
      instance_pool_id: ip-1234567890
      enable_elastic_disk: true
      aws_attributes:
        availability: "ON_DEMAND"
        ebs_volume_type: "GENERAL_PURPOSE_SSD"
      num_workers: 2
      spark_conf:
        "spark.executor.memory": "2g"
    # expecting azure_attributes to be empty since instance_pool_id is specified
    # expecting enable_elastic_disk to be false since instance_pool_id is specified
    cluster2:
      cluster_name: test-cluster-2
      spark_version: 13.3.x-scala2.12
      instance_pool_id: ip-1234567890
      enable_elastic_disk: true
      azure_attributes:
        spot_bid_max_price: 20
      num_workers: 2
      spark_conf:
        "spark.executor.memory": "2g"
    # expecting gcp_attributes to be empty since instance_pool_id is specified
    # expecting enable_elastic_disk to be false since instance_pool_id is specified
    cluster3:
      cluster_name: test-cluster-3
      spark_version: 13.3.x-scala2.12
      instance_pool_id: ip-1234567890
      enable_elastic_disk: true
      gcp_attributes:
        local_ssd_count: 2
      num_workers: 2
      spark_conf:
        "spark.executor.memory": "2g"
```

acceptance/bundle/resources/clusters/deploy/instance_pool/out.test.toml

Lines changed: 5 additions & 0 deletions (generated file, not rendered)
acceptance/bundle/resources/clusters/deploy/instance_pool/output.txt

Lines changed: 53 additions & 0 deletions

```
>>> [CLI] bundle deploy
Uploading bundle files to /Workspace/Users/[USERNAME]/.bundle/test-deploy-cluster-instance-pool/default/files...
Deploying resources...
Updating deployment state...
Deployment complete!

>>> print_requests.py //clusters/create
{
  "method": "POST",
  "path": "/api/2.1/clusters/create",
  "body": {
    "autotermination_minutes": 60,
    "aws_attributes": {},
    "cluster_name": "test-cluster-1",
    "instance_pool_id": "ip-[NUMID]",
    "num_workers": 2,
    "spark_conf": {
      "spark.executor.memory": "2g"
    },
    "spark_version": "13.3.x-scala2.12"
  }
}
{
  "method": "POST",
  "path": "/api/2.1/clusters/create",
  "body": {
    "autotermination_minutes": 60,
    "azure_attributes": {},
    "cluster_name": "test-cluster-2",
    "instance_pool_id": "ip-[NUMID]",
    "num_workers": 2,
    "spark_conf": {
      "spark.executor.memory": "2g"
    },
    "spark_version": "13.3.x-scala2.12"
  }
}
{
  "method": "POST",
  "path": "/api/2.1/clusters/create",
  "body": {
    "autotermination_minutes": 60,
    "cluster_name": "test-cluster-3",
    "gcp_attributes": {},
    "instance_pool_id": "ip-[NUMID]",
    "num_workers": 2,
    "spark_conf": {
      "spark.executor.memory": "2g"
    },
    "spark_version": "13.3.x-scala2.12"
  }
}
```
acceptance/bundle/resources/clusters/deploy/instance_pool/script

Lines changed: 3 additions & 0 deletions

```
trace $CLI bundle deploy

trace print_requests.py //clusters/create | jq -s 'sort_by(.body.cluster_name)[]'
```
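Here `jq -s` slurps the printed request objects into a single array, `sort_by(.body.cluster_name)` orders them by cluster name, and the trailing `[]` streams them back out one by one, so the recorded output is deterministic regardless of the order in which the three clusters are created.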
acceptance/bundle/resources/clusters/deploy/instance_pool/test.toml

Lines changed: 11 additions & 0 deletions

```toml
Local = true
Cloud = false
RecordRequests = true

Ignore = [
  "databricks.yml",
]

[[Repls]]
Old = "[0-9]{4}-[0-9]{6}-[0-9a-z]{8}"
New = "[CLUSTER-ID]"
```
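The `[[Repls]]` rule masks workspace-assigned cluster IDs, which match the `NNNN-NNNNNN-xxxxxxxx` pattern, with the stable `[CLUSTER-ID]` placeholder so the recorded output does not change between test runs.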
acceptance/bundle/resources/clusters/deploy/instance_pool_and_node_type/databricks.yml

Lines changed: 17 additions & 0 deletions

```yaml
bundle:
  name: test-deploy-cluster-instance-pool

resources:
  clusters:
    cluster1:
      cluster_name: test-cluster
      spark_version: 13.3.x-scala2.12
      node_type_id: i3.xlarge
      instance_pool_id: ip-1234567890
      enable_elastic_disk: true
      aws_attributes:
        availability: "ON_DEMAND"
        ebs_volume_type: "GENERAL_PURPOSE_SSD"
      num_workers: 2
      spark_conf:
        "spark.executor.memory": "2g"
```

acceptance/bundle/resources/clusters/deploy/instance_pool_and_node_type/out.test.toml

Lines changed: 5 additions & 0 deletions (generated file, not rendered)
acceptance/bundle/resources/clusters/deploy/instance_pool_and_node_type/output.txt

Lines changed: 23 additions & 0 deletions

```
>>> errcode [CLI] bundle deploy
Uploading bundle files to /Workspace/Users/[USERNAME]/.bundle/test-deploy-cluster-instance-pool/default/files...
Deploying resources...
Updating deployment state...
Deployment complete!

>>> print_requests.py //clusters/create
{
  "method": "POST",
  "path": "/api/2.1/clusters/create",
  "body": {
    "autotermination_minutes": 60,
    "aws_attributes": {},
    "cluster_name": "test-cluster",
    "instance_pool_id": "ip-[NUMID]",
    "num_workers": 2,
    "spark_conf": {
      "spark.executor.memory": "2g"
    },
    "spark_version": "13.3.x-scala2.12"
  }
}
```
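Note that `node_type_id: i3.xlarge` from the bundle configuration is absent from the recorded request body: because `instance_pool_id` is set, the node type is dropped along with `aws_attributes` and `enable_elastic_disk`.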
acceptance/bundle/resources/clusters/deploy/instance_pool_and_node_type/script

Lines changed: 2 additions & 0 deletions

```
trace errcode $CLI bundle deploy
trace print_requests.py //clusters/create
```
acceptance/bundle/resources/clusters/deploy/instance_pool_and_node_type/test.toml

Lines changed: 11 additions & 0 deletions

```toml
Local = true
Cloud = false
RecordRequests = true

Ignore = [
  "databricks.yml",
]

[[Repls]]
Old = "[0-9]{4}-[0-9]{6}-[0-9a-z]{8}"
New = "[CLUSTER-ID]"
```
