Commit e7a3c07

direct: Added retries for cluster update calls (#3837)

## Changes

Added retries for cluster update calls.

## Why

This copies the retry logic from the Terraform provider so that when direct deployment issues an update, it first waits for the cluster to reach an expected state:
https://github.com/databricks/terraform-provider-databricks/blob/main/clusters/resource_cluster.go#L624-L635

## Tests

Added an acceptance test.

1 parent dee32f6 commit e7a3c07
File tree

12 files changed: +109 −79 lines
acceptance/bundle/resources/clusters/deploy/data_security_mode/out.direct-exp.txt

Lines changed: 1 addition & 9 deletions

```diff
@@ -8,13 +8,5 @@ Deployment complete!
 >>> errcode [CLI] bundle deploy
 Uploading bundle files to /Workspace/Users/[USERNAME]/.bundle/[UNIQUE_NAME]/files...
 Deploying resources...
-Error: cannot update resources.clusters.test_cluster: updating id=[CLUSTER-ID]: Cluster [CLUSTER-ID] is in unexpected state Pending. (400 INVALID_STATE)
-
-Endpoint: POST [DATABRICKS_URL]/api/2.1/clusters/edit
-HTTP Status: 400 Bad Request
-API error_code: INVALID_STATE
-API message: Cluster [CLUSTER-ID] is in unexpected state Pending.
-
 Updating deployment state...
-
-Exit code: 1
+Deployment complete!
```

acceptance/bundle/resources/clusters/deploy/data_security_mode/out.requests.direct-exp.txt

Lines changed: 0 additions & 31 deletions
This file was deleted.

acceptance/bundle/resources/clusters/deploy/data_security_mode/out.requests.terraform.txt

Lines changed: 0 additions & 30 deletions
This file was deleted.

acceptance/bundle/resources/clusters/deploy/data_security_mode/script

Lines changed: 0 additions & 3 deletions

```diff
@@ -2,14 +2,11 @@ envsubst < databricks.yml.tmpl > databricks.yml
 
 cleanup() {
   trace $CLI bundle destroy --auto-approve
-  rm -f out.requests.txt
 }
 trap cleanup EXIT
 
 trace $CLI bundle deploy > out.$DATABRICKS_BUNDLE_ENGINE.txt 2>&1
-print_requests.py --get //clusters > out.requests.$DATABRICKS_BUNDLE_ENGINE.txt
 
 trace $CLI bundle plan >> out.plan.$DATABRICKS_BUNDLE_ENGINE.txt 2>&1
 
 trace errcode $CLI bundle deploy >> out.$DATABRICKS_BUNDLE_ENGINE.txt 2>&1
-print_requests.py --get //clusters > out.requests.$DATABRICKS_BUNDLE_ENGINE.txt
```

acceptance/bundle/resources/clusters/deploy/data_security_mode/test.toml

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,6 +1,6 @@
 Local = false
 Cloud = true
-RecordRequests = true
+RecordRequests = false
 
 Ignore = [
   "databricks.yml",
```
Lines changed: 15 additions & 0 deletions

```diff
@@ -0,0 +1,15 @@
+bundle:
+  name: test-deploy-cluster-update-after-create
+
+workspace:
+  root_path: ~/.bundle/$UNIQUE_NAME
+
+resources:
+  clusters:
+    test_cluster:
+      cluster_name: test-cluster-$UNIQUE_NAME
+      spark_version: $DEFAULT_SPARK_VERSION
+      node_type_id: $NODE_TYPE_ID
+      num_workers: 2
+      spark_conf:
+        "spark.executor.memory": "2g"
```
Lines changed: 1 addition & 0 deletions

```diff
@@ -0,0 +1 @@
+print("Hello World!")
```

acceptance/bundle/resources/clusters/deploy/update-after-create/out.test.toml

Lines changed: 5 additions & 0 deletions
Some generated files are not rendered by default.
Lines changed: 34 additions & 0 deletions

```diff
@@ -0,0 +1,34 @@
+
+>>> [CLI] bundle deploy
+Uploading bundle files to /Workspace/Users/[USERNAME]/.bundle/[UNIQUE_NAME]/files...
+Deploying resources...
+Updating deployment state...
+Deployment complete!
+
+=== Cluster should exist after bundle deployment:
+{
+  "cluster_name": "test-cluster-[UNIQUE_NAME]",
+  "num_workers": 2
+}
+
+=== Updating cluster should call update API
+
+>>> [CLI] bundle plan
+update clusters.test_cluster
+
+Plan: 0 to add, 1 to change, 0 to delete, 0 unchanged
+
+>>> [CLI] bundle deploy
+Uploading bundle files to /Workspace/Users/[USERNAME]/.bundle/[UNIQUE_NAME]/files...
+Deploying resources...
+Updating deployment state...
+Deployment complete!
+
+>>> [CLI] bundle destroy --auto-approve
+The following resources will be deleted:
+  delete cluster test_cluster
+
+All files and directories at the following location will be deleted: /Workspace/Users/[USERNAME]/.bundle/[UNIQUE_NAME]
+
+Deleting files...
+Destroy complete!
```
Lines changed: 18 additions & 0 deletions

```diff
@@ -0,0 +1,18 @@
+envsubst < databricks.yml.tmpl > databricks.yml
+
+cleanup() {
+  trace $CLI bundle destroy --auto-approve
+  rm -f out.requests.txt
+}
+trap cleanup EXIT
+
+trace $CLI bundle deploy
+
+title "Cluster should exist after bundle deployment:\n"
+CLUSTER_ID=$($CLI bundle summary -o json | jq -r '.resources.clusters.test_cluster.id')
+$CLI clusters get "${CLUSTER_ID}" | jq '{cluster_name,num_workers}'
+
+title "Updating cluster should call update API\n"
+update_file.py databricks.yml "\"spark.executor.memory\": \"2g\"" "\"spark.executor.memory\": \"4g\""
+trace $CLI bundle plan
+trace $CLI bundle deploy
```
