
Commit cdef6f9

Prepare 0.3.0 release (#464)
* Simplified instance pool tests
* Fixed concurrent testing for instance profiles
* Added 0.3.0 migration instructions
* Fix minor inconsistencies
* Verified state upgraders for notebooks and files
* Add global rate limit to HTTP calls
* Fixed redundant multiple mounting clusters (#445)
* Sort outputs in group data source (#467)
* Use correct Azure environment for management endpoint (#470)
* Increased code coverage
* Added databricks_current_user data
* Send empty JSON body while deleting IP ACL (#426)
* Locking single EC2 Instance profile for tests
* Use Terraform 0.14+ as primary testing binary

Co-authored-by: Serge Smertin <[email protected]>
1 parent d7e3bd9 commit cdef6f9

83 files changed (+1803, -1149 lines)


.gitignore

Lines changed: 3 additions & 1 deletion
```diff
@@ -333,4 +333,6 @@ website/public/**
 
 .vscode/private.env
 tf.log
-*.env
+*.env
+
+scripts/tt
```

CHANGELOG.md

Lines changed: 6 additions & 4 deletions
```diff
@@ -2,6 +2,9 @@
 
 ## 0.3.0
 
+* Added global timeout for resource provisioning, which defaults to 15 minutes.
+* Added [databricks_current_user] to simplify applying the same Terraform configuration by different users in a shared workspace for testing purposes.
+* Added client-side rate limiting to reduce pressure on backend APIs and prevent client blocking ([#465](https://github.com/databrickslabs/terraform-provider-databricks/pull/465)).
 * Member usernames, group names and instance profile names in `databricks_group` data source are now sorted, providing consistent behavior between runs ([#449](https://github.com/databrickslabs/terraform-provider-databricks/issues/449)).
 * Fixed redundant multiple mounting clusters ([#445](https://github.com/databrickslabs/terraform-provider-databricks/issues/445)).
 * Added optional parameter `azure_environment` to provider config, which defaults to `public` ([#437](https://github.com/databrickslabs/terraform-provider-databricks/pull/437)).
@@ -14,11 +17,10 @@
 * Added validation for secret scope name in `databricks_secret`, `databricks_secret_scope` and `databricks_secret_acl`. Non-compliant names may cause errors.
 * Added [databricks_spark_version](https://github.com/databrickslabs/terraform-provider-databricks/issues/347) data source.
 * Fixed support for [single node clusters](https://docs.databricks.com/clusters/single-node.html) by allowing [`num_workers` to be `0`](https://github.com/databrickslabs/terraform-provider-databricks/pull/454).
-* All resource imports are now making call to corresponding Databricks API by default ([#471](https://github.com/databrickslabs/terraform-provider-databricks/issues/471))
+* Fixed bug in destruction of IP access lists ([#426](https://github.com/databrickslabs/terraform-provider-databricks/issues/426)).
+* All resource imports now call the corresponding Databricks API by default ([#471](https://github.com/databrickslabs/terraform-provider-databricks/issues/471)).
 
 **Behavior changes**
-
-* Added optional parameter `azure_environment` to provider config which defaults to `public`.
 * Removed deprecated `library_jar`, `library_egg`, `library_whl`, `library_pypi`, `library_cran`, and `library_maven` from `databricks_cluster` and `databricks_job` in favor of the more API-transparent [library](https://registry.terraform.io/providers/databrickslabs/databricks/latest/docs/resources/cluster#library-configuration-block) configuration block.
 * Removed deprecated `notebook_path` and `notebook_base_parameters` from `databricks_job` in favor of the [notebook_task](https://registry.terraform.io/providers/databrickslabs/databricks/latest/docs/resources/job#notebook_task-configuration-block) configuration block.
 * Removed deprecated `jar_uri`, `jar_main_class_name`, and `jar_parameters` from `databricks_job` in favor of the [spark_jar_task](https://registry.terraform.io/providers/databrickslabs/databricks/latest/docs/resources/job#spark_jar_task-configuration-block) configuration block.
@@ -28,7 +30,7 @@
 * Removed deprecated `databricks_scim_group` resource in favor of [databricks_group](https://registry.terraform.io/providers/databrickslabs/databricks/latest/docs/resources/group).
 * Removed deprecated `databricks_default_user_roles` data source in favor of the [databricks_group](https://registry.terraform.io/providers/databrickslabs/databricks/latest/docs/data-sources/group#attribute-reference) data source.
 * Removed deprecated `basic_auth` and `azure_auth` provider configuration blocks in favor of [documented authentication methods](https://registry.terraform.io/providers/databrickslabs/databricks/latest/docs).
-* `format`, `overwrite`, and `mkdirs` were removed from `databricks_notebook`. TODO: handle RESOURCE_ALREADY_EXISTS for mkdirs.
+* `format`, `overwrite`, and `mkdirs` were removed from `databricks_notebook`. To follow the expected behavior of Terraform, notebooks are always overwritten.
 * `skip_validation` from `databricks_instance_profile` was removed and is always set to `true` for subsequent requests.
 * `databricks_mws_workspaces` got `verify_workspace_runnning` removed and now validates every deployment. If a deployment fails, the failed workspace is removed and an error message with an explanation is returned.
 * `default_tags` were removed from `databricks_instance_pool`. `disk_spec` got a new attribute `disk_type`, which contains `azure_disk_volume_type` and `ebs_volume_type`. This change more closely reflects the API structure.
```
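
The client-side rate limiting entry above is easiest to picture in code. Below is a minimal sketch of the idea using `golang.org/x/time/rate`; the wrapper type and limiter values are illustrative assumptions, not the provider's actual implementation:

```go
package main

import (
	"context"
	"fmt"
	"net/http"

	"golang.org/x/time/rate"
)

// throttledClient is a hypothetical wrapper; the provider's real client differs.
type throttledClient struct {
	base    *http.Client
	limiter *rate.Limiter
}

// Do blocks until the limiter grants a slot, then performs the request.
func (c *throttledClient) Do(req *http.Request) (*http.Response, error) {
	if err := c.limiter.Wait(req.Context()); err != nil {
		return nil, err
	}
	return c.base.Do(req)
}

func main() {
	c := &throttledClient{
		base: http.DefaultClient,
		// Illustrative values: about 10 requests per second, burst of 1.
		limiter: rate.NewLimiter(rate.Limit(10), 1),
	}
	req, err := http.NewRequestWithContext(context.Background(), http.MethodGet, "https://example.com", nil)
	if err != nil {
		panic(err)
	}
	resp, err := c.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.Status)
}
```

Blocking in `Wait` rather than failing fast keeps retry logic out of caller code and smooths bursts across concurrent resource operations.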

CONTRIBUTING.md

Lines changed: 2 additions & 1 deletion
````diff
@@ -109,6 +109,7 @@ $ docker run -it -v $(pwd):/workpace -w /workpace databricks-terraform apply
 * Consider test functions as scenarios that you debug from an IDE when specific issues arise. Test tables are discouraged. Single-use functions in tests are discouraged, unless the resource definitions they make are longer than 80 lines.
 * All tests should be capable of running repeatedly on a "dirty" environment, meaning they must not require a new clean environment every time the test runs.
 * All tests should re-use compute resources whenever possible.
+* Prefer `require.NoError` (stops the test on error) to `assert.NoError` (continues the test on error) when checking the results.
 
 ## Code conventions
 
@@ -129,7 +130,7 @@ Eventually, all resources would be automatically checked for a unit test presence.
 
 ```go
 for name, resource := range p.ResourcesMap {
-    if name != "databricks_scim_user" {
+    if name != "databricks_user" {
         continue
     }
     //...
````
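
To illustrate the new `require` vs `assert` guideline, here is a small hypothetical test (the helper and values are made up, not from this repository):

```go
package example

import (
	"errors"
	"testing"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

// fetchValue is a made-up helper that always fails, to show the contrast.
func fetchValue() (string, error) {
	return "", errors.New("backend unavailable")
}

func TestWithRequire(t *testing.T) {
	v, err := fetchValue()
	require.NoError(t, err)   // test stops right here on error...
	require.Equal(t, "ok", v) // ...so this assertion never runs on bad state
}

func TestWithAssert(t *testing.T) {
	v, err := fetchValue()
	assert.NoError(t, err)   // failure is recorded, but execution continues
	assert.Equal(t, "ok", v) // fails too, adding a second, noisier message
}
```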

Makefile

Lines changed: 5 additions & 2 deletions
```diff
@@ -24,10 +24,13 @@ build:
 	@echo "✓ Building source code with go build ..."
 	@go build -mod vendor -v -o terraform-provider-databricks
 
-install: build
-	@echo "✓ Installing provider into ~/.terraform.d/plugins ..."
+install12: build
+	@echo "✓ Installing provider for Terraform 0.12 into ~/.terraform.d/plugins ..."
 	@test -d $(HOME)/.terraform.d/plugins && rm $(HOME)/.terraform.d/plugins/terraform-provider-databricks* || mkdir -p $(HOME)/.terraform.d/plugins
 	@cp terraform-provider-databricks $(HOME)/.terraform.d/plugins
+
+install: build
+	@echo "✓ Installing provider for Terraform 0.13+ into ~/.terraform.d/plugins ..."
 	@mkdir -p '$(HOME)/.terraform.d/plugins/registry.terraform.io/databrickslabs/databricks/$(shell ./terraform-provider-databricks version)/$(shell go version | awk '{print $$4}' | sed 's#/#_#')'
 	@cp terraform-provider-databricks '$(HOME)/.terraform.d/plugins/registry.terraform.io/databrickslabs/databricks/$(shell ./terraform-provider-databricks version)/$(shell go version | awk '{print $$4}' | sed 's#/#_#')/terraform-provider-databricks'
 	@echo "✓ Use the following configuration to enable the version you've built"
```

README.md

Lines changed: 36 additions & 12 deletions
````diff
@@ -2,7 +2,11 @@
 
 ![Resources](docs/resources.png)
 
-End-to-end workspace creation on [AWS](scripts/awsmt-integration) or [Azure](scripts/azvnet-integration/databricks.tf)
+[AWS](docs/guides/aws-workspace.md) tutorial
+| [Azure](docs/guides/azure-workspace.md) tutorial
+| [End-to-end](docs/guides/workspace-management.md) tutorial
+| Migration from [0.2.x to 0.3.x](docs/guides/migration-0.3.x.md)
+| [Changelog](CHANGELOG.md)
 | [Authentication](docs/index.md)
 | [databricks_aws_s3_mount](docs/resources/aws_s3_mount.md)
 | [databricks_aws_assume_role_policy](docs/data-sources/aws_assume_role_policy.md) data
@@ -13,6 +17,7 @@ End-to-end workspace creation on [AWS](scripts/awsmt-integration) or [Azure](scripts/azvnet-integration/databricks.tf)
 | [databricks_azure_blob_mount](docs/resources/azure_blob_mount.md)
 | [databricks_cluster](docs/resources/cluster.md)
 | [databricks_cluster_policy](docs/resources/cluster_policy.md)
+| [databricks_current_user](docs/data-sources/current_user.md)
 | [databricks_dbfs_file](docs/resources/dbfs_file.md)
 | [databricks_dbfs_file_paths](docs/data-sources/dbfs_file_paths.md) data
 | [databricks_dbfs_file](docs/data-sources/dbfs_file.md) data
@@ -44,7 +49,6 @@ End-to-end workspace creation on [AWS](scripts/awsmt-integration) or [Azure](scripts/azvnet-integration/databricks.tf)
 | [databricks_user_instance_profile](docs/resources/user_instance_profile.md)
 | [databricks_workspace_conf](docs/resources/workspace_conf.md)
 | [Contributing and Development Guidelines](CONTRIBUTING.md)
-| [Changelog](CHANGELOG.md)
 
 [![build](https://github.com/databrickslabs/terraform-provider-databricks/workflows/build/badge.svg?branch=master)](https://github.com/databrickslabs/terraform-provider-databricks/actions?query=workflow%3Abuild+branch%3Amaster) [![codecov](https://codecov.io/gh/databrickslabs/terraform-provider-databricks/branch/master/graph/badge.svg)](https://codecov.io/gh/databrickslabs/terraform-provider-databricks)
 
@@ -75,24 +79,44 @@ provider "databricks" {
   token = "<your PAT token>"
 }
 
+data "databricks_current_user" "me" {}
+data "databricks_spark_version" "latest" {}
 data "databricks_node_type" "smallest" {
   local_disk = true
 }
 
-data "databricks_spark_version" "latest_lts" {
-  long_term_support = true
+resource "databricks_notebook" "this" {
+  path     = "${data.databricks_current_user.me.home}/Terraform"
+  language = "PYTHON"
+  content_base64 = base64encode(<<-EOT
+    # created from ${abspath(path.module)}
+    display(spark.range(10))
+    EOT
+  )
 }
 
-resource "databricks_cluster" "shared_autoscaling" {
-  cluster_name            = "Shared Autoscaling"
-  spark_version           = data.databricks_spark_version.latest_lts.id
-  node_type_id            = data.databricks_node_type.smallest.id
-  autotermination_minutes = 20
+resource "databricks_job" "this" {
+  name = "Terraform Demo (${data.databricks_current_user.me.alphanumeric})"
 
-  autoscale {
-    min_workers = 1
-    max_workers = 50
+  new_cluster {
+    num_workers   = 1
+    spark_version = data.databricks_spark_version.latest.id
+    node_type_id  = data.databricks_node_type.smallest.id
   }
+
+  notebook_task {
+    notebook_path = databricks_notebook.this.path
+  }
+
+  email_notifications {}
+}
+
+output "notebook_url" {
+  value = databricks_notebook.this.url
+}
+
+output "job_url" {
+  value = databricks_job.this.url
 }
 ```
 
````

access/resource_ipaccesslist.go

Lines changed: 1 addition & 1 deletion
```diff
@@ -75,7 +75,7 @@ func (a ipAccessListsAPI) Update(objectID string, ur ipAccessListUpdateRequest)
 }
 
 func (a ipAccessListsAPI) Delete(objectID string) (err error) {
-	err = a.client.Delete(a.context, "/ip-access-lists/"+objectID, nil)
+	err = a.client.Delete(a.context, "/ip-access-lists/"+objectID, map[string]interface{}{})
 	return
 }
 
```
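
The one-line fix swaps a `nil` request body for an empty map, so the DELETE call carries an explicit `{}` JSON body. A standalone sketch of why the two serialize differently (how `DatabricksClient.Delete` handles its body argument internally is not shown in this diff):

```go
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// A nil map marshals to JSON null, while an empty map marshals to {}.
	// An endpoint that insists on a JSON object body in DELETE requests
	// will reject the former.
	var nilBody map[string]interface{}

	null, _ := json.Marshal(nilBody)
	empty, _ := json.Marshal(map[string]interface{}{})

	fmt.Println(string(null))  // null
	fmt.Println(string(empty)) // {}
}
```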

access/resource_ipaccesslist_test.go

Lines changed: 33 additions & 0 deletions
```diff
@@ -3,12 +3,15 @@ package access
 // REST API: https://docs.databricks.com/dev-tools/api/latest/ip-access-list.html#operation/create-list
 
 import (
+	"context"
 	"net/http"
+	"os"
 	"testing"
 
 	"github.com/databrickslabs/databricks-terraform/common"
 	"github.com/databrickslabs/databricks-terraform/internal/qa"
 	"github.com/stretchr/testify/assert"
+	"github.com/stretchr/testify/require"
 )
 
 var (
@@ -21,6 +24,36 @@
 	TestingIPAddressesState = []interface{}{"1.2.3.4", "1.2.4.0/24"}
 )
 
+func TestAccIPACL(t *testing.T) {
+	cloud := os.Getenv("CLOUD_ENV")
+	if cloud == "" {
+		t.Skip("Acceptance tests skipped unless env 'CLOUD_ENV' is set")
+	}
+	client := common.NewClientFromEnvironment()
+	ctx := context.Background()
+	ipAccessListsAPI := NewIPAccessListsAPI(ctx, client)
+	res, err := ipAccessListsAPI.Create(createIPAccessListRequest{
+		Label:       qa.RandomName("ipacl-"),
+		ListType:    "BLOCK",
+		IPAddresses: TestingIPAddresses,
+	})
+	require.NoError(t, err)
+	defer func() {
+		err = ipAccessListsAPI.Delete(res.ListID)
+		require.NoError(t, err)
+	}()
+	err = ipAccessListsAPI.Update(res.ListID, ipAccessListUpdateRequest{
+		Label:       res.Label,
+		ListType:    res.ListType,
+		Enabled:     true,
+		IPAddresses: []string{"4.3.2.1"},
+	})
+	require.NoError(t, err)
+	updated, err := ipAccessListsAPI.Read(res.ListID)
+	require.NoError(t, err)
+	assert.Equal(t, "4.3.2.1", updated.IPAddresses[0])
+}
+
 func TestIPACLCreate(t *testing.T) {
 	d, err := qa.ResourceFixture{
 		Fixtures: []qa.HTTPFixture{
```

common/azure_auth.go

Lines changed: 14 additions & 12 deletions
```diff
@@ -69,18 +69,19 @@ type TokenInfo struct {
 var authorizerMutex sync.Mutex
 
 func (aa *AzureAuth) getAzureEnvironment() (azure.Environment, error) {
+	// Used for unit testing purposes
+	if aa.azureManagementEndpoint != "" {
+		return azure.Environment{
+			ResourceManagerEndpoint: aa.azureManagementEndpoint,
+		}, nil
+	}
+
 	if aa.Environment == "" {
 		return azure.PublicCloud, nil
 	}
 
 	envName := fmt.Sprintf("AZURE%sCLOUD", strings.ToUpper(aa.Environment))
-	env, err := azure.EnvironmentFromName(envName)
-
-	if err != nil {
-		return env, err
-	}
-
-	return env, nil
+	return azure.EnvironmentFromName(envName)
 }
 
 func (aa *AzureAuth) resourceID() string {
@@ -276,14 +277,15 @@ func (aa *AzureAuth) ensureWorkspaceURL(ctx context.Context,
 		return fmt.Errorf("Somehow resource id is not set")
 	}
 	log.Println("[DEBUG] Getting Workspace ID via management token.")
-	endpoint := "https://management.azure.com"
-	if aa.azureManagementEndpoint != "" {
-		// sets endpoint specified in unit test
-		endpoint = aa.azureManagementEndpoint
+	env, err := aa.getAzureEnvironment()
+	if err != nil {
+		return err
 	}
+	// Azure endpoints typically end with a trailing slash; remove it because resourceID starts with a slash
+	managementResourceUrl := strings.TrimSuffix(env.ResourceManagerEndpoint, "/") + resourceID
 	var workspace azureDatabricksWorkspace
 	resp, err := aa.databricksClient.genericQuery(ctx, http.MethodGet,
-		endpoint+resourceID,
+		managementResourceUrl,
 		map[string]string{
 			"api-version": "2018-04-01",
 		}, func(r *http.Request) error {
```
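
The new `strings.TrimSuffix` line avoids a double slash when the management endpoint is joined with the resource ID. A standalone illustration (the endpoint literal below is only an example value):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Example endpoint value; Azure environments report it with a trailing slash.
	endpoint := "https://management.azure.com/"
	resourceID := "/subscriptions/a/resourceGroups/b/providers/Microsoft.Databricks/workspaces/c"

	fmt.Println(endpoint + resourceID)                          // naive join yields "...com//subscriptions/..."
	fmt.Println(strings.TrimSuffix(endpoint, "/") + resourceID) // clean join yields "...com/subscriptions/..."
}
```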

common/azure_auth_test.go

Lines changed: 31 additions & 14 deletions
```diff
@@ -107,6 +107,16 @@ func TestEnsureWorkspaceURL_CornerCases(t *testing.T) {
 	aa.databricksClient = &DatabricksClient{}
 	err = aa.ensureWorkspaceURL(context.Background(), nil)
 	assert.EqualError(t, err, "Somehow resource id is not set")
+
+	aa = AzureAuth{
+		Environment:      "xyz",
+		SubscriptionID:   "a",
+		ResourceGroup:    "b",
+		WorkspaceName:    "c",
+		databricksClient: &DatabricksClient{},
+	}
+	err = aa.ensureWorkspaceURL(context.Background(), nil)
+	assert.EqualError(t, err, "autorest/azure: There is no cloud environment matching the name \"AZUREXYZCLOUD\"")
 }
 
 func TestAcquirePAT_CornerCases(t *testing.T) {
@@ -132,18 +142,6 @@ func TestAcquirePAT_CornerCases(t *testing.T) {
 	assert.Equal(t, "...", auth.TokenValue)
 }
 
-func TestAzureAuth_isClientSecretSet(t *testing.T) {
-	aa := AzureAuth{}
-	assert.False(t, aa.IsClientSecretSet())
-
-	aa.ClientID = "a"
-	assert.False(t, aa.IsClientSecretSet())
-	aa.ClientSecret = "b"
-	assert.False(t, aa.IsClientSecretSet())
-	aa.TenantID = "c"
-	assert.True(t, aa.IsClientSecretSet())
-}
-
 func TestAzureAuth_ensureWorkspaceURL(t *testing.T) {
 	aa := AzureAuth{}
 
@@ -167,7 +165,8 @@ func TestAzureAuth_ensureWorkspaceURL(t *testing.T) {
 	defer server.Close()
 
 	aa.ResourceID = "/subscriptions/a/resourceGroups/b/providers/Microsoft.Databricks/workspaces/c"
-	aa.azureManagementEndpoint = server.URL
+	// resource management endpoints end with a trailing slash in url
+	aa.azureManagementEndpoint = fmt.Sprintf("%s/", server.URL)
 
 	client := DatabricksClient{InsecureSkipVerify: true}
 	client.configureHTTPCLient()
@@ -249,7 +248,8 @@ func TestAzureAuth_configureWithClientSecret(t *testing.T) {
 	aa.ClientID = "a"
 	aa.ClientSecret = "b"
 	aa.TenantID = "c"
-	aa.azureManagementEndpoint = server.URL
+	// resource management endpoints end with a trailing slash in url
+	aa.azureManagementEndpoint = fmt.Sprintf("%s/", server.URL)
 	auth, err = aa.configureWithClientSecret()
 	assert.NotNil(t, auth)
 	assert.NoError(t, err)
@@ -272,6 +272,23 @@ func TestAzureAuth_configureWithClientSecret(t *testing.T) {
 	assert.Len(t, zi.Zones, 3)
 }
 
+func TestAzureEnvironment_WithAzureManagementEndpoint(t *testing.T) {
+	fakeEndpoint := "http://google.com"
+	aa := AzureAuth{azureManagementEndpoint: fakeEndpoint}
+	env, err := aa.getAzureEnvironment()
+	assert.Nil(t, err)
+	// This value should be populated with azureManagementEndpoint for testing
+	assert.Equal(t, env.ResourceManagerEndpoint, fakeEndpoint)
+	// The rest should be empty
+	assert.Equal(t, env.ActiveDirectoryEndpoint, "")
+
+	// Making the azureManagementEndpoint empty should yield PublicCloud
+	aa.azureManagementEndpoint = ""
+	env, err = aa.getAzureEnvironment()
+	assert.Nil(t, err)
+	assert.Equal(t, azure.PublicCloud, env)
+}
+
 func TestAzureEnvironment(t *testing.T) {
 	aa := AzureAuth{}
 	env, err := aa.getAzureEnvironment()
```
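
As the new corner-case test shows, unknown environment names surface as a go-autorest error. A short sketch of the lookup path, with `china` as an assumed `azure_environment` value:

```go
package main

import (
	"fmt"
	"strings"

	"github.com/Azure/go-autorest/autorest/azure"
)

func main() {
	// "china" stands in for an assumed azure_environment provider setting.
	envName := fmt.Sprintf("AZURE%sCLOUD", strings.ToUpper("china"))

	env, err := azure.EnvironmentFromName(envName) // resolves AZURECHINACLOUD
	if err != nil {
		// Unknown names fail here, e.g. AZUREXYZCLOUD in the test above.
		panic(err)
	}
	fmt.Println(env.ResourceManagerEndpoint) // https://management.chinacloudapi.cn/
}
```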
