
Commit 70a47c6

Prepare v0.3.11 (#903)
1 parent 7b12f73 commit 70a47c6

File tree

9 files changed: +46 -30 lines changed


CHANGELOG.md
Lines changed: 10 additions & 0 deletions

```diff
@@ -5,6 +5,16 @@
 * Added `databricks_sql_global_config` resource to provide global configuration for SQL Endpoints ([#855](https://github.com/databrickslabs/terraform-provider-databricks/issues/855))
 * Added `databricks_mount` resource to mount arbitrary cloud storage ([#497](https://github.com/databrickslabs/terraform-provider-databricks/issues/497))
 * Improved implementation of `databricks_repo` by creating the parent folder structure ([#895](https://github.com/databrickslabs/terraform-provider-databricks/pull/895))
+* Fixed `databricks_job` error related [to randomized job IDs](https://docs.databricks.com/release-notes/product/2021/august.html#jobs-service-stability-and-scalability-improvements) ([#901](https://github.com/databrickslabs/terraform-provider-databricks/issues/901))
+* Replace `databricks_group` on name change ([#890](https://github.com/databrickslabs/terraform-provider-databricks/pull/890))
+* Names of scopes in `databricks_secret_scope` can have `/` characters in them ([#892](https://github.com/databrickslabs/terraform-provider-databricks/pull/892))
+
+**Deprecations**
+* `databricks_aws_s3_mount`, `databricks_azure_adls_gen1_mount`, `databricks_azure_adls_gen2_mount`, and `databricks_azure_blob_mount` are deprecated in favor of `databricks_mount`.
+
+Updated dependency versions:
+
+* Bump google.golang.org/api from 0.59.0 to 0.60.0
 
 ## 0.3.10
 
```

docs/resources/mount.md
Lines changed: 6 additions & 9 deletions

````diff
@@ -3,8 +3,6 @@ subcategory: "Storage"
 ---
 # databricks_mount Resource
 
--> **Note** This resource has an evolving API, which may change in future versions of the provider.
-
 This resource will mount your cloud storage account on `dbfs:/mnt/yourname`. Right now it supports mounting AWS S3, Azure (Blob Storage, ADLS Gen1 & Gen2), and Google Cloud Storage. It is important to understand that this will start up the [cluster](cluster.md) if the cluster is terminated. The terraform read and refresh commands require a cluster and may take some time to validate the mount. If `cluster_id` is not specified, it will create the smallest possible cluster, with a name equal to or starting with `terraform-mount`, for the shortest possible amount of time.
 
 This resource provides two ways of mounting a storage account:
@@ -21,9 +19,9 @@ This resource provides two ways of mounting a storage account:
 
 * `cluster_id` - (Optional, String) Cluster to use for mounting. If no cluster is specified, a new cluster will be created and will mount the bucket for all of the clusters in this workspace. If the cluster is not running, it will be started, so be sure to set auto-termination rules on it.
 * `name` - (Optional, String) Name under which the mount will be accessible in `dbfs:/mnt/<MOUNT_NAME>`. If not specified, the provider will try to infer it based on the resource type:
-  * bucket name for AWS S3 and Google Cloud Storage
-  * container name for ADLS Gen2 and Azure Blob Storage
-  * storage resource name for ADLS Gen1
+  * `bucket_name` for AWS S3 and Google Cloud Storage
+  * `container_name` for ADLS Gen2 and Azure Blob Storage
+  * `storage_resource_name` for ADLS Gen1
 * `uri` - (Optional, String) the URI for accessing specific storage (`s3a://....`, `abfss://....`, `gs://....`, etc.)
 * `extra_configs` - (Optional, String map) configuration parameters that are necessary for mounting of specific storage
 * `resource_id` - (Optional, String) resource ID for given storage account. Could be used to fill defaults, such as storage account & container names on Azure.
@@ -33,8 +31,8 @@ This resource provides two ways of mounting a storage account:
 
 ```hcl
 locals {
-  tenant_id    = "8f35a392-f2ae-4280-9796-f1864a10eeec"
-  client_id    = "d1b2a25b-86c4-451a-a0eb-0808be121957"
+  tenant_id    = "00000000-1111-2222-3333-444444444444"
+  client_id    = "55555555-6666-7777-8888-999999999999"
   secret_scope = "some-kv"
   secret_key   = "some-sp-secret"
   container    = "test"
@@ -82,10 +80,9 @@ data "azurerm_databricks_workspace" "this" {
 
 # it works only with AAD token!
 provider "databricks" {
-  azure_workspace_resource_id = data.azurerm_databricks_workspace.this.id
+  host = data.azurerm_databricks_workspace.this.workspace_url
 }
 
-
 data "databricks_node_type" "smallest" {
   local_disk = true
 }
````
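The name-inference rules listed above (bucket name for S3/GCS, container name for ADLS Gen2/Blob Storage) can be sketched in Go. This is a hypothetical illustration based only on the documented behavior, not the provider's actual implementation; `inferMountName` and its scheme handling are assumptions.

```go
package main

import (
	"fmt"
	"net/url"
)

// inferMountName is a hypothetical sketch of the documented inference rules:
// the bucket name for s3a:// and gs:// URIs, and the container name (the
// user-info part before '@') for abfss:// and wasbs:// URIs.
func inferMountName(uri string) (string, error) {
	u, err := url.Parse(uri)
	if err != nil {
		return "", err
	}
	switch u.Scheme {
	case "s3a", "gs":
		return u.Host, nil // bucket name
	case "abfss", "wasbs":
		// e.g. abfss://container@account.dfs.core.windows.net/path
		if u.User != nil && u.User.Username() != "" {
			return u.User.Username(), nil
		}
		return "", fmt.Errorf("no container name in %q", uri)
	default:
		return "", fmt.Errorf("cannot infer mount name from %q", uri)
	}
}

func main() {
	name, _ := inferMountName("s3a://my-bucket/path")
	fmt.Println(name) // my-bucket
}
```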

docs/resources/sql_global_config.md
Lines changed: 3 additions & 3 deletions

```diff
@@ -5,7 +5,7 @@ subcategory: "Databricks SQL"
 
 -> **Public Preview** This feature is in [Public Preview](https://docs.databricks.com/release-notes/release-types.html).
 
-This resource configures the security policy, instance profile (AWS only), and data access properties for all SQL endpoints of workspace. *Please note that changing parameters of this resources will restart all running SQL endpoints.* To use this resource you need to be an administrator.
+This resource configures the security policy, [databricks_instance_profile](instance_profile.md), and data access properties for all [databricks_sql_endpoint](sql_endpoint.md) of the workspace. *Please note that changing parameters of this resource will restart all running [databricks_sql_endpoint](sql_endpoint.md).* To use this resource you need to be an administrator.
 
 ## Example usage
 
@@ -24,8 +24,8 @@ resource "databricks_sql_global_config" "this" {
 The following arguments are supported (see [documentation](https://docs.databricks.com/sql/api/sql-endpoints.html#global-edit) for more details):
 
 * `security_policy` (Optional, String) - The policy for controlling access to datasets. Default value: `DATA_ACCESS_CONTROL`, consult documentation for list of possible values
-* `data_access_config` (Optional, Map) - data access configuration for SQL Endpoints, such as configuration for an external Hive metastore, Hadoop Filesystem configuration, etc. Please note that the list of supported configuration properties is limited, so refer to the [documentation](https://docs.databricks.com/sql/admin/data-access-configuration.html#supported-properties) for a full list. Apply will fail if you're specifying not permitted configuration.
-* `instance_profile_arn` (Optional, String) - Instance profile used to access storage from SQL endpoints. Please note that this parameter is only for AWS, and will generate an error if used on other clouds.
+* `data_access_config` (Optional, Map) - data access configuration for [databricks_sql_endpoint](sql_endpoint.md), such as configuration for an external Hive metastore, Hadoop Filesystem configuration, etc. Please note that the list of supported configuration properties is limited, so refer to the [documentation](https://docs.databricks.com/sql/admin/data-access-configuration.html#supported-properties) for a full list. Apply will fail if you specify a configuration that is not permitted.
+* `instance_profile_arn` (Optional, String) - [databricks_instance_profile](instance_profile.md) used to access storage from [databricks_sql_endpoint](sql_endpoint.md). Please note that this parameter is only for AWS, and will generate an error if used on other clouds.
 
 ## Import
 
```

sqlanalytics/resource_sql_global_config.go
Lines changed: 5 additions & 11 deletions

```diff
@@ -14,34 +14,31 @@ type ConfPair struct {
 	Value string `json:"value"`
 }
 
-// GlobalConfig ...
+// GlobalConfig used to generate Terraform resource schema and bind to resource data
 type GlobalConfig struct {
 	SecurityPolicy          string            `json:"security_policy,omitempty" tf:"default:DATA_ACCESS_CONTROL"`
 	DataAccessConfig        map[string]string `json:"data_access_config,omitempty"`
 	InstanceProfileARN      string            `json:"instance_profile_arn,omitempty"`
 	EnableServerlessCompute bool              `json:"enable_serverless_compute,omitempty" tf:"default:false"`
 }
 
-// GlobalConfigForRead ...
+// GlobalConfigForRead used to talk to REST API
 type GlobalConfigForRead struct {
 	SecurityPolicy          string     `json:"security_policy"`
 	DataAccessConfig        []ConfPair `json:"data_access_config"`
 	InstanceProfileARN      string     `json:"instance_profile_arn,omitempty"`
 	EnableServerlessCompute bool       `json:"enable_serverless_compute,omitempty"`
 }
 
-// NewSqlGlobalConfigAPI ...
 func NewSqlGlobalConfigAPI(ctx context.Context, m interface{}) globalConfigAPI {
 	return globalConfigAPI{m.(*common.DatabricksClient), ctx}
 }
 
-// sAPI ...
 type globalConfigAPI struct {
 	client  *common.DatabricksClient
 	context context.Context
 }
 
-// Set ...
 func (a globalConfigAPI) Set(gc GlobalConfig) error {
 	data := map[string]interface{}{
 		"security_policy": gc.SecurityPolicy,
@@ -84,15 +81,13 @@ func (a globalConfigAPI) Get() (GlobalConfig, error) {
 	return gc, nil
 }
 
-// ResourceSQLGlobalConfig ...
 func ResourceSQLGlobalConfig() *schema.Resource {
 	s := common.StructToSchema(GlobalConfig{}, func(
 		m map[string]*schema.Schema) map[string]*schema.Schema {
 		m["instance_profile_arn"].Default = ""
 		return m
 	})
-
-	set_func := func(ctx context.Context, d *schema.ResourceData, c *common.DatabricksClient) error {
+	setGlobalConfig := func(ctx context.Context, d *schema.ResourceData, c *common.DatabricksClient) error {
 		var gc GlobalConfig
 		if err := common.DataToStructPointer(d, s, &gc); err != nil {
 			return err
@@ -103,9 +98,8 @@ func ResourceSQLGlobalConfig() *schema.Resource {
 		d.SetId("global")
 		return nil
 	}
-
 	return common.Resource{
-		Create: set_func,
+		Create: setGlobalConfig,
 		Read: func(ctx context.Context, d *schema.ResourceData, c *common.DatabricksClient) error {
 			gc, err := NewSqlGlobalConfigAPI(ctx, c).Get()
 			if err != nil {
@@ -114,7 +108,7 @@ func ResourceSQLGlobalConfig() *schema.Resource {
 			err = common.StructToData(gc, s, d)
 			return err
 		},
-		Update: set_func,
+		Update: setGlobalConfig,
 		Delete: func(ctx context.Context, d *schema.ResourceData, c *common.DatabricksClient) error {
 			return NewSqlGlobalConfigAPI(ctx, c).Set(GlobalConfig{SecurityPolicy: "DATA_ACCESS_CONTROL"})
 		},
```
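The split between `GlobalConfig` (a plain `map[string]string` in the Terraform schema) and `GlobalConfigForRead` (a list of key/value pairs from the REST API) implies a conversion step inside `Get()`. Below is a minimal, self-contained sketch of that conversion; it assumes `ConfPair` has a `Key` field alongside the `Value` field shown in the hunk, which is not visible above.

```go
package main

import "fmt"

// ConfPair mirrors the API's key/value representation; the Key field is an
// assumption, since the hunk above only shows the Value field.
type ConfPair struct {
	Key   string `json:"key"`
	Value string `json:"value"`
}

// pairsToMap flattens the REST API's list-of-pairs form into the
// map[string]string form used by the Terraform schema.
func pairsToMap(pairs []ConfPair) map[string]string {
	m := make(map[string]string, len(pairs))
	for _, p := range pairs {
		m[p.Key] = p.Value
	}
	return m
}

func main() {
	pairs := []ConfPair{{Key: "spark.sql.session.timeZone", Value: "UTC"}}
	fmt.Println(pairsToMap(pairs)["spark.sql.session.timeZone"]) // UTC
}
```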

storage/adls_gen1_mount.go
Lines changed: 2 additions & 2 deletions

```diff
@@ -46,7 +46,7 @@ func (m AzureADLSGen1Mount) Config(client *common.DatabricksClient) map[string]s
 
 // ResourceAzureAdlsGen1Mount creates the resource
 func ResourceAzureAdlsGen1Mount() *schema.Resource {
-	return commonMountResource(AzureADLSGen1Mount{}, map[string]*schema.Schema{
+	return deprecatedMountTesource(commonMountResource(AzureADLSGen1Mount{}, map[string]*schema.Schema{
 		"cluster_id": {
 			Type:     schema.TypeString,
 			Optional: true,
@@ -106,5 +106,5 @@ func ResourceAzureAdlsGen1Mount() *schema.Resource {
 			Required: true,
 			ForceNew: true,
 		},
-	})
+	}))
 }
```

storage/adls_gen2_mount.go
Lines changed: 2 additions & 2 deletions

```diff
@@ -48,7 +48,7 @@ func (m AzureADLSGen2Mount) Config(client *common.DatabricksClient) map[string]s
 
 // ResourceAzureAdlsGen2Mount creates the resource
 func ResourceAzureAdlsGen2Mount() *schema.Resource {
-	return commonMountResource(AzureADLSGen2Mount{}, map[string]*schema.Schema{
+	return deprecatedMountTesource(commonMountResource(AzureADLSGen2Mount{}, map[string]*schema.Schema{
 		"cluster_id": {
 			Type:     schema.TypeString,
 			Optional: true,
@@ -106,5 +106,5 @@ func ResourceAzureAdlsGen2Mount() *schema.Resource {
 			Required: true,
 			ForceNew: true,
 		},
-	})
+	}))
 }
```

storage/aws_s3_mount.go
Lines changed: 4 additions & 0 deletions

```diff
@@ -39,6 +39,10 @@ func (m AWSIamMount) Config(client *common.DatabricksClient) map[string]string {
 func ResourceAWSS3Mount() *schema.Resource {
 	tpl := AWSIamMount{}
 	r := &schema.Resource{
+		DeprecationMessage: "Resource is deprecated and will be removed in further versions. " +
+			"Please rewrite configuration using `databricks_mount` resource. More info at " +
+			"https://registry.terraform.io/providers/databrickslabs/databricks/latest/docs/" +
+			"resources/mount#migration-from-other-mount-resources",
 		Schema: map[string]*schema.Schema{
 			"cluster_id": {
 				Type: schema.TypeString,
```

storage/azure_blob_mount.go
Lines changed: 2 additions & 2 deletions

```diff
@@ -47,7 +47,7 @@ func (m AzureBlobMount) Config(client *common.DatabricksClient) map[string]strin
 
 // ResourceAzureBlobMount creates the resource
 func ResourceAzureBlobMount() *schema.Resource {
-	return commonMountResource(AzureBlobMount{}, map[string]*schema.Schema{
+	return deprecatedMountTesource(commonMountResource(AzureBlobMount{}, map[string]*schema.Schema{
 		"cluster_id": {
 			Type:     schema.TypeString,
 			Optional: true,
@@ -97,5 +97,5 @@ func ResourceAzureBlobMount() *schema.Resource {
 			Sensitive: true,
 			ForceNew:  true,
 		},
-	})
+	}))
 }
```

storage/mounts.go
Lines changed: 12 additions & 1 deletion

```diff
@@ -100,7 +100,10 @@ func (mp MountPoint) Mount(mo Mount, client *common.DatabricksClient) (source st
 }
 
 func commonMountResource(tpl Mount, s map[string]*schema.Schema) *schema.Resource {
-	resource := &schema.Resource{Schema: s, SchemaVersion: 2}
+	resource := &schema.Resource{
+		SchemaVersion: 2,
+		Schema:        s,
+	}
 	// nolint should be a bigger context-aware refactor
 	resource.CreateContext = mountCreate(tpl, resource)
 	resource.ReadContext = mountRead(tpl, resource)
@@ -111,6 +114,14 @@ func commonMountResource(tpl Mount, s map[string]*schema.Schema) *schema.Resourc
 	return resource
 }
 
+func deprecatedMountTesource(r *schema.Resource) *schema.Resource {
+	r.DeprecationMessage = "Resource is deprecated and will be removed in further versions. " +
+		"Please rewrite configuration using `databricks_mount` resource. More info at " +
+		"https://registry.terraform.io/providers/databrickslabs/databricks/latest/docs/" +
+		"resources/mount#migration-from-other-mount-resources"
+	return r
+}
+
 // NewMountPoint returns new mount point config
 func NewMountPoint(executor common.CommandExecutor, name, clusterID string) MountPoint {
 	return MountPoint{
```
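The return-the-mutated-pointer wrapper added in this hunk lets each call site wrap its existing constructor expression in a single call. The pattern can be illustrated with a minimal stand-in type; the real `*schema.Resource` comes from the Terraform plugin SDK, so the `Resource` struct and `deprecateMount` helper below are simplified assumptions for illustration only.

```go
package main

import "fmt"

// Resource is a minimal stand-in for the SDK's *schema.Resource, carrying
// only the field this pattern touches.
type Resource struct {
	DeprecationMessage string
}

// deprecateMount mutates the resource in place and returns the same pointer,
// so an existing `return newResource(...)` can become
// `return deprecateMount(newResource(...))` without restructuring.
func deprecateMount(r *Resource) *Resource {
	r.DeprecationMessage = "Resource is deprecated and will be removed in further versions. " +
		"Please rewrite configuration using `databricks_mount` resource."
	return r
}

func main() {
	r := deprecateMount(&Resource{})
	fmt.Println(len(r.DeprecationMessage) > 0) // true
}
```

Mutating and returning the argument keeps the wrapper composable, which is why the three Azure/AWS mount constructors above need only a one-line change each.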
