
Commit 836ac2b

[Feature] Add unified provider support for SDKv2 compatible plugin framework resources and data sources (#5115)
## Changes

Add unified provider support for the following resources:

- Quality Monitor
- Library
- Shares

These use `types.List` to be compatible with SDKv2.

Also noticed that we don't have documentation for the `databricks_library` resource; added it as well.

Main documentation: #5122

## Tests

Integration tests
1 parent cf1aa45 commit 836ac2b

10 files changed: 482 additions & 66 deletions


NEXT_CHANGELOG.md

Lines changed: 1 addition & 0 deletions
@@ -8,6 +8,7 @@
 
 ### New Features and Improvements
 
+* Add `provider_config` support for SDKv2 compatible plugin framework resources and data sources ([#5115](https://github.com/databricks/terraform-provider-databricks/pull/5115))
 * Optimize `databricks_grant` and `databricks_grants` to not call the `Update` API if the requested permissions are already granted ([#5095](https://github.com/databricks/terraform-provider-databricks/pull/5095))
 * Added `expected_workspace_status` to `databricks_mws_workspaces` to support creating workspaces in provisioning status ([#5019](https://github.com/databricks/terraform-provider-databricks/pull/5019))
 

docs/resources/library.md

Lines changed: 32 additions & 0 deletions
@@ -127,6 +127,38 @@ resource "databricks_library" "rkeops" {
 }
 ```
 
+## Argument Reference
+
+The following arguments are supported:
+
+* `cluster_id` - (Required) ID of the [databricks_cluster](cluster.md) to install the library on.
+
+You must specify exactly **one** of the following library types:
+
+* `jar` - (Optional) Path to the JAR library. Supported URIs include Workspace paths, Unity Catalog Volumes paths, and S3 URIs. For example: `/Workspace/path/to/library.jar`, `/Volumes/path/to/library.jar` or `s3://my-bucket/library.jar`. If S3 is used, make sure the cluster has read access to the library. You may need to launch the cluster with an IAM role to access the S3 URI.
+
+* `egg` - (Optional, Deprecated) Path to the EGG library. Installing Python egg files is deprecated and is not supported in Databricks Runtime 14.0 and above. Use `whl` or `pypi` instead.
+
+* `whl` - (Optional) Path to the wheel library. Supported URIs include Workspace paths, Unity Catalog Volumes paths, and S3 URIs. For example: `/Workspace/path/to/library.whl`, `/Volumes/path/to/library.whl` or `s3://my-bucket/library.whl`. If S3 is used, make sure the cluster has read access to the library. You may need to launch the cluster with an IAM role to access the S3 URI.
+
+* `requirements` - (Optional) Path to the requirements.txt file. Only Workspace paths and Unity Catalog Volumes paths are supported. For example: `/Workspace/path/to/requirements.txt` or `/Volumes/path/to/requirements.txt`. Requires a cluster with DBR 15.0+.
+
+* `maven` - (Optional) Configuration block for a Maven library. The block consists of the following fields:
+  * `coordinates` - (Required) Gradle-style Maven coordinates. For example: `org.jsoup:jsoup:1.7.2`.
+  * `repo` - (Optional) Maven repository to install the Maven package from. If omitted, both Maven Central Repository and Spark Packages are searched.
+  * `exclusions` - (Optional) List of dependencies to exclude. For example: `["slf4j:slf4j", "*:hadoop-client"]`. See [Maven dependency exclusions](https://maven.apache.org/guides/introduction/introduction-to-optional-and-excludes-dependencies.html) for more information.
+
+* `pypi` - (Optional) Configuration block for a PyPI library. The block consists of the following fields:
+  * `package` - (Required) The name of the PyPI package to install. An optional exact version specification is also supported. For example: `simplejson` or `simplejson==3.8.0`.
+  * `repo` - (Optional) The repository where the package can be found. If not specified, the default pip index is used.
+
+* `cran` - (Optional) Configuration block for a CRAN library. The block consists of the following fields:
+  * `package` - (Required) The name of the CRAN package to install.
+  * `repo` - (Optional) The repository where the package can be found. If not specified, the default CRAN repo is used.
+
+* `provider_config` - (Optional) Configuration block for management through the account provider. This block consists of the following fields:
+  * `workspace_id` - (Required) Workspace ID that the resource belongs to. This workspace must be part of the account that the provider is configured with.
+
 ## Import
 
 !> Importing this resource is not currently supported.
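
Taken together, the arguments above look like this in practice. A minimal sketch, assuming an account-level provider; the cluster and workspace IDs are placeholders:

```hcl
resource "databricks_library" "simplejson" {
  cluster_id = "1234-567890-abcde123" # hypothetical cluster ID

  pypi {
    package = "simplejson==3.8.0"
  }

  # New in this change: manage the library through an account-level provider
  provider_config {
    workspace_id = "1234567890123456" # hypothetical workspace ID
  }
}
```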

docs/resources/quality_monitor.md

Lines changed: 4 additions & 2 deletions
@@ -3,7 +3,7 @@ subcategory: "Unity Catalog"
 ---
 # databricks_quality_monitor Resource
 
-This resource allows you to manage [Lakehouse Monitors](https://docs.databricks.com/en/lakehouse-monitoring/index.html) in Databricks. 
+This resource allows you to manage [Lakehouse Monitors](https://docs.databricks.com/en/lakehouse-monitoring/index.html) in Databricks.
 
 -> This resource can only be used with a workspace-level provider!
 
@@ -120,6 +120,8 @@ table.
 * `skip_builtin_dashboard` - Whether to skip creating a default dashboard summarizing data quality metrics. (Can't be updated after creation.)
 * `slicing_exprs` - List of column expressions to slice data with for targeted analysis. The data is grouped by each expression independently, resulting in a separate slice for each predicate and its complements. For high-cardinality columns, only the top 100 unique values by frequency will generate slices.
 * `warehouse_id` - Optional argument to specify the warehouse for dashboard creation. If not specified, the first running warehouse will be used. (Can't be updated after creation.)
+* `provider_config` - (Optional) Configuration block for management through the account provider. This block consists of the following fields:
+  * `workspace_id` - (Required) Workspace ID that the resource belongs to. This workspace must be part of the account that the provider is configured with.
 
 ## Attribute Reference
 
@@ -129,7 +131,7 @@ In addition to all arguments above, the following attributes are exported:
 * `monitor_version` - The version of the monitor config (e.g. 1, 2, 3). If negative, the monitor may be corrupted.
 * `drift_metrics_table_name` - The full name of the drift metrics table. Format: __catalog_name__.__schema_name__.__table_name__.
 * `profile_metrics_table_name` - The full name of the profile metrics table. Format: __catalog_name__.__schema_name__.__table_name__.
-* `status` - Status of the Monitor 
+* `status` - Status of the Monitor
 * `dashboard_id` - The ID of the generated dashboard.
 
 ## Related Resources
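
The new `provider_config` block in context for this resource. A minimal sketch, assuming an account-level provider; the table, directory, schema, and workspace values are placeholders:

```hcl
resource "databricks_quality_monitor" "this" {
  table_name         = "main.default.my_table"        # hypothetical table
  assets_dir         = "/Workspace/Shared/monitoring" # hypothetical path
  output_schema_name = "main.default"                 # hypothetical schema

  snapshot {}

  provider_config {
    workspace_id = "1234567890123456" # hypothetical workspace ID
  }
}
```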

docs/resources/share.md

Lines changed: 2 additions & 0 deletions
@@ -85,6 +85,8 @@ The following arguments are required:
 * `name` - (Required) Name of share. Change forces creation of a new resource.
 * `owner` - (Optional) User name/group name/sp application_id of the share owner.
 * `comment` - (Optional) User-supplied free-form text.
+* `provider_config` - (Optional) Configuration block for management through the account provider. This block consists of the following fields:
+  * `workspace_id` - (Required) Workspace ID that the resource belongs to. This workspace must be part of the account that the provider is configured with.
 
 ### object Configuration Block
 
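Likewise for `databricks_share`. A minimal sketch, assuming an account-level provider; the share name, table name, and workspace ID are placeholders:

```hcl
resource "databricks_share" "this" {
  name = "my_share" # hypothetical share name

  object {
    name             = "main.default.my_table" # hypothetical table
    data_object_type = "TABLE"
  }

  provider_config {
    workspace_id = "1234567890123456" # hypothetical workspace ID
  }
}
```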

internal/providers/pluginfw/products/library/resource_library.go

Lines changed: 44 additions & 9 deletions
@@ -4,6 +4,7 @@ import (
 	"context"
 	"errors"
 	"fmt"
+	"reflect"
 	"time"
 
 	"github.com/databricks/databricks-sdk-go/apierr"
@@ -16,6 +17,7 @@ import (
 	"github.com/databricks/terraform-provider-databricks/internal/providers/pluginfw/tfschema"
 	"github.com/databricks/terraform-provider-databricks/internal/service/compute_tf"
 	"github.com/databricks/terraform-provider-databricks/libraries"
+	"github.com/hashicorp/terraform-plugin-framework-validators/listvalidator"
 	"github.com/hashicorp/terraform-plugin-framework/diag"
 	"github.com/hashicorp/terraform-plugin-framework/path"
 	"github.com/hashicorp/terraform-plugin-framework/resource"
@@ -76,6 +78,13 @@ type LibraryExtended struct {
 	compute_tf.Library_SdkV2
 	ClusterId types.String `tfsdk:"cluster_id"`
 	ID        types.String `tfsdk:"id"` // Adding ID field to stay compatible with SDKv2
+	tfschema.Namespace_SdkV2
+}
+
+func (l LibraryExtended) GetComplexFieldTypes(ctx context.Context) map[string]reflect.Type {
+	attrs := l.Library_SdkV2.GetComplexFieldTypes(ctx)
+	attrs["provider_config"] = reflect.TypeOf(tfschema.ProviderConfig{})
+	return attrs
 }
 
 type LibraryResource struct {
@@ -107,6 +116,7 @@ func (r *LibraryResource) Schema(ctx context.Context, req resource.SchemaRequest
 		c.SetOptional("id")
 		c.SetComputed("id")
 		c.SetDeprecated(clusters.EggDeprecationWarning, "egg")
+		c.AddValidator(listvalidator.SizeAtMost(1), "provider_config")
 		return c
 	})
 	resp.Schema = schema.Schema{
@@ -124,13 +134,20 @@ func (r *LibraryResource) Configure(ctx context.Context, req resource.ConfigureR
 
 func (r *LibraryResource) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) {
 	ctx = pluginfwcontext.SetUserAgentInResourceContext(ctx, resourceName)
-	w, diags := r.Client.GetWorkspaceClient()
+	var libraryTfSDK LibraryExtended
+	resp.Diagnostics.Append(req.Plan.Get(ctx, &libraryTfSDK)...)
+	if resp.Diagnostics.HasError() {
+		return
+	}
+
+	workspaceID, diags := tfschema.GetWorkspaceID_SdkV2(ctx, libraryTfSDK.ProviderConfig)
 	resp.Diagnostics.Append(diags...)
 	if resp.Diagnostics.HasError() {
 		return
 	}
-	var libraryTfSDK LibraryExtended
-	resp.Diagnostics.Append(req.Plan.Get(ctx, &libraryTfSDK)...)
+
+	w, diags := r.Client.GetWorkspaceClientForUnifiedProviderWithDiagnostics(ctx, workspaceID)
+	resp.Diagnostics.Append(diags...)
 	if resp.Diagnostics.HasError() {
 		return
 	}
@@ -171,21 +188,30 @@ func (r *LibraryResource) Create(ctx context.Context, req resource.CreateRequest
 	}
 
 	installedLib.ID = types.StringValue(libGoSDK.String())
+	installedLib.ProviderConfig = libraryTfSDK.ProviderConfig
 	resp.Diagnostics.Append(resp.State.Set(ctx, installedLib)...)
 }
 
 func (r *LibraryResource) Read(ctx context.Context, req resource.ReadRequest, resp *resource.ReadResponse) {
 	ctx = pluginfwcontext.SetUserAgentInResourceContext(ctx, resourceName)
-	w, diags := r.Client.GetWorkspaceClient()
+	var libraryTfSDK LibraryExtended
+	resp.Diagnostics.Append(req.State.Get(ctx, &libraryTfSDK)...)
+	if resp.Diagnostics.HasError() {
+		return
+	}
+
+	workspaceID, diags := tfschema.GetWorkspaceID_SdkV2(ctx, libraryTfSDK.ProviderConfig)
 	resp.Diagnostics.Append(diags...)
 	if resp.Diagnostics.HasError() {
 		return
 	}
-	var libraryTfSDK LibraryExtended
-	resp.Diagnostics.Append(req.State.Get(ctx, &libraryTfSDK)...)
+
+	w, diags := r.Client.GetWorkspaceClientForUnifiedProviderWithDiagnostics(ctx, workspaceID)
+	resp.Diagnostics.Append(diags...)
 	if resp.Diagnostics.HasError() {
 		return
 	}
+
 	var libGoSDK compute.Library
 	resp.Diagnostics.Append(converters.TfSdkToGoSdkStruct(ctx, libraryTfSDK, &libGoSDK)...)
 	if resp.Diagnostics.HasError() {
@@ -209,6 +235,7 @@ func (r *LibraryResource) Read(ctx context.Context, req resource.ReadRequest, re
 		return
 	}
 
+	installedLib.ProviderConfig = libraryTfSDK.ProviderConfig
 	resp.Diagnostics.Append(resp.State.Set(ctx, installedLib)...)
 }
 
@@ -218,16 +245,24 @@ func (r *LibraryResource) Update(ctx context.Context, req resource.UpdateRequest
 
 func (r *LibraryResource) Delete(ctx context.Context, req resource.DeleteRequest, resp *resource.DeleteResponse) {
 	ctx = pluginfwcontext.SetUserAgentInResourceContext(ctx, resourceName)
-	w, diags := r.Client.GetWorkspaceClient()
+	var libraryTfSDK LibraryExtended
+	resp.Diagnostics.Append(req.State.Get(ctx, &libraryTfSDK)...)
+	if resp.Diagnostics.HasError() {
+		return
+	}
+
+	workspaceID, diags := tfschema.GetWorkspaceID_SdkV2(ctx, libraryTfSDK.ProviderConfig)
 	resp.Diagnostics.Append(diags...)
 	if resp.Diagnostics.HasError() {
 		return
 	}
-	var libraryTfSDK LibraryExtended
-	resp.Diagnostics.Append(req.State.Get(ctx, &libraryTfSDK)...)
+
+	w, diags := r.Client.GetWorkspaceClientForUnifiedProviderWithDiagnostics(ctx, workspaceID)
+	resp.Diagnostics.Append(diags...)
 	if resp.Diagnostics.HasError() {
 		return
 	}
+
 	clusterID := libraryTfSDK.ClusterId.ValueString()
 	var libGoSDK compute.Library
 	resp.Diagnostics.Append(converters.TfSdkToGoSdkStruct(ctx, libraryTfSDK, &libGoSDK)...)
