1 change: 1 addition & 0 deletions NEXT_CHANGELOG.md
@@ -8,6 +8,7 @@

### New Features and Improvements

* Add `provider_config` support for SDKv2-compatible plugin framework resources and data sources ([#5115](https://github.com/databricks/terraform-provider-databricks/pull/5115))
* Optimize `databricks_grant` and `databricks_grants` to not call the `Update` API if the requested permissions are already granted ([#5095](https://github.com/databricks/terraform-provider-databricks/pull/5095))
* Added `expected_workspace_status` to `databricks_mws_workspaces` to support creating workspaces in provisioning status ([#5019](https://github.com/databricks/terraform-provider-databricks/pull/5019))

32 changes: 32 additions & 0 deletions docs/resources/library.md
@@ -127,6 +127,38 @@ resource "databricks_library" "rkeops" {
}
```

## Argument Reference

The following arguments are supported:

* `cluster_id` - (Required) ID of the [databricks_cluster](cluster.md) to install the library on.

You must specify exactly **one** of the following library types:

* `jar` - (Optional) Path to the JAR library. Supported URIs include Workspace paths, Unity Catalog Volumes paths, and S3 URIs. For example: `/Workspace/path/to/library.jar`, `/Volumes/path/to/library.jar` or `s3://my-bucket/library.jar`. If S3 is used, make sure the cluster has read access to the library. You may need to launch the cluster with an IAM role to access the S3 URI.

* `egg` - (Optional, Deprecated) Path to the EGG library. Installing Python egg files is deprecated and is not supported in Databricks Runtime 14.0 and above. Use `whl` or `pypi` instead.

* `whl` - (Optional) Path to the wheel library. Supported URIs include Workspace paths, Unity Catalog Volumes paths, and S3 URIs. For example: `/Workspace/path/to/library.whl`, `/Volumes/path/to/library.whl` or `s3://my-bucket/library.whl`. If S3 is used, make sure the cluster has read access to the library. You may need to launch the cluster with an IAM role to access the S3 URI.

* `requirements` - (Optional) Path to the requirements.txt file. Only Workspace paths and Unity Catalog Volumes paths are supported. For example: `/Workspace/path/to/requirements.txt` or `/Volumes/path/to/requirements.txt`. Requires a cluster with DBR 15.0+.

* `maven` - (Optional) Configuration block for a Maven library. The block consists of the following fields:
* `coordinates` - (Required) Gradle-style Maven coordinates. For example: `org.jsoup:jsoup:1.7.2`.
* `repo` - (Optional) Maven repository to install the Maven package from. If omitted, both Maven Central Repository and Spark Packages are searched.
* `exclusions` - (Optional) List of dependencies to exclude. For example: `["slf4j:slf4j", "*:hadoop-client"]`. See [Maven dependency exclusions](https://maven.apache.org/guides/introduction/introduction-to-optional-and-excludes-dependencies.html) for more information.

* `pypi` - (Optional) Configuration block for a PyPI library. The block consists of the following fields:
* `package` - (Required) The name of the PyPI package to install. An optional exact version specification is also supported. For example: `simplejson` or `simplejson==3.8.0`.
* `repo` - (Optional) The repository where the package can be found. If not specified, the default pip index is used.

* `cran` - (Optional) Configuration block for a CRAN library. The block consists of the following fields:
* `package` - (Required) The name of the CRAN package to install.
* `repo` - (Optional) The repository where the package can be found. If not specified, the default CRAN repo is used.

* `provider_config` - (Optional) Configuration block for management through the account provider, as shown in the example below. This block consists of the following fields:
* `workspace_id` - (Required) Workspace ID that the resource belongs to. This workspace must be part of the account that the provider is configured with.
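
For example, installing a wheel from a Unity Catalog volume while routing management through the account provider might look like the following sketch (the cluster reference, volume path, and workspace ID are placeholders):

```hcl
resource "databricks_library" "app" {
  cluster_id = databricks_cluster.this.id
  whl        = "/Volumes/main/default/libs/app-0.1.0-py3-none-any.whl"

  provider_config {
    workspace_id = "1234567890123456"
  }
}
```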

## Import

!> Importing this resource is not currently supported.
6 changes: 4 additions & 2 deletions docs/resources/quality_monitor.md
@@ -3,7 +3,7 @@ subcategory: "Unity Catalog"
---
# databricks_quality_monitor Resource

This resource allows you to manage [Lakehouse Monitors](https://docs.databricks.com/en/lakehouse-monitoring/index.html) in Databricks.

-> This resource can only be used with a workspace-level provider!

@@ -120,6 +120,8 @@ table.
* `skip_builtin_dashboard` - Whether to skip creating a default dashboard summarizing data quality metrics. (Can't be updated after creation).
* `slicing_exprs` - List of column expressions to slice data with for targeted analysis. The data is grouped by each expression independently, resulting in a separate slice for each predicate and its complements. For high-cardinality columns, only the top 100 unique values by frequency will generate slices.
* `warehouse_id` - Optional argument to specify the warehouse for dashboard creation. If not specified, the first running warehouse will be used. (Can't be updated after creation)
* `provider_config` - (Optional) Configuration block for management through the account provider, as shown in the example below. This block consists of the following fields:
* `workspace_id` - (Required) Workspace ID that the resource belongs to. This workspace must be part of the account that the provider is configured with.
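
A sketch of a monitor managed through the account provider, assuming the resource's usual required arguments (`table_name`, `assets_dir`, `output_schema_name`, and a monitor-type block such as `time_series`); all names and the workspace ID are placeholders:

```hcl
resource "databricks_quality_monitor" "orders" {
  table_name         = "main.sales.orders"
  assets_dir         = "/Workspace/Users/me@example.com/quality"
  output_schema_name = "main.monitoring"

  time_series {
    granularities = ["1 day"]
    timestamp_col = "event_ts"
  }

  skip_builtin_dashboard = true
  slicing_exprs          = ["region"]

  provider_config {
    workspace_id = "1234567890123456"
  }
}
```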

## Attribute Reference

@@ -129,7 +131,7 @@ In addition to all arguments above, the following attributes are exported:
* `monitor_version` - The version of the monitor config (e.g., 1, 2, 3). If negative, the monitor may be corrupted.
* `drift_metrics_table_name` - The full name of the drift metrics table. Format: __catalog_name__.__schema_name__.__table_name__.
* `profile_metrics_table_name` - The full name of the profile metrics table. Format: __catalog_name__.__schema_name__.__table_name__.
* `status` - Status of the Monitor
* `dashboard_id` - The ID of the generated dashboard.

## Related Resources
2 changes: 2 additions & 0 deletions docs/resources/share.md
@@ -85,6 +85,8 @@ The following arguments are required:
* `name` - (Required) Name of share. Change forces creation of a new resource.
* `owner` - (Optional) User name/group name/sp application_id of the share owner.
* `comment` - (Optional) User-supplied free-form text.
* `provider_config` - (Optional) Configuration block for management through the account provider, as shown in the example below. This block consists of the following fields:
* `workspace_id` - (Required) Workspace ID that the resource belongs to. This workspace must be part of the account that the provider is configured with.
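
A minimal sketch of a share managed through the account provider (the `object` block is described below; all names and the workspace ID are placeholders):

```hcl
resource "databricks_share" "sales" {
  name    = "sales"
  comment = "Shared sales tables"

  object {
    name             = "main.sales.orders"
    data_object_type = "TABLE"
  }

  provider_config {
    workspace_id = "1234567890123456"
  }
}
```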

### object Configuration Block

@@ -4,6 +4,7 @@ import (
"context"
"errors"
"fmt"
"reflect"
"time"

"github.com/databricks/databricks-sdk-go/apierr"
@@ -16,6 +17,7 @@ import (
"github.com/databricks/terraform-provider-databricks/internal/providers/pluginfw/tfschema"
"github.com/databricks/terraform-provider-databricks/internal/service/compute_tf"
"github.com/databricks/terraform-provider-databricks/libraries"
"github.com/hashicorp/terraform-plugin-framework-validators/listvalidator"
"github.com/hashicorp/terraform-plugin-framework/diag"
"github.com/hashicorp/terraform-plugin-framework/path"
"github.com/hashicorp/terraform-plugin-framework/resource"
@@ -76,6 +78,13 @@ type LibraryExtended struct {
compute_tf.Library_SdkV2
ClusterId types.String `tfsdk:"cluster_id"`
ID types.String `tfsdk:"id"` // Adding ID field to stay compatible with SDKv2
tfschema.Namespace_SdkV2
}

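// GetComplexFieldTypes extends the generated field-type map so the plugin
// framework can decode the provider_config block into tfschema.ProviderConfig.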
func (l LibraryExtended) GetComplexFieldTypes(ctx context.Context) map[string]reflect.Type {
attrs := l.Library_SdkV2.GetComplexFieldTypes(ctx)
attrs["provider_config"] = reflect.TypeOf(tfschema.ProviderConfig{})
return attrs
}

type LibraryResource struct {
@@ -107,6 +116,7 @@ func (r *LibraryResource) Schema(ctx context.Context, req resource.SchemaRequest
c.SetOptional("id")
c.SetComputed("id")
c.SetDeprecated(clusters.EggDeprecationWarning, "egg")
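// provider_config is modeled as a list-nested block, so cap it at one entry.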
c.AddValidator(listvalidator.SizeAtMost(1), "provider_config")
return c
})
resp.Schema = schema.Schema{
@@ -124,13 +134,20 @@ func (r *LibraryResource) Configure(ctx context.Context, req resource.ConfigureR

func (r *LibraryResource) Create(ctx context.Context, req resource.CreateRequest, resp *resource.CreateResponse) {
ctx = pluginfwcontext.SetUserAgentInResourceContext(ctx, resourceName)
// Read the plan first so provider_config is available to resolve the workspace client.
var libraryTfSDK LibraryExtended
resp.Diagnostics.Append(req.Plan.Get(ctx, &libraryTfSDK)...)
if resp.Diagnostics.HasError() {
return
}

workspaceID, diags := tfschema.GetWorkspaceID_SdkV2(ctx, libraryTfSDK.ProviderConfig)
resp.Diagnostics.Append(diags...)
if resp.Diagnostics.HasError() {
return
}

w, diags := r.Client.GetWorkspaceClientForUnifiedProviderWithDiagnostics(ctx, workspaceID)
resp.Diagnostics.Append(diags...)
if resp.Diagnostics.HasError() {
return
}
@@ -171,21 +188,30 @@ func (r *LibraryResource) Create(ctx context.Context, req resource.CreateRequest
}

installedLib.ID = types.StringValue(libGoSDK.String())
installedLib.ProviderConfig = libraryTfSDK.ProviderConfig
resp.Diagnostics.Append(resp.State.Set(ctx, installedLib)...)
}

func (r *LibraryResource) Read(ctx context.Context, req resource.ReadRequest, resp *resource.ReadResponse) {
ctx = pluginfwcontext.SetUserAgentInResourceContext(ctx, resourceName)
// Read the prior state first so provider_config is available to resolve the workspace client.
var libraryTfSDK LibraryExtended
resp.Diagnostics.Append(req.State.Get(ctx, &libraryTfSDK)...)
if resp.Diagnostics.HasError() {
return
}

workspaceID, diags := tfschema.GetWorkspaceID_SdkV2(ctx, libraryTfSDK.ProviderConfig)
resp.Diagnostics.Append(diags...)
if resp.Diagnostics.HasError() {
return
}

w, diags := r.Client.GetWorkspaceClientForUnifiedProviderWithDiagnostics(ctx, workspaceID)
resp.Diagnostics.Append(diags...)
if resp.Diagnostics.HasError() {
return
}

var libGoSDK compute.Library
resp.Diagnostics.Append(converters.TfSdkToGoSdkStruct(ctx, libraryTfSDK, &libGoSDK)...)
if resp.Diagnostics.HasError() {
@@ -209,6 +235,7 @@ func (r *LibraryResource) Read(ctx context.Context, req resource.ReadRequest, re
return
}

installedLib.ProviderConfig = libraryTfSDK.ProviderConfig
resp.Diagnostics.Append(resp.State.Set(ctx, installedLib)...)
}

@@ -218,16 +245,24 @@ func (r *LibraryResource) Update(ctx context.Context, req resource.UpdateRequest

func (r *LibraryResource) Delete(ctx context.Context, req resource.DeleteRequest, resp *resource.DeleteResponse) {
ctx = pluginfwcontext.SetUserAgentInResourceContext(ctx, resourceName)
// Read the prior state first so provider_config is available to resolve the workspace client.
var libraryTfSDK LibraryExtended
resp.Diagnostics.Append(req.State.Get(ctx, &libraryTfSDK)...)
if resp.Diagnostics.HasError() {
return
}

workspaceID, diags := tfschema.GetWorkspaceID_SdkV2(ctx, libraryTfSDK.ProviderConfig)
resp.Diagnostics.Append(diags...)
if resp.Diagnostics.HasError() {
return
}

w, diags := r.Client.GetWorkspaceClientForUnifiedProviderWithDiagnostics(ctx, workspaceID)
resp.Diagnostics.Append(diags...)
if resp.Diagnostics.HasError() {
return
}

clusterID := libraryTfSDK.ClusterId.ValueString()
var libGoSDK compute.Library
resp.Diagnostics.Append(converters.TfSdkToGoSdkStruct(ctx, libraryTfSDK, &libGoSDK)...)