[Feature] Add unified provider support for SDKv2 compatible plugin framework resources and data sources (#5115)
## Changes
Add unified provider support for the following resources:

- Quality Monitor
- Library
- Shares

These resources use `types.List` in their schemas to remain compatible with SDKv2.

We also noticed that the library resource had no documentation, so this PR adds it as well.
Main documentation:
#5122
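As a minimal, hypothetical sketch of what unified provider support enables (the host, account ID, share name, and workspace ID below are placeholder values, not taken from this PR), an account-level provider can manage one of these workspace resources through the new `provider_config` block:

```hcl
# Account-level provider; host and account_id are illustrative placeholders.
provider "databricks" {
  host       = "https://accounts.cloud.databricks.com"
  account_id = "00000000-0000-0000-0000-000000000000"
}

# Workspace-level resource managed through the account provider via
# provider_config.workspace_id.
resource "databricks_share" "example" {
  name = "my_share"

  object {
    name             = "main.default.my_table"
    data_object_type = "TABLE"
  }

  provider_config {
    workspace_id = "1234567890"
  }
}
```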
## Tests
Integration tests
### NEXT_CHANGELOG.md (1 addition, 0 deletions)
### New Features and Improvements

* Add `provider_config` support for SDKv2 compatible plugin framework resources and data sources ([#5115](https://github.com/databricks/terraform-provider-databricks/pull/5115))
* Optimize `databricks_grant` and `databricks_grants` to not call the `Update` API if the requested permissions are already granted ([#5095](https://github.com/databricks/terraform-provider-databricks/pull/5095))
* Added `expected_workspace_status` to `databricks_mws_workspaces` to support creating workspaces in provisioning status ([#5019](https://github.com/databricks/terraform-provider-databricks/pull/5019))
### docs/resources/library.md

* `cluster_id` - (Required) ID of the [databricks_cluster](cluster.md) to install the library on.

You must specify exactly **one** of the following library types:
* `jar` - (Optional) Path to the JAR library. Supported URIs include Workspace paths, Unity Catalog Volumes paths, and S3 URIs. For example: `/Workspace/path/to/library.jar`, `/Volumes/path/to/library.jar` or `s3://my-bucket/library.jar`. If S3 is used, make sure the cluster has read access to the library. You may need to launch the cluster with an IAM role to access the S3 URI.
* `egg` - (Optional, Deprecated) Path to the EGG library. Installing Python egg files is deprecated and is not supported in Databricks Runtime 14.0 and above. Use `whl` or `pypi` instead.
* `whl` - (Optional) Path to the wheel library. Supported URIs include Workspace paths, Unity Catalog Volumes paths, and S3 URIs. For example: `/Workspace/path/to/library.whl`, `/Volumes/path/to/library.whl` or `s3://my-bucket/library.whl`. If S3 is used, make sure the cluster has read access to the library. You may need to launch the cluster with an IAM role to access the S3 URI.
* `requirements` - (Optional) Path to the requirements.txt file. Only Workspace paths and Unity Catalog Volumes paths are supported. For example: `/Workspace/path/to/requirements.txt` or `/Volumes/path/to/requirements.txt`. Requires a cluster with DBR 15.0+.
* `maven` - (Optional) Configuration block for a Maven library. The block consists of the following fields:
  * `coordinates` - (Required) Gradle-style Maven coordinates. For example: `org.jsoup:jsoup:1.7.2`.
  * `repo` - (Optional) Maven repository to install the Maven package from. If omitted, both Maven Central Repository and Spark Packages are searched.
  * `exclusions` - (Optional) List of dependencies to exclude. For example: `["slf4j:slf4j", "*:hadoop-client"]`. See [Maven dependency exclusions](https://maven.apache.org/guides/introduction/introduction-to-optional-and-excludes-dependencies.html) for more information.
* `pypi` - (Optional) Configuration block for a PyPI library. The block consists of the following fields:
  * `package` - (Required) The name of the PyPI package to install. An optional exact version specification is also supported. For example: `simplejson` or `simplejson==3.8.0`.
  * `repo` - (Optional) The repository where the package can be found. If not specified, the default pip index is used.
* `cran` - (Optional) Configuration block for a CRAN library. The block consists of the following fields:
  * `package` - (Required) The name of the CRAN package to install.
  * `repo` - (Optional) The repository where the package can be found. If not specified, the default CRAN repo is used.
* `provider_config` - (Optional) Configuration block for management through the account provider. This block consists of the following fields:
  * `workspace_id` - (Required) Workspace ID that the resource belongs to. This workspace must be part of the account that the provider is configured with.
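Putting the arguments above together, a minimal sketch of a library installation managed through the account provider might look like this (the cluster ID, package version, and workspace ID are illustrative placeholders):

```hcl
resource "databricks_library" "fastapi" {
  # Placeholder cluster ID; use the ID of a real databricks_cluster.
  cluster_id = "0123-456789-abcdefgh"

  # Exactly one library type block must be set; here, a pinned PyPI package.
  pypi {
    package = "fastapi==0.112.0"
  }

  # New in this PR: manage the resource via an account-level provider.
  provider_config {
    workspace_id = "1234567890"
  }
}
```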
## Import

!> Importing this resource is not currently supported.
### docs/resources/quality_monitor.md (4 additions, 2 deletions)
---
subcategory: "Unity Catalog"
---

# databricks_quality_monitor Resource

This resource allows you to manage [Lakehouse Monitors](https://docs.databricks.com/en/lakehouse-monitoring/index.html) in Databricks.

-> This resource can only be used with a workspace-level provider!
* `skip_builtin_dashboard` - Whether to skip creating a default dashboard summarizing data quality metrics. (Can't be updated after creation.)
* `slicing_exprs` - List of column expressions to slice data with for targeted analysis. The data is grouped by each expression independently, resulting in a separate slice for each predicate and its complements. For high-cardinality columns, only the top 100 unique values by frequency will generate slices.
* `warehouse_id` - Optional argument to specify the warehouse for dashboard creation. If not specified, the first running warehouse will be used. (Can't be updated after creation.)
* `provider_config` - (Optional) Configuration block for management through the account provider. This block consists of the following fields:
  * `workspace_id` - (Required) Workspace ID that the resource belongs to. This workspace must be part of the account that the provider is configured with.
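As a hedged sketch of how `provider_config` fits into a monitor definition (the table name, assets directory, schema, and workspace ID below are placeholders, not values from this PR):

```hcl
resource "databricks_quality_monitor" "example" {
  # Placeholder Unity Catalog table to monitor.
  table_name         = "main.default.my_table"
  assets_dir         = "/Workspace/Shared/monitors"
  output_schema_name = "main.default"

  # Snapshot-type monitoring profile.
  snapshot {}

  # New in this PR: manage the resource via an account-level provider.
  provider_config {
    workspace_id = "1234567890"
  }
}
```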
## Attribute Reference

In addition to all arguments above, the following attributes are exported:

* `monitor_version` - The version of the monitor config (e.g. 1, 2, 3). If negative, the monitor may be corrupted.
* `drift_metrics_table_name` - The full name of the drift metrics table. Format: __catalog_name__.__schema_name__.__table_name__.
* `profile_metrics_table_name` - The full name of the profile metrics table. Format: __catalog_name__.__schema_name__.__table_name__.
* `status` - Status of the Monitor.
* `dashboard_id` - The ID of the generated dashboard.