# [Doc] Correctly use native markdown callouts supported by TF Registry (#4073)
## Changes
The TF Registry supports a [number of special
callouts](https://developer.hashicorp.com/terraform/registry/providers/docs#callouts)
to highlight paragraphs. These callouts automatically add text such as
`**Note**` or `**Warning**`, so we don't need to add it ourselves.
This change also makes usage consistent across informational callouts (`->`),
important callouts (`~>`, rendered with a yellow background), and warnings
(`!>`).
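
For reference, this is roughly what the three callout syntaxes look like in the doc source (a minimal sketch; the exact labels and colors are applied by the Registry when rendering):

```markdown
-> An informational callout; the Registry adds the "Note" styling for us.

~> An important callout; the Registry highlights the paragraph (yellow background).

!> A warning callout; the Registry adds the "Warning" styling for us.
```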
## Tests
- [ ] `make test` run locally
- [x] relevant change in `docs/` folder
- [ ] covered with integration tests in `internal/acceptance`
- [ ] relevant acceptance tests are passing
- [ ] using Go SDK
### docs/resources/access_control_rule_set.md (+3 −3)

```diff
@@ -4,13 +4,13 @@ subcategory: "Security"
 
 # databricks_access_control_rule_set Resource
 
--> **Note**This resource can be used with an account or workspace-level provider.
+-> This resource can be used with an account or workspace-level provider.
 
 This resource allows you to manage access rules on Databricks account level resources. For convenience we allow accessing this resource through the Databricks account and workspace.
 
--> **Note**Currently, we only support managing access rules on service principal, group and account resources through `databricks_access_control_rule_set`.
+-> Currently, we only support managing access rules on service principal, group and account resources through `databricks_access_control_rule_set`.
 
--> **Warning**`databricks_access_control_rule_set` cannot be used to manage access rules for resources supported by [databricks_permissions](permissions.md). Refer to its documentation for more information.
+!>`databricks_access_control_rule_set` cannot be used to manage access rules for resources supported by [databricks_permissions](permissions.md). Refer to its documentation for more information.
```
### docs/resources/artifact_allowlist.md (+2 −3)

```diff
@@ -3,10 +3,9 @@ subcategory: "Unity Catalog"
 ---
 
 # databricks_artifact_allowlist Resource
 
--> **Note**
-It is required to define all allowlist for an artifact type in a single resource, otherwise Terraform cannot guarantee config drift prevention.
+~> It is required to define all allowlist for an artifact type in a single resource, otherwise Terraform cannot guarantee config drift prevention.
 
--> **Note**This resource can only be used with a workspace-level provider!
+-> This resource can only be used with a workspace-level provider!
 
 In Databricks Runtime 13.3 and above, you can add libraries and init scripts to the allowlist in UC so that users can leverage these artifacts on compute configured with shared access mode.
```

### docs/resources/automatic_cluster_update_workspace_setting.md

```diff
--> **Note**This resource can only be used with a workspace-level provider!
+-> This resource can only be used with a workspace-level provider!
 
 The `databricks_automatic_cluster_update_workspace_setting` resource allows you to control whether automatic cluster update is enabled for the current workspace. By default, it is turned off. Enabling this feature on a workspace requires that you add the Enhanced Security and Compliance add-on.
```
### docs/resources/budget.md (+2 −2)

```diff
@@ -3,9 +3,9 @@ subcategory: "FinOps"
 ---
 
 # databricks_budget Resource
 
--> **Note**Initialize provider with `alias = "account"`, and `host` pointing to the account URL, like, `host = "https://accounts.cloud.databricks.com"`. Use `provider = databricks.account` for all account-level resources.
+-> Initialize provider with `alias = "account"`, and `host` pointing to the account URL, like, `host = "https://accounts.cloud.databricks.com"`. Use `provider = databricks.account` for all account-level resources.
 
--> **Public Preview**This feature is in [Public Preview](https://docs.databricks.com/release-notes/release-types.html).
+-> This feature is in [Public Preview](https://docs.databricks.com/release-notes/release-types.html).
 
 This resource allows you to manage [Databricks Budgets](https://docs.databricks.com/en/admin/account-settings/budgets.html).
```
### docs/resources/catalog_workspace_binding.md (+3 −5)

```diff
@@ -3,17 +3,15 @@ subcategory: "Unity Catalog"
 ---
 
 # databricks_catalog_workspace_binding Resource
 
--> **NOTE**This resource has been deprecated and will be removed soon. Please use the [databricks_workspace_binding resource](./workspace_binding.md) instead.
+~> This resource has been deprecated and will be removed soon. Please use the [databricks_workspace_binding resource](./workspace_binding.md) instead.
 
 If you use workspaces to isolate user data access, you may want to limit catalog access to specific workspaces in your account, also known as workspace-catalog binding
 
 By default, Databricks assigns the catalog to all workspaces attached to the current metastore. By using `databricks_catalog_workspace_binding`, the catalog will be unassigned from all workspaces and only assigned explicitly using this resource.
 
--> **Note**
-To use this resource the catalog must have its isolation mode set to `ISOLATED` in the [`databricks_catalog`](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/catalog#isolation_mode) resource. Alternatively, the isolation mode can be set using the UI or API by following [this guide](https://docs.databricks.com/data-governance/unity-catalog/create-catalogs.html#configuration).
+-> To use this resource the catalog must have its isolation mode set to `ISOLATED` in the [`databricks_catalog`](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/catalog#isolation_mode) resource. Alternatively, the isolation mode can be set using the UI or API by following [this guide](https://docs.databricks.com/data-governance/unity-catalog/create-catalogs.html#configuration).
 
--> **Note**
-If the catalog's isolation mode was set to `ISOLATED` using Terraform then the catalog will have been automatically bound to the workspace it was created from.
+-> If the catalog's isolation mode was set to `ISOLATED` using Terraform then the catalog will have been automatically bound to the workspace it was created from.
```
### docs/resources/cluster.md (+4 −4)

```diff
@@ -5,7 +5,7 @@ subcategory: "Compute"
 
 This resource allows you to manage [Databricks Clusters](https://docs.databricks.com/clusters/index.html).
 
--> **Note**In case of [`Cannot access cluster ####-######-####### that was terminated or unpinned more than 30 days ago`](https://github.com/databricks/terraform-provider-databricks/issues/1197#issuecomment-1069386670) errors, please upgrade to v0.5.5 or later. If for some reason you cannot upgrade the version of provider, then the other viable option to unblock the apply pipeline is [`terraform state rm path.to.databricks_cluster.resource`](https://www.terraform.io/cli/commands/state/rm) command.
+-> In case of [`Cannot access cluster ####-######-####### that was terminated or unpinned more than 30 days ago`](https://github.com/databricks/terraform-provider-databricks/issues/1197#issuecomment-1069386670) errors, please upgrade to v0.5.5 or later. If for some reason you cannot upgrade the version of provider, then the other viable option to unblock the apply pipeline is [`terraform state rm path.to.databricks_cluster.resource`](https://www.terraform.io/cli/commands/state/rm) command.
```

```diff
--> **Note** This is a legacy cluster type, not related to the real serverless compute. See [Clusters UI changes and cluster access modes](https://docs.databricks.com/archive/compute/cluster-ui-preview.html#legacy) for information on what access mode to use when creating new clusters.
+~> This is a legacy cluster type, not related to the real serverless compute. See [Clusters UI changes and cluster access modes](https://docs.databricks.com/archive/compute/cluster-ui-preview.html#legacy) for information on what access mode to use when creating new clusters.
 
 To create High-Concurrency cluster, following settings should be provided:
```

```diff
 To install libraries, one must specify each library in a separate configuration block. Each different type of library has a slightly different syntax. It's possible to set only one type of library within one config block. Otherwise, the plan will fail with an error.
 
--> **Note**Please consider using [databricks_library](library.md) resource for a more flexible setup.
+-> Please consider using [databricks_library](library.md) resource for a more flexible setup.
 
 Installing JAR artifacts on a cluster. Location can be anything, that is DBFS or mounted object store (s3, adls, ...)
```

```diff
--> **Note** The underlying API is experimental and may change in the future.
+~> The underlying API is experimental and may change in the future.
 
 It's possible to mount NFS (Network File System) resources into the Spark containers inside the cluster. You can specify one or more `cluster_mount_info` blocks describing the mount. This block has following attributes:
```
### docs/resources/connection.md (+1 −1)

```diff
@@ -3,7 +3,7 @@ subcategory: "Unity Catalog"
 ---
 
 # databricks_connection (Resource)
 
--> **Note**This resource can only be used with a workspace-level provider!
+-> This resource can only be used with a workspace-level provider!
 
 Lakehouse Federation is the query federation platform for Databricks. Databricks uses Unity Catalog to manage query federation. To make a dataset available for read-only querying using Lakehouse Federation, you create the following:
```
### docs/resources/dbfs_file.md

```diff
--> **Note**DBFS files would only be changed, if Terraform stage did change. This means that any manual changes to managed file won't be overwritten by Terraform, if there's no local change.
+-> DBFS files would only be changed, if Terraform stage did change. This means that any manual changes to managed file won't be overwritten by Terraform, if there's no local change.
```