Commit 704db81
[Doc] Correctly use native markdown callouts supported by TF Registry (#4073)
## Changes

The TF Registry supports a [number of special callouts](https://developer.hashicorp.com/terraform/registry/providers/docs#callouts) to highlight paragraphs. These callouts automatically add text like `**Note**` or `**Warning**`, so we don't need to add it ourselves. This change also makes usage consistent across informational callouts (`->`), important callouts (`~>`, rendered with a yellow background), and warnings (`!>`).

## Tests

- [ ] `make test` run locally
- [x] relevant change in `docs/` folder
- [ ] covered with integration tests in `internal/acceptance`
- [ ] relevant acceptance tests are passing
- [ ] using Go SDK
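As a sketch, the three callout forms named in the commit message look like this in a provider doc page (the rendered labels and colors are added by the registry itself, per the linked HashiCorp docs; exact styling is up to the registry):

```markdown
-> An informational callout; the registry renders it as a highlighted "Note" box.

~> An important callout; the registry renders it with a yellow background.

!> A warning callout; the registry renders it as a red "Warning" box.
```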
1 parent 60b8a6f commit 704db81

75 files changed: +138 -149 lines

docs/resources/access_control_rule_set.md

Lines changed: 3 additions & 3 deletions

@@ -4,13 +4,13 @@ subcategory: "Security"
 
 # databricks_access_control_rule_set Resource
 
--> **Note** This resource can be used with an account or workspace-level provider.
+-> This resource can be used with an account or workspace-level provider.
 
 This resource allows you to manage access rules on Databricks account level resources. For convenience we allow accessing this resource through the Databricks account and workspace.
 
--> **Note** Currently, we only support managing access rules on service principal, group and account resources through `databricks_access_control_rule_set`.
+-> Currently, we only support managing access rules on service principal, group and account resources through `databricks_access_control_rule_set`.
 
--> **Warning** `databricks_access_control_rule_set` cannot be used to manage access rules for resources supported by [databricks_permissions](permissions.md). Refer to its documentation for more information.
+!> `databricks_access_control_rule_set` cannot be used to manage access rules for resources supported by [databricks_permissions](permissions.md). Refer to its documentation for more information.
 
 ## Service principal rule set usage
 

docs/resources/artifact_allowlist.md

Lines changed: 2 additions & 3 deletions

@@ -3,10 +3,9 @@ subcategory: "Unity Catalog"
 ---
 # databricks_artifact_allowlist Resource
 
--> **Note**
-It is required to define all allowlist for an artifact type in a single resource, otherwise Terraform cannot guarantee config drift prevention.
+~> It is required to define all allowlist for an artifact type in a single resource, otherwise Terraform cannot guarantee config drift prevention.
 
--> **Note** This resource can only be used with a workspace-level provider!
+-> This resource can only be used with a workspace-level provider!
 
 In Databricks Runtime 13.3 and above, you can add libraries and init scripts to the allowlist in UC so that users can leverage these artifacts on compute configured with shared access mode.

docs/resources/automatic_cluster_update_setting.md

Lines changed: 1 addition & 1 deletion

@@ -4,7 +4,7 @@ subcategory: "Settings"
 
 # databricks_automatic_cluster_update_workspace_setting Resource
 
--> **Note** This resource can only be used with a workspace-level provider!
+-> This resource can only be used with a workspace-level provider!
 
 The `databricks_automatic_cluster_update_workspace_setting` resource allows you to control whether automatic cluster update is enabled for the current workspace. By default, it is turned off. Enabling this feature on a workspace requires that you add the Enhanced Security and Compliance add-on.

docs/resources/budget.md

Lines changed: 2 additions & 2 deletions

@@ -3,9 +3,9 @@ subcategory: "FinOps"
 ---
 # databricks_budget Resource
 
--> **Note** Initialize provider with `alias = "account"`, and `host` pointing to the account URL, like, `host = "https://accounts.cloud.databricks.com"`. Use `provider = databricks.account` for all account-level resources.
+-> Initialize provider with `alias = "account"`, and `host` pointing to the account URL, like, `host = "https://accounts.cloud.databricks.com"`. Use `provider = databricks.account` for all account-level resources.
 
--> **Public Preview** This feature is in [Public Preview](https://docs.databricks.com/release-notes/release-types.html).
+-> This feature is in [Public Preview](https://docs.databricks.com/release-notes/release-types.html).
 
 This resource allows you to manage [Databricks Budgets](https://docs.databricks.com/en/admin/account-settings/budgets.html).
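The account-level provider setup that the budget note describes can be sketched as follows; the alias and host values come from the note itself, while the resource body is a hypothetical placeholder, not a complete budget configuration:

```hcl
# Account-level provider instance, as described in the budget note above.
provider "databricks" {
  alias = "account"
  host  = "https://accounts.cloud.databricks.com"
}

# Account-level resources then select that instance explicitly.
resource "databricks_budget" "this" {
  provider = databricks.account
  # ... budget configuration ...
}
```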

docs/resources/catalog.md

Lines changed: 1 addition & 1 deletion

@@ -3,7 +3,7 @@ subcategory: "Unity Catalog"
 ---
 # databricks_catalog Resource
 
--> **Note** This resource can only be used with a workspace-level provider!
+-> This resource can only be used with a workspace-level provider!
 
 Within a metastore, Unity Catalog provides a 3-level namespace for organizing data: Catalogs, Databases (also called Schemas), and Tables / Views.

docs/resources/catalog_workspace_binding.md

Lines changed: 3 additions & 5 deletions

@@ -3,17 +3,15 @@ subcategory: "Unity Catalog"
 ---
 # databricks_catalog_workspace_binding Resource
 
--> **NOTE**This resource has been deprecated and will be removed soon. Please use the [databricks_workspace_binding resource](./workspace_binding.md) instead.
+~> This resource has been deprecated and will be removed soon. Please use the [databricks_workspace_binding resource](./workspace_binding.md) instead.
 
 If you use workspaces to isolate user data access, you may want to limit catalog access to specific workspaces in your account, also known as workspace-catalog binding
 
 By default, Databricks assigns the catalog to all workspaces attached to the current metastore. By using `databricks_catalog_workspace_binding`, the catalog will be unassigned from all workspaces and only assigned explicitly using this resource.
 
--> **Note**
-To use this resource the catalog must have its isolation mode set to `ISOLATED` in the [`databricks_catalog`](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/catalog#isolation_mode) resource. Alternatively, the isolation mode can be set using the UI or API by following [this guide](https://docs.databricks.com/data-governance/unity-catalog/create-catalogs.html#configuration).
+-> To use this resource the catalog must have its isolation mode set to `ISOLATED` in the [`databricks_catalog`](https://registry.terraform.io/providers/databricks/databricks/latest/docs/resources/catalog#isolation_mode) resource. Alternatively, the isolation mode can be set using the UI or API by following [this guide](https://docs.databricks.com/data-governance/unity-catalog/create-catalogs.html#configuration).
 
--> **Note**
-If the catalog's isolation mode was set to `ISOLATED` using Terraform then the catalog will have been automatically bound to the workspace it was created from.
+-> If the catalog's isolation mode was set to `ISOLATED` using Terraform then the catalog will have been automatically bound to the workspace it was created from.
 
 ## Example Usage
 

docs/resources/cluster.md

Lines changed: 4 additions & 4 deletions

@@ -5,7 +5,7 @@ subcategory: "Compute"
 
 This resource allows you to manage [Databricks Clusters](https://docs.databricks.com/clusters/index.html).
 
--> **Note** In case of [`Cannot access cluster ####-######-####### that was terminated or unpinned more than 30 days ago`](https://github.com/databricks/terraform-provider-databricks/issues/1197#issuecomment-1069386670) errors, please upgrade to v0.5.5 or later. If for some reason you cannot upgrade the version of provider, then the other viable option to unblock the apply pipeline is [`terraform state rm path.to.databricks_cluster.resource`](https://www.terraform.io/cli/commands/state/rm) command.
+-> In case of [`Cannot access cluster ####-######-####### that was terminated or unpinned more than 30 days ago`](https://github.com/databricks/terraform-provider-databricks/issues/1197#issuecomment-1069386670) errors, please upgrade to v0.5.5 or later. If for some reason you cannot upgrade the version of provider, then the other viable option to unblock the apply pipeline is [`terraform state rm path.to.databricks_cluster.resource`](https://www.terraform.io/cli/commands/state/rm) command.
 
 ```hcl
 data "databricks_node_type" "smallest" {
@@ -130,7 +130,7 @@ resource "databricks_cluster" "single_node" {
 
 ### (Legacy) High-Concurrency clusters
 
--> **Note** This is a legacy cluster type, not related to the real serverless compute. See [Clusters UI changes and cluster access modes](https://docs.databricks.com/archive/compute/cluster-ui-preview.html#legacy) for information on what access mode to use when creating new clusters.
+~> This is a legacy cluster type, not related to the real serverless compute. See [Clusters UI changes and cluster access modes](https://docs.databricks.com/archive/compute/cluster-ui-preview.html#legacy) for information on what access mode to use when creating new clusters.
 
 To create High-Concurrency cluster, following settings should be provided:
 
@@ -163,7 +163,7 @@ resource "databricks_cluster" "cluster_with_table_access_control" {
 
 To install libraries, one must specify each library in a separate configuration block. Each different type of library has a slightly different syntax. It's possible to set only one type of library within one config block. Otherwise, the plan will fail with an error.
 
--> **Note** Please consider using [databricks_library](library.md) resource for a more flexible setup.
+-> Please consider using [databricks_library](library.md) resource for a more flexible setup.
 
 Installing JAR artifacts on a cluster. Location can be anything, that is DBFS or mounted object store (s3, adls, ...)
 
@@ -484,7 +484,7 @@ resource "databricks_cluster" "this" {
 
 ### cluster_mount_info blocks (experimental)
 
--> **Note** The underlying API is experimental and may change in the future.
+~> The underlying API is experimental and may change in the future.
 
 It's possible to mount NFS (Network File System) resources into the Spark containers inside the cluster. You can specify one or more `cluster_mount_info` blocks describing the mount. This block has following attributes:

docs/resources/compliance_security_profile_setting.md

Lines changed: 2 additions & 2 deletions

@@ -4,9 +4,9 @@ subcategory: "Settings"
 
 # databricks_compliance_security_profile_workspace_setting Resource
 
--> **Note** This resource can only be used with a workspace-level provider!
+-> This resource can only be used with a workspace-level provider!
 
--> **Note** This setting can NOT be disabled once it is enabled.
+~> This setting can NOT be disabled once it is enabled.
 
 The `databricks_compliance_security_profile_workspace_setting` resource allows you to control whether to enable the
 compliance security profile for the current workspace. Enabling it on a workspace is permanent. By default, it is

docs/resources/connection.md

Lines changed: 1 addition & 1 deletion

@@ -3,7 +3,7 @@ subcategory: "Unity Catalog"
 ---
 # databricks_connection (Resource)
 
--> **Note** This resource can only be used with a workspace-level provider!
+-> This resource can only be used with a workspace-level provider!
 
 Lakehouse Federation is the query federation platform for Databricks. Databricks uses Unity Catalog to manage query federation. To make a dataset available for read-only querying using Lakehouse Federation, you create the following:

docs/resources/dbfs_file.md

Lines changed: 1 addition & 1 deletion

@@ -49,7 +49,7 @@ resource "databricks_library" "app" {
 
 ## Argument Reference
 
--> **Note** DBFS files would only be changed, if Terraform stage did change. This means that any manual changes to managed file won't be overwritten by Terraform, if there's no local change.
+-> DBFS files would only be changed, if Terraform stage did change. This means that any manual changes to managed file won't be overwritten by Terraform, if there's no local change.
 
 The following arguments are supported:
