Commit 6e7d1a2

Fix #288 - Added importers for SCIM resource (#290)
* Fix #288 - Added importers for SCIM resource, and updated documentation for all resources, making it clearer to the end user.
* Fix lint

Co-authored-by: Serge Smertin <[email protected]>
1 parent a7de374 commit 6e7d1a2

22 files changed: +65 −79 lines changed

docs/resources/aws_s3_mount.md

Lines changed: 5 additions & 5 deletions

@@ -1,8 +1,8 @@
 # databricks_aws_s3_mount Resource
 
-**This resource has evolving API, which may change in future versions of provider.**
+-> **Note** This resource has an evolving API, which may change in future versions of the provider.
 
-This resource will mount your S3 bucket on `dbfs:/mnt/yourname`. It is important to understand that this will start up the cluster if the cluster is terminated. The read and refresh terraform command will require a cluster and make take some time to validate mount. If cluster_id is not specified, it will create the smallest possible cluster called `terraform-mount` for shortest possible amount of time.
+This resource will mount your S3 bucket on `dbfs:/mnt/yourname`. It is important to understand that this will start up the [cluster](cluster.md) if the cluster is terminated. The read and refresh terraform commands will require a cluster and may take some time to validate the mount. If `cluster_id` is not specified, it will create the smallest possible cluster, called `terraform-mount`, for the shortest possible amount of time.
 
 ## Example Usage
 
@@ -17,7 +17,7 @@ resource "databricks_s3_mount" "this" {
 }
 ```
 
-Full end-to-end actions required to securely mount S3 bucket on all clusters with the same instance profile:
+Full end-to-end actions required to securely mount an S3 bucket on all clusters with the same [instance profile](instance_profile.md):
 
 ```hcl
 // Step 1: Create bucket
@@ -127,8 +127,8 @@ resource "databricks_s3_mount" "this" {
 
 The following arguments are required:
 
-* `cluster_id` - (Optional) (String) Cluster to use for mounting. If no cluster is specified, new cluster will be created and will mount the bucket for all of the clusters in this workspace. If cluster is specified, mount will be visible for all clusters with the same [instance profile](./instance_profile.md). If cluster is not running - it's going to be started, so be aware to set autotermination rules on it.
-* `instance_profile` - (Optional) (String) ARN of registeted instance profile for data access.
+* `cluster_id` - (Optional) (String) [Cluster](cluster.md) to use for mounting. If no cluster is specified, a new cluster will be created and will mount the bucket for all of the clusters in this workspace. If a cluster is specified, the mount will be visible for all clusters with the same [instance profile](./instance_profile.md). If the cluster is not running, it will be started, so be sure to set auto-termination rules on it.
+* `instance_profile` - (Optional) (String) ARN of a registered [instance profile](instance_profile.md) for data access.
 * `mount_name` - (Required) (String) Name, under which mount will be accessible in `dbfs:/mnt/<MOUNT_NAME>` or locally on each instance through FUSE mount `/dbfs/mnt/<MOUNT_NAME>`.
 * `s3_bucket_name` - (Required) (String) S3 bucket name to be mounted.
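A minimal sketch composing the four arguments above. The resource type follows this page's title (note that the hunk header above shows `databricks_s3_mount`, so check which name your provider version uses); the bucket name and ARN are illustrative placeholders, not values from this commit:

```hcl
// Minimal sketch: mounts an existing bucket using an already-registered
// instance profile. The names and the ARN are placeholders.
resource "databricks_aws_s3_mount" "this" {
  mount_name       = "experiments"
  s3_bucket_name   = "my-databricks-bucket"
  instance_profile = "arn:aws:iam::1234567890:instance-profile/my-profile"
}
```

With this in place, the bucket would be reachable at `dbfs:/mnt/experiments` on every cluster sharing that instance profile.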
docs/resources/azure_adls_gen1_mount.md

Lines changed: 2 additions & 2 deletions

@@ -1,8 +1,8 @@
 # databricks_azure_adls_gen1_mount Resource
 
-**This resource has evolving API, which may change in future versions of provider.**
+-> **Note** This resource has an evolving API, which may change in future versions of the provider.
 
-This resource will mount your ADLS v1 bucket on `dbfs:/mnt/yourname`. It is important to understand that this will start up the cluster if the cluster is terminated. The read and refresh terraform command will require a cluster and make take some time to validate mount. If cluster_id is not specified, it will create the smallest possible cluster called `terraform-mount` for shortest possible amount of time.
+This resource will mount your ADLS v1 bucket on `dbfs:/mnt/yourname`. It is important to understand that this will start up the [cluster](cluster.md) if the cluster is terminated. The read and refresh terraform commands will require a cluster and may take some time to validate the mount. If `cluster_id` is not specified, it will create the smallest possible cluster, called `terraform-mount`, for the shortest possible amount of time.
 
 
 ## Example Usage
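This diff does not show the resource's arguments. Purely as a hypothetical sketch, a service-principal-based Gen1 mount might look like the block below; every argument name here (`storage_resource_name`, `tenant_id`, `client_id`, `client_secret_scope`, `client_secret_key`) is an assumption, not something confirmed by this commit:

```hcl
// Hypothetical sketch only: argument names are assumed, not taken from
// this diff. Mounts an ADLS Gen1 store with a service principal whose
// secret lives in a Databricks secret scope.
resource "databricks_azure_adls_gen1_mount" "this" {
  storage_resource_name = "mydatalake"                           // placeholder account name
  mount_name            = "gen1"
  tenant_id             = "00000000-0000-0000-0000-000000000000" // placeholder
  client_id             = "00000000-0000-0000-0000-000000000000" // placeholder
  client_secret_scope   = "terraform"
  client_secret_key     = "service_principal_secret"
}
```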
docs/resources/azure_adls_gen2_mount.md

Lines changed: 2 additions & 2 deletions

@@ -1,8 +1,8 @@
 # databricks_azure_adls_gen2_mount Resource
 
-**This resource has evolving API, which may change in future versions of provider.**
+-> **Note** This resource has an evolving API, which may change in future versions of the provider.
 
-This resource will mount your ADLS v2 bucket on `dbfs:/mnt/yourname`. It is important to understand that this will start up the cluster if the cluster is terminated. The read and refresh terraform command will require a cluster and make take some time to validate mount. If cluster_id is not specified, it will create the smallest possible cluster called `terraform-mount` for shortest possible amount of time.
+This resource will mount your ADLS v2 bucket on `dbfs:/mnt/yourname`. It is important to understand that this will start up the [cluster](cluster.md) if the cluster is terminated. The read and refresh terraform commands will require a cluster and may take some time to validate the mount. If `cluster_id` is not specified, it will create the smallest possible cluster, called `terraform-mount`, for the shortest possible amount of time.
 
 ## Example Usage
 
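As with Gen1, the arguments are not shown in this diff. A hypothetical sketch of an OAuth mount via a service principal; the argument names are assumptions, not taken from this commit:

```hcl
// Hypothetical sketch only: argument names are assumed, not from this diff.
// Mounts an ADLS Gen2 filesystem via OAuth with a service principal.
resource "databricks_azure_adls_gen2_mount" "this" {
  storage_account_name   = "mystorageaccount"                     // placeholder
  container_name         = "marketing"                            // placeholder
  mount_name             = "marketing"
  tenant_id              = "00000000-0000-0000-0000-000000000000" // placeholder
  client_id              = "00000000-0000-0000-0000-000000000000" // placeholder
  client_secret_scope    = "terraform"
  client_secret_key      = "service_principal_secret"
  initialize_file_system = true
}
```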
docs/resources/azure_blob_mount.md

Lines changed: 2 additions & 2 deletions

@@ -1,8 +1,8 @@
 # databricks_azure_blob_mount Resource
 
-**This resource has evolving API, which may change in future versions of provider.**
+-> **Note** This resource has an evolving API, which may change in future versions of the provider.
 
-This resource will mount your Azure Blob Storage bucket on `dbfs:/mnt/yourname`. It is important to understand that this will start up the cluster if the cluster is terminated. The read and refresh terraform command will require a cluster and make take some time to validate mount. If cluster_id is not specified, it will create the smallest possible cluster called `terraform-mount` for shortest possible amount of time. This resource will help you create, get and delete a azure blob storage mount using SAS token or storage account access keys.
+This resource will mount your Azure Blob Storage container on `dbfs:/mnt/yourname`. It is important to understand that this will start up the [cluster](cluster.md) if the cluster is terminated. The read and refresh terraform commands will require a cluster and may take some time to validate the mount. If `cluster_id` is not specified, it will create the smallest possible cluster, called `terraform-mount`, for the shortest possible amount of time. This resource will help you create, get and delete an Azure Blob Storage mount using a SAS token or storage account access keys.
 
 
 ## Example Usage
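The description mentions mounting with either a SAS token or storage account access keys. A hypothetical sketch of the access-key variant; the argument names are assumptions, not shown in this diff:

```hcl
// Hypothetical sketch only: argument names are assumed, not from this diff.
// Mounts a Blob Storage container using a storage account access key kept
// in a Databricks secret scope.
resource "databricks_azure_blob_mount" "this" {
  storage_account_name = "mystorageaccount" // placeholder
  container_name       = "logs"             // placeholder
  mount_name           = "logs"
  auth_type            = "ACCESS_KEY"       // presumably "SAS" for the token variant
  token_secret_scope   = "terraform"
  token_secret_key     = "storage_account_key"
}
```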
docs/resources/cluster.md

Lines changed: 0 additions & 1 deletion

@@ -29,7 +29,6 @@ disable automatic termination. _It is highly recommended to have this setting pr
 * `single_user_name` - (Optional) The optional user name of the user to assign to an interactive cluster. This is required when using standard AAD Passthrough for Azure Datalake Storage (ADLS) with a single-user cluster (i.e. not high-concurrency clusters).
 * `idempotency_token` - (Optional) An optional token that can be used to guarantee the idempotency of cluster creation requests. If an active cluster with the provided token already exists, the request will not create a new cluster, but it will return the ID of the existing cluster instead. The existence of a cluster with the same token is not checked against terminated clusters. If you specify the idempotency token, upon failure you can retry until the request succeeds. Databricks will guarantee that exactly one cluster will be launched with that idempotency token. This token should have at most 64 characters.
 * `ssh_public_keys` - (Optional) SSH public key contents that will be added to each Spark node in this cluster. The corresponding private keys can be used to login with the user name ubuntu on port 2200. Up to 10 keys can be specified.
-TODO: add example
 * `spark_env_vars` - (Optional) Map with environment variable key-value pairs to fine tune Spark clusters. Key-value pair of the form (X,Y) are exported as is (i.e., export X='Y') while launching the driver and workers. To specify an additional set of SPARK_DAEMON_JAVA_OPTS, we recommend appending them to $SPARK_DAEMON_JAVA_OPTS as shown in the example below. This ensures that all default databricks managed environmental variables are included as well.
 * `custom_tags` - (Optional) Additional tags for cluster resources. Databricks will tag all cluster resources (e.g., AWS instances and EBS volumes) with these tags in addition to `default_tags`.
 * `spark_conf` - (Optional) Map with key-value pairs to fine tune Spark clusters, where you can provide custom [Spark configuration properties](https://spark.apache.org/docs/latest/configuration.html) in a cluster configuration. You can also pass in a string of extra JVM options to the driver and the executors via `spark.driver.extraJavaOptions` and `spark.executor.extraJavaOptions` respectively. It is advised to keep all common configurations in [Cluster Policies](cluster_policy.md) to maintain control of the environments launched.
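The removed `TODO: add example` never got a replacement in this hunk. A hedged sketch of how these optional arguments fit together on a cluster; the runtime version, node type, and key material are placeholders, and the required arguments follow the rest of cluster.md:

```hcl
resource "databricks_cluster" "shared_autoscaling" {
  cluster_name            = "Shared Autoscaling"
  spark_version           = "6.6.x-scala2.11" // placeholder runtime version
  node_type_id            = "i3.xlarge"       // placeholder node type
  autotermination_minutes = 20

  autoscale {
    min_workers = 1
    max_workers = 10
  }

  // Keys listed here allow logging in as `ubuntu` on port 2200.
  ssh_public_keys = ["ssh-rsa AAAAB3Nza... admin@example.com"] // placeholder key

  // Appending to $SPARK_DAEMON_JAVA_OPTS preserves the Databricks-managed
  // defaults, as recommended above.
  spark_env_vars = {
    SPARK_DAEMON_JAVA_OPTS = "$SPARK_DAEMON_JAVA_OPTS -Dspark.shuffle.service.enabled=true"
  }

  spark_conf = {
    "spark.databricks.io.cache.enabled" = "true"
  }

  custom_tags = {
    "team" = "data-engineering"
  }
}
```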
docs/resources/cluster_policy.md

Lines changed: 6 additions & 6 deletions

@@ -1,23 +1,23 @@
 # databricks_cluster_policy Resource
 
-This resource creates a cluster policy, which limits the ability to create clusters based on a set of rules. The policy rules limit the attributes or attribute values available for cluster creation. Cluster policies have ACLs that limit their use to specific users and groups. Only admin users can create, edit, and delete policies. Admin users also have access to all policies.
+This resource creates a [cluster](cluster.md) policy, which limits the ability to create clusters based on a set of rules. The policy rules limit the attributes or attribute values available for cluster creation. Cluster policies have ACLs that limit their use to specific users and groups. Only admin users can create, edit, and delete policies. Admin users also have access to all policies.
 
 Cluster policies let you:
 
 * Limit users to create clusters with prescribed settings.
 * Simplify the user interface and enable more users to create their own clusters (by fixing and hiding some values).
-* Control cost by limiting per cluster maximum cost (by setting limits on attributes whose values contribute to hourly price).
+* Control cost by limiting the per-[cluster](cluster.md) maximum cost (by setting limits on attributes whose values contribute to hourly price).
 
 Cluster policy permissions limit which policies a user can select in the Policy drop-down when the user creates a cluster:
 
 * If no policies have been created in the workspace, the Policy drop-down does not display.
-* A user who has cluster create permission can select the Free form policy and create fully-configurable clusters.
-* A user who has both cluster create permission and access to cluster policies can select the Free form policy and policies they have access to.
-* A user that has access to only cluster policies, can select the policies they have access to.
+* A user who has [cluster](cluster.md) create permission can select the `Free form` policy and create fully-configurable clusters.
+* A user who has both cluster create permission and access to cluster policies can select the `Free form` policy and the policies they have access to.
+* A user who has access only to cluster policies can select the policies they have access to.
 
 ## Example Usage
 
-Let us take a look at an example of how you can manage two teams: Marketing and Data Engineering. In the following scenario we want the marketing team to have a really good query experience, so we enabled delta cache for them. On the other hand we want the data engineering team to be able to utilize bigger clusters so we increased the dbus per hour that they can spend. This strategy allows your marketing users and data engineering users to use Databricks in a self service manner but have a different experience in regards to security and performance. And down the line if you need to add more global settings you can propagate them through the “base cluster policy”.
+Let us take a look at an example of how you can manage two teams: Marketing and Data Engineering. In the following scenario, we want the marketing team to have a really good query experience, so we enable the delta cache for them. On the other hand, we want the data engineering team to be able to utilize bigger clusters, so we increase the DBUs per hour that they can spend. This strategy allows your marketing users and data engineering users to use Databricks in a self-service manner, but with a different experience in regards to security and performance. And down the line, if you need to add more global settings, you can propagate them through the "base [cluster](cluster.md) policy".
 
 `modules/base-cluster-policy/main.tf` could look like:
 
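The module body itself is truncated in this diff. A rough sketch of the pattern the paragraph describes, a shared base policy merged with per-team overrides, could look like this (the names, rules, and override mechanism are illustrative, not from the commit):

```hcl
// Illustrative sketch of a "base cluster policy" module: a shared policy
// definition that teams can override, e.g. raising the DBU budget for
// data engineering or enabling the delta cache for marketing.
variable "team" {
  description = "Team that the cluster policy is for"
  type        = string
}

variable "policy_overrides" {
  description = "Per-team rules merged over the base policy"
  default     = {}
}

locals {
  base_policy = {
    "autotermination_minutes" : { "type" : "fixed", "value" : 20, "hidden" : true },
    "custom_tags.Team" : { "type" : "fixed", "value" : var.team }
  }
}

resource "databricks_cluster_policy" "fair_use" {
  name       = "${var.team} cluster policy"
  definition = jsonencode(merge(local.base_policy, var.policy_overrides))
}
```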
docs/resources/group.md

Lines changed: 6 additions & 12 deletions

@@ -1,29 +1,23 @@
 # databricks_group Resource
 
-This resource allows you to create groups in Databricks. You can also associate Databricks users to the following groups.
-This is an alternative to `databricks_scim_group` and useful if you are using an application to sync users & groups with SCIM
-api.
-
--> **Note** You must be a Databricks administrator API token to make SCIM api calls.
+This resource allows you to create groups in Databricks, and to associate Databricks users with them. This is an alternative to `databricks_scim_group`, and is useful if you are using an application to sync users & groups with the SCIM API.
 
 ## Example Usage
 
 ```hcl
 resource "databricks_group" "my_group" {
   display_name = "group_name"
-  allow_cluster_create = "true"
-  allow_instance_pool_create = "true"
+  allow_cluster_create       = true
+  allow_instance_pool_create = true
 }
 ```
 ## Argument Reference
 
 The following arguments are supported:
 
-* `display_name` - **(Required)** This is the display name for the given group.
-
-* `allow_cluster_create` - **(Optional)** This is a field to allow the group to have cluster create priviliges.
-
-* `allow_instance_pool_create` - **(Optional)** This is a field to allow the group to have instance pool create priviliges.
+* `display_name` - (Required) This is the display name for the given group.
+* `allow_cluster_create` - (Optional) This field allows the group to have [cluster](cluster.md) create privileges.
+* `allow_instance_pool_create` - (Optional) This field allows the group to have [instance pool](instance_pool.md) create privileges.
 
 ## Attribute Reference
 
docs/resources/group_instance_profile.md

Lines changed: 6 additions & 9 deletions

@@ -1,12 +1,10 @@
 # databricks_group_instance_profile Resource
 
-**This resource has evolving API, which may change in future versions of provider.**
+-> **Note** This resource has an evolving API, which may change in future versions of the provider.
 
-This resource allows you to attach instance profiles to groups created by the `databricks_group` resource.
+This resource allows you to attach instance profiles to groups created by the [group](group.md) resource.
 
--> **Note** Please only use this resource in conjunction with the `databricks_group` resource and **not** the `databricks_scim_group` resource.
-
--> **Note** You must be a Databricks administrator API token to make SCIM api calls.
+-> **Note** Please only use this resource in conjunction with the [group](group.md) resource, and **not** the `databricks_scim_group` resource.
 
 ## Example Usage
 
@@ -27,15 +25,14 @@ resource "databricks_group_instance_profile" "my_group_instance_profile" {
 
 The following arguments are supported:
 
-* `group_id` - **(Required)** This is the id of the `databricks_group` resource.
-
-* `instance_profile_id` - **(Required)** This is the id of the `databricks_instance_profile` resource.
+* `group_id` - (Required) This is the id of the [group](group.md) resource.
+* `instance_profile_id` - (Required) This is the id of the [instance profile](instance_profile.md) resource.
 
 ## Attribute Reference
 
 In addition to all arguments above, the following attributes are exported:
 
-* `id` - The id for the `databricks_group_instance_profile` object which is in the format `<group_id>|<instance_profile_id>`.
+* `id` - The id of the `databricks_group_instance_profile` object, which is in the format `<group_id>|<instance_profile_id>`.
 
 ## Import
 
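The `<group_id>|<instance_profile_id>` id format above suggests the following wiring. A minimal sketch, with a placeholder ARN:

```hcl
// Minimal sketch: registers an instance profile and attaches it to a
// group, yielding an id of the form <group_id>|<instance_profile_id>.
resource "databricks_group" "data_eng" {
  display_name = "Data Engineering"
}

resource "databricks_instance_profile" "shared" {
  instance_profile_arn = "arn:aws:iam::1234567890:instance-profile/my-profile" // placeholder
}

resource "databricks_group_instance_profile" "this" {
  group_id            = databricks_group.data_eng.id
  instance_profile_id = databricks_instance_profile.shared.id
}
```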
docs/resources/group_member.md

Lines changed: 4 additions & 7 deletions

@@ -1,10 +1,8 @@
 # databricks_group_member Resource
 
-This resource allows you to attach members to groups created by the `databricks_group` resource.
+This resource allows you to attach members to groups created by the [group](group.md) resource.
 
--> **Note** Please only use this resource in conjunction with the `databricks_group` resource and **not** the `databricks_scim_group` resource.
-
--> **Note** You must be a Databricks administrator API token to make SCIM api calls.
+-> **Note** Please only use this resource in conjunction with the [group](group.md) resource, and **not** the `databricks_scim_group` resource.
 
 ## Example Usage
 
@@ -31,9 +29,8 @@ resource "databricks_group_member" "my_member_b" {
 
 The following arguments are supported:
 
-* `group_id` - **(Required)** This is the id of the `databricks_group` resource.
-
-* `member_id` - **(Required)** This is the id of the `databricks_group` or `databricks_scim_user` resource.
+* `group_id` - (Required) This is the id of the [group](group.md) resource.
+* `member_id` - (Required) This is the id of the [group](group.md) or `databricks_scim_user` resource.
 > Members can be groups or users created by the SCIM API.
 
 ## Attribute Reference
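Since `member_id` may point at either a group or a SCIM user, a minimal sketch of both cases; it assumes `databricks_scim_user` takes `user_name` and `display_name`, and the user name is a placeholder:

```hcl
// Minimal sketch: a member can be either another group or a user created
// through the SCIM API, per the argument reference above.
resource "databricks_group" "parent" {
  display_name = "Data Platform"
}

resource "databricks_group" "child" {
  display_name = "Data Platform Interns"
}

resource "databricks_scim_user" "user" {
  user_name    = "user@example.com" // placeholder
  display_name = "Example User"
}

resource "databricks_group_member" "nested_group" {
  group_id  = databricks_group.parent.id
  member_id = databricks_group.child.id
}

resource "databricks_group_member" "direct_user" {
  group_id  = databricks_group.parent.id
  member_id = databricks_scim_user.user.id
}
```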