Commit 23d2437
[Feature] Add workspace_consume entitlement (#4762)
Databricks now supports the [Workspace Consumer](https://learn.microsoft.com/en-us/azure/databricks/release-notes/product/2025/may#new-consumer-entitlement-is-generally-available) entitlement, which provides access to dashboards and other related objects. The API name of the entitlement is `workspace-consume`; it is an open question whether the Terraform field should instead be named `workspace_consumer`. P.S. The entitlement is not rolled out everywhere yet, so it does not work correctly in all workspaces.

## Changes

Adds the `workspace_consume` entitlement to the entitlement mapping in `scim/scim.go` and documents it for the `databricks_entitlements`, `databricks_group`, `databricks_service_principal`, and `databricks_user` resources.

## Tests

- [x] `make test` run locally
- [x] relevant change in `docs/` folder
- [x] covered with integration tests in `internal/acceptance`
- [ ] using Go SDK
- [ ] using TF Plugin Framework
1 parent 4ee7ed5 commit 23d2437

File tree

6 files changed: +37 −28 lines changed


NEXT_CHANGELOG.md

Lines changed: 1 addition & 0 deletions
@@ -6,6 +6,7 @@
 
 ### New Features and Improvements
 
+* Add `workspace_consume` entitlement [#4762](https://github.com/databricks/terraform-provider-databricks/pull/4762).
 * Support configuration of file events in `databricks_external_location` [#4749](https://github.com/databricks/terraform-provider-databricks/pull/4749).
 * Improve support for new fields in `databricks_pipeline` [#4744](https://github.com/databricks/terraform-provider-databricks/pull/4744).
 

docs/resources/entitlements.md

Lines changed: 2 additions & 1 deletion
@@ -66,7 +66,8 @@ The following entitlements are available.
 * `allow_cluster_create` - (Optional) Allow the principal to have [cluster](cluster.md) create privileges. Defaults to false. More fine grained permissions could be assigned with [databricks_permissions](permissions.md#Cluster-usage) and `cluster_id` argument. Everyone without `allow_cluster_create` argument set, but with [permission to use](permissions.md#Cluster-Policy-usage) Cluster Policy would be able to create clusters, but within boundaries of that specific policy.
 * `allow_instance_pool_create` - (Optional) Allow the principal to have [instance pool](instance_pool.md) create privileges. Defaults to false. More fine grained permissions could be assigned with [databricks_permissions](permissions.md#Instance-Pool-usage) and [instance_pool_id](permissions.md#instance_pool_id) argument.
 * `databricks_sql_access` - (Optional) This is a field to allow the principal to have access to [Databricks SQL](https://databricks.com/product/databricks-sql) feature in User Interface and through [databricks_sql_endpoint](sql_endpoint.md).
-* `workspace_access` - (Optional) This is a field to allow the principal to have access to Databricks Workspace.
+* `workspace_access` - (Optional) This is a field to allow the principal to have access to a Databricks Workspace.
+* `workspace_consume` - (Optional) This is a field to allow the principal to have access to a Databricks Workspace as a consumer, with limited access to the workspace UI.
 
 ## Import
 

docs/resources/group.md

Lines changed: 2 additions & 1 deletion
@@ -94,7 +94,8 @@ The following arguments are supported:
 * `allow_cluster_create` - (Optional) This is a field to allow the group to have [cluster](cluster.md) create privileges. More fine grained permissions could be assigned with [databricks_permissions](permissions.md#Cluster-usage) and [cluster_id](permissions.md#cluster_id) argument. Everyone without `allow_cluster_create` argument set, but with [permission to use](permissions.md#Cluster-Policy-usage) Cluster Policy would be able to create clusters, but within boundaries of that specific policy.
 * `allow_instance_pool_create` - (Optional) This is a field to allow the group to have [instance pool](instance_pool.md) create privileges. More fine grained permissions could be assigned with [databricks_permissions](permissions.md#Instance-Pool-usage) and [instance_pool_id](permissions.md#instance_pool_id) argument.
 * `databricks_sql_access` - (Optional) This is a field to allow the group to have access to [Databricks SQL](https://databricks.com/product/databricks-sql) feature in User Interface and through [databricks_sql_endpoint](sql_endpoint.md).
-* `workspace_access` - (Optional) This is a field to allow the group to have access to Databricks Workspace.
+* `workspace_access` - (Optional) This is a field to allow the group to have access to a Databricks Workspace.
+* `workspace_consume` - (Optional) This is a field to allow the group to have access to a Databricks Workspace as a consumer, with limited access to the workspace UI.
 * `force` - (Optional) Ignore `cannot create group: Group with name X already exists.` errors and implicitly import the specific group into Terraform state, enforcing entitlements defined in the instance of resource. _This functionality is experimental_ and is designed to simplify corner cases, like Azure Active Directory synchronisation.
 
 ## Attribute Reference

docs/resources/service_principal.md

Lines changed: 25 additions & 24 deletions
@@ -15,7 +15,7 @@ There are different types of service principals:
 
 -> To assign account level service principals to workspace use [databricks_mws_permission_assignment](mws_permission_assignment.md).
 
--> Entitlements, like, `allow_cluster_create`, `allow_instance_pool_create`, `databricks_sql_access`, `workspace_access` applicable only for workspace-level service principals. Use [databricks_entitlements](entitlements.md) resource to assign entitlements inside a workspace to account-level service principals.
+-> Entitlements such as `allow_cluster_create`, `allow_instance_pool_create`, `databricks_sql_access`, `workspace_access`, and `workspace_consume` are applicable only to workspace-level service principals. Use the [databricks_entitlements](entitlements.md) resource to assign entitlements inside a workspace to account-level service principals.
 
 The default behavior when deleting a `databricks_service_principal` resource depends on whether the provider is configured at the workspace-level or account-level. When the provider is configured at the workspace-level, the service principal will be deleted from the workspace. When the provider is configured at the account-level, the service principal will be deactivated but not deleted. When the provider is configured at the account level, to delete the service principal from the account when the resource is deleted, set `disable_as_user_deletion = false`. Conversely, when the provider is configured at the account-level, to deactivate the service principal when the resource is deleted, set `disable_as_user_deletion = true`.
 

@@ -97,27 +97,28 @@ resource "databricks_service_principal" "sp" {
 
 The following arguments are available:
 
-- `application_id` This is the Azure Application ID of the given Azure service principal and will be their form of access and identity. For Databricks-managed service principals this value is auto-generated.
-- `display_name` - (Required for Databricks-managed service principals) This is an alias for the service principal and can be the full name of the service principal.
-- `external_id` - (Optional) ID of the service principal in an external identity provider.
-- `allow_cluster_create` - (Optional) Allow the service principal to have [cluster](cluster.md) create privileges. Defaults to false. More fine grained permissions could be assigned with [databricks_permissions](permissions.md#Cluster-usage) and `cluster_id` argument. Everyone without `allow_cluster_create` argument set, but with [permission to use](permissions.md#Cluster-Policy-usage) Cluster Policy would be able to create clusters, but within the boundaries of that specific policy.
-- `allow_instance_pool_create` - (Optional) Allow the service principal to have [instance pool](instance_pool.md) create privileges. Defaults to false. More fine grained permissions could be assigned with [databricks_permissions](permissions.md#Instance-Pool-usage) and [instance_pool_id](permissions.md#instance_pool_id) argument.
-- `databricks_sql_access` - (Optional) This is a field to allow the group to have access to [Databricks SQL](https://databricks.com/product/databricks-sql) feature through [databricks_sql_endpoint](sql_endpoint.md).
-- `workspace_access` - (Optional) This is a field to allow the group to have access to Databricks Workspace.
-- `active` - (Optional) Either service principal is active or not. True by default, but can be set to false in case of service principal deactivation with preserving service principal assets.
-- `force` - (Optional) Ignore `cannot create service principal: Service principal with application ID X already exists` errors and implicitly import the specified service principal into Terraform state, enforcing entitlements defined in the instance of resource. _This functionality is experimental_ and is designed to simplify corner cases, like Azure Active Directory synchronisation.
-- `force_delete_repos` - (Optional) This flag determines whether the service principal's repo directory is deleted when the user is deleted. It will have no impact when in the accounts SCIM API. False by default.
-- `force_delete_home_dir` - (Optional) This flag determines whether the service principal's home directory is deleted when the user is deleted. It will have no impact when in the accounts SCIM API. False by default.
-- `disable_as_user_deletion` - (Optional) Deactivate the service principal when deleting the resource, rather than deleting the service principal entirely. Defaults to `true` when the provider is configured at the account-level and `false` when configured at the workspace-level. This flag is exclusive to force_delete_repos and force_delete_home_dir flags.
+* `application_id` This is the Azure Application ID of the given Azure service principal and will be their form of access and identity. For Databricks-managed service principals this value is auto-generated.
+* `display_name` - (Required for Databricks-managed service principals) This is an alias for the service principal and can be the full name of the service principal.
+* `external_id` - (Optional) ID of the service principal in an external identity provider.
+* `allow_cluster_create` - (Optional) Allow the service principal to have [cluster](cluster.md) create privileges. Defaults to false. More fine grained permissions could be assigned with [databricks_permissions](permissions.md#Cluster-usage) and `cluster_id` argument. Everyone without `allow_cluster_create` argument set, but with [permission to use](permissions.md#Cluster-Policy-usage) Cluster Policy would be able to create clusters, but within the boundaries of that specific policy.
+* `allow_instance_pool_create` - (Optional) Allow the service principal to have [instance pool](instance_pool.md) create privileges. Defaults to false. More fine grained permissions could be assigned with [databricks_permissions](permissions.md#Instance-Pool-usage) and [instance_pool_id](permissions.md#instance_pool_id) argument.
+* `databricks_sql_access` - (Optional) This is a field to allow the service principal to have access to [Databricks SQL](https://databricks.com/product/databricks-sql) feature through [databricks_sql_endpoint](sql_endpoint.md).
+* `workspace_access` - (Optional) This is a field to allow the service principal to have access to a Databricks Workspace.
+* `workspace_consume` - (Optional) This is a field to allow the service principal to have access to a Databricks Workspace as a consumer, with limited access to the workspace UI.
+* `active` - (Optional) Either service principal is active or not. True by default, but can be set to false in case of service principal deactivation with preserving service principal assets.
+* `force` - (Optional) Ignore `cannot create service principal: Service principal with application ID X already exists` errors and implicitly import the specified service principal into Terraform state, enforcing entitlements defined in the instance of resource. _This functionality is experimental_ and is designed to simplify corner cases, like Azure Active Directory synchronisation.
+* `force_delete_repos` - (Optional) This flag determines whether the service principal's repo directory is deleted when the user is deleted. It will have no impact when in the accounts SCIM API. False by default.
+* `force_delete_home_dir` - (Optional) This flag determines whether the service principal's home directory is deleted when the user is deleted. It will have no impact when in the accounts SCIM API. False by default.
+* `disable_as_user_deletion` - (Optional) Deactivate the service principal when deleting the resource, rather than deleting the service principal entirely. Defaults to `true` when the provider is configured at the account-level and `false` when configured at the workspace-level. This flag is exclusive to `force_delete_repos` and `force_delete_home_dir` flags.
 
 ## Attribute Reference
 
 In addition to all arguments above, the following attributes are exported:
 
-- `id` - Canonical unique identifier for the service principal (SCIM ID).
-- `home` - Home folder of the service principal, e.g. `/Users/00000000-0000-0000-0000-000000000000`.
-- `repos` - Personal Repos location of the service principal, e.g. `/Repos/00000000-0000-0000-0000-000000000000`.
-- `acl_principal_id` - identifier for use in [databricks_access_control_rule_set](access_control_rule_set.md), e.g. `servicePrincipals/00000000-0000-0000-0000-000000000000`.
+* `id` - Canonical unique identifier for the service principal (SCIM ID).
+* `home` - Home folder of the service principal, e.g. `/Users/00000000-0000-0000-0000-000000000000`.
+* `repos` - Personal Repos location of the service principal, e.g. `/Repos/00000000-0000-0000-0000-000000000000`.
+* `acl_principal_id` - identifier for use in [databricks_access_control_rule_set](access_control_rule_set.md), e.g. `servicePrincipals/00000000-0000-0000-0000-000000000000`.
 
 ## Import
 

@@ -140,10 +141,10 @@ terraform import databricks_service_principal.me <service-principal-id>
 
 The following resources are often used in the same context:
 
-- [End to end workspace management](../guides/workspace-management.md) guide.
-- [databricks_group](group.md) to manage [groups in Databricks Workspace](https://docs.databricks.com/administration-guide/users-groups/groups.html) or [Account Console](https://accounts.cloud.databricks.com/) (for AWS deployments).
-- [databricks_group](../data-sources/group.md) data to retrieve information about [databricks_group](group.md) members, entitlements and instance profiles.
-- [databricks_group_member](group_member.md) to attach [users](user.md) and [groups](group.md) as group members.
-- [databricks_permissions](permissions.md) to manage [access control](https://docs.databricks.com/security/access-control/index.html) in Databricks workspace.
-- [databricks_sql_permissions](sql_permissions.md) to manage data object access control lists in Databricks workspaces for things like tables, views, databases, and [more](<https://docs.databricks>.
-- [databricks-service-principal-secret](service_principal_secret.md) to manage secrets for the service principal (only for AWS deployments)
+* [End to end workspace management](../guides/workspace-management.md) guide.
+* [databricks_group](group.md) to manage [groups in Databricks Workspace](https://docs.databricks.com/administration-guide/users-groups/groups.html) or [Account Console](https://accounts.cloud.databricks.com/) (for AWS deployments).
+* [databricks_group](../data-sources/group.md) data to retrieve information about [databricks_group](group.md) members, entitlements and instance profiles.
+* [databricks_group_member](group_member.md) to attach [users](user.md) and [groups](group.md) as group members.
+* [databricks_permissions](permissions.md) to manage [access control](https://docs.databricks.com/security/access-control/index.html) in Databricks workspace.
+* [databricks_sql_permissions](sql_permissions.md) to manage data object access control lists in Databricks workspaces for things like tables, views, databases, and [more](<https://docs.databricks>.
+* [databricks-service-principal-secret](service_principal_secret.md) to manage secrets for the service principal (only for AWS deployments)

docs/resources/user.md

Lines changed: 4 additions & 2 deletions
@@ -9,7 +9,7 @@ This resource allows you to manage [users in Databricks Workspace](https://docs.
 
 -> To assign account level users to workspace use [databricks_mws_permission_assignment](mws_permission_assignment.md).
 
--> Entitlements, like, `allow_cluster_create`, `allow_instance_pool_create`, `databricks_sql_access`, `workspace_access` applicable only for workspace-level users. Use [databricks_entitlements](entitlements.md) resource to assign entitlements inside a workspace to account-level users.
+-> Entitlements such as `allow_cluster_create`, `allow_instance_pool_create`, `databricks_sql_access`, `workspace_access`, and `workspace_consume` are applicable only to workspace-level users. Use the [databricks_entitlements](entitlements.md) resource to assign entitlements inside a workspace to account-level users.
 
 To create users in the Databricks account, the provider must be configured with `host = "https://accounts.cloud.databricks.com"` on AWS deployments or `host = "https://accounts.azuredatabricks.net"` and authenticate using [AAD tokens](https://registry.terraform.io/providers/databricks/databricks/latest/docs#special-configurations-for-azure) on Azure deployments.
 

@@ -98,7 +98,9 @@ The following arguments are available:
 * `external_id` - (Optional) ID of the user in an external identity provider.
 * `allow_cluster_create` - (Optional) Allow the user to have [cluster](cluster.md) create privileges. Defaults to false. More fine grained permissions could be assigned with [databricks_permissions](permissions.md#Cluster-usage) and `cluster_id` argument. Everyone without `allow_cluster_create` argument set, but with [permission to use](permissions.md#Cluster-Policy-usage) Cluster Policy would be able to create clusters, but within boundaries of that specific policy.
 * `allow_instance_pool_create` - (Optional) Allow the user to have [instance pool](instance_pool.md) create privileges. Defaults to false. More fine grained permissions could be assigned with [databricks_permissions](permissions.md#Instance-Pool-usage) and [instance_pool_id](permissions.md#instance_pool_id) argument.
-* `databricks_sql_access` - (Optional) This is a field to allow the group to have access to [Databricks SQL](https://databricks.com/product/databricks-sql) feature in User Interface and through [databricks_sql_endpoint](sql_endpoint.md).
+* `databricks_sql_access` - (Optional) This is a field to allow the user to have access to [Databricks SQL](https://databricks.com/product/databricks-sql) feature in User Interface and through [databricks_sql_endpoint](sql_endpoint.md).
+* `workspace_access` - (Optional) This is a field to allow the user to have access to a Databricks Workspace.
+* `workspace_consume` - (Optional) This is a field to allow the user to have access to a Databricks Workspace as a consumer, with limited access to the workspace UI.
 * `active` - (Optional) Either user is active or not. True by default, but can be set to false in case of user deactivation with preserving user assets.
 * `force` - (Optional) Ignore `cannot create user: User with username X already exists` errors and implicitly import the specific user into Terraform state, enforcing entitlements defined in the instance of resource. _This functionality is experimental_ and is designed to simplify corner cases, like Azure Active Directory synchronisation.
 * `force_delete_repos` - (Optional) This flag determines whether the user's repo directory is deleted when the user is deleted. It will have no impact when in the accounts SCIM API. False by default.

scim/scim.go

Lines changed: 3 additions & 0 deletions
@@ -43,6 +43,7 @@ var entitlementMapping = map[string]string{
 	"allow-instance-pool-create": "allow_instance_pool_create",
 	"databricks-sql-access":      "databricks_sql_access",
 	"workspace-access":           "workspace_access",
+	"workspace-consume":          "workspace_consume",
 }
 
 // order is important for tests
@@ -51,6 +52,7 @@ var possibleEntitlements = []string{
 	"allow-instance-pool-create",
 	"databricks-sql-access",
 	"workspace-access",
+	"workspace-consume",
 }
 
 type entitlements []ComplexValue
@@ -100,6 +102,7 @@ func addEntitlementsToSchema(s map[string]*schema.Schema) {
 			Default: false,
 		}
 	}
+	s["workspace_consume"].ConflictsWith = []string{"workspace_access", "databricks_sql_access"}
 }
 
 // ResourceMeta is a struct that contains the meta information about the SCIM group
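The `scim/scim.go` change above wires the new entitlement through the same hyphenated-API-name to underscored-Terraform-field mapping used by the existing entitlements, and declares that `workspace_consume` conflicts with `workspace_access` and `databricks_sql_access`. The following standalone Go sketch illustrates both the name mapping and the conflict rule; `toAPIName` and `checkConflicts` are hypothetical helpers for illustration, not the provider's actual code:

```go
// Sketch of how SCIM entitlement names (hyphenated, as returned by the
// Databricks SCIM API) relate to Terraform schema fields (underscored),
// and of the mutual-exclusion rule declared via ConflictsWith.
package main

import (
	"fmt"
	"strings"
)

// API name -> Terraform field, mirroring entitlementMapping in scim/scim.go
// after this commit.
var entitlementMapping = map[string]string{
	"allow-cluster-create":       "allow_cluster_create",
	"allow-instance-pool-create": "allow_instance_pool_create",
	"databricks-sql-access":      "databricks_sql_access",
	"workspace-access":           "workspace_access",
	"workspace-consume":          "workspace_consume",
}

// toAPIName converts a Terraform field name back to the SCIM API form
// (hypothetical helper: every mapping entry differs only by separator).
func toAPIName(field string) string {
	return strings.ReplaceAll(field, "_", "-")
}

// checkConflicts mirrors the ConflictsWith declaration: workspace_consume
// may not be combined with workspace_access or databricks_sql_access.
func checkConflicts(set map[string]bool) error {
	if set["workspace_consume"] && (set["workspace_access"] || set["databricks_sql_access"]) {
		return fmt.Errorf("workspace_consume conflicts with workspace_access and databricks_sql_access")
	}
	return nil
}

func main() {
	fmt.Println(entitlementMapping["workspace-consume"]) // workspace_consume
	fmt.Println(toAPIName("workspace_consume"))          // workspace-consume
	err := checkConflicts(map[string]bool{"workspace_consume": true, "workspace_access": true})
	fmt.Println(err != nil) // true: the combination is rejected
}
```

In the real provider, `ConflictsWith` makes Terraform itself reject a plan that sets `workspace_consume = true` alongside either of the other two entitlements, so the check never reaches the SCIM API.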
