@@ -196,7 +195,31 @@ Unity Catalog introduces two new objects to access and work with external cloud

 - [databricks_storage_credential](../resources/storage_credential.md) represent authentication methods to access cloud storage (e.g. an IAM role for Amazon S3 or a service principal for Azure Storage). Storage credentials are access-controlled to determine which users can use the credential.
 - [databricks_external_location](../resources/external_location.md) are objects that combine a cloud storage path with a Storage Credential that can be used to access the location.
-First, create the required objects in AWS.
+First, we need to create the storage credential in Databricks before creating the IAM role in AWS. This is because the external ID of the Databricks storage credential is required in the IAM role trust policy.
+  role_arn = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/${local.prefix}-uc-access" // cannot reference aws_iam_role directly, as it would create a circular dependency
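The two-phase pattern behind the added lines could be sketched as follows. This is illustrative, not the exact configuration from this PR: resource names, `local.prefix`, and the policy wiring are assumptions, and it assumes the `aws_iam_role` block of the storage credential is exposed as a single-element list.

```hcl
# Create the storage credential first, referring to the IAM role by its
# eventual ARN string rather than the resource, to avoid a circular dependency.
resource "databricks_storage_credential" "external" {
  name = "${local.prefix}-uc-access"
  aws_iam_role {
    role_arn = "arn:aws:iam::${data.aws_caller_identity.current.account_id}:role/${local.prefix}-uc-access"
  }
}

# The trust policy can now consume the external ID generated by Databricks.
data "aws_iam_policy_document" "uc_assume_role" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "AWS"
      identifiers = [databricks_storage_credential.external.aws_iam_role[0].unity_catalog_iam_arn]
    }
    condition {
      test     = "StringEquals"
      variable = "sts:ExternalId"
      values   = [databricks_storage_credential.external.aws_iam_role[0].external_id]
    }
  }
}

resource "aws_iam_role" "uc_access" {
  name               = "${local.prefix}-uc-access"
  assume_role_policy = data.aws_iam_policy_document.uc_assume_role.json
}
```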
-Then create the [databricks_storage_credential](../resources/storage_credential.md) and [databricks_external_location](../resources/external_location.md) in Unity Catalog.
+Then we can create the [databricks_external_location](../resources/external_location.md) in Unity Catalog.
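A minimal sketch of that follow-up step, assuming the storage credential and an S3 bucket created earlier (bucket and path names are illustrative):

```hcl
resource "databricks_external_location" "some" {
  name            = "external"
  url             = "s3://${aws_s3_bucket.external.id}/some"
  credential_name = databricks_storage_credential.external.id
  # Ensure the IAM role exists before Unity Catalog validates access.
  depends_on      = [aws_iam_role.uc_access]
}
```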
docs/resources/grants.md (1 addition, 1 deletion)
@@ -29,7 +29,7 @@ Unlike the [SQL specification](https://docs.databricks.com/sql/language-manual/s

 ## Metastore grants

-You can grant `CREATE_CATALOG`, `CREATE_CONNECTION`, `CREATE_EXTERNAL_LOCATION`, `CREATE_PROVIDER`, `CREATE_RECIPIENT`, `CREATE_SHARE`, `MANAGE_ALLOWLIST`, `SET_SHARE_PERMISSION`, `USE_MARKETPLACE_ASSETS`, `USE_CONNECTION`, `USE_PROVIDER`, `USE_RECIPIENT` and `USE_SHARE` privileges to [databricks_metastore](metastore.md) id specified in `metastore` attribute.
+You can grant `CREATE_CATALOG`, `CREATE_CONNECTION`, `CREATE_EXTERNAL_LOCATION`, `CREATE_PROVIDER`, `CREATE_RECIPIENT`, `CREATE_SHARE`, `CREATE_STORAGE_CREDENTIAL`, `MANAGE_ALLOWLIST`, `SET_SHARE_PERMISSION`, `USE_MARKETPLACE_ASSETS`, `USE_CONNECTION`, `USE_PROVIDER`, `USE_RECIPIENT` and `USE_SHARE` privileges to [databricks_metastore](metastore.md) id specified in `metastore` attribute.
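The newly added `CREATE_STORAGE_CREDENTIAL` privilege is granted like the existing metastore privileges; a sketch (the principal name and privilege selection are illustrative):

```hcl
resource "databricks_grants" "metastore" {
  metastore = databricks_metastore.this.id

  grant {
    principal  = "data-platform-admins"
    privileges = ["CREATE_CATALOG", "CREATE_EXTERNAL_LOCATION", "CREATE_STORAGE_CREDENTIAL"]
  }
}
```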
docs/resources/metastore_data_access.md (3 additions, 25 deletions)
@@ -3,7 +3,7 @@ subcategory: "Unity Catalog"

 ---

 # databricks_metastore_data_access (Resource)

-Optionally, each [databricks_metastore](docs/resources/metastore.md) can have root storage credential defined as `databricks_metastore_data_access`. This will be used by Unity Catalog to access data in the root storage location if defined.
+Optionally, each [databricks_metastore](docs/resources/metastore.md) can have a default [databricks_storage_credential](storage_credential.md) defined as `databricks_metastore_data_access`. This will be used by Unity Catalog to access data in the root storage location if defined.
 The arguments are the same as of [databricks_storage_credential](storage_credential.md). Additionally

-* `name` - Name of Data Access Configuration, which must be unique within the [databricks_metastore](metastore.md). Change forces creation of a new resource.
-* `metastore_id` - Unique identifier of the parent Metastore
-* `owner` - (Optional) Username/groupname/sp application_id of the data access configuration owner.
-* `force_destroy` - (Optional) Delete the data access configuration regardless of its dependencies.
-
-`aws_iam_role` optional configuration block for credential details for AWS:
-
-* `role_arn` - The Amazon Resource Name (ARN) of the AWS IAM role for S3 data access, of the form `arn:aws:iam::1234567890:role/MyRole-AJJHDSKSDF`
-
-`azure_managed_identity` optional configuration block for using managed identity as credential details for Azure (Recommended):
-
-* `access_connector_id` - The Resource ID of the Azure Databricks Access Connector resource, of the form `/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-name/providers/Microsoft.Databricks/accessConnectors/connector-name`.
-* `managed_identity_id` - (Optional) The Resource ID of the Azure User Assigned Managed Identity associated with Azure Databricks Access Connector, of the form `/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-name/providers/Microsoft.ManagedIdentity/userAssignedIdentities/user-managed-identity-name`.
-
-`databricks_gcp_service_account` optional configuration block for creating a Databricks-managed GCP Service Account:
-
-* `email` (output only) - The email of the GCP service account created, to be granted access to relevant buckets.
-
-`azure_service_principal` optional configuration block for credential details for Azure (Legacy):
-
-* `directory_id` - The directory ID corresponding to the Azure Active Directory (AAD) tenant of the application
-* `application_id` - The application ID of the application registration within the referenced AAD tenant
-* `client_secret` - The client secret generated for the above app ID in AAD. **This field is redacted on output**
+* `is_default` - whether to set this credential as the default for the metastore. In practice, this should always be true.
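With the argument list now delegated to `databricks_storage_credential`, a typical configuration reduces to the credential block plus `is_default`; a sketch with illustrative resource names:

```hcl
resource "databricks_metastore_data_access" "this" {
  metastore_id = databricks_metastore.this.id
  name         = "default-credential"

  aws_iam_role {
    role_arn = aws_iam_role.metastore_data_access.arn
  }

  # Per the docs above, this should effectively always be true.
  is_default = true
}
```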
docs/resources/storage_credential.md (3 additions, 2 deletions)
@@ -74,11 +74,14 @@ The following arguments are required:

 - `name` - Name of Storage Credentials, which must be unique within the [databricks_metastore](metastore.md). Change forces creation of a new resource.
 - `metastore_id` - (Required for account-level) Unique identifier of the parent Metastore
 - `owner` - (Optional) Username/groupname/sp application_id of the storage credential owner.
+- `read_only` - (Optional) Indicates whether the storage credential is only usable for read operations.
 - `force_destroy` - (Optional) Delete storage credential regardless of its dependencies.

 `aws_iam_role` optional configuration block for credential details for AWS:

 - `role_arn` - The Amazon Resource Name (ARN) of the AWS IAM role for S3 data access, of the form `arn:aws:iam::1234567890:role/MyRole-AJJHDSKSDF`
+- `external_id` (output only) - The external ID used in role assumption to prevent the confused deputy problem.
+- `unity_catalog_iam_arn` (output only) - The Amazon Resource Name (ARN) of the AWS IAM user managed by Databricks. This is the identity that is going to assume the AWS IAM role.

 `azure_managed_identity` optional configuration block for using managed identity as credential details for Azure (recommended over service principal):
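The two new output-only attributes can be read back from the resource, e.g. to feed an IAM trust policy elsewhere. A sketch, assuming (as is common for SDKv2-based providers) that the `aws_iam_role` block is addressed as a single-element list:

```hcl
output "storage_credential_external_id" {
  value = databricks_storage_credential.external.aws_iam_role[0].external_id
}

output "unity_catalog_iam_arn" {
  value = databricks_storage_credential.external.aws_iam_role[0].unity_catalog_iam_arn
}
```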
@@ -90,8 +93,6 @@ The following arguments are required:

 - `email` (output only) - The email of the GCP service account created, to be granted access to relevant buckets.

-- `read_only` - (Optional) Indicates whether the storage credential is only usable for read operations.
-
 `azure_service_principal` optional configuration block to use service principal as credential details for Azure (Legacy):

 - `directory_id` - The directory ID corresponding to the Azure Active Directory (AAD) tenant of the application