[Documentation] Recommend OAuth instead of PAT (#4787)
## Changes
- Recommend OAuth instead of PAT
## Tests
- [x] relevant change in `docs/` folder
---------
Co-authored-by: Alex Ott <[email protected]>
### `NEXT_CHANGELOG.md` (+2 lines)
```diff
@@ -14,6 +14,8 @@
 * auto `zone_id` can only be used for fleet node types in `databricks_instance_pool` resource ([#4782](https://github.com/databricks/terraform-provider-databricks/pull/4782)).
 * Document `tags` attribute in `databricks_pipeline` resource ([#4783](https://github.com/databricks/terraform-provider-databricks/pull/4783)).
+
+* Recommend OAuth instead of PAT in guides ([#4787](https://github.com/databricks/terraform-provider-databricks/pull/4787))
```
````diff
   value = databricks_mws_workspaces.this.workspace_url
 }
-
-output "databricks_token" {
-  value     = databricks_mws_workspaces.this.token[0].token_value
-  sensitive = true
-}
 ```

 ### Data resources and Authentication is not configured errors
````
````diff
@@ -310,12 +301,13 @@ In [the next step](workspace-management.md), please use the following configuration

 ```hcl
 provider "databricks" {
-  host  = module.e2.workspace_url
-  token = module.e2.token_value
+  host          = module.e2.workspace_url
+  client_id     = var.client_id
+  client_secret = var.client_secret
 }
 ```

-We assume that you have a terraform module in your project that creates a workspace (using the [Databricks Workspace](#databricks-workspace) section), that you named it `e2` when calling it in the **main.tf** file of your terraform project, and that `workspace_url` and `token_value` are the output attributes of that module. This provider configuration will allow you to use the generated token to authenticate to the created workspace during workspace creation.
+We assume that you have a terraform module in your project that creates a workspace (using the [Databricks Workspace](#databricks-workspace) section), that you named it `e2` when calling it in the **main.tf** file of your terraform project, and that `workspace_url` is the output attribute of that module. This provider configuration will allow you to authenticate to the created workspace after workspace creation.
````
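The `var.client_id` and `var.client_secret` references above must exist as input variables of the root module. A minimal sketch of the declarations, assuming the variable names used in the provider block (the declarations themselves are illustrative, not part of this PR):

```hcl
# Hypothetical input variables for the Databricks service principal
# referenced as var.client_id / var.client_secret in the provider block.
variable "client_id" {
  description = "Application (client) ID of the Databricks service principal"
  type        = string
}

variable "client_secret" {
  description = "OAuth secret generated for the service principal"
  type        = string
  sensitive   = true # keeps the secret out of plan output and logs
}
```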
````diff
   # this makes sure that the NAT is created for outbound traffic before creating the workspace
   depends_on = [google_compute_router_nat.nat]
 }

 output "databricks_host" {
   value = databricks_mws_workspaces.this.workspace_url
 }
-
-output "databricks_token" {
-  value     = databricks_mws_workspaces.this.token[0].token_value
-  sensitive = true
-}
 ```

 -> The `gke_config` argument and the `gke_cluster_service_ip_range` and `gke_pod_service_ip_range` arguments in `gcp_managed_network_config` are now deprecated and no longer supported. Omit these when creating workspaces in the future. If you have already created a workspace using these fields, it is safe to remove them from your Terraform template.
````
````diff
@@ -261,12 +252,13 @@ In [the next step](workspace-management.md), please use the following configuration

 ```hcl
 provider "databricks" {
-  host  = module.dbx_gcp.workspace_url
-  token = module.dbx_gcp.token_value
+  host          = module.dbx_gcp.workspace_url
+  client_id     = var.client_id
+  client_secret = var.client_secret
 }
 ```

-We assume that you have a terraform module in your project that creates a workspace (using the [Databricks Workspace](#creating-a-databricks-workspace) section), that you named it `dbx_gcp` when calling it in the **main.tf** file of your terraform project, and that `workspace_url` and `token_value` are the output attributes of that module. This provider configuration will allow you to use the generated token to authenticate to the created workspace during workspace creation.
+We assume that you have a terraform module in your project that creates a workspace (using the [Databricks Workspace](#creating-a-databricks-workspace) section), that you named it `dbx_gcp` when calling it in the **main.tf** file of your terraform project, and that `workspace_url` is the output attribute of that module. This provider configuration will allow you to authenticate to the created workspace after workspace creation.

 ### More than one authorization method configured error
````
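The OAuth credentials do not have to appear in the configuration at all: the provider can also read them from its environment. A sketch, assuming the same `dbx_gcp` module and the provider's `DATABRICKS_CLIENT_ID`/`DATABRICKS_CLIENT_SECRET` environment variables:

```hcl
# Credentials supplied via the DATABRICKS_CLIENT_ID / DATABRICKS_CLIENT_SECRET
# environment variables, keeping secrets out of *.tf files entirely.
provider "databricks" {
  host = module.dbx_gcp.workspace_url
}
```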
```diff
+~> Databricks strongly recommends using OAuth instead of PATs for user account client authentication and authorization due to the improved security OAuth has.
+
 You can use `host` and `token` parameters to supply credentials to the workspace. If environment variables are preferred, you can specify `DATABRICKS_HOST` and `DATABRICKS_TOKEN` instead. Environment variables are the second most recommended way of configuring this provider.
```
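The two configuration styles described above can be sketched side by side (the workspace URL is a placeholder; use one style per provider instance):

```hcl
# Style 1: explicit parameters (PAT-based authentication; OAuth is recommended instead).
provider "databricks" {
  host  = "https://my-workspace.cloud.databricks.com" # hypothetical workspace URL
  token = var.databricks_token                        # PAT supplied as a sensitive variable
}

# Style 2: an empty block; the provider then reads DATABRICKS_HOST and
# DATABRICKS_TOKEN from the environment.
# provider "databricks" {}
```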
````diff
-  value     = databricks_mws_workspaces.this.token[0].token_value
-  sensitive = true
-}
 ```

 In order to create a [Databricks Workspace that leverages AWS PrivateLink](https://docs.databricks.com/administration-guide/cloud-configurations/aws/privatelink.html), please ensure that you have read and understood the [Enable Private Link](https://docs.databricks.com/administration-guide/cloud-configurations/aws/privatelink.html) documentation, and then customise the example above with the relevant examples from [mws_vpc_endpoint](mws_vpc_endpoint.md), [mws_private_access_settings](mws_private_access_settings.md) and [mws_networks](mws_networks.md).
````
````diff
-  value     = databricks_mws_workspaces.this.token[0].token_value
-  sensitive = true
 }
 ```
````
````diff
@@ -338,9 +310,11 @@ The following arguments are available:
 * `pricing_tier` - (Optional) - The pricing tier of the workspace.
 * `compute_mode` - (Optional) - The compute mode for the workspace. When unset, a classic workspace is created, and both `credentials_id` and `storage_configuration_id` must be specified. When set to `SERVERLESS`, the resulting workspace is a serverless workspace, and `credentials_id` and `storage_configuration_id` must not be set. The only allowed value for this is `SERVERLESS`. Changing this field requires recreation of the workspace.

-### token block
+~> Databricks strongly recommends using OAuth instead of PATs for user account client authentication and authorization due to the improved security.
+
+### token block (legacy)

-You can specify a `token` block in the body of the workspace resource, so that Terraform manages the refresh of the PAT token for the deployment user. The other option is to create [databricks_obo_token](obo_token.md), though it requires the Premium or Enterprise plan as well as a more complex setup. The token block exposes `token_value`, which holds the sensitive PAT token, and optionally it can accept two arguments:
+You can specify a `token` block in the body of the workspace resource, so that Terraform manages the refresh of the PAT token for the deployment user. The token block exposes `token_value`, which holds the sensitive PAT token, and optionally it can accept two arguments:

 -> Tokens managed by `token {}` block are recreated when expired.
````
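For reference, a `token` block with both optional arguments filled in might look like the sketch below (assuming, per the provider's resource docs, that the two optional arguments are `comment` and `lifetime_seconds`; all values are illustrative):

```hcl
resource "databricks_mws_workspaces" "this" {
  # ... required workspace arguments elided ...

  token {
    comment          = "Terraform-managed PAT" # optional: free-form description
    lifetime_seconds = 86400                   # optional: token is recreated after expiry
  }
}
```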