8 changes: 7 additions & 1 deletion common/datasource.go
@@ -15,12 +15,18 @@ func DataResource(sc any, read func(context.Context, any, *DatabricksClient) err
s := StructToSchema(sc, func(m map[string]*schema.Schema) map[string]*schema.Schema {
return m
})
AddNamespaceInSchema(s)
NamespaceCustomizeSchemaMap(s)
return Resource{
Schema: s,
Read: func(ctx context.Context, d *schema.ResourceData, m *DatabricksClient) (err error) {
newClient, err := m.DatabricksClientForUnifiedProvider(ctx, d)
if err != nil {
return err
}
ptr := reflect.New(reflect.ValueOf(sc).Type())
DataToReflectValue(d, s, ptr.Elem())
err = read(ctx, ptr.Interface(), m)
err = read(ctx, ptr.Interface(), newClient)
if err != nil {
err = nicerError(ctx, err, "read data")
}
5 changes: 5 additions & 0 deletions docs/data-sources/current_config.md
@@ -39,6 +39,11 @@ resource "databricks_storage_credential" "external" {
}
```

## Argument Reference

* `provider_config` - (Optional) Configure the provider for management through an account-level provider. This block consists of the following field:
  * `workspace_id` - (Required) ID of the workspace to which the resource belongs. This workspace must be part of the account that the provider is configured with.
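
The new block can be exercised as follows; this is a minimal sketch, and the workspace ID shown is a placeholder:

```hcl
data "databricks_current_config" "this" {
  provider_config {
    # Placeholder value; use the numeric ID of a workspace in your account
    workspace_id = "1234567890123456"
  }
}
```

Since the block is optional, omitting it leaves the data source reading from the workspace context the provider itself is configured against.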

## Exported attributes

Data source exposes the following attributes:
5 changes: 5 additions & 0 deletions docs/data-sources/current_user.md
@@ -55,6 +55,11 @@ output "job_url" {
}
```

## Argument Reference

* `provider_config` - (Optional) Configure the provider for management through an account-level provider. This block consists of the following field:
  * `workspace_id` - (Required) ID of the workspace to which the resource belongs. This workspace must be part of the account that the provider is configured with.

## Exported attributes

Data source exposes the following attributes:
2 changes: 2 additions & 0 deletions docs/data-sources/dbfs_file.md
@@ -20,6 +20,8 @@ data "databricks_dbfs_file" "report" {

* `path` - (Required) Path on DBFS for the file from which to get content.
* `limit_file_size` - (Required - boolean) Do not load content for files larger than 4MB.
* `provider_config` - (Optional) Configure the provider for management through an account-level provider. This block consists of the following field:
  * `workspace_id` - (Required) ID of the workspace to which the resource belongs. This workspace must be part of the account that the provider is configured with.

## Attribute Reference

2 changes: 2 additions & 0 deletions docs/data-sources/dbfs_file_paths.md
@@ -20,6 +20,8 @@ data "databricks_dbfs_file_paths" "partitions" {

* `path` - (Required) Path on DBFS on which to perform the listing
* `recursive` - (Required) Whether or not to recursively list all files
* `provider_config` - (Optional) Configure the provider for management through an account-level provider. This block consists of the following field:
  * `workspace_id` - (Required) ID of the workspace to which the resource belongs. This workspace must be part of the account that the provider is configured with.

## Attribute Reference

2 changes: 2 additions & 0 deletions docs/data-sources/group.md
@@ -32,6 +32,8 @@ Data source allows you to pick groups by the following attributes

* `display_name` - (Required) Display name of the group. The group must exist before this resource can be planned.
* `recursive` - (Optional) Collect information for all nested groups. *Defaults to true.*
* `provider_config` - (Optional) Configure the provider for management through an account-level provider. This block consists of the following field:
  * `workspace_id` - (Required) ID of the workspace to which the resource belongs. This workspace must be part of the account that the provider is configured with.

## Attribute Reference

2 changes: 2 additions & 0 deletions docs/data-sources/instance_pool.md
@@ -28,6 +28,8 @@ resource "databricks_cluster" "my_cluster" {
Data source allows you to pick instance pool by the following attribute

- `name` - Name of the instance pool. The instance pool must exist before this resource can be planned.
- `provider_config` - (Optional) Configure the provider for management through an account-level provider. This block consists of the following field:
  - `workspace_id` - (Required) ID of the workspace to which the resource belongs. This workspace must be part of the account that the provider is configured with.

## Attribute Reference

5 changes: 5 additions & 0 deletions docs/data-sources/job.md
@@ -22,6 +22,11 @@ output "job_num_workers" {
}
```

## Argument Reference

* `provider_config` - (Optional) Configure the provider for management through an account-level provider. This block consists of the following field:
  * `workspace_id` - (Required) ID of the workspace to which the resource belongs. This workspace must be part of the account that the provider is configured with.
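
A minimal sketch of the new block on this data source (the job name and workspace ID are illustrative placeholders):

```hcl
data "databricks_job" "this" {
  job_name = "my-job"
  provider_config {
    # Must match the workspace the job actually lives in; otherwise
    # the provider reports a workspace_id mismatch
    workspace_id = "1234567890123456"
  }
}
```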

## Attribute Reference

This data source exports the following attributes:
5 changes: 5 additions & 0 deletions docs/data-sources/mws_credentials.md
@@ -24,6 +24,11 @@ output "all_mws_credentials" {
}
```

## Argument Reference

* `provider_config` - (Optional) Configure the provider for management through an account-level provider. This block consists of the following field:
  * `workspace_id` - (Required) ID of the workspace to which the resource belongs. This workspace must be part of the account that the provider is configured with.

## Attribute Reference

-> This resource has an evolving interface, which may change in future versions of the provider.
5 changes: 5 additions & 0 deletions docs/data-sources/mws_workspaces.md
@@ -24,6 +24,11 @@ output "all_mws_workspaces" {
}
```

## Argument Reference

* `provider_config` - (Optional) Configure the provider for management through an account-level provider. This block consists of the following field:
  * `workspace_id` - (Required) ID of the workspace to which the resource belongs. This workspace must be part of the account that the provider is configured with.

## Attribute Reference

-> This resource has an evolving interface, which may change in future versions of the provider.
2 changes: 2 additions & 0 deletions docs/data-sources/notebook_paths.md
@@ -20,6 +20,8 @@ data "databricks_notebook_paths" "prod" {

* `path` - (Required) Path to workspace directory
* `recursive` - (Required) Whether to recursively walk the given path
* `provider_config` - (Optional) Configure the provider for management through an account-level provider. This block consists of the following field:
  * `workspace_id` - (Required) ID of the workspace to which the resource belongs. This workspace must be part of the account that the provider is configured with.

## Attribute Reference

2 changes: 2 additions & 0 deletions docs/data-sources/service_principal.md
@@ -34,6 +34,8 @@ Data source allows you to pick service principals by one of the following attrib
- `application_id` - (Required if neither `display_name` nor `scim_id` is used) Application ID of the service principal. The service principal must exist before this resource can be retrieved.
- `display_name` - (Required if neither `application_id` nor `scim_id` is used) Exact display name of the service principal. The service principal must exist before this resource can be retrieved. In case if there are several service principals with the same name, an error is thrown.
- `scim_id` - (Required if neither `application_id` nor `display_name` is used) Unique SCIM ID for a service principal in the Databricks workspace. The service principal must exist before this resource can be retrieved.
- `provider_config` - (Optional) Configure the provider for management through an account-level provider. This block consists of the following field:
  - `workspace_id` - (Required) ID of the workspace to which the resource belongs. This workspace must be part of the account that the provider is configured with.

## Attribute Reference

2 changes: 2 additions & 0 deletions docs/data-sources/service_principals.md
@@ -38,6 +38,8 @@ resource "databricks_group_member" "my_member_spn" {
Data source allows you to pick service principals by the following attributes

- `display_name_contains` - (Optional) Only return [databricks_service_principal](service_principal.md) display name that match the given name string
- `provider_config` - (Optional) Configure the provider for management through an account-level provider. This block consists of the following field:
  - `workspace_id` - (Required) ID of the workspace to which the resource belongs. This workspace must be part of the account that the provider is configured with.

## Attribute Reference

2 changes: 2 additions & 0 deletions docs/data-sources/user.md
@@ -33,6 +33,8 @@ Data source allows you to pick users by the following attributes

- `user_name` - (Optional) User name of the user. The user must exist before this resource can be planned.
- `user_id` - (Optional) ID of the user.
- `provider_config` - (Optional) Configure the provider for management through an account-level provider. This block consists of the following field:
  - `workspace_id` - (Required) ID of the workspace to which the resource belongs. This workspace must be part of the account that the provider is configured with.

## Attribute Reference

103 changes: 99 additions & 4 deletions jobs/data_job_acc_test.go
@@ -1,14 +1,19 @@
package jobs_test

import (
"context"
"fmt"
"regexp"
"strconv"
"testing"

"github.com/databricks/databricks-sdk-go"
"github.com/databricks/terraform-provider-databricks/internal/acceptance"
"github.com/hashicorp/terraform-plugin-testing/terraform"
"github.com/stretchr/testify/require"
)

func TestAccDataSourceJob(t *testing.T) {
acceptance.WorkspaceLevel(t, acceptance.Step{
Template: `
const dataSourceJobTemplate = `
data "databricks_current_user" "me" {}
data "databricks_spark_version" "latest" {}
data "databricks_node_type" "smallest" {
@@ -26,7 +31,7 @@ func TestAccDataSourceJob(t *testing.T) {
}

resource "databricks_job" "this" {
name = "job-datasource-acceptance-test"
name = "job-datasource-acceptance-test-{var.RANDOM}"

job_cluster {
job_cluster_key = "j"
@@ -52,9 +57,99 @@ func TestAccDataSourceJob(t *testing.T) {
}

}
`

func TestAccDataSourceJob(t *testing.T) {
acceptance.WorkspaceLevel(t, acceptance.Step{
Template: dataSourceJobTemplate + `
data "databricks_job" "this" {
job_name = databricks_job.this.name
}`,
})
}

func TestAccDataSourceJob_InvalidID(t *testing.T) {
acceptance.WorkspaceLevel(t, acceptance.Step{
Template: `
data "databricks_job" "this" {
job_name = "job-{var.RANDOM}"
provider_config {
workspace_id = "invalid"
}
}`,
ExpectError: regexp.MustCompile(`workspace_id must be a positive integer without leading zeros`),
PlanOnly: true,
})
}

func TestAccDataSourceJob_MismatchedID(t *testing.T) {
acceptance.WorkspaceLevel(t, acceptance.Step{
Template: `
data "databricks_job" "this" {
job_name = "job-{var.RANDOM}"
provider_config {
workspace_id = "123"
}
}`,
ExpectError: regexp.MustCompile(`workspace_id mismatch.*please check the workspace_id provided in provider_config`),
})
}

func TestAccDataSourceJob_EmptyID(t *testing.T) {
acceptance.WorkspaceLevel(t, acceptance.Step{
Template: `
data "databricks_job" "this" {
job_name = "job-{var.RANDOM}"
provider_config {
workspace_id = ""
}
}`,
ExpectError: regexp.MustCompile(`expected "provider_config.0.workspace_id" to not be an empty string`),
})
}

func TestAccDataSourceJob_EmptyBlock(t *testing.T) {
acceptance.WorkspaceLevel(t, acceptance.Step{
Template: `
data "databricks_job" "this" {
job_name = "job-{var.RANDOM}"
provider_config {
}
}`,
ExpectError: regexp.MustCompile(`The argument "workspace_id" is required, but no definition was found.`),
})
}

func TestAccDataSourceJobApply(t *testing.T) {
acceptance.LoadWorkspaceEnv(t)
ctx := context.Background()
w := databricks.Must(databricks.NewWorkspaceClient())
workspaceID, err := w.CurrentWorkspaceID(ctx)
require.NoError(t, err)
workspaceIDStr := strconv.FormatInt(workspaceID, 10)
acceptance.WorkspaceLevel(t, acceptance.Step{
Template: dataSourceJobTemplate + `
data "databricks_job" "this" {
job_name = databricks_job.this.name
}`,
}, acceptance.Step{
Template: dataSourceJobTemplate + fmt.Sprintf(`
data "databricks_job" "this" {
job_name = databricks_job.this.name
provider_config {
workspace_id = "%s"
}
}`, workspaceIDStr),
Check: func(s *terraform.State) error {
r, ok := s.RootModule().Resources["data.databricks_job.this"]
if !ok {
return fmt.Errorf("data not found in state")
}
id := r.Primary.Attributes["provider_config.0.workspace_id"]
if id != workspaceIDStr {
return fmt.Errorf("wrong workspace_id found: %v", r.Primary.Attributes)
}
return nil
},
})
}
8 changes: 7 additions & 1 deletion pools/data_instance_pool.go
@@ -29,11 +29,17 @@ func DataSourceInstancePool() common.Resource {
Attributes *InstancePoolAndStats `json:"pool_info,omitempty" tf:"computed"`
}
s := common.StructToSchema(poolDetails{}, nil)
common.AddNamespaceInSchema(s)
common.NamespaceCustomizeSchemaMap(s)
return common.Resource{
Schema: s,
Read: func(ctx context.Context, d *schema.ResourceData, m *common.DatabricksClient) error {
newClient, err := m.DatabricksClientForUnifiedProvider(ctx, d)
if err != nil {
return err
}
name := d.Get("name").(string)
poolsAPI := NewInstancePoolsAPI(ctx, m)
poolsAPI := NewInstancePoolsAPI(ctx, newClient)
pool, err := getPool(poolsAPI, name)
if err != nil {
return err
65 changes: 34 additions & 31 deletions scim/data_current_user.go
@@ -14,39 +14,42 @@ var nonAlphanumeric = regexp.MustCompile(`\W`)

// DataSourceCurrentUser returns information about caller identity
func DataSourceCurrentUser() common.Resource {
return common.Resource{
Schema: map[string]*schema.Schema{
"user_name": {
Type: schema.TypeString,
Computed: true,
},
"home": {
Type: schema.TypeString,
Computed: true,
},
"repos": {
Type: schema.TypeString,
Computed: true,
},
"alphanumeric": {
Type: schema.TypeString,
Computed: true,
},
"external_id": {
Type: schema.TypeString,
Computed: true,
},
"workspace_url": {
Type: schema.TypeString,
Computed: true,
},
"acl_principal_id": {
Type: schema.TypeString,
Computed: true,
},
s := map[string]*schema.Schema{
"user_name": {
Type: schema.TypeString,
Computed: true,
},
"home": {
Type: schema.TypeString,
Computed: true,
},
"repos": {
Type: schema.TypeString,
Computed: true,
},
"alphanumeric": {
Type: schema.TypeString,
Computed: true,
},
"external_id": {
Type: schema.TypeString,
Computed: true,
},
"workspace_url": {
Type: schema.TypeString,
Computed: true,
},
"acl_principal_id": {
Type: schema.TypeString,
Computed: true,
},
}
common.AddNamespaceInSchema(s)
common.NamespaceCustomizeSchemaMap(s)
return common.Resource{
Schema: s,
Read: func(ctx context.Context, d *schema.ResourceData, c *common.DatabricksClient) error {
w, err := c.WorkspaceClient()
w, err := c.WorkspaceClientUnifiedProvider(ctx, d)
if err != nil {
return err
}