
Commit c78b127

Add cluster policy support for DLT pipelines (#1554)
Also added the `apply_policy_default_values` attribute to automatically fill in missing cluster attributes with default values from the policy. This fixes #1551.
Parent: fc5052a

File tree (3 files changed: +7 −1 lines changed)

* clusters/clusters_api.go
* docs/resources/cluster.md
* pipelines/resource_pipeline.go

clusters/clusters_api.go

Lines changed: 3 additions & 1 deletion
@@ -370,12 +370,14 @@ type Cluster struct {
 	DriverNodeTypeID       string `json:"driver_node_type_id,omitempty" tf:"group:node_type,computed"`
 	InstancePoolID         string `json:"instance_pool_id,omitempty" tf:"group:node_type"`
 	DriverInstancePoolID   string `json:"driver_instance_pool_id,omitempty" tf:"group:node_type,computed"`
-	PolicyID               string `json:"policy_id,omitempty"`
 	AwsAttributes          *AwsAttributes   `json:"aws_attributes,omitempty" tf:"conflicts:instance_pool_id,suppress_diff"`
 	AzureAttributes        *AzureAttributes `json:"azure_attributes,omitempty" tf:"conflicts:instance_pool_id,suppress_diff"`
 	GcpAttributes          *GcpAttributes   `json:"gcp_attributes,omitempty" tf:"conflicts:instance_pool_id,suppress_diff"`
 	AutoterminationMinutes int32            `json:"autotermination_minutes,omitempty"`
 
+	PolicyID                 string `json:"policy_id,omitempty"`
+	ApplyPolicyDefaultValues bool   `json:"apply_policy_default_values,omitempty"`
+
 	SparkConf    map[string]string `json:"spark_conf,omitempty"`
 	SparkEnvVars map[string]string `json:"spark_env_vars,omitempty"`
 	CustomTags   map[string]string `json:"custom_tags,omitempty"`

docs/resources/cluster.md

Lines changed: 1 addition & 0 deletions
@@ -37,6 +37,7 @@ resource "databricks_cluster" "shared_autoscaling" {
 * `instance_pool_id` (Optional - required if `node_type_id` is not given) - To reduce cluster start time, you can attach a cluster to a [predefined pool of idle instances](instance_pool.md). When attached to a pool, a cluster allocates its driver and worker nodes from the pool. If the pool does not have sufficient idle resources to accommodate the cluster’s request, it expands by allocating new instances from the instance provider. When an attached cluster changes its state to `TERMINATED`, the instances it used are returned to the pool and reused by a different cluster.
 * `driver_instance_pool_id` (Optional) - similar to `instance_pool_id`, but for driver node. If omitted, and `instance_pool_id` is specified, then the driver will be allocated from that pool.
 * `policy_id` - (Optional) Identifier of [Cluster Policy](cluster_policy.md) to validate cluster and preset certain defaults. *The primary use for cluster policies is to allow users to create policy-scoped clusters via UI rather than sharing configuration for API-created clusters.* For example, when you specify `policy_id` of [external metastore](https://docs.databricks.com/administration-guide/clusters/policies.html#external-metastore-policy) policy, you still have to fill in relevant keys for `spark_conf`.
+* `apply_policy_default_values` - (Optional) Whether to use policy default values for missing cluster attributes.
 * `autotermination_minutes` - (Optional) Automatically terminate the cluster after being inactive for this time in minutes. If specified, the threshold must be between 10 and 10000 minutes. You can also set this value to 0 to explicitly disable automatic termination. Defaults to `60`. _We highly recommend having this setting present for Interactive/BI clusters._
 * `enable_elastic_disk` - (Optional) If you don’t want to allocate a fixed number of EBS volumes at cluster creation time, use autoscaling local storage. With autoscaling local storage, Databricks monitors the amount of free disk space available on your cluster’s Spark workers. If a worker begins to run too low on disk, Databricks automatically attaches a new EBS volume to the worker before it runs out of disk space. EBS volumes are attached up to a limit of 5 TB of total disk space per instance (including the instance’s local storage). To scale down EBS usage, make sure you have `autotermination_minutes` and `autoscale` attributes set. More documentation available at [cluster configuration page](https://docs.databricks.com/clusters/configure.html#autoscaling-local-storage-1).
 * `enable_local_disk_encryption` - (Optional) Some instance types you use to run clusters may have locally attached disks. Databricks may store shuffle data or temporary data on these locally attached disks. To ensure that all data at rest is encrypted for all storage types, including shuffle data stored temporarily on your cluster’s local disks, you can enable local disk encryption. When local disk encryption is enabled, Databricks generates an encryption key locally unique to each cluster node and uses it to encrypt all data stored on local disks. The scope of the key is local to each cluster node and is destroyed along with the cluster node itself. During its lifetime, the key resides in memory for encryption and decryption and is stored encrypted on the disk. _Your workloads may run more slowly because of the performance impact of reading and writing encrypted data to and from local volumes. This feature is not available for all Azure Databricks subscriptions. Contact your Microsoft or Databricks account representative to request access._
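For illustration, a minimal `databricks_cluster` configuration that references a policy and asks the backend to apply the policy's defaults could look like the sketch below. The policy resource name, node type, and Spark version are assumptions for the example, not part of this commit.

```hcl
# Illustrative sketch: policy resource, node type, and Spark version are assumed.
resource "databricks_cluster" "policy_backed" {
  cluster_name  = "policy-backed-shared"
  spark_version = "10.4.x-scala2.12"
  node_type_id  = "i3.xlarge"

  # Attributes covered by this commit:
  policy_id                   = databricks_cluster_policy.external_metastore.id
  apply_policy_default_values = true

  autoscale {
    min_workers = 1
    max_workers = 4
  }

  autotermination_minutes = 60
}
```

With `apply_policy_default_values = true`, attributes that the configuration leaves unset can be filled in from the policy's default values instead of having to be repeated in every cluster definition.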

pipelines/resource_pipeline.go

Lines changed: 3 additions & 0 deletions
@@ -35,6 +35,9 @@ type pipelineCluster struct {
 	AwsAttributes *clusters.AwsAttributes `json:"aws_attributes,omitempty"`
 	GcpAttributes *clusters.GcpAttributes `json:"gcp_attributes,omitempty"`
 
+	PolicyID                 string `json:"policy_id,omitempty"`
+	ApplyPolicyDefaultValues bool   `json:"apply_policy_default_values,omitempty"`
+
 	SparkConf    map[string]string `json:"spark_conf,omitempty"`
 	SparkEnvVars map[string]string `json:"spark_env_vars,omitempty"`
 	CustomTags   map[string]string `json:"custom_tags,omitempty"`
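Because the same two fields are now part of the DLT pipeline cluster spec, a pipeline cluster should be able to reference a cluster policy in the same way. A minimal sketch, assuming hypothetical names for the policy, notebook, and storage path:

```hcl
# Illustrative sketch: pipeline name, storage path, policy, and notebook are assumed.
resource "databricks_pipeline" "with_policy" {
  name    = "dlt-with-cluster-policy"
  storage = "/pipelines/dlt-with-cluster-policy"

  cluster {
    label       = "default"
    num_workers = 2

    # New in this commit: pipeline clusters can reference a cluster policy
    # and pick up its default values for unset attributes.
    policy_id                   = databricks_cluster_policy.external_metastore.id
    apply_policy_default_values = true
  }

  library {
    notebook {
      path = databricks_notebook.dlt_demo.path
    }
  }
}
```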
