When you [create a Databricks cluster](https://docs.databricks.com/clusters/configure.html#cluster-size-and-autoscaling), you can either provide `num_workers` for a fixed-size cluster or provide `min_workers` and/or `max_workers` within the `autoscale` group. When you provide a fixed-size cluster, Databricks ensures that your cluster has the specified number of workers. When you provide a range for the number of workers, Databricks chooses the appropriate number of workers required to run your job, also known as "autoscaling". With autoscaling, Databricks dynamically reallocates workers to account for the characteristics of your job. Certain parts of your pipeline may be more computationally demanding than others, and Databricks automatically adds workers during these phases of your job (and removes them when they’re no longer needed).
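For example, a fixed-size cluster with two workers might look like the following sketch; the `cluster_name`, `spark_version`, and `node_type_id` values are illustrative placeholders, not recommendations:

```hcl
resource "databricks_cluster" "fixed_size" {
  cluster_name  = "Fixed Size"       # hypothetical cluster name
  spark_version = "11.3.x-scala2.12" # placeholder Databricks runtime version
  node_type_id  = "i3.xlarge"        # placeholder node type for your cloud
  num_workers   = 2                  # Databricks keeps the cluster at exactly 2 workers
}
```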
The optional `autoscale` configuration block supports the following:
* `min_workers` - (Optional) The minimum number of workers to which the cluster can scale down when underutilized. It is also the initial number of workers the cluster will have after creation.
* `max_workers` - (Optional) The maximum number of workers to which the cluster can scale up when overloaded. `max_workers` must be strictly greater than `min_workers`.
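Taken together, an autoscaling cluster can be declared as in the following sketch (again with a placeholder name, runtime version, and node type):

```hcl
resource "databricks_cluster" "shared_autoscaling" {
  cluster_name  = "Shared Autoscaling" # hypothetical cluster name
  spark_version = "11.3.x-scala2.12"   # placeholder Databricks runtime version
  node_type_id  = "i3.xlarge"          # placeholder node type for your cloud

  autoscale {
    min_workers = 1  # the cluster starts with, and never scales below, 1 worker
    max_workers = 10 # Databricks adds up to 10 workers under load
  }
}
```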
When using a [Single Node cluster](https://docs.databricks.com/clusters/single-node.html), `num_workers` needs to be `0`. It can be set to `0` explicitly, or simply not specified, as it defaults to `0`. When `num_workers` is `0`, the provider checks for the presence of the required Spark configurations:
* `spark.master` must have the prefix `local`, like `local[*]`
* `spark.databricks.cluster.profile` must have the value `singleNode`
and also a `custom_tags` entry:
*`"ResourceClass" = "SingleNode"`
The following example demonstrates how to create a single node cluster:
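This is a minimal sketch; the `cluster_name`, `spark_version`, and `node_type_id` values are illustrative placeholders, while the `spark_conf` keys and the `custom_tags` entry carry the required values listed above:

```hcl
resource "databricks_cluster" "single_node" {
  cluster_name  = "Single Node"      # hypothetical cluster name
  spark_version = "11.3.x-scala2.12" # placeholder Databricks runtime version
  node_type_id  = "i3.xlarge"        # placeholder node type for your cloud
  # num_workers defaults to 0, so it is simply omitted here

  # required Spark configuration for a Single Node cluster
  spark_conf = {
    "spark.databricks.cluster.profile" = "singleNode"
    "spark.master"                     = "local[*]"
  }

  # required custom tag for a Single Node cluster
  custom_tags = {
    "ResourceClass" = "SingleNode"
  }
}
```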