
Commit 7eb90a2

Merge pull request #179653 from v-lanjli/auto
update
2 parents 4373f4b + fe739c5 commit 7eb90a2

File tree

1 file changed: +20 -0 lines changed


articles/synapse-analytics/spark/apache-spark-autoscale.md

Lines changed: 20 additions & 0 deletions
@@ -56,6 +56,26 @@ To enable the Autoscale feature, complete the following steps as part of the nor
The initial number of nodes will be the minimum. This value defines the initial size of the instance when it's created. The minimum number of nodes can't be fewer than three.
Optionally, you can enable dynamic allocation of executors in scenarios where executor requirements differ widely across the stages of a Spark job, or where the volume of data processed fluctuates over time. By enabling dynamic allocation of executors, you can utilize capacity as required.

When you enable dynamic allocation of executors while creating a Spark pool, you can set the minimum and maximum number of nodes, subject to the limits of the available nodes. These values become the defaults for every new session created within the pool.
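The same pool-level defaults can also be set programmatically. Below is a hedged sketch using the azure-mgmt-synapse management SDK; it is not part of the original article, and the model and method names (such as `DynamicExecutorAllocation` and `begin_create_or_update`) are assumptions to verify against your SDK version:

```
# A hedged sketch: configure pool-level dynamic executor allocation with
# the azure-mgmt-synapse SDK. Names used here are assumptions to verify.
from azure.identity import DefaultAzureCredential
from azure.mgmt.synapse import SynapseManagementClient
from azure.mgmt.synapse.models import (
    AutoScaleProperties,
    BigDataPoolResourceInfo,
    DynamicExecutorAllocation,  # assumption: may require a recent API version
)

client = SynapseManagementClient(DefaultAzureCredential(), "<subscription-id>")

pool = BigDataPoolResourceInfo(
    location="<region>",
    spark_version="3.3",
    node_size="Medium",
    node_size_family="MemoryOptimized",
    # Autoscale bounds for nodes; the minimum can't be fewer than three.
    auto_scale=AutoScaleProperties(enabled=True, min_node_count=3, max_node_count=10),
    # Pool-level dynamic allocation defaults inherited by every new session.
    dynamic_executor_allocation=DynamicExecutorAllocation(
        enabled=True, min_executors=2, max_executors=6
    ),
)

client.big_data_pools.begin_create_or_update(
    "<resource-group>", "<workspace-name>", "<pool-name>", pool
).result()
```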
Apache Spark also enables configuration of dynamic allocation of executors through code, as shown below:

```
%%configure -f
{
    "conf": {
        "spark.dynamicAllocation.enabled": "true",
        "spark.dynamicAllocation.minExecutors": "2",
        "spark.dynamicAllocation.maxExecutors": "6"
    }
}
```
Values specified through code override the defaults set through the user interface.
When dynamic allocation is enabled, executors scale up or down based on executor utilization, which ensures that executors are provisioned in accordance with the needs of the job being run.
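To confirm which settings a session actually picked up, you can read the configuration back from the running session. Here is a minimal PySpark sketch, assuming a Synapse notebook where the `spark` session object is predefined (the keys are standard Apache Spark configuration properties):

```
# Read back the effective dynamic-allocation settings for this session.
# A key prints "unset" if neither the pool nor %%configure supplied it.
for key in (
    "spark.dynamicAllocation.enabled",
    "spark.dynamicAllocation.minExecutors",
    "spark.dynamicAllocation.maxExecutors",
):
    print(key, "=", spark.conf.get(key, "unset"))
```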
## Best practices

### Consider the latency of scale up or scale down operations
