| yarn.scheduler.maximum-allocation-mb | 102400 (MB) | The maximum allocation for every container request at the RM, in MB. Memory requests higher than this value won't take effect. |
| yarn.scheduler.maximum-allocation-vcores | 12 | The maximum number of CPU cores for every container request at the Resource Manager. Requests higher than this value won't take effect. |
| yarn.nodemanager.resource.cpu-vcores | 12 | Number of CPU cores per NodeManager that can be allocated for containers. |
| tez.am.resource.memory.mb | 4096 (MB) | The amount of memory in MB to be used by the Tez AppMaster. |
| hive.server2.tez.sessions.per.default.queue | <number_of_worker_nodes> | The number of sessions for each queue named in hive.server2.tez.default.queues. This number corresponds to the number of query coordinators (Tez AMs). |
| hive.tez.container.size | 4096 (MB) | Specifies the Tez container size in MB. |
For D14 v2, the recommended value is **12**.
This configuration value determines the number of Tez sessions that can be launched in parallel. These Tez sessions are launched for each of the queues specified by "hive.server2.tez.default.queues". It corresponds to the number of Tez AMs (query coordinators). It's recommended to be the same as the number of worker nodes. The number of Tez AMs can be higher than the number of LLAP daemon nodes. The Tez AM's primary responsibility is to coordinate the query execution and assign query plan fragments to corresponding LLAP daemons for execution. Keep this value as a multiple of the number of LLAP daemon nodes to achieve higher throughput.
The default HDInsight cluster has four LLAP daemons running on four worker nodes, so the recommended value is **4**.
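As a rough sketch, the session-count heuristic above can be written as follows. The variable names are illustrative only, not Hive or YARN settings.

```python
# Illustrative sketch of the session-count heuristic above; names are ours.
num_llap_daemon_nodes = 4   # default HDInsight Interactive Query cluster
multiplier = 1              # keep sessions a multiple of the daemon node count

tez_sessions_per_queue = num_llap_daemon_nodes * multiplier
print(tez_sessions_per_queue)  # 4
```

Raising the multiplier gives more Tez AMs for higher query throughput, at the cost of additional memory per node.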
The recommended value is **4096 MB**.
This value indicates a percentage of capacity given to the LLAP queue. The capacity allocations may have different values for different workloads depending on how the YARN queues are configured. If your workload is read-only operations, then setting it as high as 90% of the capacity should work. However, if your workload is a mix of update/delete/merge operations using managed tables, it's recommended to give 85% of the capacity to the LLAP queue. The remaining 15% capacity can be used by other tasks, such as compaction, to allocate containers from the default queue. That way, tasks in the default queue won't be deprived of YARN resources.
For D14 v2 worker nodes, the recommended value for the LLAP queue is **85**.
(For read-only workloads, it can be increased up to 90 as suitable.)
#### **7. LLAP daemon container size**
LLAP daemon is run as a YARN container on each worker node. The total memory size of the LLAP daemon container depends on the following factors:
* Total memory configured for all containers on a node and LLAP queue capacity
Memory needed by Tez Application Masters (Tez AMs) can be calculated as follows.
Tez AM acts as a query coordinator, and the number of Tez AMs should be configured based on the number of concurrent queries to be served. Theoretically, we can consider one Tez AM per worker node. However, it's possible that you may see more than one Tez AM on a worker node. For calculation purposes, we assume uniform distribution of Tez AMs across all LLAP daemon nodes/worker nodes.
It's recommended to have 4 GB of memory per Tez AM.
Number of Tez AMs = value specified by Hive config ***hive.server2.tez.sessions.per.default.queue***.
For D14 v2, the default configuration has four Tez AMs and four LLAP daemon nodes.
Tez AM memory per node = (ceil(4/4) x 4 GB) = 4 GB
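The per-node Tez AM memory arithmetic above can be sketched as follows. The helper function and names are illustrative only, not part of any Hive or YARN tooling.

```python
import math

TEZ_AM_MEMORY_GB = 4  # recommended memory per Tez AM

def tez_am_memory_per_node(num_tez_ams: int, num_worker_nodes: int) -> int:
    # Assumes uniform distribution of Tez AMs across worker nodes,
    # rounding up when they don't divide evenly.
    return math.ceil(num_tez_ams / num_worker_nodes) * TEZ_AM_MEMORY_GB

# D14 v2 default: four Tez AMs across four LLAP daemon nodes.
print(tez_am_memory_per_node(4, 4))  # 4 (GB)
```

With more Tez AMs than worker nodes (say, five AMs on four nodes), the ceiling pushes the per-node reservation up to two AMs' worth of memory.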
Total Memory available for LLAP queue per worker node can be calculated as follows:
This value depends on the total amount of memory available for all YARN containers on a node (*yarn.nodemanager.resource.memory-mb*) and the percentage of capacity configured for the LLAP queue (*yarn.scheduler.capacity.root.llap.capacity*).

Total memory for LLAP queue on worker node = Total memory available for all YARN containers on a node x Percentage of capacity for LLAP queue.
For D14 v2, this value is (100 GB x 0.85) = 85 GB.
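A quick sketch of that calculation, using illustrative variable names:

```python
# Illustrative arithmetic for the LLAP queue memory formula above.
yarn_memory_per_node_gb = 100  # from yarn.nodemanager.resource.memory-mb, in GB
llap_queue_capacity = 0.85     # from yarn.scheduler.capacity.root.llap.capacity

llap_queue_memory_gb = yarn_memory_per_node_gb * llap_queue_capacity
print(llap_queue_memory_gb)  # 85.0
```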
The LLAP daemon container size is calculated as follows:
This configuration controls the number of executors that can execute tasks in parallel per LLAP daemon. This value depends on the number of vcores, the amount of memory used per executor, and the total memory available for the LLAP daemon container. The number of executors can be oversubscribed to 120% of available vcores per worker node. However, it should be adjusted if it doesn't meet the memory requirements based on the memory needed per executor and the LLAP daemon container size.
Each executor is equivalent to a Tez container and can consume 4 GB (Tez container size) of memory. All executors in the LLAP daemon share the same heap memory. With the assumption that not all executors run memory-intensive operations at the same time, you can consider 75% of Tez container size (4 GB) per executor. This way you can increase the number of executors by giving each executor less memory (for example, 3 GB) for increased parallelism. However, it's recommended to tune this setting for your target workload.
There are 16 vcores on D14 v2 VMs.
For D14 v2, the recommended value for the number of executors is (16 vcores x 120%) ~= **19** on each worker node, considering 3 GB per executor.
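The executor-count heuristic can be sketched as follows, with illustrative variable names:

```python
import math

# Illustrative sketch of the executor-count heuristic above.
vcores = 16              # vcores on a D14 v2 VM
oversubscription = 1.20  # up to 120% of available vcores

num_executors = math.floor(vcores * oversubscription)
print(num_executors)  # 19
```

If 19 executors at 3 GB each exceed the LLAP daemon container budget, reduce the count rather than the per-executor memory below your workload's needs.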
***hive.llap.io.threadpool.size***:
This value specifies the thread pool size for executors. Since the number of executors is fixed as specified, it will be the same as the number of executors per LLAP daemon.
For D14 v2 and HDI 4.0, the recommended SSD cache size = 19 GB / 0.08 ~= **237 GB**.
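The arithmetic in that formula can be sketched as follows; the 0.08 divisor is taken from the formula above, and the variable names are ours:

```python
# Illustrative arithmetic for the SSD cache sizing formula above.
in_memory_cache_gb = 19
divisor = 0.08  # per the formula above

ssd_cache_gb = int(in_memory_cache_gb / divisor)  # truncates toward zero
print(ssd_cache_gb)  # 237
```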
Make sure you have *hive.auto.convert.join.noconditionaltask* enabled for this parameter to take effect.
This configuration determines the threshold for MapJoin selection by the Hive optimizer, which considers oversubscription of memory from other executors to leave more room for in-memory hash tables and allow more map join conversions. Considering 3 GB per executor, this size can be oversubscribed to 3 GB, but some heap memory may also be used by other operations for sort buffers, shuffle buffers, and so on.
So for D14 v2, with 3 GB memory per executor, it's recommended to set this value to **2048 MB**.
(Note: This value may need adjustments suitable for your workload. Setting this value too low may prevent the autoconvert feature from being used. And setting it too high may result in out-of-memory exceptions or GC pauses that can adversely affect performance.)
It's recommended to keep both values the same as the number of worker nodes in the Interactive Query cluster.
### **Considerations for Workload Management**
If you want to enable workload management for LLAP, make sure you reserve enough capacity for workload management to function as expected. Workload management requires configuration of a custom YARN queue, in addition to the `llap` queue. Make sure you divide the total cluster resource capacity between the `llap` queue and the workload management queue in accordance with your workload requirements.
Workload management spawns Tez Application Masters (Tez AMs) when a resource plan is activated.
**Note:**
* Tez AMs spawned by activating a resource plan consume resources from the workload management queue as specified by `hive.server2.tez.interactive.queue`.
* The number of Tez AMs depends on the value of `QUERY_PARALLELISM` specified in the resource plan.
* Once workload management is active, Tez AMs in the `llap` queue are not used. Only Tez AMs from the workload management queue are used for query coordination. Tez AMs in the `llap` queue are used when workload management is disabled.
For example:
Total cluster capacity = 100 GB memory, divided between LLAP, workload management, and default queues as follows:

- LLAP queue capacity = 70 GB
- Workload management queue capacity = 20 GB
- Default queue capacity = 10 GB
With 20 GB of workload management queue capacity, a resource plan can specify a `QUERY_PARALLELISM` value of five, which means workload management can launch five Tez AMs with a 4 GB container size each. If `QUERY_PARALLELISM` is higher than the capacity, you may see some Tez AMs stop responding in the `ACCEPTED` state. HiveServer2 Interactive cannot submit query fragments to Tez AMs that are not in the `RUNNING` state.
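The capacity check in this example can be sketched as follows; the variable names are illustrative, not Hive settings:

```python
# Illustrative check that QUERY_PARALLELISM fits the workload management queue.
wm_queue_capacity_gb = 20
tez_am_container_gb = 4

max_query_parallelism = wm_queue_capacity_gb // tez_am_container_gb
print(max_query_parallelism)  # 5
```

A resource plan specifying `QUERY_PARALLELISM` above this bound leaves the extra Tez AMs waiting for capacity rather than coordinating queries.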