| Configuration | Value | Description |
|---|---|---|
| yarn.scheduler.maximum-allocation-mb | 102400 (MB) | The maximum allocation for every container request at the Resource Manager, in MB. Memory requests higher than this value won't take effect. |
| yarn.scheduler.maximum-allocation-vcores | 12 | The maximum number of CPU cores for every container request at the Resource Manager. Requests higher than this value won't take effect. |
| yarn.nodemanager.resource.cpu-vcores | 12 | Number of CPU cores per NodeManager that can be allocated for containers. |
| tez.am.resource.memory.mb | 4096 (MB) | The amount of memory in MB to be used by the Tez AppMaster. |
| hive.server2.tez.sessions.per.default.queue | <number_of_worker_nodes> | The number of sessions for each queue named in hive.server2.tez.default.queues. This number corresponds to the number of query coordinators (Tez AMs). |
| hive.tez.container.size | 4096 (MB) | Specifies the Tez container size in MB. |
For D14 v2, the recommended value is **12**.
This configuration value determines the number of Tez sessions that can be launched in parallel. These Tez sessions will be launched for each of the queues specified by "hive.server2.tez.default.queues". It corresponds to the number of Tez AMs (query coordinators). It's recommended to be the same as the number of worker nodes. The number of Tez AMs can be higher than the number of LLAP daemon nodes. The Tez AM's primary responsibility is to coordinate the query execution and assign query plan fragments to the corresponding LLAP daemons for execution. Keep this value as a multiple of the number of LLAP daemon nodes to achieve higher throughput.
The default HDInsight cluster has four LLAP daemons running on four worker nodes, so the recommended value is **4**.
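As a rough illustration of this guidance, the following Python sketch picks the number of Tez sessions (Tez AMs) as a multiple of the LLAP daemon node count. The function name and the example node counts are illustrative assumptions, not part of any HDInsight tooling.

```python
import math

def tez_sessions_per_default_queue(num_worker_nodes: int, num_llap_daemon_nodes: int) -> int:
    """Start with one Tez AM per worker node, then round up to a
    multiple of the LLAP daemon node count for even distribution."""
    sessions = max(num_worker_nodes, 1)
    return math.ceil(sessions / num_llap_daemon_nodes) * num_llap_daemon_nodes

# Default HDInsight Interactive Query cluster: 4 worker nodes, 4 LLAP daemon nodes.
print(tez_sessions_per_default_queue(4, 4))  # -> 4, matching the recommended value
```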
The recommended value is **4096 MB**.
This value indicates the percentage of capacity given to the LLAP queue. The capacity allocations may have different values for different workloads depending on how the YARN queues are configured. If your workload is read-only operations, then setting it as high as 90% of the capacity should work. However, if your workload is a mix of update/delete/merge operations using managed tables, it's recommended to give 85% of the capacity to the LLAP queue. The remaining 15% capacity can be used by other tasks, such as compaction, to allocate containers from the default queue. That way, tasks in the default queue won't be deprived of YARN resources.
For D14 v2 worker nodes, the recommended value for the LLAP queue is **85**.
(For read-only workloads, it can be increased up to 90 as suitable.)
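A minimal sketch of the capacity split described above, assuming the root queue capacity totals 100%. The helper name and the two-queue layout are illustrative assumptions.

```python
def split_queue_capacity(llap_capacity_pct: int) -> dict:
    """Divide root queue capacity (100%) between the llap queue and the
    default queue, leaving the remainder for compaction and other tasks."""
    if not 0 < llap_capacity_pct < 100:
        raise ValueError("llap queue capacity must leave room for the default queue")
    return {"llap": llap_capacity_pct, "default": 100 - llap_capacity_pct}

print(split_queue_capacity(85))  # mixed read/write workloads -> {'llap': 85, 'default': 15}
print(split_queue_capacity(90))  # read-only workloads        -> {'llap': 90, 'default': 10}
```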
#### **7. LLAP daemon container size**
LLAP daemon is run as a YARN container on each worker node. The total memory size for the LLAP daemon container depends on the following factors:
* Total memory configured for all containers on a node and LLAP queue capacity
Memory needed by Tez Application Masters (Tez AMs) can be calculated as follows.

Tez AM acts as a query coordinator, and the number of Tez AMs should be configured based on the number of concurrent queries to be served. Theoretically, we can consider one Tez AM per worker node. However, it's possible that you may see more than one Tez AM on a worker node. For calculation purposes, we assume a uniform distribution of Tez AMs across all LLAP daemon nodes/worker nodes.
It's recommended to have 4 GB of memory per Tez AM.
Number of Tez AMs = value specified by the Hive config ***hive.server2.tez.sessions.per.default.queue***.
For D14 v2, the default configuration has four Tez AMs and four LLAP daemon nodes.
Tez AM memory per node = (ceil(4/4) x 4 GB) = 4 GB
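The per-node Tez AM memory above can be expressed as a small calculation. This sketch only restates the arithmetic (ceiling of Tez AMs over LLAP daemon nodes, times 4 GB per AM); the function name is ours, not an official formula implementation.

```python
import math

def tez_am_memory_per_node_gb(num_tez_ams: int, num_llap_daemon_nodes: int,
                              tez_am_size_gb: int = 4) -> int:
    """Assume uniform distribution of Tez AMs across LLAP daemon/worker nodes."""
    return math.ceil(num_tez_ams / num_llap_daemon_nodes) * tez_am_size_gb

# D14 v2 defaults: 4 Tez AMs, 4 LLAP daemon nodes, 4 GB per Tez AM.
print(tez_am_memory_per_node_gb(4, 4))  # -> 4 (GB)
```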
Total memory available for the LLAP queue per worker node can be calculated as follows:

This value depends on the total amount of memory available for all YARN containers on a node (*yarn.nodemanager.resource.memory-mb*) and the percentage of capacity configured for the LLAP queue (*yarn.scheduler.capacity.root.llap.capacity*).

Total memory for LLAP queue on worker node = Total memory available for all YARN containers on a node x Percentage of capacity for LLAP queue.
For D14 v2, this value is (100 GB x 0.85) = 85 GB.
The LLAP daemon container size is calculated as follows:
**LLAP daemon container size = (Total memory for LLAP queue on a worker node) - (Tez AM memory per node) - (Service Master container size)**
There is only one Service Master (the Application Master for the LLAP service) on the cluster, spawned on one of the worker nodes. For calculation purposes, we consider one Service Master per worker node.
For a D14 v2 worker node on HDI 4.0, the recommended value is (85 GB - 4 GB - 1 GB) = **80 GB**.
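Putting the last few steps together, this sketch computes the LLAP daemon container size for the D14 v2 example. The 1 GB Service Master allowance comes from the text above; the function itself is an illustrative helper, not something HDInsight ships.

```python
def llap_daemon_container_size_gb(yarn_node_memory_gb: float,
                                  llap_queue_capacity_pct: float,
                                  tez_am_memory_per_node_gb: float,
                                  service_master_gb: float = 1.0) -> float:
    """Container size = LLAP-queue memory on the node minus Tez AM and Service Master memory."""
    llap_queue_memory = yarn_node_memory_gb * (llap_queue_capacity_pct / 100)
    return llap_queue_memory - tez_am_memory_per_node_gb - service_master_gb

# D14 v2, HDI 4.0: 100 GB for YARN containers, 85% LLAP queue capacity, 4 GB Tez AM per node.
print(llap_daemon_container_size_gb(100, 85, 4))  # -> 80.0 (GB)
```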
#### **8. Determining number of executors per LLAP daemon**
This configuration controls the number of executors that can execute tasks in parallel per LLAP daemon. This value depends on the number of vcores, the amount of memory used per executor, and the amount of total memory available for LLAP daemon container. The number of executors can be oversubscribed to 120% of available vcores per worker node. However, it should be adjusted if it doesn't meet the memory requirements based on memory needed per executor and the LLAP daemon container size.
Each executor is equivalent to a Tez container and can consume 4 GB (Tez container size) of memory. All executors in the LLAP daemon share the same heap memory. With the assumption that not all executors run memory-intensive operations at the same time, you can consider 75% of the Tez container size (4 GB) per executor. This way, you can increase the number of executors by giving each executor less memory (for example, 3 GB) for increased parallelism. However, it's recommended to tune this setting for your target workload.
There are 16 vcores on D14 v2 VMs.
For D14 v2, the recommended value for the number of executors is (16 vcores x 120%) ~= **19** on each worker node, considering 3 GB per executor.
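A sketch of the executor-count reasoning above: oversubscribe vcores to 120%, then check the result against the memory available in the LLAP daemon container, assuming 3 GB per executor. The memory check is a simplification of the fuller headroom/cache calculation, and the names are illustrative assumptions.

```python
import math

def num_executors_per_daemon(vcores: int, daemon_container_gb: float,
                             memory_per_executor_gb: float = 3.0,
                             oversubscription: float = 1.2) -> int:
    """Start from 120% of vcores, then trim if executors would not fit in the container."""
    by_cpu = math.floor(vcores * oversubscription)
    by_memory = math.floor(daemon_container_gb / memory_per_executor_gb)
    return min(by_cpu, by_memory)

# D14 v2: 16 vcores, 80 GB LLAP daemon container, 3 GB per executor.
print(num_executors_per_daemon(16, 80))  # -> 19
```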
***hive.llap.io.threadpool.size***:
This value specifies the thread pool size for executors. Since the number of executors is fixed as specified, it will be the same as the number of executors per LLAP daemon.
When SSD cache is enabled, some portion of the memory will be used to store metadata for the SSD cache. The metadata is stored in memory, and it's expected to be ~8% of the SSD cache size.
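The two statements above translate into simple arithmetic. This hedged sketch sets the I/O thread pool equal to the executor count and estimates the in-memory metadata overhead at roughly 8% of the SSD cache size; both ratios come from the text, while the helpers and the example cache size are hypothetical.

```python
def llap_io_threadpool_size(num_executors: int) -> int:
    """hive.llap.io.threadpool.size is kept equal to the executor count."""
    return num_executors

def ssd_cache_metadata_gb(ssd_cache_size_gb: float, metadata_fraction: float = 0.08) -> float:
    """Roughly 8% of the SSD cache size is held in memory as cache metadata."""
    return ssd_cache_size_gb * metadata_fraction

print(llap_io_threadpool_size(19))   # -> 19 threads for the D14 v2 example
print(ssd_cache_metadata_gb(100))    # a hypothetical 100 GB SSD cache needs ~8 GB of metadata in memory
```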
Make sure you have *hive.auto.convert.join.noconditionaltask* enabled for this parameter to take effect.
This configuration determines the threshold for MapJoin selection by the Hive optimizer, which considers oversubscription of memory from other executors to have more room for in-memory hash tables and so allow more map join conversions. Considering 3 GB per executor, this size can be oversubscribed to 3 GB, but some heap memory may also be used for sort buffers, shuffle buffers, and so on by other operations.
So for D14 v2, with 3 GB memory per executor, it's recommended to set this value to **2048 MB**.
(Note: This value may need adjustments suitable for your workload. Setting this value too low may prevent the auto-convert feature from being used. Setting it too high may result in out-of-memory exceptions or GC pauses, which can degrade performance.)
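If you prefer to derive the threshold from the per-executor memory rather than hard-code it, a rule of thumb consistent with the 3 GB / 2048 MB example above is roughly two-thirds of the executor memory. Treat the helper below as a hypothetical starting point, not a formula from the Hive documentation.

```python
def noconditionaltask_size_mb(executor_memory_mb: int) -> int:
    """Roughly two-thirds of per-executor memory, leaving room for sort/shuffle buffers."""
    return (executor_memory_mb * 2) // 3

print(noconditionaltask_size_mb(3072))  # -> 2048 MB, the D14 v2 recommendation
```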
**num_llap_nodes** - specifies the number of nodes used by the Hive LLAP service; this includes nodes running the LLAP daemon, the LLAP Service Master, and Tez Application Masters (Tez AMs).
:::image type="content" source="./media/hive-llap-sizing-guide/LLAP_sizing_guide_num_llap_nodes.png" alt-text="Number of Nodes for LLAP service" border="true":::
**num_llap_nodes_for_llap_daemons** - specifies the number of nodes used only for LLAP daemons. LLAP daemon container sizes are set to fit the node, so there will be one LLAP daemon on each node.
:::image type="content" source="./media/hive-llap-sizing-guide/LLAP_sizing_guide_num_llap_nodes_for_llap_daemons.png" alt-text="Number of Nodes for LLAP daemons" border="true":::
It's recommended to keep both values the same as the number of worker nodes in the Interactive Query cluster.
### **Considerations for Workload Management**
If you want to enable workload management for LLAP, make sure you reserve enough capacity for workload management to function as expected. Workload management requires configuration of a custom YARN queue, in addition to the `llap` queue. Make sure you divide the total cluster resource capacity between the `llap` queue and the workload management queue in accordance with your workload requirements.
Workload management spawns Tez Application Masters (Tez AMs) when a resource plan is activated.

**Note:**
* Tez AMs spawned by activating a resource plan consume resources from the workload management queue as specified by `hive.server2.tez.interactive.queue`.
* The number of Tez AMs depends on the value of `QUERY_PARALLELISM` specified in the resource plan.
* Once workload management is active, Tez AMs in the `llap` queue aren't used. Only Tez AMs from the workload management queue are used for query coordination. Tez AMs in the `llap` queue are used when workload management is disabled.
For example:
Total cluster capacity = 100 GB of memory, divided between the LLAP, Workload Management, and Default queues as follows:

- LLAP queue capacity = 70 GB
- Workload management queue capacity = 20 GB
- Default queue capacity = 10 GB
With 20 GB of workload management queue capacity, a resource plan can specify a `QUERY_PARALLELISM` value of five, which means workload management can launch five Tez AMs with a 4 GB container size each. If `QUERY_PARALLELISM` is higher than the capacity allows, you may see some Tez AMs stop responding in the `ACCEPTED` state. HiveServer2 Interactive can't submit query fragments to Tez AMs that aren't in the `RUNNING` state.
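To sanity-check a resource plan against the queue split in this example, you can compare `QUERY_PARALLELISM` with how many 4 GB Tez AMs the workload management queue can actually hold. The function below is a sketch under those assumptions; the names are illustrative.

```python
import math

def max_query_parallelism(wm_queue_capacity_gb: float, tez_am_size_gb: float = 4.0) -> int:
    """Number of Tez AMs the workload management queue can keep in RUNNING state."""
    return math.floor(wm_queue_capacity_gb / tez_am_size_gb)

wm_capacity_gb = 20  # from the example split above
print(max_query_parallelism(wm_capacity_gb))  # -> 5; a higher QUERY_PARALLELISM leaves Tez AMs stuck in ACCEPTED
```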
#### **Next Steps**
If setting these values didn't resolve your issue, visit one of the following...
* Connect with [@AzureSupport](https://twitter.com/azuresupport) - the official Microsoft Azure account for improving customer experience by connecting the Azure community to the right resources: answers, support, and experts.
* If you need more help, you can submit a support request from the [Azure portal](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade/). Select **Support** from the menu bar or open the **Help + support** hub. For more detailed information, review [How to create an Azure support request](../../azure-portal/supportability/how-to-create-azure-support-request.md). Access to Subscription Management and billing support is included with your Microsoft Azure subscription, and Technical Support is provided through one of the [Azure Support Plans](https://azure.microsoft.com/support/plans/).
##### **Other References:**
* [Configure other LLAP properties](https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.5/performance-tuning/content/hive_setup_llap.html)