This document describes the sizing of the HDInsight Interactive Query cluster (Hive LLAP cluster) for a typical workload to achieve reasonable performance. Note that these recommendations are generic guidelines; specific workloads may need specific tuning.
### **Azure Default VM Types for HDInsight Interactive Query Cluster (LLAP)**
| **Configuration** | **Value** | **Description** |
|---|---|---|
| yarn.nodemanager.resource.memory-mb | 102400 (MB) | Total memory given, in MB, for all YARN containers on a node |
| yarn.scheduler.maximum-allocation-mb | 102400 (MB) | The maximum allocation for every container request at the RM, in MBs. Memory requests higher than this value won't take effect |
| yarn.scheduler.maximum-allocation-vcores | 12 | The maximum number of CPU cores for every container request at the Resource Manager. Requests higher than this value won't take effect. |
| hive.server2.tez.sessions.per.default.queue | <number_of_worker_nodes> | The number of sessions for each queue named in hive.server2.tez.default.queues. This number corresponds to the number of query coordinators (Tez AMs) |
| tez.am.resource.memory.mb | 4096 (MB) | The amount of memory in MB to be used by the Tez AppMaster |
| hive.tez.container.size | 4096 (MB) | Specified Tez container size in MB |
#### **1. Determining total memory for all YARN containers on a node**

Configuration: *yarn.nodemanager.resource.memory-mb*

This value indicates the maximum sum of memory, in MB, that can be used by the YARN containers on each node. The value specified should be less than the total amount of physical memory on that node.

Total memory for all YARN containers on a node = [Total physical memory] – [Memory for OS + other services]

It is recommended to set this value to ~90% of the available RAM.

For D14 v2, the recommended value is **102400 MB**.
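As a quick sanity check, here is a minimal Python sketch of this calculation. The 112 GB physical RAM figure for a D14 v2 node is an assumption not stated in this guide; substitute your node's RAM and OS/service reservation.

```python
# Sketch: estimating yarn.nodemanager.resource.memory-mb for a worker node.
# Assumes ~112 GB physical RAM per D14 v2 node (assumption, not from this guide).

def yarn_container_memory_mb(physical_ram_gb: float, reserved_fraction: float = 0.10) -> int:
    """Memory for all YARN containers on a node: ~90% of physical RAM,
    leaving the rest for the OS and other services."""
    return int(physical_ram_gb * (1 - reserved_fraction) * 1024)

if __name__ == "__main__":
    # 112 GB x 0.90 x 1024 ≈ 103219 MB; the guide rounds this down to 102400 MB (100 GB).
    print(yarn_container_memory_mb(112))
```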
#### **2. Determining maximum amount of memory per YARN container request**

Configuration: *yarn.scheduler.maximum-allocation-mb*

This value indicates the maximum allocation for every container request at the Resource Manager, in MB. Memory requests higher than the specified value will not take effect. The Resource Manager can allocate memory to containers in increments of *yarn.scheduler.minimum-allocation-mb* and cannot exceed the size specified by *yarn.scheduler.maximum-allocation-mb*. The value specified should not be more than the total memory allocated for all containers on the node, which is specified by *yarn.nodemanager.resource.memory-mb*.

For D14 v2 worker nodes, the recommended value is **102400 MB**.
#### **3. Determining maximum amount of vcores per YARN container request**

Configuration: *yarn.scheduler.maximum-allocation-vcores*

This value indicates the maximum number of virtual CPU cores for every container request at the Resource Manager. Requesting a higher value than this will not take effect. This is a global property of the YARN scheduler. For the LLAP daemon container, this value can be set to 75% of the total available vcores. The remaining 25% should be reserved for NodeManager, DataNode, and other services running on the worker nodes.

For D14 v2 worker nodes, there are 16 vcores, and 75% of them can be used by the LLAP daemon container; therefore, the recommended value for the LLAP daemon container is **12**.
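A minimal sketch of the vcore calculation, assuming the node's total vcore count is known (16 for D14 v2):

```python
# Sketch: maximum vcores per YARN container request (yarn.scheduler.maximum-allocation-vcores).
import math

def max_allocation_vcores(total_vcores: int, llap_fraction: float = 0.75) -> int:
    """75% of the node's vcores for the LLAP daemon container; the remaining 25%
    is left for NodeManager, DataNode, and other services."""
    return math.floor(total_vcores * llap_fraction)

print(max_allocation_vcores(16))  # 12 for a D14 v2 worker node
```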
*hive.tez.container.size* - defines the amount of memory allocated for a Tez container. This value must be set between the YARN minimum container size (*yarn.scheduler.minimum-allocation-mb*) and the YARN maximum container size (*yarn.scheduler.maximum-allocation-mb*).

It is recommended to set it to **4096 MB**. The LLAP daemon executors use this configuration to limit memory usage per executor.

You should reserve some memory for the Tez AMs on a node before allocating the memory for the LLAP daemon container. For instance, if you are using two Tez AMs (4 GB each) per node, you should allocate only 82 GB out of 90 GB for the LLAP daemon, reserving 8 GB for the two Tez AMs.
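The reservation in the example above can be sketched as follows; the 90 GB figure is the memory available before setting aside the Tez AMs, as in the guide's example.

```python
# Sketch: reserving Tez AM memory before sizing the LLAP daemon container,
# following the worked example above (two 4 GB Tez AMs out of 90 GB).

def llap_daemon_memory_gb(available_gb: float, num_tez_ams: int, tez_am_size_gb: float = 4.0) -> float:
    """Memory left for the LLAP daemon after reserving space for the Tez AMs on the node."""
    return available_gb - num_tez_ams * tez_am_size_gb

print(llap_daemon_memory_gb(90, num_tez_ams=2))  # 82.0 GB, matching the example
```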
Configuration: *yarn.scheduler.capacity.root.llap.capacity*

This value indicates the percentage of capacity allocated to the llap queue. The capacity allocations may have different values for different workloads depending on how the YARN queues are configured. If your workload consists of read-only operations, then setting the capacity as high as 90% should work. However, if your workload is a mix of update/delete/merge operations using managed tables, it is recommended to assign 80% of the capacity to the llap queue. The remaining 20% can be used by other internally invoked tasks, such as compaction, to allocate containers from the default queue without starving YARN resources.

For D14 v2 worker nodes, the recommended value is **80** for the llap queue. For read-only workloads, it can be increased up to 90 as suitable.
The LLAP daemon is run as a YARN container on each worker node. The total memory size for the LLAP daemon container depends on the following factors:

1. Configurations of YARN container size (yarn.scheduler.minimum-allocation-mb, yarn.scheduler.maximum-allocation-mb, yarn.nodemanager.resource.memory-mb)
2. Number of Tez AMs on a node
3. Total memory configured for all containers on a node and the LLAP queue capacity

Memory needed by Tez Application Masters (Tez AMs) can be calculated as follows:

For the HDInsight Interactive Query cluster, by default, there is one Tez AM per worker node that acts as a query coordinator. The number of Tez AMs can be configured based on the number of concurrent queries to be served.

It is recommended to have 4 GB of memory per Tez AM.

Tez AM memory per node = [ Number of Tez AMs x Tez AM container size ] = (1 x 4 GB) = 4 GB
Total memory available for the LLAP queue per worker node can be calculated as follows:

This value depends on the total amount of memory available for all YARN containers on a node (*yarn.nodemanager.resource.memory-mb*) and the percentage of capacity configured for the llap queue (*yarn.scheduler.capacity.root.llap.capacity*).

Total memory for LLAP queue on a worker node = Total memory available for all YARN containers on a node x Percentage of capacity for llap queue

For D14 v2, this value is [ 100 GB x 0.80 ] = 80 GB.
The LLAP daemon container size is calculated as follows:

LLAP daemon container size = [ Total memory for llap queue on a worker node ] – [ Tez AM memory per node ]

For D14 v2, this value is (80 GB - 4 GB) = **76 GB**.
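Putting the last few steps together, here is a minimal sketch using the D14 v2 numbers above (100 GB of YARN container memory per node, 80% llap queue capacity, one 4 GB Tez AM per node):

```python
# Sketch: deriving the LLAP daemon container size from the values computed above.

def llap_queue_memory_gb(yarn_container_memory_gb: float, llap_capacity_pct: float) -> float:
    """Total memory for the llap queue on a worker node."""
    return yarn_container_memory_gb * llap_capacity_pct / 100

def llap_daemon_container_gb(queue_memory_gb: float, num_tez_ams: int, tez_am_size_gb: float = 4.0) -> float:
    """LLAP daemon container size = llap queue memory - Tez AM memory per node."""
    return queue_memory_gb - num_tez_ams * tez_am_size_gb

queue_gb = llap_queue_memory_gb(100, llap_capacity_pct=80)      # 80.0 GB
daemon_gb = llap_daemon_container_gb(queue_gb, num_tez_ams=1)   # 76.0 GB
print(queue_gb, daemon_gb)
```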
***hive.llap.daemon.num.executors***:

This configuration controls the number of executors that can execute tasks in parallel per LLAP daemon. This value is a balance of the number of available vcores, the amount of memory allocated per executor, and the total memory available for the LLAP daemon. Usually, we would like this value to be as close as possible to the number of cores.

For D14 v2, there are 16 vcores available; however, not all of the vcores can be allocated, because the worker nodes also run other services like NodeManager, DataNode, and Metrics Monitor that need some portion of the available vcores.

This value can be configured up to 75% of the total vcores available on that node.

For D14 v2, the recommended value is (0.75 x 16) = **12**.

If you need to adjust the number of executors, it is recommended that you consider 4 GB of memory per executor, as specified by *hive.tez.container.size*, and make sure the total memory needed for all executors does not exceed the total memory available for the LLAP daemon container.
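A hedged sketch of that balancing act: cap the executor count at 75% of the vcores, then check that the executors fit in the daemon container at 4 GB each (the *hive.tez.container.size* value above).

```python
# Sketch: choosing the number of executors per LLAP daemon, balancing vcores and memory.
import math

def num_executors(total_vcores: int, daemon_container_gb: float, per_executor_gb: float = 4.0) -> int:
    """At most 75% of the node's vcores, and no more executors than fit in the
    LLAP daemon container at ~4 GB (hive.tez.container.size) per executor."""
    by_vcores = math.floor(total_vcores * 0.75)
    by_memory = math.floor(daemon_container_gb / per_executor_gb)
    return min(by_vcores, by_memory)

print(num_executors(16, daemon_container_gb=76))  # 12 for D14 v2 (12 x 4 GB = 48 GB <= 76 GB)
```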
***hive.llap.io.threadpool.size***:

This value specifies the thread pool size for executors. Since the executors are fixed as specified, it will be the same as the number of executors per LLAP daemon.

For D14 v2, it is recommended to set this value to **12**.
#### **9. Determining LLAP daemon cache size**

Configuration: ***hive.llap.io.memory.size***

LLAP daemon container memory consists of the following components:

1. Headroom
2. Heap memory used by executors (Xmx)
3. In-memory cache per daemon (this is off-heap memory, not applicable when SSD cache is enabled)
4. In-memory cache metadata size (applicable only when SSD cache is enabled)

**Headroom size**:

It is a portion of off-heap memory used for Java VM overhead (metaspace, thread stacks, GC data structures, etc.). This is observed to be around 6% of the heap size (Xmx). To be on the safer side, it can be calculated as 6% of the total LLAP daemon memory size.

For D14 v2, the recommended value is ceil(76 GB x 0.06) ~= **5 GB**.

**Heap size (Xmx)**:

It is the amount of heap memory available for all executors.

Total heap size = Number of executors x 4 GB

For D14 v2, this value is 12 x 4 GB = 48 GB.
When SSD cache is disabled, the in-memory cache is the amount of memory that is left after taking out the headroom size and the heap size from the LLAP daemon container size.
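A minimal sketch of this breakdown for the D14 v2 numbers above (76 GB daemon container, 12 executors at 4 GB each); the resulting in-memory cache size is not stated explicitly in this guide but follows from the formula above.

```python
# Sketch: splitting the LLAP daemon container into headroom, heap, and in-memory cache
# (SSD cache disabled case).
import math

def daemon_memory_breakdown_gb(daemon_container_gb: float, executors: int, per_executor_gb: float = 4.0):
    headroom = math.ceil(daemon_container_gb * 0.06)   # ~6% of daemon size, rounded up
    heap = executors * per_executor_gb                  # Xmx for all executors
    cache = daemon_container_gb - headroom - heap       # what remains becomes in-memory cache
    return headroom, heap, cache

print(daemon_memory_breakdown_gb(76, executors=12))  # (5, 48.0, 23.0) for D14 v2
```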
Cache size calculation differs when SSD cache is enabled. Setting *hive.llap.io.allocator.mmap* = true will enable SSD caching.

When SSD cache is enabled, some portion of the memory will be used to store metadata for the SSD cache. The metadata is stored in memory, and it is expected to be ~10% of the SSD cache size.
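If SSD caching is enabled, part of the in-memory budget goes to cache metadata instead; a sketch, where the SSD cache size itself is a hypothetical input (this guide excerpt does not give the configured value):

```python
# Sketch: estimating in-memory metadata for the SSD cache (~10% of the SSD cache size).

def ssd_cache_metadata_gb(ssd_cache_size_gb: float) -> float:
    """Memory expected to be consumed by SSD cache metadata (~10% of cache size)."""
    return 0.10 * ssd_cache_size_gb

# Hypothetical example: a 400 GB SSD cache would need roughly 40 GB of in-memory metadata.
print(ssd_cache_metadata_gb(400))
```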