[[hotspotting]]
=== Hot spotting
++++
<titleabbrev>Hot spotting</titleabbrev>
++++
:keywords: hot-spotting, hotspot, hot-spot, hot spot, hotspots, hotspotting

Computer link:{wikipedia}/Hot_spot_(computer_programming)[hot spotting]
may occur in {es} when resource utilizations are unevenly distributed across
<<modules-node,nodes>>. Temporary spikes are not usually considered problematic, but
ongoing, significantly elevated utilization on a subset of nodes may lead to
cluster bottlenecks and should be reviewed.

[discrete]
[[detect]]
==== Detect hot spotting

Hot spotting most commonly surfaces as significantly elevated
resource utilization (of `disk.percent`, `heap.percent`, or `cpu`) among a
subset of nodes as reported via <<cat-nodes,cat nodes>>. Individual spikes aren't
necessarily problematic, but if utilization repeatedly spikes or consistently remains
high over time (for example longer than 30 seconds), the resource may be experiencing problematic
hot spotting.

For example, let's showcase two separate plausible issues using cat nodes:

[source,console]
----
GET _cat/nodes?v&s=master,name&h=name,master,node.role,heap.percent,disk.used_percent,cpu
----

Pretend this same output was pulled twice, five minutes apart:

[source,console-result]
----
name   master node.role heap.percent disk.used_percent cpu
node_1 *      hirstm    24           20                95
node_2 -      hirstm    23           18                18
node_3 -      hirstmv   25           90                10
----
// TEST[skip:illustrative response only]

Here we see two significantly elevated utilizations: the elected master node at
`cpu: 95` and a hot node at `disk.used_percent: 90%`. This indicates hot
spotting was occurring on these two nodes, though not necessarily from the same
root cause.

[discrete]
[[causes]]
==== Causes

Historically, clusters experience hot spotting mainly as an effect of hardware,
shard distributions, and/or task load. We'll review these sequentially, in order
of their potential scope of impact.

[discrete]
[[causes-hardware]]
===== Hardware

Here are some common improper hardware setups which may contribute to hot
spotting:

* Resources are allocated non-uniformly. For example, if one hot node is
given half the CPU of its peers. {es} expects all nodes on a
<<data-tiers,data tier>> to share the same hardware profiles or
specifications.

* Resources are consumed by another service on the host, including other
{es} nodes. Refer to our <<dedicated-host,dedicated host>> recommendation.

* Resources experience different network or disk throughputs. For example, if one
node's I/O is lower than its peers. Refer to
<<tune-for-indexing-speed,Use faster hardware>> for more information.

* A JVM that has been configured with a heap larger than 31GB. Refer to <<set-jvm-heap-size>>
for more information.

* The problematic nodes uniquely report <<setup-configuration-memory,memory swapping>>,
as can be spot-checked with the example after this list.
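
For example, one way to spot-check two of the points above, the configured heap
ceilings and whether memory locking took effect, is with the following requests.
This is a hedged sketch rather than an exhaustive hardware audit:

[source,console]
----
GET _cat/nodes?v&h=name,master,node.role,heap.max

GET _nodes?filter_path=**.mlockall
----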

[discrete]
[[causes-shards]]
===== Shard distributions

{es} indices are divided into one or more link:{wikipedia}/Shard_(database_architecture)[shards]
which can sometimes be poorly distributed. {es} accounts for this by <<modules-cluster,balancing shard counts>>
across data nodes. As link:{blog-ref}whats-new-elasticsearch-kibana-cloud-8-6-0[introduced in version 8.6],
{es} by default also enables <<modules-cluster,desired balancing>> to account for ingest load.
A node may still experience hot spotting due to write-heavy indices or due to the
overall number of shards it's hosting.

[discrete]
[[causes-shards-nodes]]
====== Node level

You can check for shard balancing via <<cat-allocation,cat allocation>>, though as of version
8.6, <<modules-cluster,desired balancing>> may no longer expect to fully balance
shard counts, since it also weighs factors like ingest load. Note that both methods
may temporarily show problematic imbalance during
<<cluster-fault-detection,cluster stability issues>>.

For example, let's showcase two separate plausible issues using cat allocation:

[source,console]
----
GET _cat/allocation?v&s=node&h=node,shards,disk.percent,disk.indices,disk.used
----

Which could return:

[source,console-result]
----
node   shards disk.percent disk.indices disk.used
node_1 446    19           154.8gb      173.1gb
node_2 31     52           44.6gb       372.7gb
node_3 445    43           271.5gb      289.4gb
----
// TEST[skip:illustrative response only]

Here we see two significantly unique situations. `node_2` has recently
restarted, so it has a much lower number of shards than all other nodes. This
also explains why its `disk.indices` is much smaller than its `disk.used` while
shards are recovering, as can be seen via <<cat-recovery,cat recovery>>. While `node_2`'s shard
count is low, it may become a write hot spot due to ongoing <<ilm-rollover,ILM
rollovers>>. This is a common root cause of write hot spots covered in the next
section.

The second situation is that `node_3` has a higher `disk.percent` than `node_1`,
even though they hold roughly the same number of shards. This occurs when either
shards are not evenly sized (refer to <<shard-size-recommendation>>) or when
there are a lot of empty indices.
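
One way to check for unevenly sized shards is to sort <<cat-shards,cat shards>> by
store size (empty indices surface at the other end of the sort); the column
selection here is just one reasonable, illustrative choice:

[source,console]
----
GET _cat/shards?v&s=store:desc&h=index,shard,prirep,store,node
----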

Cluster rebalancing based on desired balance does much of the heavy lifting
of keeping nodes from hot spotting. It can be limited either by nodes hitting
<<disk-based-shard-allocation,watermarks>> (refer to <<fix-watermark-errors,fixing disk watermark errors>>) or by a
write-heavy index's total shard count being much lower than the number of nodes being written to.
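
To see whether watermarks are a factor, one hedged way is to review the effective
disk watermark settings:

[source,console]
----
GET _cluster/settings?flat_settings=true&include_defaults=true&filter_path=*.cluster.routing.allocation.disk.watermark*
----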

You can confirm hot spotted nodes via <<cluster-nodes-stats,the nodes stats API>>,
ideally polling it twice over an interval and comparing the differences between the
two sets of stats, rather than polling once and getting cumulative stats for the
node's full <<cluster-nodes-usage,node uptime>>. For example, to check all nodes'
indexing stats:

[source,console]
----
GET _nodes/stats?human&filter_path=nodes.*.name,nodes.*.indices.indexing
----
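
As a hedged sketch of that two-poll approach (it assumes `curl` and `jq` are
available and an unauthenticated cluster is reachable at `localhost:9200`), you
could diff the per-node `index_total` counters across a fixed interval:

[source,sh]
----
# poll the per-node indexing counters twice, 60 seconds apart
curl -s "localhost:9200/_nodes/stats?filter_path=nodes.*.name,nodes.*.indices.indexing.index_total" > poll_1.json
sleep 60
curl -s "localhost:9200/_nodes/stats?filter_path=nodes.*.name,nodes.*.indices.indexing.index_total" > poll_2.json

# print "node_name docs_indexed_during_interval", highest first
jq -rn --slurpfile a poll_1.json --slurpfile b poll_2.json '
  $a[0].nodes as $first
  | [ $b[0].nodes | to_entries[]
      | { name: .value.name,
          delta: (.value.indices.indexing.index_total
                  - ($first[.key].indices.indexing.index_total // 0)) } ]
  | sort_by(-.delta)[]
  | "\(.name) \(.delta)"'
----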

[discrete]
[[causes-shards-index]]
====== Index level

Hot spotted nodes frequently surface via <<cat-thread-pool,cat thread pool>>'s
`write` and `search` queue backups. For example:

[source,console]
----
GET _cat/thread_pool/write,search?v=true&s=n,nn&h=n,nn,q,a,r,c
----

Which could return:

[source,console-result]
----
n      nn     q   a r c
search node_1 3   1 0 1287
search node_2 0   2 0 1159
search node_3 0   1 0 1302
write  node_1 100 3 0 4259
write  node_2 0   4 0 980
write  node_3 1   5 0 8714
----
// TEST[skip:illustrative response only]

Here you can see two significantly unique situations. First, `node_1` has a
severely backed up write queue compared to the other nodes. Second, `node_3` shows
historically completed writes that are double those of any other node. These are both
probably due to either poorly distributed write-heavy indices, or to multiple
write-heavy indices allocated to the same node. Since primary and replica writes
involve roughly the same amount of cluster work, we usually recommend setting
<<total-shards-per-node,`index.routing.allocation.total_shards_per_node`>> to
force indices to spread out, after aligning index shard counts to the total node count.
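
A hedged sketch of applying that setting follows; the index name and the limit of
two shards per node are illustrative, not a recommendation for your cluster:

[source,console]
----
PUT my-write-heavy-index/_settings
{
  "index.routing.allocation.total_shards_per_node": 2
}
----
// TEST[skip:illustrative example only]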

We normally recommend that heavy-write indices have sufficient primary
`number_of_shards` and replica `number_of_replicas` to evenly spread across
their indexing nodes. Alternatively, you can <<cluster-reroute,reroute>> shards to
quieter nodes to relieve the nodes experiencing write hot spotting.
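
A manual reroute might look like the following hedged sketch; the index, shard
number, and node names are illustrative only:

[source,console]
----
POST _cluster/reroute
{
  "commands": [
    {
      "move": {
        "index": "my-write-heavy-index",
        "shard": 0,
        "from_node": "node_1",
        "to_node": "node_2"
      }
    }
  ]
}
----
// TEST[skip:illustrative example only]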

If it's not obvious which indices are problematic, you can introspect further via
<<indices-stats,the index stats API>> by running:

[source,console]
----
GET _stats?level=shards&human&expand_wildcards=all&filter_path=indices.*.total.indexing.index_total
----

For more advanced analysis, you can poll for shard-level stats,
which lets you compare joint index-level and node-level stats. This analysis
doesn't account for node restarts and/or shard reroutes, but serves as an
overview:

[source,console]
----
GET _stats/indexing,search?level=shards&human&expand_wildcards=all
----

You can, for example, use the third-party link:https://stedolan.github.io/jq[JQ tool]
to process the output saved as `indices_stats.json`:

[source,sh]
----
cat indices_stats.json | jq -rc ['.indices|to_entries[]|.key as $i|.value.shards|to_entries[]|.key as $s|.value[]|{node:.routing.node[:4], index:$i, shard:$s, primary:.routing.primary, size:.store.size, total_indexing:.indexing.index_total, time_indexing:.indexing.index_time_in_millis, total_query:.search.query_total, time_query:.search.query_time_in_millis } | .+{ avg_indexing: (if .total_indexing>0 then (.time_indexing/.total_indexing|round) else 0 end), avg_search: (if .total_query>0 then (.time_query/.total_query|round) else 0 end) }'] > shard_stats.json

# show top written-to shard simplified stats which contain their index and node references
cat shard_stats.json | jq -rc 'sort_by(-.avg_indexing)[]' | head
----
| 214 | + |
| 215 | +[discrete] |
| 216 | +[[causes-tasks]] |
| 217 | +===== Task loads |
| 218 | + |
| 219 | +Shard distribution problems will most-likely surface as task load as seen |
| 220 | +above in the <<cat-thread-pool,cat thread pool>> example. It is also |
| 221 | +possible for tasks to hot spot a node either due to |
| 222 | +individual qualitative expensiveness or overall quantitative traffic loads. |
| 223 | + |
| 224 | +For example, if <<cat-thread-pool,cat thread pool>> reported a high |
| 225 | +queue on the `warmer` <<modules-threadpool,thread pool>>, you would |
| 226 | +look-up the effected node's <<cluster-nodes-hot-threads,hot threads>>. |
| 227 | +Let's say it reported `warmer` threads at `100% cpu` related to |
| 228 | +`GlobalOrdinalsBuilder`. This would let you know to inspect |
| 229 | +<<eager-global-ordinals,field data's global ordinals>>. |
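
Pulling those hot threads might look like the following hedged sketch; the node
name comes from the earlier examples and the thread count is illustrative:

[source,console]
----
GET _nodes/node_1/hot_threads?threads=5
----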
| 230 | + |
| 231 | +Alternatively, let's say <<cat-nodes,cat nodes>> shows a hot spotted master node |
| 232 | +and <<cat-thread-pool,cat thread pool>> shows general queuing across nodes. |
| 233 | +This would suggest the master node is overwhelmed. To resolve |
| 234 | +this, first ensure <<high-availability-cluster-small-clusters,hardware high availability>> |
| 235 | +setup and then look to ephemeral causes. In this example, |
| 236 | +<<cluster-nodes-hot-threads,the nodes hot threads API>> reports multiple threads in |
| 237 | +`other` which indicates they're waiting on or blocked by either garbage collection |
| 238 | +or I/O. |
| 239 | + |
| 240 | +For either of these example situations, a good way to confirm the problematic tasks |
| 241 | +is to look at longest running non-continuous (designated `[c]`) tasks via |
| 242 | +<<cat-tasks,cat task management>>. This can be supplemented checking longest |
| 243 | +running cluster sync tasks via <<cat-pending-tasks,cat pending tasks>>. Using |
| 244 | +a third example, |
| 245 | + |
| 246 | +[source,console] |
| 247 | +---- |
| 248 | +GET _cat/tasks?v&s=time:desc&h=type,action,running_time,node,cancellable |
| 249 | +---- |
| 250 | + |
| 251 | +This could return: |
| 252 | + |
| 253 | +[source,console-result] |
| 254 | +---- |
| 255 | +type action running_time node cancellable |
| 256 | +direct indices:data/read/eql 10m node_1 true |
| 257 | +... |
| 258 | +---- |
| 259 | +// TEST[skip:illustrative response only] |
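
To pull the running task's details, including its `description`, you can query
<<tasks,the task management API>>. The action filter below is a hedged sketch based
on the EQL action shown above:

[source,console]
----
GET _tasks?detailed=true&actions=indices:data/read/eql*
----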
| 260 | + |
| 261 | +This surfaces a problematic <<eql-search-api,EQL query>>. We can gain |
| 262 | +further insight on it via <<tasks,the task management API>>. Its response |
| 263 | +contains a `description` that reports this query: |
| 264 | + |
| 265 | +[source,eql] |
| 266 | +---- |
| 267 | +indices[winlogbeat-*,logs-window*], sequence by winlog.computer_name with maxspan=1m\n\n[authentication where host.os.type == "windows" and event.action:"logged-in" and\n event.outcome == "success" and process.name == "svchost.exe" ] by winlog.event_data.TargetLogonId |
| 268 | +---- |
| 269 | + |
| 270 | +This lets you know which indices to check (`winlogbeat-*,logs-window*`), as well |
| 271 | +as the <<eql-search-api,EQL search>> request body. Most likely this is |
| 272 | +link:{security-guide}/es-overview.html[SIEM related]. |
| 273 | +You can combine this with <<enable-audit-logging,audit logging>> as needed to |
| 274 | +trace the request source. |
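
Because the task was reported as cancellable, you could also stop it while you
investigate. The action filter below is a hedged sketch; adjust it to the task you
actually intend to cancel:

[source,console]
----
POST _tasks/_cancel?actions=indices:data/read/eql
----
// TEST[skip:illustrative example only]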