
Commit d6dbfa3

stragglers
1 parent ceefa80 commit d6dbfa3

File tree

2 files changed: +1 -26 lines changed


troubleshoot/monitoring/cluster-response-time.md

Lines changed: 1 addition & 7 deletions
@@ -19,10 +19,4 @@ Memory pressure is not the culprit. The **Memory Pressure per Node** metric is a
So what caused the sudden increase in response times? The key to the puzzle lies in the **Number of Requests** metric, which indicates the number of requests that a cluster receives per second. Beginning shortly before 13:32, there was a substantial increase in the number of user requests per second. The number of requests per second continued to rise until the requests began to plateau as your cluster reached its maximum throughput, which in turn caused response times to rise. The number of requests remained at a high level for approximately five minutes, until they started to drop off again around 13:40. Overall, the sustained increase of user requests lasted a bit over 10 minutes, consistent with the slowdown you observed.

-This cluster was sized to handle a certain number of user requests. As the user requests exceeded the maximum throughput that a cluster of this size could sustain, response times increased. To avoid such a slowdown, you either need to control the volume of user requests that reaches the {{es}} cluster or you need to size your cluster to be able to accommodate a sudden increase in user requests.
-
-
-
-% Notes: Lift and shift this one
-% - [ ] ./raw-migrated-files/cloud/cloud-heroku/echscenario_why_are_my_cluster_response_times_suddenly_so_much_worse.md
-% Notes: dupe, redirect
+This cluster was sized to handle a certain number of user requests. As the user requests exceeded the maximum throughput that a cluster of this size could sustain, response times increased. To avoid such a slowdown, you either need to control the volume of user requests that reaches the {{es}} cluster or you need to size your cluster to be able to accommodate a sudden increase in user requests.
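The paragraph kept in this file attributes the slowdown to request volume exceeding the cluster's maximum throughput. As a rough illustration only, the sketch below samples the node stats API twice and derives a cluster-wide search request rate and average latency; the endpoint, lack of authentication, and sampling interval are assumptions for the example, not part of the documented scenario.

```python
# Minimal sketch: estimate cluster-wide search request rate and average latency
# by sampling the node stats API twice. Assumes an Elasticsearch endpoint at
# http://localhost:9200 with no authentication; adjust ES_URL and auth as needed.
import time
import requests

ES_URL = "http://localhost:9200"   # hypothetical endpoint
SAMPLE_INTERVAL = 30               # seconds between the two samples

def search_totals():
    """Return (total queries, total query time in ms) summed across all nodes."""
    stats = requests.get(f"{ES_URL}/_nodes/stats/indices/search", timeout=10).json()
    queries = time_ms = 0
    for node in stats["nodes"].values():
        search = node["indices"]["search"]
        queries += search["query_total"]
        time_ms += search["query_time_in_millis"]
    return queries, time_ms

q1, t1 = search_totals()
time.sleep(SAMPLE_INTERVAL)
q2, t2 = search_totals()

delta_q = q2 - q1
rate = delta_q / SAMPLE_INTERVAL
avg_latency_ms = (t2 - t1) / delta_q if delta_q else 0.0
print(f"~{rate:.1f} search requests/s, ~{avg_latency_ms:.1f} ms average latency")
```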

troubleshoot/monitoring/unavailable-nodes.md

Lines changed: 0 additions & 19 deletions
@@ -324,22 +324,3 @@ Stack Monitoring comes with out-of-the-box rules, but you need to enable them wh
* Nodes unable to join: Fix the {{es}} configuration.
* Nodes unable to join: Contact support.

-
-
-% Notes: dupe
-
-% Internal links rely on the following IDs being on this page (e.g. as a heading ID, paragraph ID, etc):
-
-$$$ec-single-node-deployment-disk-used$$$
-
-$$$ec-multiple-node-deployment-disk-used$$$
-
-$$$ec-single-node-deployment-disk-used$$$
-
-$$$ec-multiple-node-deployment-disk-used$$$
-
-$$$ec-jvm-heap-usage-exceed-allowed-threshold$$$
-
-$$$ec-cpu-usage-exceed-allowed-threshold$$$
-
-$$$ec-nodes-unavailable-missing$$$
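The surrounding page covers unavailable or missing nodes and the out-of-the-box Stack Monitoring rules. As a hedged illustration of the kind of check that page describes, the sketch below compares the nodes the cluster currently reports against an expected list and flags any that are missing; the endpoint and node names are hypothetical and not part of this commit.

```python
# Minimal sketch: flag unavailable or missing nodes by comparing the nodes the
# cluster currently reports against an expected list. The endpoint and node
# names below are illustrative assumptions only.
import requests

ES_URL = "http://localhost:9200"   # hypothetical endpoint
EXPECTED_NODES = {"instance-0000000000", "instance-0000000001"}  # hypothetical names

# Overall cluster health (status and node count).
health = requests.get(f"{ES_URL}/_cluster/health", timeout=10).json()
print(f"Cluster status: {health['status']}, nodes: {health['number_of_nodes']}")

# Names of the nodes currently part of the cluster.
nodes = requests.get(f"{ES_URL}/_cat/nodes?format=json&h=name", timeout=10).json()
present = {n["name"] for n in nodes}

missing = EXPECTED_NODES - present
if missing:
    print("Missing nodes:", ", ".join(sorted(missing)))
else:
    print("All expected nodes are present.")
```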
