deploy-manage/production-guidance/kibana-traffic-scaling-considerations.md (8 additions & 6 deletions)
@@ -36,7 +36,7 @@ A valuable strategy known as load balancing helps to mitigate this bursty nature
 CPU and memory boundedness often interact in important ways. If CPU-bound activity is reaching its limit, memory pressure will likely increase as {{kib}} has less time for activities like garbage collection. If memory-bound activity is reaching its limit, there may be more CPU work to free claimed memory, increasing CPU pressure.
 ::::

-### Answer the following questions before scaling Kibana up or down [_before_sizing_kibana]
+### Answer the following questions before sizing Kibana up or down [_before_sizing_kibana]

 #### Is the {{es}} cluster correctly sized?
@@ -50,7 +50,7 @@ In user interfaces like Dashboards or Discover, one can see the full query that
 Follow this strategy if you know the max number of expected concurrent users.

-Start {{kib}} on **1 CPU** and **1.5GB** of memory. This should comfortably serve a set of 10 concurrent users performing analytics activities like browsing dashboards. If you are experiencing performance issues, adding an additional **0.5 CPUs** and **1.5GB** per 10 concurrent users should ensure {{kib}} is not resource-starved for common analytics usage.
+Start {{kib}} on **1 CPU** and **2GB** of memory. This should comfortably serve a set of 10 concurrent users performing analytics activities like browsing dashboards. If you are experiencing performance issues, adding an additional **0.5 CPUs** and **2GB** per 10 concurrent users should ensure {{kib}} is not resource-starved for common analytics usage.

 ::::{important}
 This advice does not apply to scaling {{kib}} for task manager. If you intend to use {{kib}} alerting capabilities, see [task manager scaling guidance](./kibana-task-manager-scaling-considerations.md).
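To make the sizing arithmetic in this hunk concrete, here is a minimal sketch (a hypothetical helper, `suggestKibanaSizing`, not part of {{kib}}) that computes a starting allocation from an expected concurrent-user count, assuming the revised figures in this diff: a 1 CPU / 2GB baseline covering the first 10 users, plus 0.5 CPU / 2GB for each additional block of 10 users.

```ts
// Hypothetical sizing helper illustrating the rule of thumb above:
// baseline 1 CPU / 2 GB covers the first 10 concurrent analytics users;
// each additional block of 10 users adds 0.5 CPU and 2 GB.
interface KibanaSizing {
  cpus: number;
  memoryGb: number;
}

function suggestKibanaSizing(expectedConcurrentUsers: number): KibanaSizing {
  const extraBlocks = Math.max(0, Math.ceil(expectedConcurrentUsers / 10) - 1);
  return {
    cpus: 1 + 0.5 * extraBlocks,
    memoryGb: 2 + 2 * extraBlocks,
  };
}

// Example: 35 concurrent users -> 3 extra blocks -> 2.5 CPUs, 8 GB.
console.log(suggestKibanaSizing(35));
```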
@@ -70,7 +70,7 @@ On [serverless](../deploy/elastic-cloud/serverless.md) scaling and configuring {
 #### Monitoring [_monitoring-kibana-metrics]

-In order to understand the impact of your usage patterns on **a {{kib}} instance** use the Stack Monitoring feature. See [the guide for {{kib}} deployed on {{ech}} or {{ece}}](../monitor/stack-monitoring/ece-ech-stack-monitoring.md) or the [the guide for self-managed {{kib}}](../monitor/stack-monitoring/kibana-monitoring-self-managed.md).
+In order to understand the impact of your usage patterns on **a single {{kib}} instance**, use the Stack Monitoring feature. See [the guide for {{kib}} deployed on {{ech}} or {{ece}}](../monitor/stack-monitoring/ece-ech-stack-monitoring.md) or [the guide for self-managed {{kib}}](../monitor/stack-monitoring/kibana-monitoring-self-managed.md).

 The rest of this guide will assume you have visibility into the following important metrics for a {{kib}} instance:
@@ -82,11 +82,13 @@ The rest of this guide will assume you have visibility into the following import
 Event loop delay (ELD) is an important metric for understanding whether Kibana is engaged in CPU-bound activity.

-**As a general target, ELD should be below 200ms 95% of the time**. Higher delays may mean {{kib}} is CPU-starved. Sporadic increases above 200ms may mean that Kibana is periodically processing CPU-intensive activities like large responses from Elasticsearch, whereas consistently high ELD may mean Kibana is struggling to service tasks and requests.
+**As a general target, ELD should be at or below ~200ms 95% of the time**. Higher delays may mean {{kib}} is CPU-starved. Sporadic increases above 200ms may mean that Kibana is periodically processing CPU-intensive activities like large responses from Elasticsearch, whereas consistently high ELD may mean Kibana is struggling to service tasks and requests.

-Consider the impact of ELD on user experience. If users are able to use {{kib}} without the frustration that comes from a blocked CPU, provisioning additional CPU resources will not be impactful. Monitoring ELD over time is a solid strategy for knowing when additional CPU resource is needed.
+Before increasing CPU resources, consider the impact of ELD on user experience. If users are able to use {{kib}} without the frustration that comes from a blocked CPU, provisioning additional CPU resources will not be impactful, although having spare resources in case of unexpected spikes is useful.

-**{{ece}}, {{ech}}, and {{eck}}** users can adjust {{kib}}'s CPU and memory by viewing their deployment and editing the {{kib}} instance's resource configuration.
+Monitoring {{kib}}'s ELD over time is a solid strategy for knowing when additional CPU resources are needed based on your usage patterns.
+
+**{{ece}}, {{ech}}, and {{eck}}** users can adjust {{kib}}'s CPU and memory by viewing their deployment and editing the {{kib}} instance's resource configuration in predefined increments.

 **Self-managed** users are responsible for managing CPU.
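To make the ELD target in this hunk concrete, here is a minimal standalone sketch using Node.js's built-in `perf_hooks.monitorEventLoopDelay` API. It samples the event loop delay histogram and flags when the 95th percentile exceeds the ~200ms target; this is illustrative only and is not {{kib}}'s own monitoring code.

```ts
// Standalone sketch: sample the event loop delay histogram that Node.js
// exposes via perf_hooks, and flag when the 95th percentile exceeds the
// ~200ms target discussed above. Illustrative, not Kibana code.
import { monitorEventLoopDelay } from 'node:perf_hooks';

const TARGET_P95_MS = 200;
const histogram = monitorEventLoopDelay({ resolution: 20 }); // sample every 20ms
histogram.enable();

setInterval(() => {
  const p95Ms = histogram.percentile(95) / 1e6; // histogram reports nanoseconds
  if (p95Ms > TARGET_P95_MS) {
    console.warn(`p95 event loop delay ${p95Ms.toFixed(1)}ms exceeds the ${TARGET_P95_MS}ms target`);
  } else {
    console.log(`p95 event loop delay ${p95Ms.toFixed(1)}ms is within target`);
  }
  histogram.reset(); // start a fresh window for the next interval
}, 30_000);
```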