deploy-manage/production-guidance/kibana-traffic-scaling-considerations.md
### A simple sizing strategy
If you know the maximum number of expected concurrent users, start {{kib}} on **1 CPU** and **1.5GB** of memory. This should comfortably serve a set of 10 concurrent users performing analytics activities like browsing dashboards. If you are experiencing performance issues, adding an additional **0.5 CPUs** and **1.5GB** of memory per 10 concurrent users should ensure {{kib}} is not resource starved for common analytics usage.
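As a rough illustration, the rule of thumb above can be expressed as a quick calculation. This is only a sketch of the guidance in this section; real sizing should be validated against observed load:

```python
import math

def kibana_sizing(concurrent_users: int) -> tuple[float, float]:
    """Estimate (CPUs, memory in GB) for a Kibana instance serving
    common analytics traffic, per the rule of thumb above: start at
    1 CPU / 1.5GB for up to 10 concurrent users, then add 0.5 CPU
    and 1.5GB for each additional group of 10 users."""
    groups = max(1, math.ceil(concurrent_users / 10))
    cpus = 1 + 0.5 * (groups - 1)
    memory_gb = 1.5 * groups
    return cpus, memory_gb

print(kibana_sizing(10))  # (1.0, 1.5)
print(kibana_sizing(35))  # 4 groups of 10 -> (2.5, 6.0)
```
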
::::{important}
This advice does not apply to scaling {{kib}} for task manager. If you intend to use {{kib}} alerting capabilities, see [task manager scaling guidance](./kibana-task-manager-scaling-considerations.md).
::::
**{{ece}}, {{ech}} and {{eck}}** users can adjust {{kib}}'s memory by viewing their deployment and editing the {{kib}} instance's resource configuration. Note: size increments are predetermined and may not fit this simple sizing strategy exactly.
**Self-managed** users must provision memory to the host that {{kib}} is running on as well as configure the allocated heap; see [the guidance on configuring {{kib}} memory](./kibana-configure-memory.md). **Note:** Node.js suggests allocating 80% of available host memory to heap, assuming that Kibana is the only server process running on the (virtual) host. This leaves memory available for other activities, for example allowing HTTP sockets to be allocated.
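The 80% rule above can be sketched as a small helper. This is illustrative only: on self-managed installs the heap is typically bounded via Node.js's `--max-old-space-size` option (in MB), and the right fraction depends on what else runs on the host:

```python
def max_old_space_size_mb(host_memory_mb: int, fraction: float = 0.8) -> int:
    """Suggest a Node.js heap limit (in MB) as ~80% of host memory,
    leaving the remainder for sockets, buffers, and the OS.
    The 0.8 default follows the Node.js guidance cited above."""
    return int(host_memory_mb * fraction)

# e.g. a host with 2GB of memory: 2048 * 0.8 = 1638 MB of heap
print(max_old_space_size_mb(2048))  # 1638
```
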
The rest of this guide will assume you have visibility into the following important metrics for a {{kib}} instance:
1. Event loop delay (ELD) in milliseconds - a Node.js concept that roughly translates to the number of milliseconds by which processing of events is delayed due to CPU-intensive activities
2. Heap size in bytes - the number of bytes currently held in memory dedicated to {{kib}}'s heap space
3. HTTP connections - the number of sockets that the Kibana server has open
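If you collect these metrics from Kibana's status endpoint, extracting them might look like the sketch below. The field names here are an assumption based on common `/api/status` payloads and may differ across versions; check your own deployment's response:

```python
def extract_kibana_metrics(status: dict) -> dict:
    """Pull the three metrics listed above out of a Kibana status
    payload. NOTE: the field paths below are assumptions and may
    not match every Kibana version's /api/status response."""
    metrics = status["metrics"]
    return {
        "event_loop_delay_ms": metrics["process"]["event_loop_delay"],
        "heap_used_bytes": metrics["process"]["memory"]["heap"]["used_in_bytes"],
        "http_connections": metrics["concurrent_connections"],
    }

# Example with a hand-built payload shaped like the assumption above:
sample = {
    "metrics": {
        "concurrent_connections": 12,
        "process": {
            "event_loop_delay": 3.2,
            "memory": {"heap": {"used_in_bytes": 210_000_000}},
        },
    }
}
print(extract_kibana_metrics(sample))
```
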
##### CPU [kibana-traffic-load-cpu-sizing]
Event loop delay (ELD) is an important metric for understanding whether Kibana is engaged in CPU-bound activity.
**As a general target, ELD should be below 200ms 95% of the time.** Higher delays may mean {{kib}} is CPU starved. Sporadic increases above 200ms may mean that Kibana is periodically processing CPU-intensive activities like large responses from Elasticsearch, whereas consistently high ELD may mean Kibana is struggling to service tasks and requests.
Consider the impact of ELD on user experience. If users are able to use {{kib}} without the frustration that comes from a blocked CPU, provisioning additional CPU resources will not be impactful. Monitoring ELD over time is a solid strategy for knowing when additional CPU resources are needed.
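The "below 200ms 95% of the time" target can be turned into a simple check over collected ELD samples. This is a sketch using a nearest-rank percentile; in practice you would feed it samples from your monitoring system:

```python
import math

def eld_within_target(samples_ms: list[float], threshold_ms: float = 200.0,
                      quantile: float = 0.95) -> bool:
    """Return True if the given quantile (default p95) of event loop
    delay samples is below the threshold (default 200ms)."""
    if not samples_ms:
        return True
    ordered = sorted(samples_ms)
    # nearest-rank method: the sample at the 95th-percentile position
    idx = max(0, math.ceil(quantile * len(ordered)) - 1)
    return ordered[idx] < threshold_ms

healthy = [20.0] * 95 + [350.0] * 5   # rare spikes only: still on target
starved = [250.0] * 50 + [20.0] * 50  # consistently high: off target
print(eld_within_target(healthy))  # True
print(eld_within_target(starved))  # False
```
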
**{{ece}}, {{ech}} and {{eck}}** users can adjust {{kib}}'s CPU and memory by viewing their deployment and editing the {{kib}} instance's resource configuration.