CPU and memory boundedness often interact in important ways. If CPU-bound activity is reaching its limit, memory pressure will likely increase because {{kib}} has less time for activities like garbage collection. If memory-bound activity is reaching its limit, there may be more CPU work to free claimed memory, increasing CPU pressure.
::::

Please note that there is a [separate guide for sizing Kibana for reporting use cases](./kibana-reporting-production-considerations.md).
### Answer the following questions before scaling Kibana up or down [_before_sizing_kibana]
#### Is the {{es}} cluster correctly sized?
Follow [the production guidance for {{es}} first](./elasticsearch-in-production-environments.md). {{es}} is the search engine and backing database of {{kib}}. Any performance issues in {{es}} will manifest in {{kib}}. Additionally, while we try to mitigate this possibility, {{kib}} may be sending requests to {{es}} that degrade performance if {{es}} is underprovisioned.
#### What requests is {{kib}} sending to {{es}}?
In user interfaces like Dashboards or Discover, you can inspect the full query that {{kib}} sends to {{es}}. This is a good way to gauge the volume of data and the amount of work a {{kib}} visualization or dashboard creates for {{es}}. Dashboards with many visualizations generate higher load for both {{es}} and {{kib}}.
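
If you want a rough sense of that cost outside the {{kib}} UI, one option is to replay the request you copied from {{kib}} directly against {{es}} and look at how long it took. The sketch below is an illustration only: the endpoint, index pattern, credentials, and example aggregation are placeholder assumptions, and you would substitute the actual request body copied from {{kib}}.

```typescript
// Replay a query copied from a Kibana panel against Elasticsearch and report
// how much work it caused. All values below are placeholders for illustration.
const ES_URL = 'https://localhost:9200';      // hypothetical Elasticsearch endpoint
const INDEX = 'my-logs-*';                    // hypothetical index pattern
const API_KEY = process.env.ES_API_KEY ?? ''; // supply your own credentials

// An example aggregation similar to what a date-histogram visualization might
// send; paste the real request body copied from Kibana here instead.
const queryBody = {
  size: 0,
  query: { range: { '@timestamp': { gte: 'now-24h' } } },
  aggs: { over_time: { date_histogram: { field: '@timestamp', fixed_interval: '30m' } } },
};

async function replay(): Promise<void> {
  const res = await fetch(`${ES_URL}/${INDEX}/_search`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', Authorization: `ApiKey ${API_KEY}` },
    body: JSON.stringify(queryBody),
  });
  const result = await res.json();
  // "took" is the time Elasticsearch spent executing the search, in milliseconds.
  // Multiplying it by the number of panels on a dashboard and the number of
  // concurrent viewers gives a rough sense of the load the dashboard generates.
  console.log(`took: ${result.took}ms, timed_out: ${result.timed_out}`);
}

replay().catch(console.error);
```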
### A simple sizing strategy
As a general starting point, {{kib}} on **1 CPU** and **1.5GB** of memory should comfortably serve a set of 10 concurrent users performing analytics activities like browsing dashboards. If you are experiencing performance issues, doubling the provisioned resources per 10 concurrent users is a simple and safe strategy for ensuring {{kib}} is not resource starved.
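
To make that heuristic concrete, the sketch below encodes one possible reading of the guidance above. The numbers (1 CPU and 1.5GB per 10 concurrent users, doubled when you see performance issues) come from this section; the function name, rounding, and interface are illustrative assumptions rather than an official sizing formula.

```typescript
// A rough, illustrative sizing heuristic based on the guidance above:
// roughly 1 CPU and 1.5 GB of memory per 10 concurrent analytics users,
// doubled if you are observing performance issues.
interface KibanaSizeEstimate {
  cpus: number;
  memoryGb: number;
}

function estimateKibanaSize(concurrentUsers: number, seeingPerformanceIssues = false): KibanaSizeEstimate {
  const unitsOfTenUsers = Math.max(1, Math.ceil(concurrentUsers / 10));
  const multiplier = seeingPerformanceIssues ? 2 : 1;
  return {
    cpus: unitsOfTenUsers * multiplier,
    memoryGb: unitsOfTenUsers * 1.5 * multiplier,
  };
}

// 25 concurrent users with no observed issues -> { cpus: 3, memoryGb: 4.5 }
console.log(estimateKibanaSize(25));
// 10 concurrent users and Kibana feels sluggish -> { cpus: 2, memoryGb: 3 }
console.log(estimateKibanaSize(10, true));
```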
**{{ece}}, {{ech}} and {{eck}}** users can adjust {{kib}}'s memory by viewing their deployment and editing the {{kib}} instance's resource configuration. Note that the available size increments are predetermined.

**Self-managed** users must provision memory to the host that {{kib}} is running on as well as configure the allocated heap; see [the guidance on configuring {{kib}} memory](./kibana-configure-memory.md). **Note:** Node.js suggests allocating 80% of available host memory to heap, assuming that Kibana is the only server process running on the (virtual) host. This leaves memory available for other activities, for example, allocating HTTP sockets.
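
As a rough illustration of that 80% rule of thumb, the sketch below derives a suggested heap size from the host's total memory and prints it in the form of Node.js's `--max-old-space-size` option (in megabytes). The 80% ratio comes from the note above; the script itself is an assumption-laden example, and the linked memory guide remains the authoritative reference for how to apply the setting.

```typescript
// Compute a suggested Node.js heap limit for a host dedicated to Kibana,
// following the ~80% rule of thumb mentioned above. Illustrative only:
// lower the ratio if other processes share the host.
import { totalmem } from 'node:os';

const HEAP_RATIO = 0.8; // leave ~20% of memory for sockets, buffers, and other overhead

const totalMemMb = Math.floor(totalmem() / (1024 * 1024));
const suggestedHeapMb = Math.floor(totalMemMb * HEAP_RATIO);

console.log(`Host memory: ${totalMemMb} MB`);
console.log(`Suggested Node.js heap limit: --max-old-space-size=${suggestedHeapMb}`);
```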
Please note that there is a [separate guide for sizing Kibana for reporting use cases](./kibana-reporting-production-considerations.md).

::::{important}
On [serverless](../deploy/elastic-cloud/serverless.md), scaling and configuring {{kib}} is fully managed for you.
::::
#### Monitoring [_monitoring-kibana-metrics]
To understand the impact of your usage patterns on **a {{kib}} instance**, use the Stack Monitoring feature. See [the guide for {{kib}} deployed on {{ech}} or {{ece}}](../monitor/stack-monitoring/ece-ech-stack-monitoring.md) or [the guide for self-managed {{kib}}](../monitor/stack-monitoring/kibana-monitoring-self-managed.md).

The rest of this guide assumes you have visibility into the following important metrics for a {{kib}} instance (a sketch for spot-checking them follows the list):
1. Event loop delay (ELD) in milliseconds - a Node.js concept that roughly translates to the number of milliseconds by which event processing is delayed due to CPU-intensive activities
2. Memory size in bytes - the number of bytes currently allocated on the heap
3. HTTP connections - the number of sockets that the Kibana server has open
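
Stack Monitoring is the recommended way to track these metrics over time, but for a quick spot check you can also look at {{kib}}'s own status endpoint (`GET /api/status`), which exposes a metrics section. The sketch below is a hedged illustration: the URL, credentials, and especially the metric field paths are assumptions that vary between {{kib}} versions, so verify them against the actual response from your deployment.

```typescript
// Fetch Kibana's status API and print the metrics discussed above.
// The URL, credentials, and exact response field paths are assumptions for
// illustration; inspect your own /api/status payload to confirm them.
const KIBANA_URL = process.env.KIBANA_URL ?? 'http://localhost:5601';
const AUTH = 'Basic ' + Buffer.from('elastic:changeme').toString('base64'); // placeholder credentials

async function printKibanaMetrics(): Promise<void> {
  const res = await fetch(`${KIBANA_URL}/api/status`, { headers: { Authorization: AUTH } });
  const status = await res.json();
  const metrics = status?.metrics ?? {};

  console.log('event loop delay (ms):', metrics.process?.event_loop_delay);
  console.log('heap used (bytes):    ', metrics.process?.memory?.heap?.used_in_bytes);
  console.log('open HTTP connections:', metrics.concurrent_connections);
}

printKibanaMetrics().catch(console.error);
```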
##### CPU [kibana-traffic-load-cpu-sizing]
Event loop delay (ELD) is an important metric for understanding whether Kibana is engaged in CPU-bound activity.

**As a general target, ELD should be below 200ms 95% of the time.** Higher delays may mean {{kib}} is CPU-starved. Sporadic increases above 200ms may mean that Kibana is periodically processing CPU-intensive activities like large responses from Elasticsearch. It is important to consider the impact of ELD on user experience: if users can work in {{kib}} without the frustration that comes from a blocked CPU, provisioning additional CPU resources will not have much impact. However, monitoring ELD over time is a solid strategy for ensuring your Kibana is not exhausting its CPU resources.
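
To make the ELD concept more tangible, the following sketch uses Node.js's built-in `perf_hooks` module to sample event loop delay in an arbitrary Node.js process and compare the 95th percentile against the 200ms target above. It illustrates the metric itself and is independent of how {{kib}} or Stack Monitoring collect it.

```typescript
// Sample event loop delay (ELD) and compare the 95th percentile against the
// ~200ms target discussed above.
import { monitorEventLoopDelay } from 'node:perf_hooks';

const TARGET_P95_MS = 200;
const histogram = monitorEventLoopDelay({ resolution: 20 }); // sample every 20ms
histogram.enable();

setInterval(() => {
  const p95Ms = histogram.percentile(95) / 1e6; // histogram values are in nanoseconds
  const verdict = p95Ms < TARGET_P95_MS ? 'OK' : 'possible CPU starvation';
  console.log(`event loop delay p95: ${p95Ms.toFixed(1)}ms (${verdict})`);
  histogram.reset(); // start a fresh window for the next interval
}, 10_000);
```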
**{{ece}}, {{ech}} and {{eck}}** users can adjust {{kib}}'s CPU and memory by viewing their deployment and editing the {{kib}} instance's resource configuration.

**Self-managed** users are responsible for managing CPU.
##### Memory [kibana-traffic-load-memory-sizing]
Heap size relative to the heap size limit is an important metric to track. If {{kib}}'s heap size grows beyond the heap limit, {{kib}} will crash. By monitoring heap size you can help ensure that {{kib}} has enough memory available.
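
As an illustration of what the heap metric and its limit mean, the sketch below reads V8's heap statistics from a Node.js process and reports how close heap usage is to the configured limit. The 90% warning threshold is an arbitrary example, not an official recommendation.

```typescript
// Report heap usage relative to the V8 heap limit for a Node.js process.
// If used_heap_size approaches heap_size_limit, the process risks crashing
// with an out-of-memory error.
import { getHeapStatistics } from 'node:v8';

const WARN_RATIO = 0.9; // arbitrary example threshold

const { used_heap_size, heap_size_limit } = getHeapStatistics();
const usedMb = (used_heap_size / 1024 / 1024).toFixed(1);
const limitMb = (heap_size_limit / 1024 / 1024).toFixed(1);
const ratio = used_heap_size / heap_size_limit;

console.log(`heap used: ${usedMb} MB of ${limitMb} MB (${(ratio * 100).toFixed(1)}%)`);
if (ratio > WARN_RATIO) {
  console.log('Warning: heap usage is close to the limit; consider increasing memory.');
}
```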
**{{ece}}, {{ech}} and {{eck}}** users can adjust {{kib}}'s CPU and memory by viewing their deployment and editing the {{kib}} instance's resource configuration.

**Self-managed** users must provision memory to the host that {{kib}} is running on as well as configure the allocated heap; see [the guidance on configuring {{kib}} memory](./kibana-configure-memory.md). **Note:** Node.js suggests allocating 80% of available memory to heap. This leaves memory available for other activities, for example, allocating HTTP sockets.