Commit 335483c

update advice
1 parent be11719 commit 335483c

File tree: 1 file changed (+10 -10)

deploy-manage/production-guidance/kibana-traffic-scaling-considerations.md

Lines changed: 10 additions & 10 deletions
@@ -26,10 +26,9 @@ On [serverless](../deploy/elastic-cloud/serverless.md) scaling {{kib}} is fully
 2. Request and response size: how large (usually measured in bytes) are the requests and responses you expect {{kib}} to service, which is largely **memory-bound**
 
 ::::{important}
-
 The nature of traffic is not only diverse, but also unpredictable. Traffic to {{kib}} often comes in short bursts or spikes that can overwhelm an underprovisioned {{kib}}. In production environments, an overwhelmed {{kib}} will typically manifest as 502 or 503 error responses from {{kib}}.
 
-A valuable strategy known as load balancing helps to mitigate this bursty nature of traffic by horizontally scaling your {{kib}} deployments and improving {{kib}}'s availability. See the guide on [load balancing traffic](./kibana-load-balance-traffic.md). The rest of this guide will focus on provisioning CPU and memory (also known as vertically scaling) a single Kibana for handling your traffic load, but is not a replacement for load balancing traffic.
+Load balancing helps to mitigate this bursty nature of traffic by horizontally scaling your {{kib}} deployments and improving {{kib}}'s availability. See the guide on [load balancing traffic](./kibana-load-balance-traffic.md).
 ::::
 
 ::::{important}
@@ -50,13 +49,18 @@ In user interfaces like Dashboards or Discover, one can see the full query that
 
 Follow this strategy if you know the max number of expected concurrent users.
 
-Start {{kib}} on **1 CPU** and **2GB** of memory. This should comfortably serve a set of 10 concurrent users performing analytics activities like browsing dashboards. If you are experiencing performance issues, adding an additional **0.5 CPUs** and **2GB** per 10 concurrent users should ensure {{kib}} is not resource-starved for common analytics usage.
+Start {{kib}} on **2.1 vCPU** and **2GB** of memory. This should comfortably serve a set of 10 concurrent users performing analytics activities like browsing dashboards. If you are experiencing performance issues, adding an additional **2.1 vCPUs** and **2GB** per 10 concurrent users is a safe _minimum_ to ensure {{kib}} is not resource-starved for common analytics use cases. This is known as **vertical scaling** and should typically be employed up to a maximum of **8.4 vCPU** and **8GB** of memory. In combination, it is recommended to employ **horizontal scaling** as outlined in the guide on [load balancing traffic](./kibana-load-balance-traffic.md).
+
+For example, according to this simple strategy, if you want:
+
+1. {{kib}} to serve 50 concurrent users, you would need **10.5 vCPU** and **10GB** of memory which, on {{ech}} and {{ece}}, translates to: **2 {{kib}} instances of 8.4 vCPU and 8GB memory each** or **3 {{kib}} instances of 8.4 vCPU and 4GB memory each**
+2. {{kib}} to serve 100 concurrent users, you would need **25.2 vCPU** and **20GB** of memory which translates to: **3 {{kib}} instances of 8.4 vCPU and 8GB memory each**.
 
 ::::{important}
 This advice does not apply to scaling {{kib}} for task manager. If you intend to use {{kib}} alerting capabilities see [task manager scaling guidance](./kibana-task-manager-scaling-considerations.md).
 ::::
 
-**{{ece}}, {{ech}}, and {{eck}}** users can adjust {{kib}}'s memory by viewing their deployment and editing the {{kib}} instance's resource configuration. Note: size increments are predetermined and may not fit this simple sizing strategy exactly.
+**{{ece}}, {{ech}}, and {{eck}}** users can adjust {{kib}}'s memory by viewing their deployment and editing the {{kib}} instance's resource configuration. Note: size increments are predetermined.
 
 **Self-managed** users must provision memory to the host that {{kib}} is running on as well as configure allocated heap. See [the guidance on configuring {{kib}} memory](./kibana-configure-memory.md). **Note:** Node.js suggests allocating 80% of available host memory to heap, assuming that Kibana is the only server process running on the (virtual) host. This allows for memory resources to be used for other activities, for example: allowing for HTTP sockets to be allocated.
 
@@ -68,6 +72,8 @@ This advice does not apply to scaling {{kib}} for task manager. If you intend to
 On [serverless](../deploy/elastic-cloud/serverless.md) scaling and configuring {{kib}} is fully managed for you.
 ::::
 
+Building on the simple strategy outlined above, we can make more precise adjustments to resource allocations. **Self-managed** users manage their CPU and memory allocations independently and can employ the strategy below to further tailor resources based on performance metrics.
+
 #### Monitoring [_monitoring-kibana-metrics]
 
 In order to understand the impact of your usage patterns on **a single {{kib}} instance** use the Stack Monitoring feature. See [the guide for {{kib}} deployed on {{ech}} or {{ece}}](../monitor/stack-monitoring/ece-ech-stack-monitoring.md) or the [the guide for self-managed {{kib}}](../monitor/stack-monitoring/kibana-monitoring-self-managed.md).
@@ -88,15 +94,9 @@ Before increasing CPU resources, consider the impact of ELD on user experience.
 
 Monitoring {{kib}}'s ELD over time is a solid strategy for knowing when additional CPU resource is needed based on your usage patterns.
 
-**{{ece}}, {{ech}}, and {{eck}}** users can adjust {{kib}}'s CPU and memory by viewing their deployment and editing the {{kib}} instance's resource configuration in predefined increments.
-
-**Self-managed** users are responsible for managing CPU.
-
 ##### Memory [kibana-traffic-load-memory-sizing]
 
 Heap size is an important metric to track. If {{kib}}'s heap size grows beyond the heap limit, {{kib}} will crash. By monitoring heap size, you can help ensure that {{kib}} has enough memory available.
 
-**{{ece}}, {{ech}}, and {{eck}}** users can adjust {{kib}}'s CPU and memory by viewing their deployment and editing the {{kib}} instance's resource configuration.
-
 **Self-managed** users must provision memory to the host that {{kib}} is running on as well as configure allocated heap. See [the guidance on configuring {{kib}} memory](./kibana-configure-memory.md). **Note:** Node.js suggests allocating 80% of available memory to heap. This allows for memory resources to be used for other activities, for example: allowing for HTTP sockets to be allocated.

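The "monitor ELD over time" advice above lends itself to a simple rolling check. The sketch below is one hypothetical way to frame it, not something from the guide: the 100 ms threshold, the 60-sample window, and the use of a 95th percentile are all assumptions, and the delay samples would come from whatever monitoring source you already collect (for example Stack Monitoring).

```python
from collections import deque
from statistics import quantiles

# Illustrative values only: milliseconds, with an assumed threshold and window.
ELD_THRESHOLD_MS = 100
WINDOW = 60

class EventLoopDelayWatch:
    """Track recent event loop delay samples and flag sustained elevation."""

    def __init__(self) -> None:
        self.samples: deque = deque(maxlen=WINDOW)

    def add_sample(self, delay_ms: float) -> None:
        self.samples.append(delay_ms)

    def needs_more_cpu(self) -> bool:
        # Only decide once the window is full; use the ~95th percentile so a single
        # short spike does not trigger a resize.
        if len(self.samples) < WINDOW:
            return False
        p95 = quantiles(self.samples, n=20)[-1]
        return p95 > ELD_THRESHOLD_MS
```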
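For the self-managed note about giving roughly 80% of host memory to the Node.js heap, the arithmetic is simple. The sketch below is illustrative only; the `--max-old-space-size` flag is mentioned as the usual Node.js mechanism rather than as a prescribed step, and the linked memory guide remains the authoritative reference.

```python
def suggested_heap_mb(host_memory_mb: int, fraction: float = 0.8) -> int:
    """Roughly 80% of host memory for the Node.js heap, leaving headroom for
    other needs such as HTTP socket buffers (per the note above)."""
    return int(host_memory_mb * fraction)

# Example: a dedicated 8 GB host -> 6553 MB of heap.
# In Node.js this is commonly applied with --max-old-space-size (for Kibana,
# typically via its node options); see kibana-configure-memory.md for specifics.
print(suggested_heap_mb(8 * 1024))
```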