
Commit f951a8a

Merge pull request #51475 from rsevilla87/etcd-improvements-main

Etcd recommendations improvements

2 parents 40d15a2 + 261da54

3 files changed (+32, −24 lines)

modules/etcd-defrag.adoc

Lines changed: 10 additions & 0 deletions
@@ -7,6 +7,14 @@
 [id="etcd-defrag_{context}"]
 = Defragmenting etcd data
 
+For large and dense clusters, etcd can suffer from poor performance if the keyspace grows too large and exceeds the space quota. Periodically maintain and defragment etcd to free up space in the data store. Monitor Prometheus for etcd metrics and defragment it when required; otherwise, etcd can raise a cluster-wide alarm that puts the cluster into a maintenance mode that accepts only key reads and deletes.
+
+.Monitor these key metrics:
+
+* `etcd_server_quota_backend_bytes`, which is the current quota limit
+* `etcd_mvcc_db_total_size_in_use_in_bytes`, which indicates the actual database usage after a history compaction
+* `etcd_debugging_mvcc_db_total_size_in_bytes`, which shows the database size, including free space waiting for defragmentation
+
 Defragment etcd data to reclaim disk space after events that cause disk fragmentation, such as etcd history compaction.
 
 History compaction is performed automatically every five minutes and leaves gaps in the back-end database. This fragmented space is available for use by etcd, but is not available to the host file system. You must defragment etcd to make this space available to the host file system.

@@ -58,6 +66,8 @@ A Prometheus alert indicates when you need to use manual defragmentation. The al
 * When etcd uses more than 50% of its available space for more than 10 minutes
 * When etcd is actively using less than 50% of its total database size for more than 10 minutes
 
+You can also determine whether defragmentation is needed by checking the etcd database size in MB that will be freed by defragmentation with the PromQL expression: `(etcd_mvcc_db_total_size_in_bytes - etcd_mvcc_db_total_size_in_use_in_bytes)/1024/1024`
+
 [WARNING]
 ====
 Defragmenting etcd is a blocking action. The etcd member will not respond until defragmentation is complete. For this reason, wait at least one minute between defragmentation actions on each of the pods to allow the cluster to recover.
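For reference, one way to evaluate that PromQL expression from a terminal is sketched below. It assumes your user can reach the `prometheus-k8s` route in the `openshift-monitoring` namespace (on newer clusters the `thanos-querier` route serves the same query API); the web console's metrics page evaluates the same expression.

----
$ TOKEN=$(oc whoami -t)
$ PROM=$(oc -n openshift-monitoring get route prometheus-k8s -o jsonpath='{.spec.host}')
# MB reclaimable by defragmentation, reported per etcd member
$ curl -skG -H "Authorization: Bearer $TOKEN" \
    --data-urlencode 'query=(etcd_mvcc_db_total_size_in_bytes - etcd_mvcc_db_total_size_in_use_in_bytes)/1024/1024' \
    "https://$PROM/api/v1/query"
----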

modules/openshift-cluster-maximums-environment.adoc

Lines changed: 9 additions & 9 deletions
@@ -15,16 +15,16 @@
 | r5.4xlarge
 | 16
 | 128
-| io1
-| 220 / 3000
+| gp3
+| 220
 | 3
 | us-west-2
 
 | Infra ^[2]^
 | m5.12xlarge
 | 48
 | 192
-| gp2
+| gp3
 | 100
 | 3
 | us-west-2
@@ -33,7 +33,7 @@
 | m5.4xlarge
 | 16
 | 64
-| gp2
+| gp3
 | 500 ^[4]^
 | 1
 | us-west-2
@@ -42,15 +42,15 @@
 | m5.2xlarge
 | 8
 | 32
-| gp2
+| gp3
 | 100
 | 3/25/250/500 ^[5]^
 | us-west-2
 
 |===
 [.small]
 --
-1. io1 disks with 3000 IOPS are used for control plane/etcd nodes as etcd is I/O intensive and latency sensitive.
+1. gp3 disks with a baseline performance of 3000 IOPS and 125 MiB per second are used for control plane/etcd nodes because etcd is latency sensitive. gp3 volumes do not use burst performance.
 2. Infra nodes are used to host Monitoring, Ingress, and Registry components to ensure they have enough resources to run at large scale.
 3. Workload node is dedicated to run performance and scalability workload generators.
 4. Larger disk size is used so that there is enough space to store the large amounts of data that is collected during the performance and scalability test run.
@@ -67,7 +67,7 @@
 | 16
 | 32
 | io1
-| 120 / 10 IOPS per GB
+| 120 / 10 IOPS per GiB
 | 3
 
 | Infra ^[2]^
@@ -126,7 +126,7 @@
 --
 1. Nodes are distributed between two logical control units (LCUs) to optimize disk I/O load of the control plane/etcd nodes as etcd is I/O intensive and latency sensitive. Etcd I/O demand should not interfere with other workloads.
 2. Four compute nodes are used for the tests running several iterations with 100/250/500 pods at the same time. First, idling pods were used to evaluate if pods can be instanced. Next, a network and CPU demanding client/server workload were used to evaluate the stability of the system under stress. Client and server pods were pairwise deployed and each pair was spread over two compute nodes.
-3. No separate workload node was used. The workload simulates a micro-service workload between two compute nodes.
+3. No separate workload node was used. The workload simulates a microservice workload between two compute nodes.
 4. Physical number of processors used is six Integrated Facilities for Linux (IFLs).
 5. Total physical memory used is 512 GiB.
---
+--
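To relate the AWS table above to cluster configuration, here is a hedged sketch of how the control-plane disk choice might be expressed in an `install-config.yaml` at install time. The `rootVolume` fields are part of the installer's AWS machine-pool schema; the values simply mirror the table (gp3's 3000 IOPS / 125 MiB-per-second baseline needs no explicit `iops` setting) and are not a recommendation beyond it.

----
controlPlane:
  name: master
  replicas: 3
  platform:
    aws:
      type: r5.4xlarge
      rootVolume:
        type: gp3
        size: 220
----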

modules/recommended-etcd-practices.adoc

Lines changed: 13 additions & 15 deletions
@@ -6,17 +6,14 @@
 [id="recommended-etcd-practices_{context}"]
 = Recommended etcd practices
 
-For large and dense clusters, etcd can suffer from poor performance if the keyspace grows too large and exceeds the space quota. Periodically maintain and defragment etcd to free up space in the data store. Monitor Prometheus for etcd metrics and defragment it when required; otherwise, etcd can raise a cluster-wide alarm that puts the cluster into a maintenance mode that accepts only key reads and deletes.
+Because etcd writes data to disk and persists proposals on disk, its performance depends on disk performance.
+Although etcd is not particularly I/O intensive, it requires a low latency block device for optimal performance and stability. Because etcd's consensus protocol depends on persistently storing metadata to a log (WAL), etcd is sensitive to disk-write latency. Slow disks and disk activity from other processes can cause long fsync latencies.
 
-.Monitor these key metrics:
+Those latencies can cause etcd to miss heartbeats, not commit new proposals to the disk on time, and ultimately experience request timeouts and temporary leader loss. High write latencies also lead to OpenShift API slowness, which affects cluster performance. For these reasons, avoid colocating other workloads on the control-plane nodes.
 
-* `etcd_server_quota_backend_bytes`, which is the current quota limit
-* `etcd_mvcc_db_total_size_in_use_in_bytes`, which indicates the actual database usage after a history compaction
-* `etcd_debugging_mvcc_db_total_size_in_bytes`, which shows the database size, including free space waiting for defragmentation
+In terms of latency, run etcd on top of a block device that can sequentially write at least 50 IOPS of 8000 bytes each, that is, with a latency of 20 ms, keeping in mind that etcd uses fdatasync to synchronize each write in the WAL. For heavily loaded clusters, sequential 500 IOPS of 8000 bytes (2 ms) are recommended. To measure those numbers, you can use a benchmarking tool, such as fio.
 
-For more information about defragmenting etcd, see the "Defragmenting etcd data" section.
-
-Because etcd writes data to disk and persists proposals on disk, its performance depends on disk performance. Slow disks and disk activity from other processes can cause long fsync latencies. Those latencies can cause etcd to miss heartbeats, not commit new proposals to the disk on time, and ultimately experience request timeouts and temporary leader loss. Run etcd on machines that are backed by SSD or NVMe disks with low latency and high throughput. Consider single-level cell (SLC) solid-state drives (SSDs), which provide 1 bit per memory cell, are durable and reliable, and are ideal for write-intensive workloads.
+To achieve such performance, run etcd on machines that are backed by SSD or NVMe disks with low latency and high throughput. Consider single-level cell (SLC) solid-state drives (SSDs), which provide 1 bit per memory cell, are durable and reliable, and are ideal for write-intensive workloads.
 
 The following hard disk features provide optimal etcd performance:
 
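As a concrete illustration of the latency target in the hunk above, a minimal fio run along these lines measures the fdatasync latency of 8000-byte sequential writes. This is a sketch: it assumes fio is installed on the node and writes test data into the given directory (prefer a scratch directory on the same device if the member is live); the container-based check shown later in this module wraps a similar run.

----
# sequential 8000-byte writes, one fdatasync per write, mirroring etcd's WAL pattern
$ fio --name=etcd-wal-check --rw=write --ioengine=sync --fdatasync=1 \
      --directory=/var/lib/etcd --bs=8000 --size=64m
----

In the output, check the `fsync/fdatasync` latency percentiles; the 99th percentile should stay under the 20 ms figure above (2 ms for heavily loaded clusters).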
@@ -28,17 +25,14 @@ The following hard disk features provide optimal etcd performance:
 * RAID 0 technology for increased performance.
 * Dedicated etcd drives. Do not place log files or other heavy workloads on etcd drives.
 
-Avoid NAS or SAN setups, and spinning drives. Always benchmark using utilities such as `fio`. Continuously monitor the cluster performance as it increases.
+Avoid NAS or SAN setups and spinning drives. Always benchmark by using utilities such as fio. Continuously monitor the cluster performance as it increases.
 
-IMPORTANT: Avoid using the Network File System (NFS) protocol.
+IMPORTANT: Avoid using the Network File System (NFS) protocol or other network-based file systems.
 
 Some key metrics to monitor on a deployed {product-title} cluster are p99 of etcd disk write ahead log duration and the number of etcd leader changes. Use Prometheus to track these metrics.
 
-* The `etcd_disk_wal_fsync_duration_seconds_bucket` metric reports the etcd disk fsync duration.
-* The `etcd_server_leader_changes_seen_total` metric reports the leader changes.
-* To rule out a slow disk and confirm that the disk is reasonably fast, verify that the 99th percentile of the `etcd_disk_wal_fsync_duration_seconds_bucket` is less than 10 ms.
 
-To validate the hardware for etcd before or after you create the {product-title} cluster, you can use an I/O benchmarking tool called fio.
+To validate the hardware for etcd before or after you create the {product-title} cluster, you can use fio.
 
 .Prerequisites
 
@@ -64,7 +58,11 @@ $ sudo docker run --volume /var/lib/etcd:/var/lib/etcd:Z quay.io/openshift-scale
 ----
 --
 
-The output reports whether the disk is fast enough to host etcd by comparing the 99th percentile of the fsync metric captured from the run to see if it is less than 10 ms.
+The output reports whether the disk is fast enough to host etcd by comparing the 99th percentile of the fsync metric captured from the run to see if it is less than 20 ms. A few of the most important etcd metrics that might be affected by I/O performance are as follows:
+
+* The `etcd_disk_wal_fsync_duration_seconds_bucket` metric reports etcd's WAL fsync duration.
+* The `etcd_disk_backend_commit_duration_seconds_bucket` metric reports the etcd backend commit latency duration.
+* The `etcd_server_leader_changes_seen_total` metric reports the leader changes.
 
 Because etcd replicates the requests among all the members, its performance strongly depends on network input/output (I/O) latency. High network latencies result in etcd heartbeats taking longer than the election timeout, which results in leader elections that are disruptive to the cluster. A key metric to monitor on a deployed {product-title} cluster is the 99th percentile of etcd network peer latency on each etcd cluster member. Use Prometheus to track the metric.
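For reference, the percentiles named in the hunks above can be tracked in Prometheus with `histogram_quantile` over the bucket metrics; the last query uses the peer round-trip histogram that etcd exposes for network latency. This is a sketch: the `sum by (instance, le)` grouping is one reasonable choice, so adjust it to your scrape labels.

----
# p99 WAL fsync duration per etcd member; compare against the 20 ms threshold
histogram_quantile(0.99,
  sum by (instance, le) (rate(etcd_disk_wal_fsync_duration_seconds_bucket[5m])))

# p99 backend commit duration
histogram_quantile(0.99,
  sum by (instance, le) (rate(etcd_disk_backend_commit_duration_seconds_bucket[5m])))

# p99 network round-trip time to each etcd peer
histogram_quantile(0.99,
  sum by (instance, le) (rate(etcd_network_peer_round_trip_time_seconds_bucket[5m])))
----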
