modules/etcd-defrag.adoc (10 additions & 0 deletions)
@@ -7,6 +7,14 @@
 [id="etcd-defrag_{context}"]
 = Defragmenting etcd data
 
+
+For large and dense clusters, etcd can suffer from poor performance if the keyspace grows too large and exceeds the space quota. Periodically maintain and defragment etcd to free up space in the data store. Monitor Prometheus for etcd metrics and defragment it when required; otherwise, etcd can raise a cluster-wide alarm that puts the cluster into a maintenance mode that accepts only key reads and deletes.
+
+.Monitor these key metrics:
+* `etcd_server_quota_backend_bytes`, which is the current quota limit
+* `etcd_mvcc_db_total_size_in_use_in_bytes`, which indicates the actual database usage after a history compaction
+* `etcd_debugging_mvcc_db_total_size_in_bytes`, which shows the database size, including free space waiting for defragmentation
 Defragment etcd data to reclaim disk space after events that cause disk fragmentation, such as etcd history compaction.
 
 History compaction is performed automatically every five minutes and leaves gaps in the back-end database. This fragmented space is available for use by etcd, but is not available to the host file system. You must defragment etcd to make this space available to the host file system.
@@ -58,6 +66,8 @@ A Prometheus alert indicates when you need to use manual defragmentation. The al
 * When etcd uses more than 50% of its available space for more than 10 minutes
 * When etcd is actively using less than 50% of its total database size for more than 10 minutes
 
+You can also determine whether defragmentation is needed by checking the etcd database size in MB that will be freed by defragmentation with the PromQL expression: `(etcd_mvcc_db_total_size_in_bytes - etcd_mvcc_db_total_size_in_use_in_bytes)/1024/1024`
+
 [WARNING]
 ====
 Defragmenting etcd is a blocking action. The etcd member will not respond until defragmentation is complete. For this reason, wait at least one minute between defragmentation actions on each of the pods to allow the cluster to recover.
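The PromQL expression added in this hunk is plain arithmetic on two gauges, so it can be sanity-checked outside Prometheus. The following is a minimal sketch, not part of the module; the helper name and the sample values are hypothetical, assuming you have already scraped the two metrics:

```python
def defrag_reclaimable_mb(db_total_size_bytes: float, db_in_use_bytes: float) -> float:
    """Mirror of the PromQL expression
    (etcd_mvcc_db_total_size_in_bytes - etcd_mvcc_db_total_size_in_use_in_bytes)/1024/1024:
    space etcd still holds on disk but no longer uses, reclaimable by
    defragmentation, in MB."""
    return (db_total_size_bytes - db_in_use_bytes) / 1024 / 1024

# Hypothetical scrape: 8 GiB on disk, 2 GiB in use after history compaction.
print(defrag_reclaimable_mb(8 * 1024**3, 2 * 1024**3))  # 6144.0 MB reclaimable
```

A large, persistently high result is the same signal the Prometheus alert keys on: etcd is actively using well under half of its total database size.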
modules/openshift-cluster-maximums-environment.adoc (9 additions & 9 deletions)
@@ -15,16 +15,16 @@
 | r5.4xlarge
 | 16
 | 128
-| io1
-| 220 / 3000
+| gp3
+| 220
 | 3
 | us-west-2
 
 | Infra ^[2]^
 | m5.12xlarge
 | 48
 | 192
-| gp2
+| gp3
 | 100
 | 3
 | us-west-2
@@ -33,7 +33,7 @@
 | m5.4xlarge
 | 16
 | 64
-| gp2
+| gp3
 | 500 ^[4]^
 | 1
 | us-west-2
@@ -42,15 +42,15 @@
 | m5.2xlarge
 | 8
 | 32
-| gp2
+| gp3
 | 100
 | 3/25/250/500 ^[5]^
 | us-west-2
 
 |===
 [.small]
 --
-1. io1 disks with 3000 IOPS are used for control plane/etcd nodes as etcd is I/O intensive and latency sensitive.
+1. gp3 disks with a baseline performance of 3000 IOPS and 125 MiB per second are used for control plane/etcd nodes because etcd is latency sensitive. gp3 volumes do not use burst performance.
 2. Infra nodes are used to host Monitoring, Ingress, and Registry components to ensure they have enough resources to run at large scale.
 3. Workload node is dedicated to run performance and scalability workload generators.
 4. Larger disk size is used so that there is enough space to store the large amounts of data that is collected during the performance and scalability test run.
@@ -67,7 +67,7 @@
 | 16
 | 32
 | io1
-| 120 / 10 IOPS per GB
+| 120 / 10 IOPS per GiB
 | 3
 
 | Infra ^[2]^
@@ -126,7 +126,7 @@
 --
 1. Nodes are distributed between two logical control units (LCUs) to optimize disk I/O load of the control plane/etcd nodes as etcd is I/O intensive and latency sensitive. Etcd I/O demand should not interfere with other workloads.
 2. Four compute nodes are used for the tests running several iterations with 100/250/500 pods at the same time. First, idling pods were used to evaluate if pods can be instanced. Next, a network and CPU demanding client/server workload were used to evaluate the stability of the system under stress. Client and server pods were pairwise deployed and each pair was spread over two compute nodes.
-3. No separate workload node was used. The workload simulates a micro-service workload between two compute nodes.
+3. No separate workload node was used. The workload simulates a microservice workload between two compute nodes.
 4. Physical number of processors used is six Integrated Facilities for Linux (IFLs).
modules/recommended-etcd-practices.adoc (13 additions & 15 deletions)
@@ -6,17 +6,14 @@
 [id="recommended-etcd-practices_{context}"]
 = Recommended etcd practices
 
-For large and dense clusters, etcd can suffer from poor performance if the keyspace grows too large and exceeds the space quota. Periodically maintain and defragment etcd to free up space in the data store. Monitor Prometheus for etcd metrics and defragment it when required; otherwise, etcd can raise a cluster-wide alarm that puts the cluster into a maintenance mode that accepts only key reads and deletes.
+Because etcd writes data to disk and persists proposals on disk, its performance depends on disk performance.
+Although etcd is not particularly I/O intensive, it requires a low-latency block device for optimal performance and stability. Because etcd's consensus protocol depends on persistently storing metadata to a log (WAL), etcd is sensitive to disk-write latency. Slow disks and disk activity from other processes can cause long fsync latencies.
 
-.Monitor these key metrics:
+Those latencies can cause etcd to miss heartbeats, not commit new proposals to the disk on time, and ultimately experience request timeouts and temporary leader loss. High write latencies also lead to OpenShift API slowness, which affects cluster performance. For these reasons, avoid colocating other workloads on the control-plane nodes.
 
-* `etcd_server_quota_backend_bytes`, which is the current quota limit
-* `etcd_mvcc_db_total_size_in_use_in_bytes`, which indicates the actual database usage after a history compaction
-* `etcd_debugging_mvcc_db_total_size_in_bytes`, which shows the database size, including free space waiting for defragmentation
+In terms of latency, run etcd on top of a block device that can write at least 50 IOPS of 8000 bytes long sequentially, that is, with a latency of 20 ms, keeping in mind that etcd uses fdatasync to synchronize each write in the WAL. For heavily loaded clusters, sequential 500 IOPS of 8000 bytes (2 ms) are recommended. To measure those numbers, you can use a benchmarking tool, such as fio.
 
-For more information about defragmenting etcd, see the "Defragmenting etcd data" section.
-
-Because etcd writes data to disk and persists proposals on disk, its performance depends on disk performance. Slow disks and disk activity from other processes can cause long fsync latencies. Those latencies can cause etcd to miss heartbeats, not commit new proposals to the disk on time, and ultimately experience request timeouts and temporary leader loss. Run etcd on machines that are backed by SSD or NVMe disks with low latency and high throughput. Consider single-level cell (SLC) solid-state drives (SSDs), which provide 1 bit per memory cell, are durable and reliable, and are ideal for write-intensive workloads.
+To achieve such performance, run etcd on machines that are backed by SSD or NVMe disks with low latency and high throughput. Consider single-level cell (SLC) solid-state drives (SSDs), which provide 1 bit per memory cell, are durable and reliable, and are ideal for write-intensive workloads.
 
 The following hard disk features provide optimal etcd performance:
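The 50 IOPS / 20 ms and 500 IOPS / 2 ms pairings in the recommendation follow from simple arithmetic: for strictly sequential WAL writes, each fdatasync must complete before the next write starts, so the per-write latency budget is the inverse of the IOPS target. A minimal sketch of that relationship (the helper name is hypothetical, not part of the module):

```python
def max_fsync_latency_ms(sequential_iops: int) -> float:
    """Latency budget per sequential 8000-byte WAL write.
    With writes issued back to back, sustaining N IOPS requires each
    write (including its fdatasync) to finish within 1/N seconds."""
    return 1000.0 / sequential_iops

print(max_fsync_latency_ms(50))   # 20.0 ms -> minimum recommendation
print(max_fsync_latency_ms(500))  # 2.0 ms  -> heavily loaded clusters
```

This is only the budget implied by the IOPS figures; measure the real numbers with a benchmarking tool such as fio, as the text recommends.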
@@ -28,17 +25,14 @@ The following hard disk features provide optimal etcd performance:
 * RAID 0 technology for increased performance.
 * Dedicated etcd drives. Do not place log files or other heavy workloads on etcd drives.
 
-Avoid NAS or SAN setups, and spinning drives. Always benchmark using utilities such as `fio`. Continuously monitor the cluster performance as it increases.
+Avoid NAS or SAN setups and spinning drives. Always benchmark by using utilities such as fio. Continuously monitor the cluster performance as it increases.
 
-IMPORTANT: Avoid using the Network File System (NFS) protocol.
+IMPORTANT: Avoid using the Network File System (NFS) protocol or other network-based file systems.
 
 Some key metrics to monitor on a deployed {product-title} cluster are p99 of etcd disk write ahead log duration and the number of etcd leader changes. Use Prometheus to track these metrics.
 
-* The `etcd_disk_wal_fsync_duration_seconds_bucket` metric reports the etcd disk fsync duration.
-* The `etcd_server_leader_changes_seen_total` metric reports the leader changes.
-* To rule out a slow disk and confirm that the disk is reasonably fast, verify that the 99th percentile of the `etcd_disk_wal_fsync_duration_seconds_bucket` is less than 10 ms.
 
-To validate the hardware for etcd before or after you create the {product-title} cluster, you can use an I/O benchmarking tool called fio.
+To validate the hardware for etcd before or after you create the {product-title} cluster, you can use fio.
@@ … @@
-The output reports whether the disk is fast enough to host etcd by comparing the 99th percentile of the fsync metric captured from the run to see if it is less than 10 ms.
+The output reports whether the disk is fast enough to host etcd by comparing the 99th percentile of the fsync metric captured from the run to see whether it is less than 20 ms. A few of the most important etcd metrics that might be affected by I/O performance are as follows:
+
+* The `etcd_disk_wal_fsync_duration_seconds_bucket` metric reports etcd's WAL fsync duration.
+* The `etcd_disk_backend_commit_duration_seconds_bucket` metric reports the etcd backend commit latency duration.
+* The `etcd_server_leader_changes_seen_total` metric reports the leader changes.
 
 Because etcd replicates the requests among all the members, its performance strongly depends on network input/output (I/O) latency. High network latencies result in etcd heartbeats taking longer than the election timeout, which results in leader elections that are disruptive to the cluster. A key metric to monitor on a deployed {product-title} cluster is the 99th percentile of etcd network peer latency on each etcd cluster member. Use Prometheus to track the metric.
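The percentile comparison described above can be approximated directly from the cumulative bucket counts of a Prometheus histogram such as `etcd_disk_wal_fsync_duration_seconds_bucket`. This is a hypothetical sketch, not how Prometheus itself computes `histogram_quantile`; it returns the smallest bucket boundary covering at least 99% of observations, which is a conservative upper bound on the true p99:

```python
def histogram_p99_upper_bound(buckets):
    """buckets: list of (le_seconds, cumulative_count) pairs from a
    Prometheus histogram; the +Inf bucket carries the total count.
    Returns the smallest bucket boundary that covers >= 99% of
    observations: a conservative upper bound on the true p99."""
    buckets = sorted(buckets)
    total = buckets[-1][1]  # cumulative count in the +Inf bucket
    for le, count in buckets:
        if count >= 0.99 * total:
            return le
    return float("inf")

# Hypothetical scrape: 99.5% of fsyncs completed within 8 ms, so the
# p99 upper bound is 0.008 s, well under the fsync threshold discussed above.
sample = [(0.002, 700), (0.008, 995), (0.02, 999), (float("inf"), 1000)]
print(histogram_p99_upper_bound(sample))  # 0.008
```

In practice you would query Prometheus with `histogram_quantile(0.99, ...)` over the rate of these buckets rather than raw cumulative counts; the sketch only illustrates why the bucketed metric can answer a "is p99 under the threshold" question.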