1. io1 disks with 3000 IOPS are used for control plane/etcd nodes as etcd is I/O intensive and latency sensitive.
2. Infra nodes are used to host Monitoring, Ingress, and Registry components to ensure they have enough resources to run at large scale.
3. A workload node is dedicated to running the performance and scalability workload generators.
4. A larger disk size is used to provide enough space to store the large amount of data collected during the performance and scalability test run.
5. The cluster is scaled in iterations, and the performance and scalability tests are executed at the specified node counts.
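The iterative scale-then-test loop in note 5 can be sketched as follows. This is an illustrative outline only: `scale_cluster_to` and `run_perf_tests` are hypothetical callables standing in for whatever mechanism actually resizes the cluster (e.g. adjusting MachineSet replicas) and runs the test suite, and the node counts are example values, not figures from this document.

```python
def scale_and_test(node_count_targets, scale_cluster_to, run_perf_tests):
    """Scale the cluster in iterations and run the perf & scale suite
    at each target node count, collecting results per count."""
    results = {}
    for count in node_count_targets:
        scale_cluster_to(count)           # grow the cluster to this size
        results[count] = run_perf_tests(count)  # execute tests at this scale
    return results

# Example (hypothetical node counts and stubbed callables):
# scale_and_test([25, 100, 250], scale_cluster_to, run_perf_tests)
```

The point of iterating is that each intermediate node count gets its own full test pass, so regressions can be attributed to a specific scale step rather than only observed at the final size.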
1. io1 disks with 120 / 10 IOPS per GiB are used for control plane/etcd nodes as etcd is I/O intensive and latency sensitive.
2. Infra nodes are used to host Monitoring, Ingress, and Registry components to ensure they have enough resources to run at large scale.
3. A workload node is dedicated to running the performance and scalability workload generators.
4. A larger disk size is used to provide enough space to store the large amount of data collected during the performance and scalability test run.
5. The cluster is scaled in iterations, and the performance and scalability tests are executed at the specified node counts.
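Note 1 above expresses the io1 provisioning as a ratio rather than an absolute figure. If the "120 / 10" shorthand denotes a 120 GiB volume provisioned at 10 IOPS per GiB (an assumption; the source does not expand the notation), the resulting provisioned IOPS works out as below. The 50 IOPS-per-GiB cap is AWS's documented limit for io1 volumes.

```python
def provisioned_iops(size_gib, iops_per_gib, max_ratio=50):
    """Provisioned IOPS for an AWS io1 EBS volume.

    io1 allows at most 50 provisioned IOPS per GiB of volume size.
    """
    if iops_per_gib > max_ratio:
        raise ValueError("io1 supports at most 50 IOPS per GiB")
    return size_gib * iops_per_gib

# Under the assumed reading of "120 / 10":
# a 120 GiB volume at 10 IOPS/GiB -> 1200 provisioned IOPS
```

Because io1 IOPS are provisioned per GiB, sizing the etcd disk larger buys both capacity for test data and headroom for the sustained IOPS etcd needs.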
1. Nodes are distributed between two logical control units (LCUs) to optimize the disk I/O load of the control plane/etcd nodes, as etcd is I/O intensive and latency sensitive. The etcd I/O demand should not interfere with other workloads.
2. Four compute nodes are used for the tests, running several iterations with 100/250/500 pods at the same time. First, idling pods were used to evaluate whether pods can be instantiated. Next, a network- and CPU-intensive client/server workload was used to evaluate the stability of the system under stress. Client and server pods were deployed pairwise, and each pair was spread over two compute nodes.
3. No separate workload node was used. The tests simulate a microservice workload running between two compute nodes.
4. The number of physical processors used is six Integrated Facilities for Linux (IFLs).
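The pairwise client/server placement described in note 2 can be sketched as a small helper that assigns each client/server pair to two different compute nodes. This is an illustration of the placement pattern only (in a real cluster it would be expressed via pod anti-affinity or node selectors); the function and node names are hypothetical.

```python
from itertools import cycle

def place_pairs(num_pairs, compute_nodes):
    """Assign each client/server pair to two distinct compute nodes,
    rotating through the node list so load is spread evenly."""
    assert len(compute_nodes) >= 2, "need at least two compute nodes"
    node_cycle = cycle(compute_nodes)
    placements = []
    for i in range(num_pairs):
        client_node = next(node_cycle)   # client side of the pair
        server_node = next(node_cycle)   # server side, on a different node
        placements.append((f"client-{i}", client_node,
                           f"server-{i}", server_node))
    return placements
```

Spreading each pair across two nodes forces the client/server traffic onto the cluster network, which is what makes the workload a meaningful stress test of inter-node networking rather than of loopback traffic.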