Commit f5aeb3e

add table for IBM Z
1 parent ff32e44 commit f5aeb3e

File tree

1 file changed: +41 −11 lines changed


modules/openshift-cluster-maximums-environment.adoc

Lines changed: 41 additions & 11 deletions
@@ -5,13 +5,13 @@
 [id="cluster-maximums-environment_{context}"]
 = {product-title} environment and configuration on which the cluster maximums are tested
 
-AWS cloud platform:
+== AWS cloud platform
 
 [options="header",cols="8*"]
 |===
 | Node |Flavor |vCPU |RAM(GiB) |Disk type|Disk size(GiB)/IOS |Count |Region
 
-| Master/etcd ^[1]^
+| Control plane/etcd ^[1]^
 | r5.4xlarge
 | 16
 | 128
@@ -38,7 +38,7 @@ AWS cloud platform:
 | 1
 | us-west-2
 
-| Worker
+| Compute
 | m5.2xlarge
 | 8
 | 32
@@ -50,24 +50,24 @@ AWS cloud platform:
 |===
 [.small]
 --
-1. io1 disks with 3000 IOPS are used for master/etcd nodes as etcd is I/O intensive and latency sensitive.
+1. io1 disks with 3000 IOPS are used for control plane/etcd nodes as etcd is I/O intensive and latency sensitive.
 2. Infra nodes are used to host Monitoring, Ingress, and Registry components to ensure they have enough resources to run at large scale.
 3. Workload node is dedicated to run performance and scalability workload generators.
 4. Larger disk size is used so that there is enough space to store the large amounts of data that is collected during the performance and scalability test run.
 5. Cluster is scaled in iterations and performance and scalability tests are executed at the specified node counts.
 --
 
-IBM Power platform:
+== IBM Power platform
 
 [options="header",cols="6*"]
 |===
 | Node |vCPU |RAM(GiB) |Disk type|Disk size(GiB)/IOS |Count
 
-| Master/etcd ^[1]^
+| Control plane/etcd ^[1]^
 | 16
 | 32
 | io1
-| 120 / 3 IOPS per GB
+| 120 / 10 IOPS per GB
 | 3
 
 | Infra ^[2]^
@@ -84,19 +84,49 @@ IBM Power platform:
 | 120 ^[4]^
 | 1
 
-| Worker
+| Compute
 | 16
 | 64
 | gp2
 | 120
-| 3/25/250/500 ^[5]^
+| 2 to 100 ^[5]^
 
 |===
 [.small]
 --
-1. io1 disks with 120 / 3 IOPS per GB are used for master/etcd nodes as etcd is I/O intensive and latency sensitive.
+1. io1 disks with 120 / 10 IOPS per GiB are used for control plane/etcd nodes as etcd is I/O intensive and latency sensitive.
 2. Infra nodes are used to host Monitoring, Ingress, and Registry components to ensure they have enough resources to run at large scale.
 3. Workload node is dedicated to run performance and scalability workload generators.
 4. Larger disk size is used so that there is enough space to store the large amounts of data that is collected during the performance and scalability test run.
-5. Cluster is scaled in iterations and performance and scalability tests are executed at the specified node counts.
+5. Cluster is scaled in iterations.
+--
+
+== {ibmzProductName} platform
+
+[options="header",cols="6*"]
+|===
+| Node |vCPU ^[4]^ |RAM(GiB) ^[5]^ |Disk type|Disk size(GiB)/IOS |Count
+
+| Control plane/etcd ^[1,2]^
+| 8
+| 32
+| ds8k
+| 300 / LCU 1
+| 3
+
+| Compute ^[1,3]^
+| 8
+| 32
+| ds8k
+| 150 / LCU 2
+| 4 nodes (scaled to 100/250/500 pods per node)
+
+|===
+[.small]
 --
+1. Nodes are distributed between two logical control units (LCUs) to optimize the disk I/O load of the control plane/etcd nodes, as etcd is I/O intensive and latency sensitive. Etcd I/O demand should not interfere with other workloads.
+2. Four compute nodes are used for the tests, running several iterations with 100/250/500 pods at the same time. First, idling pods were used to evaluate whether pods can be instantiated. Next, a network- and CPU-demanding client/server workload was used to evaluate the stability of the system under stress. Client and server pods were deployed pairwise, and each pair was spread over two compute nodes.
+3. No separate workload node was used. The workload simulates a micro-service workload between two compute nodes.
+4. The physical number of processors used is six Integrated Facilities for Linux (IFLs).
+5. The total physical memory used is 512 GiB.
+--
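A note on the disk figures in the diff: the `Disk size(GiB)/IOS` cells express provisioned IOPS as volume size multiplied by an IOPS-per-GiB ratio, which is why the commit's change from 3 to 10 IOPS per GB matters for the etcd disks. A minimal sketch of that arithmetic (the function name is illustrative, not from the commit or any AWS/IBM API):

```python
def provisioned_iops(size_gib: int, iops_per_gib: int) -> int:
    """Total provisioned IOPS for an io1-style volume of the given size."""
    return size_gib * iops_per_gib

# IBM Power control plane/etcd disk: 120 GiB at 10 IOPS per GiB
print(provisioned_iops(120, 10))  # → 1200

# The pre-commit value of 3 IOPS per GiB would have provisioned only 360
print(provisioned_iops(120, 3))   # → 360
```

For comparison, the AWS table simply provisions a fixed 3000 IOPS on its io1 control plane/etcd disks rather than deriving it from size.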
