You can install {product-title} version {product-version} on the following IBM hardware:

* IBM z15 (all models), IBM z14 (all models), IBM z13, and IBM z13s
* LinuxONE, any version

[discrete]
== Hardware requirements

* The equivalent of 6 IFLs, which are SMT2 enabled, for each cluster.
* At least one network connection to both connect to the `LoadBalancer` service and to serve data for traffic outside the cluster.

[NOTE]
====
You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of IBM Z. However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every {product-title} cluster.
====

[IMPORTANT]
====
Because the overall performance of the cluster can be affected, the LPARs that are used to set up the {product-title} clusters must provide sufficient compute capacity. In this context, LPAR weight management, entitlements, and CPU shares on the hypervisor level play an important role.
====

[discrete]
== Operating system requirements

* One instance of z/VM 7.1 or later

On your z/VM instance, set up:

* 3 guest virtual machines for {product-title} control plane machines
* 2 guest virtual machines for {product-title} compute machines
* 1 guest virtual machine for the temporary {product-title} bootstrap machine
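
The guest virtual machines are defined in the z/VM user directory. As an illustrative sketch only, a control plane guest sized to the memory requirements listed later in this document might be defined as follows; the guest name `OCPCTL1`, the VSwitch name `OCPVSW1`, and the device number are placeholder values, not taken from this document:

[source,text]
----
USER OCPCTL1 LBYONLY 16G 16G G
 CPU 00 BASE
 CPU 01
 MACHINE ESA
 NICDEF 0C00 TYPE QDIO LAN SYSTEM OCPVSW1
----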

[discrete]
== IBM Z network connectivity requirements

To install on IBM Z under z/VM, you require a single z/VM virtual NIC in layer 2 mode. You also need:

* A direct-attached OSA or RoCE network adapter
* A z/VM VSwitch set up. For a preferred setup, use OSA link aggregation.
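
As an illustration of the layer 2 requirement, a z/VM VSwitch backed by OSA devices can be defined with the `ETHERNET` (layer 2) transport type and granted to a guest by using CP commands similar to the following sketch; the switch name, real device numbers, and guest name are placeholders:

[source,text]
----
DEFINE VSWITCH OCPVSW1 RDEV 1000 2000 ETHERNET
SET VSWITCH OCPVSW1 GRANT OCPCTL1
----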

[discrete]
=== Disk storage for the z/VM guest virtual machines

* FICON attached disk storage (DASDs). These can be z/VM minidisks, fullpack minidisks, or dedicated DASDs, all of which must be formatted as CDL, which is the default. To reach the minimum required DASD size for {op-system-first} installations, you need extended address volumes (EAV). If available, use HyperPAV to ensure optimal performance.
* FCP attached disk storage
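
The CDL format mentioned above is the default disk layout that the `dasdfmt` utility from the s390-tools package writes to a DASD. A hedged example, run from a Linux guest and assuming the device node `/dev/dasda`; note that formatting destroys all data on the volume:

[source,terminal]
----
$ sudo dasdfmt -b 4096 -d cdl -y /dev/dasda
----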

[discrete]
=== Storage / Main Memory

* 16 GB for {product-title} control plane machines
* 8 GB for {product-title} compute machines
* 16 GB for the temporary {product-title} bootstrap machine

[discrete]
== Hardware requirements

* For each cluster, 3 LPARs that each have the equivalent of 6 IFLs, which are SMT2 enabled.
* Two network connections to both connect to the `LoadBalancer` service and to serve data for traffic outside the cluster.
* HiperSockets that are attached to a node either directly as a device or bridged with one z/VM VSWITCH so that they are transparent to the z/VM guest. To directly connect HiperSockets to a node, you must set up a gateway to the external network through a {op-system-base} 8 guest that bridges to the HiperSockets network.

[discrete]
== Operating system requirements

* 2 or 3 instances of z/VM 7.1 or later for high availability

On your z/VM instances, set up:

* 3 guest virtual machines for {product-title} control plane machines, one per z/VM instance.
* At least 6 guest virtual machines for {product-title} compute machines, distributed across the z/VM instances.
* 1 guest virtual machine for the temporary {product-title} bootstrap machine.
* To ensure the availability of integral components in an overcommitted environment, increase the priority of the control plane by using the CP command `SET SHARE`. Do the same for infrastructure nodes, if they exist. See link:https://www.ibm.com/docs/en/zvm/7.1?topic=commands-set-share[SET SHARE] in IBM Documentation.
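
For example, to triple the scheduling priority of a control plane guest relative to the default share of 100, an authorized user can issue a command similar to the following; the guest name and share value are illustrative only:

[source,text]
----
SET SHARE OCPCTL1 RELATIVE 300
----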

[discrete]
== IBM Z network connectivity requirements

To install on IBM Z under z/VM, you require a single z/VM virtual NIC in layer 2 mode. You also need:

* A direct-attached OSA or RoCE network adapter
* A z/VM VSwitch set up. For a preferred setup, use OSA link aggregation.

[discrete]
=== Disk storage for the z/VM guest virtual machines

* FICON attached disk storage (DASDs). These can be z/VM minidisks, fullpack minidisks, or dedicated DASDs, all of which must be formatted as CDL, which is the default. To reach the minimum required DASD size for {op-system-first} installations, you need extended address volumes (EAV). If available, use HyperPAV and High Performance FICON (zHPF) to ensure optimal performance.
* FCP attached disk storage

[discrete]
=== Storage / Main Memory

* 16 GB for {product-title} control plane machines
* 8 GB for {product-title} compute machines
* 16 GB for the temporary {product-title} bootstrap machine