
Commit da0c128

Merge pull request #39912 from codyhoag/upi-requirements-reorg
UPI requirements reorg
2 parents 982b605 + 236639d commit da0c128

9 files changed: +201 additions, −183 deletions

installing/installing_ibm_power/installing-ibm-power.adoc

Lines changed: 2 additions & 0 deletions

@@ -31,6 +31,8 @@ Be sure to also review this site list if you are configuring a proxy.
 include::modules/cluster-entitlements.adoc[leveloffset=+1]
 
 include::modules/installation-requirements-user-infra.adoc[leveloffset=+1]
+include::modules/minimum-ibm-power-system-requirements.adoc[leveloffset=+2]
+include::modules/recommended-ibm-power-system-requirements.adoc[leveloffset=+2]
 include::modules/csr-management.adoc[leveloffset=+2]
 
 include::modules/installation-network-user-infra.adoc[leveloffset=+2]
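The reorganization preserves rendered heading depth: the extracted modules use a level-0 title (`=`) and are included with `leveloffset=+2`, which renders at the same level as the former `==` headings did inside a module included with `leveloffset=+1`. A minimal sketch, with hypothetical file and module names:

```asciidoc
// In the assembly (hypothetical names):
include::modules/example-module.adoc[leveloffset=+2]

// In modules/example-module.adoc:
[id="example-module_{context}"]
= Example module title
```

With `leveloffset=+2`, `= Example module title` renders as a level-2 section, equivalent to writing `===` directly in the assembly.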

installing/installing_ibm_power/installing-restricted-networks-ibm-power.adoc

Lines changed: 2 additions & 0 deletions

@@ -39,6 +39,8 @@ include::modules/installation-about-restricted-network.adoc[leveloffset=+1]
 include::modules/cluster-entitlements.adoc[leveloffset=+1]
 
 include::modules/installation-requirements-user-infra.adoc[leveloffset=+1]
+include::modules/minimum-ibm-power-system-requirements.adoc[leveloffset=+2]
+include::modules/recommended-ibm-power-system-requirements.adoc[leveloffset=+2]
 include::modules/csr-management.adoc[leveloffset=+2]
 
 include::modules/installation-network-user-infra.adoc[leveloffset=+2]

installing/installing_ibm_z/installing-ibm-z.adoc

Lines changed: 2 additions & 0 deletions

@@ -36,6 +36,8 @@ Be sure to also review this site list if you are configuring a proxy.
 include::modules/cluster-entitlements.adoc[leveloffset=+1]
 
 include::modules/installation-requirements-user-infra.adoc[leveloffset=+1]
+include::modules/minimum-ibm-z-system-requirements.adoc[leveloffset=+2]
+include::modules/preferred-ibm-z-system-requirements.adoc[leveloffset=+2]
 include::modules/csr-management.adoc[leveloffset=+2]
 
 .Additional resources

installing/installing_ibm_z/installing-restricted-networks-ibm-z.adoc

Lines changed: 2 additions & 0 deletions

@@ -44,6 +44,8 @@ include::modules/installation-about-restricted-network.adoc[leveloffset=+1]
 include::modules/cluster-entitlements.adoc[leveloffset=+1]
 
 include::modules/installation-requirements-user-infra.adoc[leveloffset=+1]
+include::modules/minimum-ibm-z-system-requirements.adoc[leveloffset=+2]
+include::modules/preferred-ibm-z-system-requirements.adoc[leveloffset=+2]
 include::modules/csr-management.adoc[leveloffset=+2]
 
 .Additional resources

modules/installation-requirements-user-infra.adoc

Lines changed: 0 additions & 183 deletions

@@ -212,189 +212,6 @@ ifndef::ibm-z[]
 endif::ibm-z[]
 --
 
-ifdef::ibm-z[]
-[id="minimum-ibm-z-system-requirements_{context}"]
-== Minimum IBM Z system environment
-
-You can install {product-title} version {product-version} on the following IBM hardware:
-
-* IBM z15 (all models), IBM z14 (all models), IBM z13, and IBM z13s
-* LinuxONE, any version
-
-[discrete]
-=== Hardware requirements
-
-* The equivalent of 6 IFLs, which are SMT2 enabled, for each cluster.
-* At least one network connection to both connect to the `LoadBalancer` service and to serve data for traffic outside the cluster.
-
-[NOTE]
-====
-You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of IBM Z. However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every {product-title} cluster.
-====
-
-[IMPORTANT]
-====
-Since the overall performance of the cluster can be impacted, the LPARs that are used to setup the {product-title} clusters must provide sufficient compute capacity. In this context, LPAR weight management, entitlements, and CPU shares on the hypervisor level play an important role.
-====
-
-[discrete]
-=== Operating system requirements
-
-* One instance of z/VM 7.1 or later
-
-On your z/VM instance, set up:
-
-* 3 guest virtual machines for {product-title} control plane machines
-* 2 guest virtual machines for {product-title} compute machines
-* 1 guest virtual machine for the temporary {product-title} bootstrap machine
-
-[discrete]
-== IBM Z network connectivity requirements
-
-To install on IBM Z under z/VM, you require a single z/VM virtual NIC in layer 2 mode. You also need:
-
-* A direct-attached OSA or RoCE network adapter
-* A z/VM VSwitch set up. For a preferred setup, use OSA link aggregation.
-
-[discrete]
-=== Disk storage for the z/VM guest virtual machines
-
-* FICON attached disk storage (DASDs). These can be z/VM minidisks, fullpack minidisks, or dedicated DASDs, all of which must be formatted as CDL, which is the default. To reach the minimum required DASD size for {op-system-first} installations, you need extended address volumes (EAV). If available, use HyperPAV to ensure optimal performance.
-* FCP attached disk storage
-
-[discrete]
-=== Storage / Main Memory
-
-* 16 GB for {product-title} control plane machines
-* 8 GB for {product-title} compute machines
-* 16 GB for the temporary {product-title} bootstrap machine
-
-[id="preferred-ibm-z-system-requirements_{context}"]
-== Preferred IBM Z system environment
-
-[discrete]
-=== Hardware requirements
-
-* 3 LPARS that each have the equivalent of 6 IFLs, which are SMT2 enabled, for each cluster.
-* Two network connections to connect to both connect to the `LoadBalancer` service and to serve data for traffic outside the cluster.
-* HiperSockets, which are attached to a node either directly as a device or by bridging with one z/VM VSWITCH to be transparent to the z/VM guest. To directly connect HiperSockets to a node, you must set up a gateway to the external network via a {op-system-base} 8 guest to bridge to the HiperSockets network.
-
-[discrete]
-=== Operating system requirements
-
-* 2 or 3 instances of z/VM 7.1 or later for high availability
-
-On your z/VM instances, set up:
-
-* 3 guest virtual machines for {product-title} control plane machines, one per z/VM instance.
-* At least 6 guest virtual machines for {product-title} compute machines, distributed across the z/VM instances.
-* 1 guest virtual machine for the temporary {product-title} bootstrap machine.
-* To ensure the availability of integral components in an overcommitted environment, increase the priority of the control plane by using the CP command `SET SHARE`. Do the same for infrastructure nodes, if they exist. See link:https://www.ibm.com/docs/en/zvm/7.1?topic=commands-set-share[SET SHARE] in IBM Documentation.
-
-[discrete]
-== IBM Z network connectivity requirements
-
-To install on IBM Z under z/VM, you require a single z/VM virtual NIC in layer 2 mode. You also need:
-
-* A direct-attached OSA or RoCE network adapter
-* A z/VM VSwitch set up. For a preferred setup, use OSA link aggregation.
-
-[discrete]
-=== Disk storage for the z/VM guest virtual machines
-
-* FICON attached disk storage (DASDs). These can be z/VM minidisks, fullpack minidisks, or dedicated DASDs, all of which must be formatted as CDL, which is the default. To reach the minimum required DASD size for {op-system-first} installations, you need extended address volumes (EAV). If available, use HyperPAV and High Performance FICON (zHPF) to ensure optimal performance.
-* FCP attached disk storage
-
-[discrete]
-=== Storage / Main Memory
-
-* 16 GB for {product-title} control plane machines
-* 8 GB for {product-title} compute machines
-* 16 GB for the temporary {product-title} bootstrap machine
-endif::ibm-z[]
-
-ifdef::ibm-power[]
-
-[id="minimum-ibm-power-system-requirements_{context}"]
-== Minimum IBM Power Systems requirements
-
-You can install {product-title} version {product-version} on the following IBM hardware:
-
-* IBM POWER8 or POWER9 processor-based systems
-
-[discrete]
-=== Hardware requirements
-
-* 6 IBM Power bare metal servers or 6 LPARs across multiple PowerVM servers
-
-[discrete]
-=== Operating system requirements
-
-* One instance of an IBM POWER8 or POWER9 processor-based system
-
-On your IBM Power instance, set up:
-
-* 3 guest virtual machines for {product-title} control plane machines
-* 2 guest virtual machines for {product-title} compute machines
-* 1 guest virtual machine for the temporary {product-title} bootstrap machine
-
-[discrete]
-=== Disk storage for the IBM Power guest virtual machines
-
-* Storage provisioned by the Virtual I/O Server using vSCSI, NPIV (N-Port ID Virtualization) or SSP (shared storage pools)
-
-[discrete]
-=== Network for the PowerVM guest virtual machines
-
-* Virtualized by the Virtual I/O Server using Shared Ethernet Adapter
-* Virtualized by the Virtual I/O Server using IBM vNIC
-
-[discrete]
-=== Storage / main memory
-
-* 100 GB / 16 GB for {product-title} control plane machines
-* 100 GB / 8 GB for {product-title} compute machines
-* 100 GB / 16 GB for the temporary {product-title} bootstrap machine
-
-[id="recommended-ibm-Power-system-requirements_{context}"]
-
-== Recommended IBM Power system requirements
-[discrete]
-=== Hardware requirements
-
-* 6 IBM Power bare metal servers or 6 LPARs across multiple PowerVM servers
-
-[discrete]
-=== Operating system requirements
-
-* One instance of an IBM POWER8 or POWER9 processor-based system
-
-On your IBM Power instance, set up:
-
-* 3 guest virtual machines for {product-title} control plane machines
-* 2 guest virtual machines for {product-title} compute machines
-* 1 guest virtual machine for the temporary {product-title} bootstrap machine
-
-[discrete]
-=== Disk storage for the IBM Power guest virtual machines
-
-* Storage provisioned by the Virtual I/O Server using vSCSI, NPIV (N-Port ID Virtualization) or SSP (shared storage pools)
-
-[discrete]
-=== Network for the PowerVM guest virtual machines
-
-* Virtualized by the Virtual I/O Server using Shared Ethernet Adapter
-* Virtualized by the Virtual I/O Server using IBM vNIC
-
-[discrete]
-=== Storage / main memory
-
-* 120 GB / 32 GB for {product-title} control plane machines
-* 120 GB / 32 GB for {product-title} compute machines
-* 120 GB / 16 GB for the temporary {product-title} bootstrap machine
-
-endif::ibm-power[]
-
 ifeval::["{context}" == "installing-ibm-z"]
 :!ibm-z:
 endif::[]
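The `_{context}` suffix in each module ID, together with the trailing `ifeval` block that unsets the `ibm-z` attribute, is what lets a single module file serve multiple assemblies without duplicate anchors. Attribute substitution applies inside the `[id=...]` attribute, so with an illustrative context value:

```asciidoc
:context: installing-ibm-z

[id="minimum-ibm-z-system-requirements_{context}"]
= Minimum IBM Z system environment
```

the section anchor resolves to `minimum-ibm-z-system-requirements_installing-ibm-z`, while the restricted-network assembly produces a distinct anchor from its own `:context:` value.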
modules/minimum-ibm-power-system-requirements.adoc

Lines changed: 45 additions & 0 deletions

@@ -0,0 +1,45 @@
+// Module included in the following assemblies:
+//
+// * installing/installing_ibm_power/installing-ibm-power.adoc
+// * installing/installing_ibm_power/installing-restricted-networks-ibm-power.adoc
+
+[id="minimum-ibm-power-system-requirements_{context}"]
+= Minimum IBM Power Systems requirements
+
+You can install {product-title} version {product-version} on the following IBM hardware:
+
+* IBM POWER8 or POWER9 processor-based systems
+
+[discrete]
+== Hardware requirements
+
+* 6 IBM Power bare metal servers or 6 LPARs across multiple PowerVM servers
+
+[discrete]
+== Operating system requirements
+
+* One instance of an IBM POWER8 or POWER9 processor-based system
+
+On your IBM Power instance, set up:
+
+* 3 guest virtual machines for {product-title} control plane machines
+* 2 guest virtual machines for {product-title} compute machines
+* 1 guest virtual machine for the temporary {product-title} bootstrap machine
+
+[discrete]
+== Disk storage for the IBM Power guest virtual machines
+
+* Storage provisioned by the Virtual I/O Server using vSCSI, NPIV (N-Port ID Virtualization) or SSP (shared storage pools)
+
+[discrete]
+== Network for the PowerVM guest virtual machines
+
+* Virtualized by the Virtual I/O Server using Shared Ethernet Adapter
+* Virtualized by the Virtual I/O Server using IBM vNIC
+
+[discrete]
+== Storage / main memory
+
+* 100 GB / 16 GB for {product-title} control plane machines
+* 100 GB / 8 GB for {product-title} compute machines
+* 100 GB / 16 GB for the temporary {product-title} bootstrap machine
modules/minimum-ibm-z-system-requirements.adoc

Lines changed: 60 additions & 0 deletions

@@ -0,0 +1,60 @@
+// Module included in the following assemblies:
+//
+// * installing/installing_ibm_z/installing-ibm-z.adoc
+// * installing/installing_ibm_z/installing-restricted-networks-ibm-z.adoc
+
+[id="minimum-ibm-z-system-requirements_{context}"]
+= Minimum IBM Z system environment
+
+You can install {product-title} version {product-version} on the following IBM hardware:
+
+* IBM z15 (all models), IBM z14 (all models), IBM z13, and IBM z13s
+* LinuxONE, any version
+
+[discrete]
+== Hardware requirements
+
+* The equivalent of 6 IFLs, which are SMT2 enabled, for each cluster.
+* At least one network connection to both connect to the `LoadBalancer` service and to serve data for traffic outside the cluster.
+
+[NOTE]
+====
+You can use dedicated or shared IFLs to assign sufficient compute resources. Resource sharing is one of the key strengths of IBM Z. However, you must adjust capacity correctly on each hypervisor layer and ensure sufficient resources for every {product-title} cluster.
+====
+
+[IMPORTANT]
+====
+Since the overall performance of the cluster can be impacted, the LPARs that are used to set up the {product-title} clusters must provide sufficient compute capacity. In this context, LPAR weight management, entitlements, and CPU shares on the hypervisor level play an important role.
+====
+
+[discrete]
+== Operating system requirements
+
+* One instance of z/VM 7.1 or later
+
+On your z/VM instance, set up:
+
+* 3 guest virtual machines for {product-title} control plane machines
+* 2 guest virtual machines for {product-title} compute machines
+* 1 guest virtual machine for the temporary {product-title} bootstrap machine
+
+[discrete]
+== IBM Z network connectivity requirements
+
+To install on IBM Z under z/VM, you require a single z/VM virtual NIC in layer 2 mode. You also need:
+
+* A direct-attached OSA or RoCE network adapter
+* A z/VM VSwitch set up. For a preferred setup, use OSA link aggregation.
+
+[discrete]
+=== Disk storage for the z/VM guest virtual machines
+
+* FICON attached disk storage (DASDs). These can be z/VM minidisks, fullpack minidisks, or dedicated DASDs, all of which must be formatted as CDL, which is the default. To reach the minimum required DASD size for {op-system-first} installations, you need extended address volumes (EAV). If available, use HyperPAV to ensure optimal performance.
+* FCP attached disk storage
+
+[discrete]
+=== Storage / Main Memory
+
+* 16 GB for {product-title} control plane machines
+* 8 GB for {product-title} compute machines
+* 16 GB for the temporary {product-title} bootstrap machine
modules/preferred-ibm-z-system-requirements.adoc

Lines changed: 47 additions & 0 deletions

@@ -0,0 +1,47 @@
+// Module included in the following assemblies:
+//
+// * installing/installing_ibm_z/installing-ibm-z.adoc
+// * installing/installing_ibm_z/installing-restricted-networks-ibm-z.adoc
+
+[id="preferred-ibm-z-system-requirements_{context}"]
+= Preferred IBM Z system environment
+
+[discrete]
+== Hardware requirements
+
+* 3 LPARs that each have the equivalent of 6 IFLs, which are SMT2 enabled, for each cluster.
+* Two network connections to both connect to the `LoadBalancer` service and to serve data for traffic outside the cluster.
+* HiperSockets, which are attached to a node either directly as a device or by bridging with one z/VM VSWITCH to be transparent to the z/VM guest. To directly connect HiperSockets to a node, you must set up a gateway to the external network via a {op-system-base} 8 guest to bridge to the HiperSockets network.
+
+[discrete]
+== Operating system requirements
+
+* 2 or 3 instances of z/VM 7.1 or later for high availability
+
+On your z/VM instances, set up:
+
+* 3 guest virtual machines for {product-title} control plane machines, one per z/VM instance.
+* At least 6 guest virtual machines for {product-title} compute machines, distributed across the z/VM instances.
+* 1 guest virtual machine for the temporary {product-title} bootstrap machine.
+* To ensure the availability of integral components in an overcommitted environment, increase the priority of the control plane by using the CP command `SET SHARE`. Do the same for infrastructure nodes, if they exist. See link:https://www.ibm.com/docs/en/zvm/7.1?topic=commands-set-share[SET SHARE] in IBM Documentation.
+
+[discrete]
+== IBM Z network connectivity requirements
+
+To install on IBM Z under z/VM, you require a single z/VM virtual NIC in layer 2 mode. You also need:
+
+* A direct-attached OSA or RoCE network adapter
+* A z/VM VSwitch set up. For a preferred setup, use OSA link aggregation.
+
+[discrete]
+=== Disk storage for the z/VM guest virtual machines
+
+* FICON attached disk storage (DASDs). These can be z/VM minidisks, fullpack minidisks, or dedicated DASDs, all of which must be formatted as CDL, which is the default. To reach the minimum required DASD size for {op-system-first} installations, you need extended address volumes (EAV). If available, use HyperPAV and High Performance FICON (zHPF) to ensure optimal performance.
+* FCP attached disk storage
+
+[discrete]
+=== Storage / Main Memory
+
+* 16 GB for {product-title} control plane machines
+* 8 GB for {product-title} compute machines
+* 16 GB for the temporary {product-title} bootstrap machine
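The `SET SHARE` recommendation in this module could be made concrete with a CP command listing of the following shape; this is a sketch only, the guest user ID is hypothetical, and the exact operands should be verified against the linked IBM documentation:

```asciidoc
[source]
----
CP SET SHARE OCPMST1 RELATIVE 200
----
```

A relative share of 200 gives the control plane guest twice the weight of a guest at the default relative share of 100 when the hypervisor is resource constrained.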
