-* See link:https://www.ibm.com/support/knowledgecenter/en/SSB27U_7.1.0/com.ibm.zvm.v710.hcpa6/bhslzvs.htm[Bridging a HiperSockets LAN with a z/VM Virtual Switch] in the IBM Knowledge Center.
+* See link:https://www.ibm.com/docs/en/zvm/7.1?topic=networks-bridging-hipersockets-lan-zvm-virtual-switch[Bridging a HiperSockets LAN with a z/VM Virtual Switch] in IBM Documentation.

 * See link:http://public.dhe.ibm.com/software/dw/linux390/perf/zvm_hpav00.pdf[Scaling HyperPAV alias devices on Linux guests on z/VM] for performance optimization.
modules/installation-three-node-cluster.adoc (1 addition, 1 deletion)
@@ -58,7 +58,7 @@ ifdef::ibm-z,ibm-z-kvm[]
 +
 [NOTE]
 ====
-The minimum for control plane nodes is 21 GB. For three control plane nodes this is the memory equivalent of a minimum fivenode cluster.
+The preferred resource for control plane nodes is six vCPUs and 21 GB. For three control plane nodes this is the memory + vCPU equivalent of a minimum five-node cluster. You should back the three nodes, each installed on a 120 GB disk, with three IFLs that are SMT2 enabled. The minimum tested setup is three vCPUs and 10 GB on a 120 GB disk for each control plane node.
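For context on where these sizing numbers apply, a minimal sketch of how a three-node cluster is typically declared in `install-config.yaml`: compute replicas are set to 0 and control plane replicas to 3, so the three control plane nodes described above also run workloads. The base domain and cluster name below are placeholders, not values taken from this change.

```yaml
apiVersion: v1
baseDomain: example.com        # placeholder base domain
compute:
- name: worker
  replicas: 0                  # no dedicated compute nodes in a three-node cluster
controlPlane:
  name: master
  replicas: 3                  # the three control plane nodes sized in the note above
metadata:
  name: example-cluster        # placeholder cluster name
```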