modules/installation-complete-user-infra.adoc
6 additions & 1 deletion
@@ -146,11 +146,16 @@ command.
If the pod logs display, the Kubernetes API server can communicate with the cluster machines.
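
As an illustrative sketch only (the `<pod_name>` and `<namespace>` placeholders are assumptions, not values shown in this module), checking the logs of one pod returned by the previous command might look like this:

----
$ oc logs <pod_name> -n <namespace>
----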

ifndef::ibm-power[]
. For an installation with Fibre Channel Protocol (FCP), additional steps are required to enable multipathing. Do not enable multipathing during installation.
+
See "Enabling multipathing with kernel arguments on RHCOS" in the _Post-installation configuration_ documentation for more information.
endif::ibm-power[]
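
As a hedged orientation sketch, the post-installation procedure referenced above typically comes down to appending multipath kernel arguments such as the following to the affected nodes. Treat the exact arguments as an assumption and follow "Enabling multipathing with kernel arguments on RHCOS" for the authoritative steps:

----
rd.multipath=default root=/dev/disk/by-label/dm-mpath-root
----
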
ifdef::ibm-power[]
. Additional steps are required to enable multipathing. Do not enable multipathing during installation.
+
See "Enabling multipathing with kernel arguments on RHCOS" in the _Post-installation configuration_ documentation for more information.

.. To display a boot list and specify the possible boot devices if the system is booted in normal mode, enter the following command:
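
The command itself is not shown in this excerpt. As an assumption based on standard IBM Power firmware tooling, displaying the boot list in normal mode typically uses:

----
$ bootlist -m normal -o
----
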
= Installing {op-system} by using PXE or iPXE booting
endif::only-pxe[]
ifdef::only-pxe[]
= Installing {op-system} by using PXE booting
endif::only-pxe[]

ifndef::only-pxe[]
You can use PXE or iPXE booting to install {op-system} on the machines.
endif::only-pxe[]
ifdef::only-pxe[]
You can use PXE booting to install {op-system} on the machines.
endif::only-pxe[]

.Prerequisites

* You have created the Ignition config files for your cluster.
* You have configured suitable network, DNS, and load balancing infrastructure.
ifndef::only-pxe[]
* You have configured suitable PXE or iPXE infrastructure.
endif::only-pxe[]
ifdef::only-pxe[]
* You have configured suitable PXE infrastructure.
endif::only-pxe[]
* You have an HTTP server that can be accessed from your computer, and from the machines that you create. A quick way to verify this prerequisite is sketched after this list.
* You have reviewed the _Advanced {op-system} installation configuration_ section for different ways to configure features, such as networking and disk partitioning.
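
As the quick check referenced above (a sketch only; `<HTTP_server>` and the file name are placeholder assumptions, not values from this module), you can request one of the hosted files from a machine on the same network:

----
$ curl -I http://<HTTP_server>/bootstrap.ign
----
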
@@ -92,12 +116,24 @@ installation, do not delete these files.
. Configure the network boot infrastructure so that the machines boot from their local disks after {op-system} is installed on them.

ifndef::only-pxe[]
. Configure PXE or iPXE installation for the {op-system} images and begin the installation.
endif::only-pxe[]
ifdef::only-pxe[]
. Configure PXE installation for the {op-system} images and begin the installation.
endif::only-pxe[]
+
ifndef::only-pxe[]
Modify one of the following example menu entries for your environment and verify that the image and Ignition files are properly accessible:
endif::only-pxe[]

ifdef::only-pxe[]
Modify the following example menu entry for your environment and verify that the image and Ignition files are properly accessible:
endif::only-pxe[]
ifndef::only-pxe[]
** For PXE:
endif::only-pxe[]
+
----
DEFAULT pxeboot
@@ -125,6 +161,7 @@ or other boot options.
This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more `console=` arguments to the `APPEND` line. For example, add `console=tty0 console=ttyS0` to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see link:https://access.redhat.com/articles/7212[How does one set up a serial terminal and/or console in Red Hat Enterprise Linux?].
====
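
For illustration only, an `APPEND` line with the serial console arguments added might look like the following. The URLs, install device, and file names are placeholder assumptions in the style of the surrounding example, not values taken from this module:

----
APPEND initrd=http://<HTTP_server>/rhcos-<version>-live-initramfs.<architecture>.img coreos.inst.install_dev=/dev/sda coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign console=tty0 console=ttyS0
----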

ifndef::only-pxe[]
** For iPXE:
+
----
@@ -146,6 +183,7 @@ For example, to use DHCP on a NIC that is named `eno1`, set `ip=eno1:dhcp`.
====
This configuration does not enable serial console access on machines with a graphical console. To configure a different console, add one or more `console=` arguments to the `kernel` line. For example, add `console=tty0 console=ttyS0` to set the first PC serial port as the primary console and the graphical console as a secondary console. For more information, see link:https://access.redhat.com/articles/7212[How does one set up a serial terminal and/or console in Red Hat Enterprise Linux?].
====
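
Similarly, for illustration only, a `kernel` line with the console arguments appended might look like this; the URL and file names are placeholder assumptions:

----
kernel http://<HTTP_server>/rhcos-<version>-live-kernel-<architecture> initrd=main coreos.inst.ignition_url=http://<HTTP_server>/bootstrap.ign console=tty0 console=ttyS0
----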
endif::only-pxe[]

. Monitor the progress of the {op-system} installation on the console of the machine.
+
@@ -171,3 +209,12 @@ If the required network, DNS, and load balancer infrastructure are in place, the
====
{op-system} nodes do not include a default password for the `core` user. You can access the nodes by running `ssh core@<node>.<cluster_name>.<base_domain>` as a user with access to the SSH private key that is paired with the public key that you specified in your `install-config.yaml` file. {product-title} 4 cluster nodes running {op-system} are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, when investigating installation issues, if the {product-title} API is not available, or the kubelet is not properly functioning on a target node, SSH access might be required for debugging or disaster recovery.
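
For example, a sketch of connecting with an explicit private key; the key path is a placeholder assumption:

----
$ ssh -i <path_to_ssh_private_key> core@<node>.<cluster_name>.<base_domain>
----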
modules/openshift-cluster-maximums-environment.adoc
44 additions & 0 deletions
@@ -56,3 +56,47 @@ AWS cloud platform:
4. Larger disk size is used so that there is enough space to store the large amounts of data that is collected during the performance and scalability test run.
5. Cluster is scaled in iterations and performance and scalability tests are executed at the specified node counts.
1. io1 disks with 120 / 3 IOPS per GB are used for master/etcd nodes as etcd is I/O intensive and latency sensitive.
2. Infra nodes are used to host Monitoring, Ingress, and Registry components to ensure they have enough resources to run at large scale.
3. Workload node is dedicated to run performance and scalability workload generators.
4. Larger disk size is used so that there is enough space to store the large amounts of data that is collected during the performance and scalability test run.
5. Cluster is scaled in iterations and performance and scalability tests are executed at the specified node counts.