Most vendors support Baseboard Management Controller (BMC) addressing with the Intelligent Platform Management Interface (IPMI). IPMI does not encrypt communications. It is suitable for use within a data center over a secured or dedicated management network. Check with your vendor to see if they support Redfish network boot. Redfish delivers simple and secure management for converged, hybrid IT and the Software Defined Data Center (SDDC). Redfish is human readable and machine capable, and leverages common internet and web services standards to expose information directly to the modern tool chain. If your hardware does not support Redfish network boot, use IPMI.
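
The choice of BMC protocol determines the `bmc.address` format in the host entries of the `install-config.yaml` file. The following snippet is a minimal sketch only; the host name, IP address, credentials, and Redfish system path are placeholder assumptions rather than values from this procedure:

[source,yaml]
----
platform:
  baremetal:
    hosts:
    - name: openshift-worker-0
      role: worker
      bmc:
        # IPMI addressing; unencrypted, so keep it on a secured or dedicated management network
        address: ipmi://192.168.10.15
        # Redfish alternative for hardware that supports Redfish network boot:
        # address: redfish://192.168.10.15/redfish/v1/Systems/1
        # disableCertificateVerification: True
        username: admin
        password: <password>
----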
The following procedure configures the BIOS for a worker node during the installation process.

.Procedure
. Create the manifests.
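+
For example, you can generate the manifests with the bare-metal installer binary. This is a sketch only; the binary location and the `clusterconfigs` directory are assumptions based on a typical installer-provisioned setup:
+
[source,terminal]
----
$ ./openshift-baremetal-install --dir ~/clusterconfigs create manifests
----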
. Modify the BMH file corresponding to the worker:
+
[source,terminal]
----
$ vim clusterconfigs/openshift/99_openshift-cluster-api_hosts-3.yaml
----

. Add the BIOS configuration to the `spec` section of the BMH file:
+
[source,yaml]
----
spec:
  firmware:
    simultaneousMultithreadingEnabled: true
    sriovEnabled: true
    virtualizationEnabled: true
----
+
[NOTE]
====
Red Hat supports three BIOS configurations. See the link:https://github.com/openshift/baremetal-operator/blob/master/docs/api.md#firmware[BMH documentation] for details. Only servers with BMC type `irmc` are supported. Other types of servers are currently not supported.
====
A mismatch between nodes will cause an installation failure.

While the cluster nodes can contain more than two NICs, the installation process only focuses on the first two NICs. In the following table, NIC1 is a non-routable network (`provisioning`) that is only used for the installation of the {product-title} cluster.
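
For example, the `provisioning` NIC (NIC1) is the interface that the installer references in the `install-config.yaml` file through the provisioning network interface name and each host's boot MAC address. The following snippet is a sketch only; the interface name and MAC address are placeholder assumptions:

[source,yaml]
----
platform:
  baremetal:
    provisioningNetworkInterface: enp1s0
    hosts:
    - name: openshift-worker-0
      role: worker
      bootMACAddress: 52:54:00:00:00:31
----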
The following procedure configures RAID (Redundant Array of Independent Disks) for the worker node during the installation process.

[NOTE]
====
. Only nodes with BMC (Baseboard Management Controller) type `irmc` are supported. Other types of nodes are currently not supported.
. If you want to configure hardware RAID for the node, make sure the node has a RAID controller.
====

.Procedure
. Create the manifests.

. Modify the BMH (Bare Metal Host) file corresponding to the worker:
+
[source,terminal]
----
$ vim clusterconfigs/openshift/99_openshift-cluster-api_hosts-3.yaml
----
+
[NOTE]
====
Because nodes with BMC type `irmc` do not support software RAID, the following RAID configuration uses hardware RAID as an example.
====
+
.. If you add a specific RAID configuration to the `spec` section, the worker node deletes the original RAID configuration during the `preparing` phase and applies the specified configuration. For example:
+
[source,yaml]
----
spec:
  raid:
    hardwareRAIDVolumes:
    - level: "0" <1>
      name: "sda"
      numberOfPhysicalDisks: 1
      rotational: true
      sizeGibibytes: 0
----
<1> `level` is a required field, and the others are optional fields.
+
.. If you add an empty RAID configuration to the `spec` section, the empty configuration causes the worker node to delete the original RAID configuration during the `preparing` phase, but it does not apply a new configuration. For example:
+
[source,yaml]
----
spec:
  raid:
    hardwareRAIDVolumes: []
----
+
.. If you do not add a `raid` field in the `spec` section, the original RAID configuration is not deleted, and no new configuration is performed.
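
As an optional check after the cluster deployment completes, you can confirm that the worker host passed through the `preparing` phase and reached the `provisioned` state. This is a sketch only; it assumes the default `openshift-machine-api` namespace used by installer-provisioned deployments:

[source,terminal]
----
$ oc get bmh -n openshift-machine-api
----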