Commit c418084

Merge pull request #61800 from johnwilkins/TELCODOCS-609
TELCODOCS-609: D/S Docs & RN: TELCODOCS-579 Remote worker node updates
2 parents 1dee313 + 3661d9e commit c418084

6 files changed: +251 -2 lines changed

installing/installing_aws/installing-aws-expanding-a-cluster-with-on-premise-bare-metal-nodes.adoc

Lines changed: 2 additions & 2 deletions
@@ -1,8 +1,8 @@
 :_content-type: ASSEMBLY
-[id="expanding-a-cluster-with-on-premise-bare-metal-nodes"]
+[id="aws-expanding-a-cluster-with-on-premise-bare-metal-nodes"]
 = Expanding a cluster with on-premise bare metal nodes
 include::_attributes/common-attributes.adoc[]
-:context: expanding-a-cluster-with-on-premise-bare-metal-nodes
+:context: aws-expanding-a-cluster-with-on-premise-bare-metal-nodes
 
 toc::[]

installing/installing_bare_metal_ipi/ipi-install-installation-workflow.adoc

Lines changed: 4 additions & 0 deletions
@@ -12,6 +12,8 @@ include::modules/ipi-install-preparing-the-provisioner-node-for-openshift-instal
 
 include::modules/ipi-install-configuring-networking.adoc[leveloffset=+1]
 
+include::modules/ipi-install-establishing-communication-between-subnets.adoc[leveloffset=+1]
+
 include::modules/ipi-install-retrieving-the-openshift-installer.adoc[leveloffset=+1]
 
 include::modules/ipi-install-extracting-the-openshift-installer.adoc[leveloffset=+1]
@@ -44,6 +46,8 @@ include::modules/ipi-install-modifying-install-config-for-dual-stack-network.ado
 
 include::modules/ipi-install-configuring-host-network-interfaces-in-the-install-config.yaml-file.adoc[leveloffset=+2]
 
+include::modules/ipi-install-configuring-host-network-interfaces-for-subnets.adoc[leveloffset=+2]
+
 include::modules/ipi-install-configuring-host-dual-network-interfaces-in-the-install-config.yaml-file.adoc[leveloffset=+2]
 
 [role="_additional-resources"]

modules/ipi-install-configuring-host-network-interfaces-for-subnets.adoc

Lines changed: 59 additions & 0 deletions

@@ -0,0 +1,59 @@
// This module is included in the following assemblies:
//
// installing/installing_bare_metal_ipi/ipi-install-installation-workflow.adoc

:_content-type: PROCEDURE
[id="ipi-install-configuring-host-network-interfaces-for-subnets_{context}"]
= Configuring host network interfaces for subnets

For edge computing scenarios, it can be beneficial to locate worker nodes closer to the edge. To do so, you can use different network segments or subnets for the remote worker nodes than you used for the control plane subnet and local worker nodes. Setting up subnets for edge computing scenarios can reduce latency for the edge and allow for enhanced scalability.

If you have established different network segments or subnets for remote worker nodes as described in the section on "Establishing communication between subnets", you must specify the subnets in the `machineNetwork` configuration setting if the workers are using static IP addresses, bonds, or other advanced networking. When setting the node IP address in the `networkConfig` parameter for each remote worker node, you must also specify the gateway and the DNS server for the subnet containing the control plane nodes when using static IP addresses. This ensures that the remote worker nodes can reach the subnet containing the control plane nodes and that they can receive network traffic from the control plane.

[IMPORTANT]
====
All control plane nodes must run in the same subnet. When using more than one subnet, you can also configure the Ingress VIP to run on the control plane nodes by using a manifest. See "Configuring network components to run on the control plane" for details.

Deploying a cluster with multiple subnets requires using virtual media, such as `redfish-virtualmedia` and `idrac-virtualmedia`.
====

.Procedure

. Add the subnets to the `machineNetwork` in the `install-config.yaml` file when using static IP addresses:
+
[source,yaml]
----
networking:
  machineNetwork:
  - cidr: 10.0.0.0/24
  - cidr: 192.168.0.0/24
  networkType: OVNKubernetes
----

. Add the gateway and DNS configuration to the `networkConfig` parameter of each edge worker node by using NMState syntax when using a static IP address or advanced networking such as bonds:
+
[source,yaml]
----
networkConfig:
  nmstate:
    interfaces:
    - name: <interface_name> <1>
      type: ethernet
      state: up
      ipv4:
        enabled: true
        dhcp: false
        address:
        - ip: <node_ip> <2>
          prefix-length: 24
        gateway: <gateway_ip> <3>
        dns-resolver:
          config:
            server:
            - <dns_ip> <4>
----
+
<1> Replace `<interface_name>` with the interface name.
<2> Replace `<node_ip>` with the IP address of the node.
<3> Replace `<gateway_ip>` with the IP address of the gateway.
<4> Replace `<dns_ip>` with the IP address of the DNS server.
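
For orientation, the `networkConfig` parameter belongs to a host entry in the `install-config.yaml` file. The following is a minimal sketch of that placement, assuming a hypothetical edge worker host; the host name, BMC address, and MAC address are illustrative placeholders, not values from this change:

[source,yaml]
----
platform:
  baremetal:
    hosts:
    - name: edge-worker-0  # hypothetical remote worker
      role: worker
      bmc:
        address: redfish-virtualmedia://192.168.0.50/redfish/v1/Systems/1
        username: admin
        password: <password>
      bootMACAddress: 52:54:00:00:00:11
      networkConfig:  # NMState settings from the previous step go here
        nmstate:
          interfaces:
          - name: <interface_name>
            type: ethernet
            state: up
----
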
modules/ipi-install-establishing-communication-between-subnets.adoc

Lines changed: 155 additions & 0 deletions

@@ -0,0 +1,155 @@
// This module is included in the following assemblies:
//
// installing/installing_bare_metal_ipi/ipi-install-installation-workflow.adoc

:_content-type: PROCEDURE
[id="ipi-install-establishing-communication-between-subnets_{context}"]
= Establishing communication between subnets

In a typical {product-title} cluster setup, all nodes, including the control plane and worker nodes, reside in the same network. However, for edge computing scenarios, it can be beneficial to locate worker nodes closer to the edge. This often involves using different network segments or subnets for the remote worker nodes than the subnet used by the control plane and local worker nodes. Such a setup can reduce latency for the edge and allow for enhanced scalability. However, you must configure the network properly before installing {product-title} to ensure that the edge subnets containing the remote worker nodes can reach the subnet containing the control plane nodes and receive network traffic from the control plane.

[IMPORTANT]
====
All control plane nodes must run in the same subnet. When using more than one subnet, you can also configure the Ingress VIP to run on the control plane nodes by using a manifest. See "Configuring network components to run on the control plane" for details.

Deploying a cluster with multiple subnets requires using virtual media.
====
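
The section referenced in the note contains the authoritative manifest. As a rough sketch of the idea only, and not the exact manifest from that procedure, pinning the default Ingress Controller to the control plane nodes might look like the following:

[source,yaml]
----
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  nodePlacement:
    nodeSelector:  # schedule the router pods on control plane nodes only
      matchLabels:
        node-role.kubernetes.io/master: ""
    tolerations:   # tolerate the control plane taint
    - key: node-role.kubernetes.io/master
      effect: NoSchedule
      operator: Exists
----
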
This procedure details the network configuration required so that the remote worker nodes in the second subnet and the control plane nodes in the first subnet can communicate effectively with each other.

In this procedure, the cluster spans two subnets:

- The first subnet (`10.0.0.0`) contains the control plane and local worker nodes.
- The second subnet (`192.168.0.0`) contains the edge worker nodes.

.Procedure

. Configure the first subnet to communicate with the second subnet:

.. Log in as `root` to a control plane node by running the following command:
+
[source,terminal]
----
$ sudo su -
----

.. Get the name of the network interface:
+
[source,terminal]
----
# nmcli dev status
----

.. Add a route to the second subnet (`192.168.0.0`) via the gateway:
+
[source,terminal]
----
# nmcli connection modify <interface_name> +ipv4.routes "192.168.0.0/24 via <gateway>"
----
+
Replace `<interface_name>` with the interface name. Replace `<gateway>` with the IP address of the gateway on the first subnet that routes to the second subnet.
+
.Example
+
[source,terminal]
----
# nmcli connection modify eth0 +ipv4.routes "192.168.0.0/24 via 10.0.0.1"
----

.. Apply the changes:
+
[source,terminal]
----
# nmcli connection up <interface_name>
----
+
Replace `<interface_name>` with the interface name.

.. Verify the routing table to ensure the route has been added successfully:
+
[source,terminal]
----
# ip route
----

.. Repeat the previous steps for each control plane node in the first subnet.
+
[NOTE]
====
Adjust the commands to match your actual interface names and gateway.
====

. Configure the second subnet to communicate with the first subnet:

.. Log in as `root` to a remote worker node:
+
[source,terminal]
----
$ sudo su -
----

.. Get the name of the network interface:
+
[source,terminal]
----
# nmcli dev status
----

.. Add a route to the first subnet (`10.0.0.0`) via the gateway:
+
[source,terminal]
----
# nmcli connection modify <interface_name> +ipv4.routes "10.0.0.0/24 via <gateway>"
----
+
Replace `<interface_name>` with the interface name. Replace `<gateway>` with the IP address of the gateway on the second subnet that routes to the first subnet.
+
.Example
+
[source,terminal]
----
# nmcli connection modify eth0 +ipv4.routes "10.0.0.0/24 via 192.168.0.1"
----

.. Apply the changes:
+
[source,terminal]
----
# nmcli connection up <interface_name>
----
+
Replace `<interface_name>` with the interface name.

.. Verify the routing table to ensure the route has been added successfully:
+
[source,terminal]
----
# ip route
----

.. Repeat the previous steps for each worker node in the second subnet.
+
[NOTE]
====
Adjust the commands to match your actual interface names and gateway.
====

. After you configure the networks, test the connectivity to ensure the remote worker nodes can reach the control plane nodes and the control plane nodes can reach the remote worker nodes.

.. From the control plane nodes in the first subnet, ping a remote worker node in the second subnet:
+
[source,terminal]
----
$ ping <remote_worker_node_ip_address>
----
+
If the ping is successful, the control plane nodes in the first subnet can reach the remote worker nodes in the second subnet. If you do not receive a response, review the network configurations and repeat the procedure for the node.

.. From the remote worker nodes in the second subnet, ping a control plane node in the first subnet:
+
[source,terminal]
----
$ ping <control_plane_node_ip_address>
----
+
If the ping is successful, the remote worker nodes in the second subnet can reach the control plane in the first subnet. If you do not receive a response, review the network configurations and repeat the procedure for the node.

modules/nodes-rwn_con_adding-remote-worker-nodes.adoc

Lines changed: 19 additions & 0 deletions

@@ -0,0 +1,19 @@
// This module is included in the following assemblies:
//
// nodes/edge/nodes-edge-remote-workers.adoc

:_content-type: CONCEPT
[id="nodes-rwn_con_adding-remote-worker-nodes_{context}"]
= Adding remote worker nodes

Adding remote worker nodes to a cluster involves some additional considerations.

* You must ensure that a route or a default gateway is in place to route traffic between the control plane and every remote worker node.

* You must place the Ingress VIP on the control plane.

* Adding remote worker nodes with user-provisioned infrastructure is identical to adding other worker nodes.

* To add remote worker nodes to an installer-provisioned cluster at install time, specify the subnet for each worker node in the `install-config.yaml` file before installation, as shown in the sketch after this list. There are no additional settings required for the DHCP server. You must use virtual media, because the remote worker nodes will not have access to the local provisioning network.

* To add remote worker nodes to an installer-provisioned cluster deployed with a provisioning network, ensure that the `virtualMediaViaExternalNetwork` flag is set to `true` in the `install-config.yaml` file so that the installer adds the nodes by using virtual media. Remote worker nodes will not have access to the local provisioning network. They must be deployed with virtual media rather than PXE. Additionally, specify each subnet for each group of remote worker nodes and the control plane nodes in the DHCP server.
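
The following is a minimal `install-config.yaml` sketch of these settings, assuming a hypothetical cluster with the control plane on `10.0.0.0/24` and the remote workers on `192.168.0.0/24`; the values are illustrative placeholders, not values from this change:

[source,yaml]
----
networking:
  machineNetwork:
  - cidr: 10.0.0.0/24     # control plane and local worker subnet
  - cidr: 192.168.0.0/24  # remote worker subnet
  networkType: OVNKubernetes
platform:
  baremetal:
    virtualMediaViaExternalNetwork: true  # add the remote workers by using virtual media
----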

nodes/edge/nodes-edge-remote-workers.adoc

Lines changed: 12 additions & 0 deletions
@@ -29,6 +29,17 @@ Note the following limitations when planning a cluster with remote worker nodes:
 
 * You are responsible for configuring and maintaining L2/L3-level network connectivity between the control plane and the network-edge nodes.
 
+include::modules/nodes-rwn_con_adding-remote-worker-nodes.adoc[leveloffset=+1]
+
+.Additional resources
+
+* xref:../../installing/installing_bare_metal_ipi/ipi-install-installation-workflow.adoc#ipi-install-establishing-communication-between-subnets_ipi-install-installation-workflow[Establishing communication between subnets]
+
+* xref:../../installing/installing_bare_metal_ipi/ipi-install-installation-workflow.adoc#ipi-install-configuring-host-network-interfaces-for-subnets_ipi-install-installation-workflow[Configuring host network interfaces for subnets]
+
+* xref:../../installing/installing_bare_metal_ipi/ipi-install-installation-workflow.adoc#configure-network-components-to-run-on-the-control-plane_ipi-install-installation-workflow[Configuring network components to run on the control plane]
+
+
 include::modules/nodes-edge-remote-workers-network.adoc[leveloffset=+1]
 
 For more information on using these objects in a cluster with remote worker nodes, see xref:../../nodes/edge/nodes-edge-remote-workers.html#nodes-edge-remote-workers-strategies_nodes-edge-remote-workers[About remote worker node strategies].
@@ -64,3 +75,4 @@ include::modules/nodes-edge-remote-workers-strategies.adoc[leveloffset=+1]
 * For more information on replication controllers, see xref:../../applications/deployments/what-deployments-are.html#deployments-replicationcontrollers_what-deployments-are[Replication controllers].
 
 * For more information on the controller manager, see xref:../../operators/operator-reference.adoc#kube-controller-manager-operator_cluster-operators-ref[Kubernetes Controller Manager Operator].
+
