// This module is included in the following assemblies:
//
// installing/installing_bare_metal_ipi/ipi-install-installation-workflow.adoc

:_content-type: PROCEDURE
[id="ipi-install-establishing-communication-between-subnets_{context}"]
= Establishing communication between subnets

In a typical {product-title} cluster setup, all nodes, including the control plane and worker nodes, reside in the same network. However, for edge computing scenarios, it can be beneficial to locate worker nodes closer to the edge. This often involves using network segments or subnets for the remote worker nodes that differ from the subnet used by the control plane and local worker nodes. Such a setup can reduce latency at the edge and improve scalability. However, you must configure the network properly before installing {product-title} to ensure that the edge subnets containing the remote worker nodes can reach the subnet containing the control plane nodes and can receive traffic from the control plane in return.

[IMPORTANT]
====
All control plane nodes must run in the same subnet. When using more than one subnet, you can also configure the Ingress VIP to run on the control plane nodes by using a manifest. See "Configuring network components to run on the control plane" for details.

Deploying a cluster with multiple subnets requires using virtual media.
====
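
The following is a minimal sketch, under stated assumptions, of the kind of `IngressController` manifest that the "Configuring network components to run on the control plane" section describes: it pins the default Ingress Controller, and therefore the Ingress VIP, to the control plane nodes. Treat the file name, location, and contents given in that section as authoritative:

[source,yaml]
----
# Hedged sketch only: pins the default Ingress Controller to the
# control plane nodes. See "Configuring network components to run
# on the control plane" for the authoritative manifest.
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/master: ""
----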

This procedure details the network configuration that allows the remote worker nodes in the second subnet to communicate with the control plane nodes in the first subnet, and the control plane nodes in the first subnet to communicate with the remote worker nodes in the second subnet.

In this procedure, the cluster spans two subnets, as reflected in the sample `install-config.yaml` excerpt after this list:

- The first subnet (`10.0.0.0`) contains the control plane and local worker nodes.
- The second subnet (`192.168.0.0`) contains the edge worker nodes.

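
Because the cluster spans these subnets, a hedged sketch of the corresponding `networking` stanza in `install-config.yaml` might look like the following, assuming both subnets belong to the machine network; verify the exact fields against the installer documentation for your version:

[source,yaml]
----
# Hypothetical install-config.yaml excerpt: both subnets appear in
# machineNetwork so that node addresses in either subnet are valid.
networking:
  machineNetwork:
  - cidr: 10.0.0.0/24
  - cidr: 192.168.0.0/24
----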

.Procedure

. Configure the first subnet to communicate with the second subnet:

.. Log in as `root` to a control plane node by running the following command:
+
[source,terminal]
----
$ sudo su -
----

.. Get the name of the network interface:
+
[source,terminal]
----
# nmcli dev status
----
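+
The output lists each device with its state and the connection profile it uses, similar to the following. The device names here are examples only:
+
[source,terminal]
----
DEVICE  TYPE      STATE      CONNECTION
eth0    ethernet  connected  eth0
lo      loopback  unmanaged  --
----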

.. Add a route to the second subnet (`192.168.0.0`) via the gateway:
+
[source,terminal]
----
# nmcli connection modify <interface_name> +ipv4.routes "192.168.0.0/24 via <gateway>"
----
+
Replace `<interface_name>` with the interface name. Replace `<gateway>` with the IP address of the gateway on the first subnet, which the node can reach directly.
+
.Example
[source,terminal]
----
# nmcli connection modify eth0 +ipv4.routes "192.168.0.0/24 via 10.0.0.1"
----
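+
Because `+ipv4.routes` modifies the connection profile, the route persists across reboots. Optionally, confirm that the profile recorded the route. The `eth0` connection name is an example only:
+
[source,terminal]
----
# nmcli -g ipv4.routes connection show eth0
----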

.. Apply the changes:
+
[source,terminal]
----
# nmcli connection up <interface_name>
----
+
Replace `<interface_name>` with the interface name.

.. Verify the routing table to ensure the route has been added successfully:
+
[source,terminal]
----
# ip route
----
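+
With the example values used in this procedure, the output includes a line similar to the following:
+
[source,terminal]
----
192.168.0.0/24 via 10.0.0.1 dev eth0 proto static metric 100
----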

.. Repeat the previous steps for each control plane node in the first subnet.
+
[NOTE]
====
Adjust the commands to match your actual interface names and gateway.
====

. Configure the second subnet to communicate with the first subnet:

.. Log in as `root` to a remote worker node:
+
[source,terminal]
----
$ sudo su -
----

.. Get the name of the network interface:
+
[source,terminal]
----
# nmcli dev status
----

.. Add a route to the first subnet (`10.0.0.0`) via the gateway:
+
[source,terminal]
----
# nmcli connection modify <interface_name> +ipv4.routes "10.0.0.0/24 via <gateway>"
----
+
Replace `<interface_name>` with the interface name. Replace `<gateway>` with the IP address of the gateway on the second subnet, which the node can reach directly.
+
.Example
[source,terminal]
----
# nmcli connection modify eth0 +ipv4.routes "10.0.0.0/24 via 192.168.0.1"
----

.. Apply the changes:
+
[source,terminal]
----
# nmcli connection up <interface_name>
----
+
Replace `<interface_name>` with the interface name.

.. Verify the routing table to ensure the route has been added successfully:
+
[source,terminal]
----
# ip route
----
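+
With the example values used in this procedure, the output includes a line similar to the following:
+
[source,terminal]
----
10.0.0.0/24 via 192.168.0.1 dev eth0 proto static metric 100
----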

.. Repeat the previous steps for each worker node in the second subnet.
+
[NOTE]
====
Adjust the commands to match your actual interface names and gateway.
====

. After you have configured the networks, test the connectivity to ensure the remote worker nodes can reach the control plane nodes and the control plane nodes can reach the remote worker nodes.

.. From the control plane nodes in the first subnet, ping a remote worker node in the second subnet:
+
[source,terminal]
----
$ ping <remote_worker_node_ip_address>
----
+
If the ping is successful, the control plane nodes in the first subnet can reach the remote worker nodes in the second subnet. If you do not receive a response, review the network configurations and repeat the procedure for the node.
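+
A successful ping returns replies similar to the following, where the address and timings are illustrative only:
+
[source,terminal]
----
64 bytes from 192.168.0.20: icmp_seq=1 ttl=63 time=1.34 ms
64 bytes from 192.168.0.20: icmp_seq=2 ttl=63 time=1.31 ms
----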

.. From the remote worker nodes in the second subnet, ping a control plane node in the first subnet:
+
[source,terminal]
----
$ ping <control_plane_node_ip_address>
----
+
If the ping is successful, the remote worker nodes in the second subnet can reach the control plane in the first subnet. If you do not receive a response, review the network configurations and repeat the procedure for the node.