k3s node IPs not necessarily correct for multi-homed hosts #543

@sjpb

Description

As per the k3s docs, a default route is required and is used to determine each node's primary IP. However, in some cases this means the node IPs are not all on the same network, e.g.

          NetA   NetB
           |      |
login -----x------x
           |      |
control----x------x
                  |
compute1----------x
                  |

where NetA has a default route, and NetB doesn't, with a dummy route set on compute1 via cloud-init using #539.

In this case the k3s server on the control node gets an InternalIP on NetA, while compute1's is on NetB.
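The mismatch can be seen in the node list. A sketch of the check, with hypothetical addresses standing in for NetA/NetB:

```shell
# Show each node's InternalIP as recorded by k3s; in the scenario above
# the INTERNAL-IP column mixes networks (addresses below are purely
# illustrative, not from the actual cluster):
kubectl get nodes -o wide
#   NAME       ...   INTERNAL-IP
#   control    ...   10.1.0.10    <- NetA (network with the default route)
#   compute1   ...   10.2.0.20    <- NetB
```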

The IP the nodes should use for the k3s server is set by templating out K3S_URL at boot via ansible-init. However, it turns out this is not sufficient in the above case: e.g. shelling into a container running on compute1 from k9s on the control node does not work.

In manual testing, setting --node-ip (available for both the server and agent subcommands) got this working.
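A minimal sketch of that manual fix, with hypothetical NetB addresses matching the diagram (the token placeholder is not filled in here):

```shell
# On the control node, pin the server's InternalIP to its NetB address:
k3s server --node-ip 10.2.0.10

# On compute1, point the agent at the server's NetB address and pin
# its own InternalIP likewise:
k3s agent --server https://10.2.0.10:6443 --token <node-token> --node-ip 10.2.0.20
```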

Although --node-ip isn't "natively" exposed as an environment variable, INSTALL_K3S_EXEC could be set to something like --node-ip $K3S_NODE_IP, and "K3S_NODE_IP=$ip" could then be templated out into an environment file via ansible-init (with the environment-file reference possibly added by a dropin, configured so the unit does not start until that file exists).
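One possible shape for that wiring, as an untested sketch: the env-file path, variable name, and dropin name are assumptions, and a scratch directory is used below so the fragment is runnable as-is (on a real node these files would live under /etc/systemd/system).

```shell
# Stand-in for /etc/systemd/system so this sketch can run anywhere:
root=$(mktemp -d)

# 1. ansible-init templates the node's chosen NetB IP (hypothetical
#    value) into an environment file:
printf 'K3S_NODE_IP=10.2.0.20\n' > "$root/k3s.service.env"

# 2. A dropin points the k3s unit at that file; ConditionPathExists
#    keeps the unit from starting before ansible-init has written it
#    (strictly, systemd skips rather than waits, so a retry/restart
#    would still be needed):
mkdir -p "$root/k3s.service.d"
cat > "$root/k3s.service.d/10-node-ip.conf" <<'EOF'
[Unit]
ConditionPathExists=/etc/systemd/system/k3s.service.env
[Service]
EnvironmentFile=/etc/systemd/system/k3s.service.env
EOF

# 3. At install time, INSTALL_K3S_EXEC bakes the flag into ExecStart,
#    and systemd expands the variable from the EnvironmentFile, e.g.:
#    INSTALL_K3S_EXEC='--node-ip ${K3S_NODE_IP}'
```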

The docs suggest that even when --node-ip is set, a default route is still required (emphasis added):

K3s requires a default route in order to auto-detect the node's primary IP, and for kube-proxy ClusterIP routing to function properly
