# Kube-router on generic clusters

This guide is for running kube-router as the [CNI](https://github.com/containernetworking) network provider for on-premise and/or bare-metal clusters outside of a cloud provider's environment. It assumes the initial cluster is bootstrapped and a networking provider needs configuration.

All pod networking CIDRs are allocated by kube-controller-manager. Kube-router provides service/pod networking, a network policy firewall, and a high-performance IPVS/LVS based service proxy. The network policy firewall and service proxy are both optional but recommended.

### Configuring the Kubelet

Ensure each kubelet is configured with the following options:

    --network-plugin=cni
    --cni-conf-dir=/etc/cni/net.d
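
How these flags reach the kubelet depends on your init system and distribution, so the following is only a quick sanity check that they took effect on a node, not a setup recipe:

    # print the full command line of the running kubelet; it should
    # include --network-plugin=cni and --cni-conf-dir=/etc/cni/net.d
    pgrep -a kubelet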

If a previous CNI provider (e.g. weave-net, calico, or flannel) was used, remove old configurations from `/etc/cni/net.d` on each node.

**Note: Switching CNI providers on a running cluster requires re-creating all pods to pick up new pod IPs.**
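
A cleanup pass might look like the sketch below; the exact file names under `/etc/cni/net.d` depend on the previous provider, and the loop simply forces every pod to be re-created with an address from kube-router:

    # on each node: remove configuration left by the previous CNI provider
    sudo rm -f /etc/cni/net.d/*

    # then re-create all pods, one namespace at a time
    kubectl get namespaces -o name | while read -r ns; do
        kubectl -n "${ns#namespace/}" delete pods --all
    done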

### Configuring kube-controller-manager

The following options are mandatory for kube-controller-manager:

    --cluster-cidr=${POD_NETWORK} # for example 10.32.0.0/12
    --service-cluster-ip-range=${SERVICE_IP_RANGE} # for example 10.50.0.0/22
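
Provided `--allocate-node-cidrs=true` is also set on kube-controller-manager (its default is false), each node is assigned a podCIDR slice of the cluster CIDR. A quick way to confirm the allocations:

    # list every node together with its allocated pod CIDR
    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'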

## Running kube-router with everything

This runs kube-router with pod/service networking, the network policy firewall, and the service proxy to replace kube-proxy. The example command uses `10.32.0.0/12` as the pod CIDR address range and `https://cluster01.int.domain.com:6443` as the apiserver address (it must be reachable directly from host networking). Please change these to suit your cluster.

    CLUSTERCIDR=10.32.0.0/12 \
    APISERVER=https://cluster01.int.domain.com:6443 \
    sh -c 'curl https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/generic-kuberouter-all-features.yaml -o - | \
    sed -e "s;%APISERVER%;$APISERVER;g" -e "s;%CLUSTERCIDR%;$CLUSTERCIDR;g"' | \
    kubectl apply -f -
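
After the manifests apply, a kube-router pod should come up on every node. The label selector below is an assumption based on the daemonset manifest; adjust it if your copy labels the pods differently:

    kubectl -n kube-system get pods -l k8s-app=kube-router -o wide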

### Removing a previous kube-proxy

If kube-proxy was never deployed to the cluster, this can likely be skipped.

Remove any previously running kube-proxy and all iptables rules it created. Start by deleting the kube-proxy daemonset:

    kubectl -n kube-system delete ds kube-proxy

Any iptables rules kube-proxy left behind will also need to be cleaned up. This command might differ based on how kube-proxy was set up or configured, and it has to be run on every node where kube-proxy has run:

    docker run --privileged --net=host gcr.io/google_containers/kube-proxy-amd64:v1.7.3 kube-proxy --cleanup-iptables
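
With kube-proxy gone, kube-router's IPVS-based proxy should own the cluster's service VIPs. If the `ipvsadm` tool is installed on a node, the service IPs should show up in its output:

    # list the IPVS virtual servers and their backends
    sudo ipvsadm -Ln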

## Running kube-router without the service proxy

This runs kube-router with pod/service networking and the network policy firewall. The service proxy is disabled, so an existing kube-proxy must stay in place to handle service traffic. Don't forget to update the cluster CIDR and apiserver addresses to match your cluster.

    CLUSTERCIDR=10.32.0.0/12 \
    APISERVER=https://cluster01.int.domain.com:6443 \
    sh -c 'curl https://raw.githubusercontent.com/cloudnativelabs/kube-router/master/daemonset/generic-kuberouter.yaml -o - | \
    sed -e "s;%APISERVER%;$APISERVER;g" -e "s;%CLUSTERCIDR%;$CLUSTERCIDR;g"' | \
    kubectl apply -f -
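
As a final sanity check that pod networking is working, every pod IP reported below should fall inside the cluster CIDR configured above:

    kubectl get pods --all-namespaces -o wide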