Documentation/README.md

Kube-router consists of 3 core controllers and multiple watchers, as depicted in the diagram below.
The network services controller is responsible for reading the Services and Endpoints information from the Kubernetes API server and configuring IPVS on each cluster node accordingly.
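A minimal sketch of the two sides of that reconciliation, using only standard tooling (commands only; no kube-router-specific output is assumed):

```sh
# The controller consumes the same Service and Endpoints objects you can list yourself:
kubectl get services,endpoints --all-namespaces

# ...and renders them into IPVS virtual services, which can be inspected on any
# cluster node (as root) with the standard IPVS administration tool:
ipvsadm -Ln
```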
Please read our blog for design details and the pros and cons compared to the iptables-based kube-proxy.
README.md

Kube-router is a distributed load balancer, firewall and router for Kubernetes clusters. It gives your cluster a unified control plane for features that would typically be provided by two or three separate software projects.
Kube-router can be configured to provide, on each cluster node:

- an IPVS/LVS-based service proxy for *ClusterIP* and *NodePort* Service types, providing service discovery and load balancing
- an ingress firewall for the Pods running on the node, enforcing the defined Kubernetes Network Policies using iptables and ipset
- a BGP router to advertise and learn routes to Pod IPs for cross-node Pod-to-Pod connectivity
## Project status
The project is in alpha stage. We are working towards the beta release [milestone](https://github.com/cloudnativelabs/kube-router/milestone/2) and are actively incorporating user feedback.

## Why Kube-router

We have kube-proxy, which provides a service proxy and load balancer. We have several add-ons and solutions like Flannel, Calico, Weave, etc. to provide cross-node Pod-to-Pod networking. Similarly, there are solutions like Calico that enforce Network Policies. So why do we need Kube-router for a similar job? Here is the motivation:
- It is challenging to deploy, monitor and troubleshoot multiple solutions at runtime. These independent solutions need to work well together. Kube-router aims to provide operational simplicity by combining all the networking functionality that can be provided at the node into one cohesive solution. Run kube-router as a DaemonSet: with one command, ``kubectl create -f kube-router-daemonset.yaml``, you have a solution for Pod-to-Pod networking, service proxy and firewall on each node (see the sketch after this list).
- Kube-router is built to provide an optimized solution for performance. Kube-router uses IPVS for the service proxy, as compared to the iptables used by kube-proxy. Kube-router uses solutions like ipsets to optimize iptables rule matching while providing the ingress firewall for the Pods. For inter-node Pod-to-Pod communication, the routing rules added by kube-router ensure the data path is efficient (one hop for Pod-to-Pod connectivity) without the overhead of overlays.
- Kube-router builds on standard Linux technologies, so you can verify the configuration and troubleshoot with standard Linux networking tools (ipvsadm, ip route, iptables, ipset, traceroute, tcpdump, etc.).
- Kube-router is a solution purpose-built for Kubernetes, so it does not carry the burden of functionality to support other container and infrastructure orchestration platforms. The code base is extremely small for the functionality it provides and is easy to hack on and customize.
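As a sketch of that one-command deployment, using the manifest name from the item above (the `kube-system` namespace used for verification is an assumption of this sketch, not taken from this document):

```sh
# Deploy kube-router on every node as a DaemonSet.
kubectl create -f kube-router-daemonset.yaml

# Verify that a kube-router pod is running on each node
# (namespace is assumed here; adjust to wherever the DaemonSet was created).
kubectl --namespace=kube-system get pods -o wide
```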
## Getting Started
## Primary Features
*kube-router does it all.*
With all features enabled, kube-router is a lean yet powerful alternative to several software components used in typical Kubernetes clusters. All this from a single DaemonSet/Binary. It doesn't get any easier.
### Alternative to kube-proxy | `--run-service-proxy`
kube-router uses the Linux kernel's IPVS features to implement its K8s Services Proxy. This feature has been requested for some time in kube-proxy, but you can have it right now with kube-router.
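A rough sketch of enabling just this feature on the kube-router binary. Only `--run-service-proxy` is named in this document; the `--kubeconfig` flag and its path are assumptions of this sketch:

```sh
# Run kube-router with the IPVS-based service proxy enabled.
# --run-service-proxy comes from this section's title; --kubeconfig and the
# path below are assumed, not taken from this document.
kube-router --run-service-proxy --kubeconfig=/var/lib/kube-router/kubeconfig
```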
Read more about the advantages of IPVS for container load balancing:

- [Kubernetes network services proxy with IPVS/LVS](https://cloudnativelabs.github.io/post/2017-05-10-kube-network-service-proxy/)
- [Kernel Load-Balancing for Docker Containers Using IPVS](https://blog.codeship.com/kernel-load-balancing-for-docker-containers-using-ipvs/)
### Pod Networking | `--run-router`
kube-router handles Pod networking efficiently with direct routing thanks to the BGP protocol and the GoBGP Go library. It uses the native Kubernetes API to maintain distributed pod networking state. That means no dependency on a separate datastore to maintain in your cluster.
kube-router's elegant design also means there is no dependency on another CNI plugin.
We encourage all kinds of contributions, be they documentation, code, fixing typos, tests — anything at all. Please read the [contribution guide](./CONTRIBUTING.md).

## Support & Feedback

If you experience any problems, please reach out to us on the gitter [community forum](https://gitter.im/kube-router/Lobby) for quick help. Feel free to leave feedback or raise questions at any time by opening an issue [here](https://github.com/cloudnativelabs/kube-router/issues).
## Theory of Operation
Kube-router can be run as an agent or a Pod (via DaemonSet) on each node and leverages standard Linux technologies **iptables, ipvs/lvs, ipset, iproute2**.
### Service Proxy And Load Balancing
[Kubernetes network services proxy with IPVS/LVS](https://cloudnativelabs.github.io/post/2017-05-10-kube-network-service-proxy/)
Kube-router uses the IPVS/LVS technology built into Linux to provide L4 load balancing. Each **ClusterIP** and **NodePort** Kubernetes Service is configured as an IPVS virtual service, and each Service Endpoint is configured as a real server for the virtual service. The standard **ipvsadm** tool can be used to verify the configuration and monitor the active connections.
Below is an example set of Services on Kubernetes:
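For illustration only: the `guestbook` Service, its addresses and ages below are hypothetical and stand in for the original listing.

```sh
# List Services and their cluster IPs / ports (illustrative names and values).
kubectl get services
#   NAME         CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
#   kubernetes   10.96.0.1     <none>        443/TCP        20d
#   guestbook    10.96.0.100   <nodes>       80:32134/TCP   1d
```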

and the Endpoints for the Services:
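Continuing the same hypothetical example (all values illustrative):

```sh
# List the Endpoints (pod IP:port pairs) backing each Service.
kubectl get endpoints
#   NAME         ENDPOINTS                          AGE
#   kubernetes   172.20.0.10:6443                   20d
#   guestbook    10.244.1.5:8080,10.244.2.7:8080    1d
```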
and how they got mapped to the IPVS by kube-router:
Each policy chain has rules expressed through source and destination ipsets. The set of Pods matching the ingress rule in the Network Policy spec forms the source Pod IP ipset, and the set of Pods matching the pod selector (for the destination Pods) in the Network Policy forms the destination Pod IP ipset.
Finally, ipsets are created that are used in forming the rules in the Network Policy specific chain.
At runtime, kube-router watches the Kubernetes API server for changes in namespaces, Network Policies and Pods, and dynamically updates the iptables and ipset configuration to reflect the desired state of the ingress firewall for the Pods.
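The resulting iptables and ipset state can be inspected on a node with the standard tools; this is a minimal sketch and makes no assumption about kube-router's actual chain or set names:

```sh
# List the iptables filter rules, including the per-policy chains maintained on the node.
iptables -L -n

# List the ipsets holding the source and destination Pod IPs referenced by those rules.
ipset list
```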
### Pod Networking
[Kubernetes pod networking and beyond with BGP](https://cloudnativelabs.github.io/post/2017-05-22-kube-pod-networking)
Kube-router is expected to run on each Node. The subnet of the Node is obtained from the CNI configuration file on the Node or through the Node.PodCidr. Each kube-router instance on the Node acts as a BGP router and advertises the Pod CIDR assigned to the Node. Each Node peers with the rest of the Nodes in the cluster, forming a full mesh. Routes to the Pod CIDRs learned from the other Nodes (BGP peers) are injected into the local Node's routing table. On the data path, inter-Node Pod-to-Pod communication is done by the routing stack on the Node.
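As a sketch, the injected routes are visible in the Node's routing table; the Pod CIDRs, peer IPs and interface name below are hypothetical:

```sh
# Show the routes for the Pod CIDRs of peer Nodes (illustrative values).
ip route show
#   10.244.1.0/24 via 192.168.1.11 dev eth0   # Pod CIDR advertised by peer node-1
#   10.244.2.0/24 via 192.168.1.12 dev eth0   # Pod CIDR advertised by peer node-2
```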
## TODO
- ~~convert Kube-router to docker image and run it as daemonset~~