
Commit 14cbce9

Docs: Reworded, reorganized (#48)
* README: Add link to blog about Network Policies
* More style [skip ci]
1 parent f3d7188 commit 14cbce9

File tree: 2 files changed, +142 -63 lines changed

Documentation/README.md

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ Kube-router consists of 3 core controllers and multiple watchers as depicted in
Network services controller is responsible for reading the services and endpoints information from Kubernetes API server and configure IPVS on each cluster node accordingly.

-Please read blog for design details and pros and cons compared to iptables based Kube-proxy
+Please read our blog for design details and pros and cons compared to iptables based Kube-proxy:
https://cloudnativelabs.github.io/post/2017-05-10-kube-network-service-proxy/

Demo of Kube-router's IPVS based Kubernetes network service proxy

README.md

Lines changed: 141 additions & 62 deletions
@@ -4,121 +4,200 @@ kube-router
[![Build Status](https://travis-ci.org/cloudnativelabs/kube-router.svg?branch=master)](https://travis-ci.org/cloudnativelabs/kube-router)
[![Gitter chat](http://badges.gitter.im/kube-router/Lobby.svg)](https://gitter.im/kube-router/Lobby)

-Kube-router is a distributed load balancer, firewall and router for Kubernetes. Kube-router can be configured to provide on each cluster node:
+Kube-router is a distributed load balancer, firewall and router for Kubernetes
+clusters. It gives your cluster a unified control plane for features
+that would typically be provided by two or three separate software projects.

-- a IPVS/LVS based service proxy on each node for *ClusterIP* and *NodePort* service types, providing service discovery and load balancing
-- an ingress firewall for the pods running on the node as per the defined Kubernetes network policies using iptables and ipset
-- a BGP router to advertise and learn the routes to the pod IP's for cross-node pod-to-pod connectivity
+## Project status

-## Why Kube-router
-We have Kube-proxy which provides service proxy and load balancer. We have several addons or solutions like Flannel, Calico, Weave etc to provide cross node pod-to-pod networking. Simillarly there are solutions like Calico that enforce network policies. Then why do we need Kube-router for a similar job? Here is the motivation:
+Project is in alpha stage. We are working towards the beta release
+[milestone](https://github.com/cloudnativelabs/kube-router/milestone/2) and are
+actively incorporating user feedback.

-- It is challenging to deploy, monitor and troubleshoot multiple solutions at runtime. These independent solution need to work well together. Kube-router aims to provide operational simplicity by combining all the networking functionality that can be provided at the node in to one cohesive solution. Run Kube-router as daemonset, by just running one command ``kubectl create -f kube-router-daemonset.yaml`` you have solution for pod-to-pod networking, service proxy and firewall on each node.
+## Getting Started

-- Kube-router is motivated to provide optimized solution for performance. Kube-router uses IPVS for service proxy as compared to iptables by Kube-proxy. Kube-router uses solutions like ipsets to optimize iptables rules matching while providing ingress firewall for the pods. For inter-node pod-to-pod communication, routing rules added by kube-router ensures data path is efficient (one hop for pod-to-pod connectivity) with out overhead of overlays.
+- [User Guide](./Documentation/README.md#user-guide)
+- [Developer Guide](./Documentation/developing.md)
+- [Architecture](./Documentation/README.md#architecture)

-- Kube-router builds on standard Linux technologies, so you can verify the configuration and troubleshoot with standard Linux networking tools (ipvsadm, ip route, iptables, ipset, traceroute, tcpdump etc).
+## Primary Features

-- Kube-router is a solution purpose built for Kubernetes. So it does not carry the burden of functionality to support other container orchestration and infrastrcutre orchestration platforms. Code base is extremely small for the functionality it provides and easy to hack up and customize.
+*kube-router does it all.*

-## See it in action
+With all features enabled, kube-router is a lean yet powerful alternative to
+several software components used in typical Kubernetes clusters. All this from a
+single DaemonSet/Binary. It doesn't get any easier.

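As an editor-added illustration (not part of the diff itself): the three feature flags named in the section headings below can be combined in a single kube-router process. The kubeconfig flag and path are assumptions; consult the User Guide and the DaemonSet manifest for the exact invocation.

```sh
# Sketch only: run one kube-router process with all three features enabled.
# The three --run-* flags come from the section headings below; the
# kubeconfig flag/path is an assumption and may differ in your deployment.
kube-router \
  --run-service-proxy \
  --run-router \
  --run-firewall \
  --kubeconfig=/var/lib/kube-router/kubeconfig
```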

-<a href="https://asciinema.org/a/118056" target="_blank"><img src="https://asciinema.org/a/118056.png" /></a>
+### Alternative to kube-proxy | `--run-service-proxy`

-## Project status
+kube-router uses the Linux kernel's IPVS features to implement its K8s Services
+Proxy. This feature has been requested for some time in kube-proxy, but you can
+have it right now with kube-router.

-Project is in alpha stage. We are working towards beta release [milestone](https://github.com/cloudnativelabs/kube-router/milestone/2) and are activley incorporating users feedback.
+Read more about the advantages of IPVS for container load balancing:
+- [Kubernetes network services proxy with IPVS/LVS](https://cloudnativelabs.github.io/post/2017-05-10-kube-network-service-proxy/)
+- [Kernel Load-Balancing for Docker Containers Using IPVS](https://blog.codeship.com/kernel-load-balancing-for-docker-containers-using-ipvs/)

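A quick, editor-added sanity check: since the service proxy relies on the kernel's IPVS support, you can confirm the relevant modules are available on a node with standard tools. Module names are the upstream defaults.

```sh
# Verify the IPVS kernel modules are loaded (names are the upstream defaults).
lsmod | grep -e '^ip_vs' -e '^nf_conntrack'
# Load them explicitly if they are built as modules and not yet loaded.
sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
```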

-## Support & Feedback
+### Pod Networking | `--run-router`

-If you experience any problems please reach us on gitter [community forum](https://gitter.im/kube-router/Lobby) for quick help. Feel free to leave feedback or raise questions at any time by opening an issue [here](https://github.com/cloudnativelabs/kube-router/issues).
+kube-router handles Pod networking efficiently with direct routing thanks to the
+BGP protocol and the GoBGP Go library. It uses the native Kubernetes API to
+maintain distributed pod networking state. That means no dependency on a
+separate datastore to maintain in your cluster.

-## Getting Started
+kube-router's elegant design also means there is no dependency on another CNI
+plugin. The
+[official "bridge" plugin](https://github.com/containernetworking/plugins/tree/master/plugins/main/bridge)
+provided by the CNI project is all you need -- and chances are you already have
+it in your CNI binary directory!

-Use below guides to get started.
+Read more about the advantages and potential of BGP with Kubernetes:
+- [Kubernetes pod networking and beyond with BGP](https://cloudnativelabs.github.io/post/2017-05-22-kube-pod-networking)

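For illustration only, a minimal CNI configuration using the upstream "bridge" and "host-local" plugins might look like the sketch below. The file path, bridge name and subnet are placeholders (kube-router assigns each node its own Pod subnet), so treat this as the shape of such a config rather than the exact file kube-router expects.

```sh
# Placeholder example of a bridge/host-local CNI config; all values are illustrative.
cat <<'EOF' | sudo tee /etc/cni/net.d/10-kuberouter.conf
{
  "name": "mynet",
  "type": "bridge",
  "bridge": "kube-bridge",
  "isDefaultGateway": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.1.0/24"
  }
}
EOF
```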

-- [Architecture](./Documentation/README.md#architecture)
-- [Users Guide](./Documentation/README.md#user-guide)
-- [Developers Guide](./Documentation/README.md#develope-guide)
+### Network Policy Controller | `--run-firewall`
+
+Enabling Kubernetes [Network Policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
+is easy with kube-router -- just add a flag to kube-router. It uses ipsets with
+iptables to ensure your firewall rules have as little performance impact on your
+cluster as possible.
+
+Read more about kube-router's approach to Kubernetes Network Policies:
+- [Enforcing Kubernetes network policies with iptables](https://cloudnativelabs.github.io/post/2017-05-1-kube-network-policies/)
+

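As a concrete, editor-added example of the policies this controller enforces, a minimal NetworkPolicy that admits ingress to `app=db` Pods only from `app=web` Pods could look like this. The labels and namespace are made up, and very old clusters may need the pre-GA `extensions/v1beta1` API group instead of the one shown.

```sh
# Hypothetical example policy: allow ingress to app=db Pods only from app=web Pods.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-web
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: db
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: web
EOF
```
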
+### Advanced BGP Capabilities
+
+If you have other networking devices or SDN systems that talk BGP, kube-router
+will fit in perfectly. From a simple full node-to-node mesh to per-node peering
+configurations, most routing needs can be attained. The configuration is
+Kubernetes native (annotations) just like the rest of kube-router, so use the
+tools you already know! Since kube-router uses GoBGP, you have access to a
+modern BGP API platform as well right out of the box.
+
+For more details please refer to the [BGP documentation](Documentation/bgp.md).

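To illustrate the "Kubernetes native (annotations)" point, peering configuration is applied by annotating Node objects. The annotation keys below are deliberately hypothetical placeholders; the real keys and values are listed in Documentation/bgp.md.

```sh
# Hypothetical annotation keys -- see Documentation/bgp.md for the real ones.
kubectl annotate node node-1 "example.io/peer-router-ips=10.0.0.1"
kubectl annotate node node-1 "example.io/peer-router-asns=64512"
```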

-## Contribution
+### Small Footprint
+
+Although it does the work of several of its peers in one binary, kube-router
+does it all with a relatively tiny codebase, partly because IPVS is already
+there on your Kubernetes nodes waiting to help you do amazing things.
+kube-router brings that and GoBGP's modern BGP interface to you in an elegant
+package designed from the ground up for Kubernetes.
+
+### High Performance
+
+A primary motivation for kube-router is performance. The combination of BGP for
+inter-node Pod networking and IPVS for load balanced proxy Services is a perfect
+recipe for high-performance cluster networking at scale. BGP ensures that the
+data path is dynamic and efficient, and IPVS provides in-kernel load balancing
+that has been thoroughly tested and optimized.
+
+## Contributing
+
+We encourage all kinds of contributions, be they documentation, code, fixing
+typos, tests — anything at all. Please read the [contribution guide](./CONTRIBUTING.md).
+
+## Support & Feedback
+
+If you experience any problems please reach us on our gitter
+[community forum](https://gitter.im/kube-router/Lobby)
+for quick help. Feel free to leave feedback or raise questions at any time by
+opening an issue [here](https://github.com/cloudnativelabs/kube-router/issues).
+
+## See it in action
+
+<a href="https://asciinema.org/a/118056" target="_blank"><img src="https://asciinema.org/a/118056.png" /></a>

-We encourage all kinds of contributions, be they documentation, code, fixing typos, tests — anything at all. Please
-read the [contribution guide](./CONTRIBUTING.md).

## Theory of Operation

-Kube-router can be run as a agent or a pod (through daemonset) on each node and leverages standard Linux technologies **iptables, ipvs/lvs, ipset, iproute2**
+Kube-router can be run as an agent or a Pod (via DaemonSet) on each node and
+leverages standard Linux technologies **iptables, ipvs/lvs, ipset, iproute2**.

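A short, editor-added sketch of the DaemonSet route: the manifest name comes from an earlier revision of this README, and both its location in the repository and the namespace used here are assumptions.

```sh
# Deploy kube-router on every node as a DaemonSet (manifest path may differ).
kubectl create -f kube-router-daemonset.yaml
# Confirm a kube-router Pod is running on each node (namespace is an assumption).
kubectl --namespace=kube-system get pods -o wide | grep kube-router
```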

-### service proxy and load balancing
+### Service Proxy And Load Balancing

-refer to https://cloudnativelabs.github.io/post/2017-05-10-kube-network-service-proxy/ for the design details and demo
+[Kubernetes network services proxy with IPVS/LVS](https://cloudnativelabs.github.io/post/2017-05-10-kube-network-service-proxy/)

-Kube-router uses IPVS/LVS technology built in Linux to provide L4 load balancing. Each of the kubernetes service of **ClusterIP** and **NodePort** type is configured as IPVS virtual service. Each service endpoint is configured as real server to the virtual service.
-Standard **ipvsadm** tool can be used to verify the configuration and monitor the active connections.
+Kube-router uses IPVS/LVS technology built in Linux to provide L4 load
+balancing. Each **ClusterIP** and **NodePort** Kubernetes Service type is
+configured as an IPVS virtual service. Each Service Endpoint is configured as a
+real server to the virtual service. The standard **ipvsadm** tool can be used
+to verify the configuration and monitor the active connections.

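For example, the IPVS state on a node can be inspected as shown below. The commands are standard ipvsadm usage, while the virtual and real server addresses you will see depend on your cluster.

```sh
# List IPVS virtual services and their real servers (numeric addresses).
sudo ipvsadm -L -n
# Per-service packet and byte counters.
sudo ipvsadm -L -n --stats
# Currently tracked connections through IPVS.
sudo ipvsadm -L -n -c
```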

-Below is example set of services on kubernetes
+Below is an example set of Services on Kubernetes:

![Kube services](./Documentation/img/svc.jpg)

-and the endpoints for the services
+and the Endpoints for the Services:

![Kube services](./Documentation/img/ep.jpg)

-and how they got mapped to the ipvs by kube-router
+and how they get mapped to IPVS by kube-router:

![IPVS configuration](./Documentation/img/ipvs1.jpg)

-Kube-router watches kubernetes API server to get updates on the services, endpoints and automatically syncs the ipvs
-configuration to reflect desired state of services. Kube-router uses IPVS masquerading mode and uses round robin scheduling
-currently. Source pod IP is preserved so that appropriate network policies can be applied.
+Kube-router watches the Kubernetes API server to get updates on the
+Services/Endpoints and automatically syncs the IPVS configuration to reflect the
+desired state of Services. Kube-router uses IPVS masquerading mode and uses
+round robin scheduling currently. Source pod IP is preserved so that appropriate
+network policies can be applied.

-### pod ingress firewall
+### Pod Ingress Firewall

-refer to https://cloudnativelabs.github.io/post/2017-05-1-kube-network-policies/ for the detailed design details
+[Enforcing Kubernetes network policies with iptables](https://cloudnativelabs.github.io/post/2017-05-1-kube-network-policies/)

-Kube-router provides implementation of network policies semantics through the use of iptables, ipset and conntrack.
-All the pods in a namespace with 'DefaultDeny' ingress isolation policy has ingress blocked. Only traffic that matches
-whitelist rules specified in the network policies are permitted to reach pod. Following set of iptables rules and
-chains in the 'filter' table are used to achieve the network policies semantics.
+Kube-router provides an implementation of Kubernetes Network Policies through
+the use of iptables, ipset and conntrack. All the Pods in a Namespace with a
+'DefaultDeny' ingress isolation policy have ingress blocked. Only traffic that
+matches whitelist rules specified in the network policies is permitted to reach
+those Pods. The following set of iptables rules and chains in the 'filter' table
+is used to achieve the Network Policies semantics.

-Each pod running on the node, which needs ingress blocked by default is matched in FORWARD and OUTPUT chains of fliter table
-and send to pod specific firewall chain. Below rules are added to match various cases
+Each Pod running on the Node which needs ingress blocked by default is matched
+in the FORWARD and OUTPUT chains of the filter table and sent to a Pod specific
+firewall chain. The rules below are added to match the various cases; a sketch
+of how to inspect these chains follows the diagram.

-- traffic getting switched between the pods on the same node through bridge
-- traffic getting routed between the pods on different nodes
-- traffic originating from a pod and going through the service proxy and getting routed to pod on same node
+- Traffic getting switched between the Pods on the same Node through the local
+  bridge
+- Traffic getting routed between the Pods on different Nodes
+- Traffic originating from a Pod and going through the Service proxy and getting
+  routed to a Pod on the same Node

![FORWARD/OUTPUT chain](./Documentation/img/forward.png)

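The sketch referenced above: how these chains and sets can be inspected on a node. The commands are standard iptables/ipset usage; the pod- and policy-specific chain and set names are generated by kube-router at runtime, so the exact names will vary per cluster.

```sh
# Rules in the FORWARD chain that jump traffic into pod-specific firewall chains.
sudo iptables -t filter -L FORWARD -n --line-numbers
# Dump every chain in the filter table; generated pod/policy chains appear here.
sudo iptables -t filter -S
# The ipsets referenced by the policy chains (names are generated per policy).
sudo ipset list
```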

-Each pod specific firewall chain has default rule to block the traffic. Rules are added to jump traffic to the network policy
-specific policy chains. Rules cover only policies that apply to the destination pod ip. A rule is added to accept the
-the established traffic to permit the return traffic.
+Each Pod specific firewall chain has a default rule to block the traffic. Rules
+are added to jump traffic to the Network Policy specific policy chains. Rules
+cover only policies that apply to the destination pod ip. A rule is added to
+accept the established traffic to permit the return traffic.

![Pod firewall chain](./Documentation/img/podfw.png)

-Each policy chain has rules expressed through source and destination ipsets. Set of pods matching ingress rule in network policy spec
-forms a source pod ip ipset. set of pods matching pod selector (for destination pods) in the network policy forms
-destination pod ip ipset.
+Each policy chain has rules expressed through source and destination ipsets. The
+set of Pods matching the ingress rule in the network policy spec forms the
+source Pod ip ipset. The set of Pods matching the pod selector (for destination
+Pods) in the Network Policy forms the destination Pod ip ipset.

![Policy chain](./Documentation/img/policyfw.png)

-Finally ipsets are created that are used in forming the rules in the network policy specific chain
+Finally, ipsets are created that are used in forming the rules in the Network
+Policy specific chain.

![ipset](./Documentation/img/ipset.jpg)

-Kube-router at runtime watches Kubernetes API server for changes in the namespace, network policy and pods and
-dynamically updates iptables and ipset configuration to reflect desired state of ingress firewall for the the pods.
-
-### Pod networking
+Kube-router at runtime watches the Kubernetes API server for changes in the
+Namespaces, Network Policies and Pods and dynamically updates the iptables and
+ipset configuration to reflect the desired state of the ingress firewall for the
+Pods.

-Please see the [blog](https://cloudnativelabs.github.io/post/2017-05-22-kube-pod-networking/) for design details.
+### Pod Networking

-Kube-router is expected to run on each node. Subnet of the node is learnt by kube-router from the CNI configuration file on the node or through the node.PodCidr. Each kube-router
-instance on the node acts a BGP router and advertise the pod CIDR assigned to the node. Each node peers with rest of the
-nodes in the cluster forming full mesh. Learned routes about the pod CIDR from the other nodes (BGP peers) are injected into
-local node routing table. On the data path, inter node pod-to-pod communication is done by routing stack on the node.
+[Kubernetes pod networking and beyond with BGP](https://cloudnativelabs.github.io/post/2017-05-22-kube-pod-networking)

+Kube-router is expected to run on each Node. The subnet of the Node is obtained
+from the CNI configuration file on the Node or through the Node.PodCidr. Each
+kube-router instance on the Node acts as a BGP router and advertises the Pod
+CIDR assigned to the Node. Each Node peers with the rest of the Nodes in the
+cluster, forming a full mesh. Learned routes about the Pod CIDR from the other
+Nodes (BGP peers) are injected into the local Node's routing table. On the data
+path, inter-Node Pod-to-Pod communication is done by the routing stack on the
+Node.

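For example, the routes learned over BGP show up in the ordinary routing table on each node. The addresses below are placeholders for whatever Pod CIDRs and node IPs your cluster uses.

```sh
# Pod CIDR routes injected by kube-router via BGP (addresses are placeholders).
ip route show
# Expect entries roughly of the form:
#   10.244.1.0/24 via 192.168.1.101 dev eth0    # Pod CIDR of a peer node
```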

## TODO
- ~~convert Kube-router to docker image and run it as daemonset~~
