Commit 9e79181

update docs
1 parent fa7dc2f commit 9e79181

File tree

1 file changed: +139 -10 lines

README.md

Lines changed: 139 additions & 10 deletions
@@ -4,6 +4,145 @@ Implementation of Kubernetes Network Policies:
- [Network Policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/)
- [Admin Network Policies and Baseline Admin Network Policies](https://network-policy-api.sigs.k8s.io/)

## Architecture Overview

The kube-network-policies project enforces Kubernetes network policies by intercepting and evaluating network packets in userspace. It uses NFQUEUE to redirect packets to a userspace controller, which decides whether to allow or deny them based on a set of policy evaluators.
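The redirection is implemented with nftables rules along the following general lines. This is an illustrative sketch only (the table name, set name, hook, and queue number are assumptions), not the exact ruleset, which the controller generates and maintains dynamically:

```
table inet kube-network-policies {
    set podips-v4 {
        type ipv4_addr
    }
    chain forward {
        type filter hook forward priority filter; policy accept;
        # Later packets of already-verified connections never leave the kernel.
        ct state established,related accept
        # The first packet of a new connection involving a policy-selected pod
        # is queued to the userspace controller via NFQUEUE.
        ip saddr @podips-v4 queue flags bypass to 100
        ip daddr @podips-v4 queue flags bypass to 100
    }
}
```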
### Packet Flow

The following diagram illustrates the flow of a network packet from the kernel to the userspace controller and back:
[![](https://mermaid.ink/img/pako:eNp1UlFvmzAQ_iuWnzaJsEADATpVSgppp25dljZ7GPDg4EuC6tjImHVZlP8-Y6KgpZsfwHz3fXf3HXfAhaCAI7yRpNqi5zjjSJ-6WXXAA0gODD1VpIAu1B5aSihUKTj6vOjRSfoI6lXIFzQnxQso9HElb9BcUDTJ0WBwg6YHvlZkxaBGi6Z9GkJMFKkY4YBuBVdSMAbyeN2nnWotgroVlvUW6AcJjCigJmWcThgTr28qTvNTBuA04xemljXI-n-Onqc92o-hWcHg5G4wF6wsSt1-328vaU-SGsoeJXxTcujMz87gT8IaooRETn79t3BmmHdvme4l884w71PbtvM-YsxeuDbje5x9WybLxIiSU6r7NvAdJC0LFaFJUUClupn-kxAD35vwp3dpvPg6z99rGrbwDuSOlFSv0KGVZVhtYQcZjvSVwpo0TGU440dNJY0ST3te4EjJBizcVFT_yLgkesg7HK0Jq89oQktt_AyC-fzS7apZWQtXhOPogH_hyPFCOwxG-hW4fnjl-p6F9zhyg6Htjv3A930ndN3x1dHCv4XQpYa27wzdcOQFw5EXeuFobGEpms32XFAn_2GoXasb2Vrs7lJPF-StaLjCke8c_wB98vpd?type=png)](https://mermaid.live/edit#pako:eNp1UlFvmzAQ_iuWnzaJsEADATpVSgppp25dljZ7GPDg4EuC6tjImHVZlP8-Y6KgpZsfwHz3fXf3HXfAhaCAI7yRpNqi5zjjSJ-6WXXAA0gODD1VpIAu1B5aSihUKTj6vOjRSfoI6lXIFzQnxQso9HElb9BcUDTJ0WBwg6YHvlZkxaBGi6Z9GkJMFKkY4YBuBVdSMAbyeN2nnWotgroVlvUW6AcJjCigJmWcThgTr28qTvNTBuA04xemljXI-n-Onqc92o-hWcHg5G4wF6wsSt1-328vaU-SGsoeJXxTcujMz87gT8IaooRETn79t3BmmHdvme4l884w71PbtvM-YsxeuDbje5x9WybLxIiSU6r7NvAdJC0LFaFJUUClupn-kxAD35vwp3dpvPg6z99rGrbwDuSOlFSv0KGVZVhtYQcZjvSVwpo0TGU440dNJY0ST3te4EjJBizcVFT_yLgkesg7HK0Jq89oQktt_AyC-fzS7apZWQtXhOPogH_hyPFCOwxG-hW4fnjl-p6F9zhyg6Htjv3A930ndN3x1dHCv4XQpYa27wzdcOQFw5EXeuFobGEpms32XFAn_2GoXasb2Vrs7lJPF-StaLjCke8c_wB98vpd)
### Key Components

The key components of the architecture are:
* **Dataplane Controller**: The dataplane/controller.go file contains the main controller that sets up NFQUEUE, intercepts packets, and orchestrates the policy evaluation process. It is responsible for creating the necessary nftables rules to redirect traffic. To avoid the performance penalty of sending all packets to userspace, the controller includes logic to only capture packets for pods that are targeted by at least one network policy.
* **Policy Engine**: The networkpolicy/engine.go file defines the PolicyEngine, which manages a pipeline of PolicyEvaluator plugins. The engine is responsible for running each packet through the pipeline and making a final decision based on the verdicts returned by the evaluators.
* **Pod Info Provider**: The podinfo/podinfo.go file provides an interface for retrieving pod information. It resolves a packet's IP address to a PodInfo protobuf type (pkg/api/kubenetworkpolicies.proto). This PodInfo object contains all the necessary information for evaluators to match policies, including the pod's name, labels, namespace, and associated node information.
* **Policy Evaluators**: These are plugins that implement the PolicyEvaluator interface and contain the logic for a specific type of network policy. The project currently includes evaluators for AdminNetworkPolicy, BaselineAdminNetworkPolicy, and the standard Kubernetes NetworkPolicy.
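To make the provider's role concrete, evaluators ultimately rely on a lookup of roughly this shape. This is a hypothetical sketch, not the actual interface defined in podinfo/podinfo.go:

```go
// Hypothetical sketch of an IP-to-PodInfo lookup; the real provider interface
// in podinfo/podinfo.go may use a different name and signature.
type PodInfoProvider interface {
    // GetPodInfoByIP returns the PodInfo for the pod that owns the given IP,
    // and false when the IP does not belong to a known pod.
    GetPodInfoByIP(ip string) (*api.PodInfo, bool)
}
```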
Here is a diagram illustrating the interaction between these components:
[![](https://mermaid.ink/img/pako:eNpNkdtum0AQhl9lNFeNhB2bGAykipTYVLKqVDSHXiTkYgtjjIp30bJYcS2_e4eFkO7Nzs7OfP8cTpipnDDCQot6B0_rVAKf29cU18KIuhKSYKWk0aqqSH_9rS9vJvBIpoG2hh_ffj7Hz_HgfaCMygM1UIvsD0ek-AaTCST2xdYN3DE2UVWZHSGWRSlpyLwXUhScSAdRtcKoXgegLmuqOIxJ131hdx2RIZbpQKLyjdyqFC1-9R_-g_SluRhEBhfBRhaamuYyttdYvGm1hF-k8zIzn4IrS15bcg6dGCRaHcp8HMYDNarq2t4kYNRHScPnbZaxBn9-Dxqbrfeku8lco4Ns7kWZ8_RPnViKZkd7bjZiM6etaCsuJJVnDhWtUY9HmWFkdEsOtnXOraxLwXvbY7QVVTN647zkxkcn2ed9v2a7bQdrITE64TtGcy-chsGCr8D1wyvX9xw8YuQGs6m79APf9-eh6y6vzg7-VYqlZlN_PnPDhRfMFl7ohYulg1q1xW4UZPiLDe1LLXTXYm9rkjy4lWqlwcj3zv8A6-vK3A?type=png)](https://mermaid.live/edit#pako:eNpNkdtum0AQhl9lNFeNhB2bGAykipTYVLKqVDSHXiTkYgtjjIp30bJYcS2_e4eFkO7Nzs7OfP8cTpipnDDCQot6B0_rVAKf29cU18KIuhKSYKWk0aqqSH_9rS9vJvBIpoG2hh_ffj7Hz_HgfaCMygM1UIvsD0ek-AaTCST2xdYN3DE2UVWZHSGWRSlpyLwXUhScSAdRtcKoXgegLmuqOIxJ131hdx2RIZbpQKLyjdyqFC1-9R_-g_SluRhEBhfBRhaamuYyttdYvGm1hF-k8zIzn4IrS15bcg6dGCRaHcp8HMYDNarq2t4kYNRHScPnbZaxBn9-Dxqbrfeku8lco4Ns7kWZ8_RPnViKZkd7bjZiM6etaCsuJJVnDhWtUY9HmWFkdEsOtnXOraxLwXvbY7QVVTN647zkxkcn2ed9v2a7bQdrITE64TtGcy-chsGCr8D1wyvX9xw8YuQGs6m79APf9-eh6y6vzg7-VYqlZlN_PnPDhRfMFl7ohYulg1q1xW4UZPiLDe1LLXTXYm9rkjy4lWqlwcj3zv8A6-vK3A)
### The PolicyEvaluator Interface
The PolicyEvaluator interface is the core of the policy evaluation pipeline. Each evaluator is responsible for determining whether a packet should be allowed, denied, or passed to the next evaluator in the pipeline.
The interface is defined in pkg/networkpolicy/engine.go as follows:
```go
type PolicyEvaluator interface {
    Name() string
    EvaluateIngress(ctx context.Context, p *network.Packet, srcPod, dstPod *api.PodInfo) (Verdict, error)
    EvaluateEgress(ctx context.Context, p *network.Packet, srcPod, dstPod *api.PodInfo) (Verdict, error)
}
```
The Verdict returned by each evaluator can be one of the following:

* VerdictAccept: The packet is allowed, and no further evaluators are consulted.
* VerdictDeny: The packet is denied, and no further evaluators are consulted.
* VerdictNext: The packet is passed to the next evaluator in the pipeline.
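As a rough illustration of these semantics, an ingress pipeline could combine verdicts as sketched below. This is a simplified sketch, not the actual PolicyEngine code in pkg/networkpolicy/engine.go, and the default verdict when every evaluator returns VerdictNext is an assumption here:

```go
// Simplified sketch of how a pipeline can combine evaluator verdicts; the real
// PolicyEngine also resolves PodInfo, handles egress, and may default differently.
func evaluateIngressPipeline(ctx context.Context, evaluators []PolicyEvaluator, p *network.Packet, srcPod, dstPod *api.PodInfo) (Verdict, error) {
    for _, evaluator := range evaluators {
        verdict, err := evaluator.EvaluateIngress(ctx, p, srcPod, dstPod)
        if err != nil {
            return VerdictDeny, err
        }
        switch verdict {
        case VerdictAccept, VerdictDeny:
            // Terminal verdicts stop the pipeline immediately.
            return verdict, nil
        case VerdictNext:
            // Consult the next evaluator in the pipeline.
        }
    }
    // No evaluator made a decision; default to accept (assumed default).
    return VerdictAccept, nil
}
```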
### How to Add a New PolicyEvaluator

Adding a new PolicyEvaluator is straightforward and involves the following steps:

1. **Create a new file** for your evaluator in the pkg/networkpolicy directory.
2. **Define a struct** for your evaluator that implements the PolicyEvaluator interface.
3. **Implement the Name method** to return a unique name for your evaluator.
4. **Implement the EvaluateIngress and EvaluateEgress methods** to define the logic for your policy.
5. **Register your new evaluator** in the PolicyEngine in cmd/main.go.
#### Example: Creating an AllowListPolicy

Let's create a simple AllowListPolicy that only allows traffic from a predefined list of IP addresses.

1. **Create the file** pkg/networkpolicy/allowlistpolicy.go:
```go
package networkpolicy

import (
    "context"
    "net"

    "sigs.k8s.io/kube-network-policies/pkg/api"
    "sigs.k8s.io/kube-network-policies/pkg/network"
)

// AllowListPolicy is a simple policy that allows traffic only from a predefined list of IP addresses.
type AllowListPolicy struct {
    allowedIPs []net.IP
}

// NewAllowListPolicy creates a new AllowListPolicy.
func NewAllowListPolicy(allowedIPs []net.IP) *AllowListPolicy {
    return &AllowListPolicy{
        allowedIPs: allowedIPs,
    }
}

func (a *AllowListPolicy) Name() string {
    return "AllowListPolicy"
}

func (a *AllowListPolicy) EvaluateIngress(ctx context.Context, p *network.Packet, srcPod, dstPod *api.PodInfo) (Verdict, error) {
    for _, ip := range a.allowedIPs {
        if ip.Equal(p.SrcIP) {
            return VerdictAccept, nil
        }
    }
    return VerdictDeny, nil
}

func (a *AllowListPolicy) EvaluateEgress(ctx context.Context, p *network.Packet, srcPod, dstPod *api.PodInfo) (Verdict, error) {
    // This policy only applies to ingress traffic.
    return VerdictNext, nil
}
```
2. **Register the new evaluator** in cmd/main.go:
```go
// ... (imports)

func main() {
    // ... (existing setup)

    // Create the evaluators for the pipeline to process the packets
    // and take a network policy action. The evaluators are processed
    // in the order of the array.
    evaluators := []networkpolicy.PolicyEvaluator{}

    // Add the new AllowListPolicy evaluator.
    allowedIPs := []net.IP{net.ParseIP("10.0.0.1"), net.ParseIP("10.0.0.2")}
    evaluators = append(evaluators, networkpolicy.NewAllowListPolicy(allowedIPs))

    // ... (rest of the evaluators)

    // Create the controller that enforces the network policies on the data plane.
    networkPolicyController, err := dataplane.NewController(
        clientset,
        networkPolicyInformer,
        nsInformer,
        podInformer,
        networkpolicy.NewPolicyEngine(podInfoProvider, evaluators),
        cfg,
    )
    // ... (rest of the main function)
}
```
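As a quick sanity check, a unit test along these lines can exercise the new evaluator. It assumes network.Packet is a struct that can be constructed with a literal SrcIP, as the example above implies; adjust the packet construction if the actual type differs:

```go
package networkpolicy

import (
    "context"
    "net"
    "testing"

    "sigs.k8s.io/kube-network-policies/pkg/network"
)

func TestAllowListPolicyIngress(t *testing.T) {
    policy := NewAllowListPolicy([]net.IP{net.ParseIP("10.0.0.1")})

    // Traffic from an allowed source IP should be accepted.
    allowed := &network.Packet{SrcIP: net.ParseIP("10.0.0.1")}
    verdict, err := policy.EvaluateIngress(context.Background(), allowed, nil, nil)
    if err != nil || verdict != VerdictAccept {
        t.Fatalf("expected VerdictAccept, got %v (err: %v)", verdict, err)
    }

    // Traffic from any other source IP should be denied.
    denied := &network.Packet{SrcIP: net.ParseIP("192.0.2.1")}
    verdict, err = policy.EvaluateIngress(context.Background(), denied, nil, nil)
    if err != nil || verdict != VerdictDeny {
        t.Fatalf("expected VerdictDeny, got %v (err: %v)", verdict, err)
    }
}
```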
By following these steps, you can easily extend the functionality of kube-network-policies with your own custom policy evaluators.
### Future Improvements
* **Programmable Traffic Capture**: Currently, the controller decides which traffic to send to userspace based on whether a pod is selected by any network policy. A potential improvement is to make this more programmable, allowing individual PolicyEvaluator plugins to specify the exact traffic they are interested in. This would further optimize performance by reducing the amount of traffic sent to userspace.
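One possible shape for this, purely as a hypothetical sketch and not an existing API, would be an optional interface that evaluators implement to describe the traffic they care about:

```go
// Hypothetical and not implemented: an optional interface that would let an
// evaluator describe the traffic it needs redirected to userspace, so the
// dataplane controller could narrow its capture rules accordingly.
type TrafficSelector interface {
    // InterestedTraffic returns match expressions (for example, nftables
    // selectors) describing the flows this evaluator wants to inspect.
    InterestedTraffic() []string
}
```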
## Install

### Manual Installation
@@ -88,16 +227,6 @@ Current implemented metrics are:
* nfqueue_user_dropped: Number of packets that were dropped within the netlink subsystem. Such drops usually happen when the corresponding socket buffer is full; that is, user space is not able to read messages fast enough
* nfqueue_packet_id: ID of the most recent packet queued

## Development

Network policies are hard to implement efficiently, and in large clusters this translates into performance and scalability problems.

Most existing implementations use the same approach: they process the API objects and transform them into the corresponding dataplane representation (iptables, nftables, eBPF, OVS, ...).

This project takes a different approach. It uses the NFQUEUE functionality implemented in netfilter to process the first packet of each connection (or UDP flow) in userspace and emit a verdict. The advantage is that the dataplane does not need to encode all the complex policy logic, allowing it to scale better. The disadvantage is that the first packet of each new connection must pass through userspace; subsequent packets are accepted via a "ct state established,related accept" rule.

For performance, only traffic of Pods selected by network policies is queued to user space, so only those Pods absorb the first-packet performance hit.
## Testing

See [TESTING](docs/testing/README.md)
