
Commit ed9f556

Update concepts-network.md
Add the new CNI options released for AKS during the last year.
1 parent 0af6dcb commit ed9f556


articles/aks/concepts-network.md

Lines changed: 20 additions & 5 deletions
@@ -32,7 +32,7 @@ In Kubernetes:
* *Services* logically group pods to allow for direct access on a specific port via an IP address or DNS name.
* *ServiceTypes* allow you to specify what kind of Service you want.
* You can distribute traffic using a *load balancer*.
-* More complex routing of application traffic can also be achieved with *ingress controllers*.
+* Layer 7 routing of application traffic can also be achieved with *ingress controllers*.
* You can *control outbound (egress) traffic* for cluster nodes.
* Security and filtering of the network traffic for pods is possible with *network policies*.

@@ -62,7 +62,7 @@ The following ServiceTypes are available:

![Diagram showing Load Balancer traffic flow in an AKS cluster][aks-loadbalancer]

-For extra control and routing of the inbound traffic, you may instead use an [Ingress controller](#ingress-controllers).
+For HTTP load balancing of inbound traffic, you can instead use an [Ingress controller](#ingress-controllers).
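
As a quick illustration of the LoadBalancer ServiceType shown in the diagram, the sketch below exposes a Deployment with `kubectl`; the deployment name and ports are placeholders, not values from this article:

```bash
# Expose a Deployment through a LoadBalancer Service (name and ports are placeholders).
kubectl expose deployment my-app \
  --type=LoadBalancer \
  --port=80 \
  --target-port=8080

# The EXTERNAL-IP column shows the public IP provisioned through the Azure load balancer.
kubectl get service my-app
```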

* **ExternalName**

@@ -86,7 +86,7 @@ In AKS, you can deploy a cluster that uses one of the following network models:

The AKS cluster is connected to existing virtual network resources and configurations.

-### Kubenet (basic) networking
+### Kubenet (legacy) networking

The *kubenet* networking option is the default configuration for AKS cluster creation. With *kubenet*:

@@ -99,11 +99,14 @@ Nodes use the kubenet Kubernetes plugin. You can let the Azure platform create a

Only the nodes receive a routable IP address. The pods use NAT to communicate with other resources outside the AKS cluster. This approach reduces the number of IP addresses you need to reserve in your network space for pods to use.

+> [!NOTE]
+> Although kubenet networking is still available in AKS, it's no longer the recommended configuration for production environments, because Azure CNI offers superior scalability and performance.
+
For more information, see [Configure kubenet networking for an AKS cluster][aks-configure-kubenet-networking].
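
For illustration, a minimal Azure CLI sketch of creating a kubenet cluster; the resource names and CIDR ranges below are placeholders rather than values from this article:

```azurecli
# Create an AKS cluster that uses the kubenet plugin (names and CIDRs are placeholders).
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --network-plugin kubenet \
    --pod-cidr 10.244.0.0/16 \
    --service-cidr 10.0.0.0/16 \
    --dns-service-ip 10.0.0.10
```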

-### Azure CNI (advanced) networking
+### Azure CNI networking

-With Azure CNI, every pod gets an IP address from the subnet and can be accessed directly. These IP addresses must be planned in advance and unique across your network space. Each node has a configuration parameter for the maximum number of pods it supports. The equivalent number of IP addresses per node are then reserved up front. This approach can lead to IP address exhaustion or the need to rebuild clusters in a larger subnet as your application demands grow, so it's important to plan properly.
+With Azure CNI, every pod gets an IP address from the subnet and can be accessed directly. These IP addresses must be planned in advance and unique across your network space. Each node has a configuration parameter for the maximum number of pods it supports. The equivalent number of IP addresses per node are then reserved up front. This approach can lead to IP address exhaustion or the need to rebuild clusters in a larger subnet as your application demands grow, so it's important to plan properly. To mitigate these planning challenges, you can enable [Azure CNI networking for dynamic allocation of IPs and enhanced subnet support][configure-azure-cni-dynamic-ip-allocation].

Unlike kubenet, traffic to endpoints in the same virtual network isn't NAT'd to the node's primary IP. The source address for traffic inside the virtual network is the pod IP. Traffic that's external to the virtual network still NATs to the node's primary IP.

@@ -113,6 +116,18 @@ Nodes use the [Azure CNI][cni-networking] Kubernetes plugin.

For more information, see [Configure Azure CNI for an AKS cluster][aks-configure-advanced-networking].
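
As a sketch with placeholder subnet IDs: selecting the `azure` network plugin enables Azure CNI, and supplying a dedicated pod subnet with `--pod-subnet-id` is how the dynamic IP allocation mode mentioned above is turned on:

```azurecli
# Azure CNI: nodes and pods receive routable IPs from virtual network subnets.
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --network-plugin azure \
    --vnet-subnet-id <node-subnet-resource-id> \
    --pod-subnet-id <pod-subnet-resource-id>
```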

+### Azure CNI overlay networking
+
+[Azure CNI Overlay][azure-cni-overlay] represents an evolution of Azure CNI, addressing the scalability and planning challenges that arise from assigning VNet IPs to pods. It achieves this by assigning pods IP addresses from a private CIDR that is separate from the VNet and can be reused across multiple clusters. Unlike kubenet, where the traffic dataplane is handled by the Linux kernel networking stack of the Kubernetes nodes, Azure CNI Overlay delegates this responsibility to Azure networking.
+
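
A minimal sketch with placeholder names; `--pod-cidr` here is the private range pods draw their addresses from, independent of the VNet:

```azurecli
# Azure CNI Overlay: pod IPs come from a private CIDR that is separate from the VNet.
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --network-plugin azure \
    --network-plugin-mode overlay \
    --pod-cidr 192.168.0.0/16
```
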
+### Azure CNI Powered by Cilium
+
+In [Azure CNI Powered by Cilium][azure-cni-powered-by-cilium], the data plane for pods is managed by the Linux kernel of the Kubernetes nodes. Unlike kubenet, which faces scalability and performance issues with the Linux kernel networking stack, [Cilium](https://cilium.io/) bypasses much of that stack and instead leverages eBPF programs in the Linux kernel to accelerate packet processing.
+
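
A sketch under the same placeholder names, assuming the Azure CLI's `--network-dataplane` option; the Cilium dataplane is layered on top of overlay addressing:

```azurecli
# Azure CNI Powered by Cilium: overlay addressing with the eBPF-based Cilium dataplane.
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --network-plugin azure \
    --network-plugin-mode overlay \
    --pod-cidr 192.168.0.0/16 \
    --network-dataplane cilium
```
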
+### Bring your own CNI
+
+You can install a third-party CNI in AKS by using the [Bring your own CNI][use-byo-cni] feature.
+
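
A minimal sketch with placeholder names; the cluster is created without any CNI plugin, so nodes stay `NotReady` until a third-party CNI is installed:

```azurecli
# Bring your own CNI: no CNI plugin is preinstalled on the cluster.
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --network-plugin none
```
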
### Compare network models

Both kubenet and Azure CNI provide network connectivity for your AKS clusters. However, there are advantages and disadvantages to each. At a high level, the following considerations apply:

0 commit comments
