Commit afce21b

Merge pull request #247093 from zioproto/patch-4
Update concepts-network.md
2 parents: 1d972a2 + 7e6eec0

1 file changed: +33 additions, −14 deletions


articles/aks/concepts-network.md

Lines changed: 33 additions & 14 deletions
@@ -32,7 +32,7 @@ In Kubernetes:
 * *Services* logically group pods to allow for direct access on a specific port via an IP address or DNS name.
 * *ServiceTypes* allow you to specify what kind of Service you want.
 * You can distribute traffic using a *load balancer*.
-* More complex routing of application traffic can also be achieved with *ingress controllers*.
+* Layer 7 routing of application traffic can also be achieved with *ingress controllers*.
 * You can *control outbound (egress) traffic* for cluster nodes.
 * Security and filtering of the network traffic for pods is possible with *network policies*.
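As an illustration of the Service and load balancer concepts listed above, here is a minimal manifest sketch (the `my-app` name and port numbers are hypothetical placeholders):

```yaml
# Hypothetical example: expose pods labeled app=my-app through a
# load balancer on port 80, forwarding to container port 8080.
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer        # ServiceType; in AKS this provisions an Azure load balancer
  selector:
    app: my-app             # logically groups the pods backing this Service
  ports:
    - port: 80              # port exposed on the load balancer IP
      targetPort: 8080      # container port on the pods
```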

@@ -62,7 +62,7 @@ The following ServiceTypes are available:
 ![Diagram showing Load Balancer traffic flow in an AKS cluster][aks-loadbalancer]

-For extra control and routing of the inbound traffic, you may instead use an [Ingress controller](#ingress-controllers).
+For HTTP load balancing of inbound traffic, another option is to use an [Ingress controller](#ingress-controllers).

 * **ExternalName**

@@ -99,11 +99,14 @@ Nodes use the kubenet Kubernetes plugin. You can let the Azure platform create a
 Only the nodes receive a routable IP address. The pods use NAT to communicate with other resources outside the AKS cluster. This approach reduces the number of IP addresses you need to reserve in your network space for pods to use.

+> [!NOTE]
+> While kubenet is the default networking option for an AKS cluster to create a virtual network and subnet, it isn't recommended for production deployments. For most production deployments, you should plan for and use Azure CNI networking because of its superior scalability and performance.
+
 For more information, see [Configure kubenet networking for an AKS cluster][aks-configure-kubenet-networking].

 ### Azure CNI (advanced) networking

-With Azure CNI, every pod gets an IP address from the subnet and can be accessed directly. These IP addresses must be planned in advance and unique across your network space. Each node has a configuration parameter for the maximum number of pods it supports. The equivalent number of IP addresses per node are then reserved up front. This approach can lead to IP address exhaustion or the need to rebuild clusters in a larger subnet as your application demands grow, so it's important to plan properly.
+With Azure CNI, every pod gets an IP address from the subnet and can be accessed directly. These IP addresses must be planned in advance and be unique across your network space. Each node has a configuration parameter for the maximum number of pods it supports, and the equivalent number of IP addresses per node is reserved up front. This approach can lead to IP address exhaustion, or to rebuilding clusters in a larger subnet as your application demands grow, so it's important to plan properly. To avoid these planning challenges, you can enable [Azure CNI networking for dynamic allocation of IPs and enhanced subnet support][configure-azure-cni-dynamic-ip-allocation].

 Unlike kubenet, traffic to endpoints in the same virtual network isn't NAT'd to the node's primary IP. The source address for traffic inside the virtual network is the pod IP. Traffic that's external to the virtual network still NATs to the node's primary IP.
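To make the up-front reservation concrete, here is a rough sizing sketch. It is illustrative only: the exact reservation accounting and default pod limits can vary by AKS version, so verify against the Azure CNI IP planning documentation.

```python
def kubenet_vnet_ips(nodes: int) -> int:
    """With kubenet, only the nodes consume routable VNet IPs; pods are NAT'd."""
    return nodes

def azure_cni_vnet_ips(nodes: int, max_pods_per_node: int = 30) -> int:
    """With Azure CNI, each node reserves one IP for itself plus one per
    potential pod, so the subnet must hold roughly nodes * (max_pods + 1)
    addresses. (Illustrative formula; check current Azure docs.)"""
    return nodes * (max_pods_per_node + 1)

# A 50-node cluster with a default-style 30 pods per node:
print(kubenet_vnet_ips(50))      # 50 addresses
print(azure_cni_vnet_ips(50))    # 1550 addresses
```

The two orders of magnitude between the results is exactly the planning pressure the paragraph above describes, and what the overlay and dynamic-allocation features aim to relieve.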

@@ -113,6 +116,18 @@ Nodes use the [Azure CNI][cni-networking] Kubernetes plugin.
 For more information, see [Configure Azure CNI for an AKS cluster][aks-configure-advanced-networking].

+### Azure CNI overlay networking
+
+[Azure CNI Overlay][azure-cni-overlay] is an evolution of Azure CNI that addresses the scalability and planning challenges arising from assigning VNet IPs to pods. It does so by assigning pods IPs from a private CIDR that is separate from the VNet and can be reused across multiple clusters. Unlike kubenet, where the traffic dataplane is handled by the Linux kernel networking stack of the Kubernetes nodes, Azure CNI Overlay delegates this responsibility to Azure networking.
+
+### Azure CNI Powered by Cilium
+
+In [Azure CNI Powered by Cilium][azure-cni-powered-by-cilium], the data plane for pods is managed by the Linux kernel of the Kubernetes nodes. Unlike kubenet, which faces scalability and performance issues with the standard kernel networking stack, [Cilium](https://cilium.io/) bypasses much of that stack and instead uses eBPF programs loaded into the kernel to accelerate packet processing.
+
+### Bring your own CNI
+
+You can install a third-party CNI in AKS by using the [Bring your own CNI][use-byo-cni] feature.
+
 ### Compare network models
Both kubenet and Azure CNI provide network connectivity for your AKS clusters. However, there are advantages and disadvantages to each. At a high level, the following considerations apply:
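The network model is chosen at cluster creation time. As a rough sketch of the relevant CLI options (flag names can change between CLI versions and some combinations have prerequisites; `myResourceGroup` and `myAKSCluster` are placeholders, so verify against the current `az aks create` reference):

```azurecli
# kubenet (default basic networking)
az aks create -g myResourceGroup -n myAKSCluster --network-plugin kubenet

# Azure CNI: pods get VNet IPs directly
az aks create -g myResourceGroup -n myAKSCluster --network-plugin azure

# Azure CNI Overlay: pods get IPs from a private, reusable CIDR
az aks create -g myResourceGroup -n myAKSCluster \
    --network-plugin azure --network-plugin-mode overlay

# Azure CNI Powered by Cilium: eBPF-based dataplane
az aks create -g myResourceGroup -n myAKSCluster \
    --network-plugin azure --network-dataplane cilium

# Bring your own CNI: cluster is created with no CNI plugin installed
az aks create -g myResourceGroup -n myAKSCluster --network-plugin none
```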
@@ -131,17 +146,15 @@ Both kubenet and Azure CNI provide network connectivity for your AKS clusters. H
 The following behavior differences exist between kubenet and Azure CNI:

-| Capability | Kubenet | Azure CNI |
-|------------|---------|-----------|
-| Deploy cluster in existing or new virtual network | Supported - UDRs manually applied | Supported |
-| Pod-pod connectivity | Supported | Supported |
-| Pod-VM connectivity; VM in the same virtual network | Works when initiated by pod | Works both ways |
-| Pod-VM connectivity; VM in peered virtual network | Works when initiated by pod | Works both ways |
-| On-premises access using VPN or Express Route | Works when initiated by pod | Works both ways |
-| Access to resources secured by service endpoints | Supported | Supported |
-| Expose Kubernetes services using a load balancer service, App Gateway, or ingress controller | Supported | Supported |
-| Default Azure DNS and Private Zones | Supported | Supported |
-| Support for Windows node pools | Not Supported | Supported |
+| Capability | Kubenet | Azure CNI | Azure CNI Overlay | Azure CNI Powered by Cilium |
+|------------|---------|-----------|-------------------|-----------------------------|
+| Deploy cluster in existing or new virtual network | Supported - UDRs manually applied | Supported | Supported | Supported |
+| Pod-pod connectivity | Supported | Supported | Supported | Supported |
+| Pod-VM connectivity; VM in the same virtual network | Works when initiated by pod | Works both ways | Works when initiated by pod | Works when initiated by pod |
+| Pod-VM connectivity; VM in peered virtual network | Works when initiated by pod | Works both ways | Works when initiated by pod | Works when initiated by pod |
+| On-premises access using VPN or Express Route | Works when initiated by pod | Works both ways | Works when initiated by pod | Works when initiated by pod |
+| Expose Kubernetes services using a load balancer service, App Gateway, or ingress controller | Supported | Supported | [No Application Gateway Ingress Controller (AGIC) support][azure-cni-overlay-limitations] | Same limitations when using Overlay mode |
+| Support for Windows node pools | Not Supported | Supported | Supported | [Available only for Linux and not for Windows][azure-cni-powered-by-cilium-limitations] |

 Regarding DNS: with both the kubenet and Azure CNI plugins, DNS is provided by CoreDNS, a deployment running in AKS with its own autoscaler. For more information on CoreDNS on Kubernetes, see [Customizing DNS Service](https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/). By default, CoreDNS is configured to forward unknown domains to the DNS functionality of the Azure virtual network where the AKS cluster is deployed, so Azure DNS and Private Zones work for pods running in AKS.

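The forwarding behavior described above can be pictured with a minimal CoreDNS Corefile sketch. This shows only the default shape of the configuration; the actual Corefile shipped in AKS may differ:

```
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    # Unknown domains are forwarded to the node's resolver, which in Azure
    # points at the virtual network's DNS (168.63.129.16 by default),
    # so Azure DNS and Private Zones resolve for pods.
    forward . /etc/resolv.conf
    cache 30
}
```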
@@ -259,6 +272,7 @@ For more information on core Kubernetes and AKS concepts, see the following arti
 [aks-concepts-storage]: concepts-storage.md
 [aks-concepts-identity]: concepts-identity.md
 [agic-overview]: ../application-gateway/ingress-controller-overview.md
+[configure-azure-cni-dynamic-ip-allocation]: configure-azure-cni-dynamic-ip-allocation.md
 [use-network-policies]: use-network-policies.md
 [operator-best-practices-network]: operator-best-practices-network.md
 [support-policies]: support-policies.md
@@ -268,3 +282,8 @@ For more information on core Kubernetes and AKS concepts, see the following arti
 [ip-preservation]: https://techcommunity.microsoft.com/t5/fasttrack-for-azure/how-client-source-ip-preservation-works-for-loadbalancer/ba-p/3033722#:~:text=Enable%20Client%20source%20IP%20preservation%201%20Edit%20loadbalancer,is%20the%20same%20as%20the%20source%20IP%20%28srjumpbox%29.
 [nsg-traffic]: ../virtual-network/network-security-group-how-it-works.md
 [azure-cni-aks]: configure-azure-cni.md
+[azure-cni-overlay]: azure-cni-overlay.md
+[azure-cni-overlay-limitations]: azure-cni-overlay.md#limitations-with-azure-cni-overlay
+[azure-cni-powered-by-cilium]: azure-cni-powered-by-cilium.md
+[azure-cni-powered-by-cilium-limitations]: azure-cni-powered-by-cilium.md#limitations
+[use-byo-cni]: use-byo-cni.md
