articles/aks/concepts-network.md
In Kubernetes:
* **Services** logically group pods to allow for direct access on a specific port via an IP address or DNS name.
* **ServiceTypes** allow you to specify what kind of Service you want.
* You can distribute traffic using a *load balancer*.
* Layer 7 routing of application traffic can also be achieved with *ingress controllers*.
* You can *control outbound (egress) traffic* for cluster nodes.
* Security and filtering of the network traffic for pods is possible with *network policies*.
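For instance, a minimal Service manifest that logically groups pods by label and distributes traffic through a load balancer might look like the following sketch (the names and ports are illustrative, not from this article):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: store-front          # illustrative name
spec:
  type: LoadBalancer         # one of the ServiceTypes; provisions a load balancer in Azure
  selector:
    app: store-front         # logically groups every pod carrying this label
  ports:
  - port: 80                 # port reachable on the Service IP address or DNS name
    targetPort: 8080         # port the pod containers actually listen on
```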
![Diagram showing Load Balancer traffic flow in an AKS cluster][aks-loadbalancer]
For HTTP load balancing of inbound traffic, another option is to use an [Ingress controller](#ingress-controllers).
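As a sketch, an Ingress resource that performs this kind of layer 7, HTTP routing to two backend Services could look like the following (the hostname, service names, and the assumption of an installed NGINX ingress controller are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: store-ingress            # illustrative name
spec:
  ingressClassName: nginx        # assumes an NGINX ingress controller is installed
  rules:
  - host: shop.example.com
    http:
      paths:
      - path: /api               # routed by URL path, not just IP address and port
        pathType: Prefix
        backend:
          service:
            name: store-api      # illustrative backend Service
            port:
              number: 8080
      - path: /
        pathType: Prefix
        backend:
          service:
            name: store-front    # illustrative backend Service
            port:
              number: 80
```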
* **ExternalName**
### Kubenet (basic) networking

Nodes use the kubenet Kubernetes plugin.
Only the nodes receive a routable IP address. The pods use NAT to communicate with other resources outside the AKS cluster. This approach reduces the number of IP addresses you need to reserve in your network space for pods to use.
> [!NOTE]
> While kubenet is the default networking option for an AKS cluster, and it creates a virtual network and subnet for you, it isn't recommended for production deployments. For most production deployments, you should plan for and use Azure CNI networking because of its superior scalability and performance characteristics.
For more information, see [Configure kubenet networking for an AKS cluster][aks-configure-kubenet-networking].
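As a hedged sketch, selecting kubenet happens at cluster creation time. The resource and cluster names below are placeholders, and `--network-plugin kubenet` is also what you get if the flag is omitted:

```azurecli
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --network-plugin kubenet
```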
### Azure CNI (advanced) networking
With Azure CNI, every pod gets an IP address from the subnet and can be accessed directly. These IP addresses must be planned in advance and unique across your network space. Each node has a configuration parameter for the maximum number of pods it supports. The equivalent number of IP addresses per node are then reserved up front. This approach can lead to IP address exhaustion or the need to rebuild clusters in a larger subnet as your application demands grow, so it's important to plan properly. To avoid these planning challenges, you can enable [Azure CNI networking for dynamic allocation of IPs and enhanced subnet support][configure-azure-cni-dynamic-ip-allocation].
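To make the up-front reservation concrete, the following sketch (illustrative numbers, not an AKS API) computes how many subnet addresses a cluster consumes when every node pre-reserves its maximum pod count, and what subnet prefix that implies:

```python
import math

def azure_cni_ips_required(node_count: int, max_pods_per_node: int) -> int:
    """Each node consumes one IP itself plus one per potential pod,
    all reserved up front regardless of how many pods actually run."""
    return node_count * (1 + max_pods_per_node)

def smallest_subnet_prefix(required_ips: int) -> int:
    """Smallest /n prefix that can hold required_ips addresses, allowing
    for the 5 addresses Azure reserves in every subnet."""
    size = required_ips + 5
    return 32 - math.ceil(math.log2(size))

ips = azure_cni_ips_required(node_count=50, max_pods_per_node=30)
print(ips)                          # 50 * 31 = 1550 addresses reserved
print(smallest_subnet_prefix(ips))  # 21, i.e. a /21 (2048 addresses)
```

The point of the sketch is that the subnet must be sized for the *maximum* pod density, not the current one, which is exactly the planning burden the dynamic-allocation feature removes.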
Unlike kubenet, traffic to endpoints in the same virtual network isn't NAT'd to the node's primary IP. The source address for traffic inside the virtual network is the pod IP. Traffic that's external to the virtual network still NATs to the node's primary IP.
Nodes use the [Azure CNI][cni-networking] Kubernetes plugin.
For more information, see [Configure Azure CNI for an AKS cluster][aks-configure-advanced-networking].
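A hedged sketch of creating a cluster with Azure CNI, where pods draw their IPs from an existing subnet you supply (all names and the subnet ID are placeholders):

```azurecli
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --network-plugin azure \
    --vnet-subnet-id <subnet-resource-id>
```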
### Azure CNI overlay networking
[Azure CNI Overlay][azure-cni-overlay] represents an evolution of Azure CNI, addressing the scalability and planning challenges that arise from assigning VNet IPs to pods. It does so by assigning private CIDR IPs to pods, which are separate from the VNet and can be reused across multiple clusters. Unlike kubenet, where the traffic data plane is handled by the Linux kernel networking stack of the Kubernetes nodes, Azure CNI Overlay delegates this responsibility to Azure networking.
### Azure CNI Powered by Cilium
In [Azure CNI Powered by Cilium][azure-cni-powered-by-cilium], the data plane for pods is managed by the Linux kernel of the Kubernetes nodes. Unlike kubenet, which faces scalability and performance issues with the standard Linux kernel networking stack, [Cilium](https://cilium.io/) uses eBPF programs loaded into the Linux kernel to bypass much of that stack and accelerate packet processing.
### Bring your own CNI
You can install a third-party CNI in AKS by using the [Bring your own CNI][use-byo-cni] feature.
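As a sketch, bringing your own CNI means creating the cluster with no CNI plugin installed and then deploying a third-party CNI yourself (the resource and cluster names are placeholders):

```azurecli
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --network-plugin none
```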
### Compare network models
Both kubenet and Azure CNI provide network connectivity for your AKS clusters. However, there are advantages and disadvantages to each. At a high level, the following considerations apply:
The following behavior differences exist between kubenet and Azure CNI:
| Capability | Kubenet | Azure CNI | Azure CNI Overlay | Azure CNI Powered by Cilium |
|---|---|---|---|---|
| Pod-VM connectivity; VM in the same virtual network | Works when initiated by pod | Works both ways | Works when initiated by pod | Works when initiated by pod |
| Pod-VM connectivity; VM in peered virtual network | Works when initiated by pod | Works both ways | Works when initiated by pod | Works when initiated by pod |
| On-premises access using VPN or ExpressRoute | Works when initiated by pod | Works both ways | Works when initiated by pod | Works when initiated by pod |
| Expose Kubernetes services using a load balancer service, App Gateway, or ingress controller | Supported | Supported | [No Application Gateway Ingress Controller (AGIC) support][azure-cni-overlay-limitations] | Same limitations when using Overlay mode |
| Support for Windows node pools | Not supported | Supported | Supported | [Available only for Linux and not for Windows][azure-cni-powered-by-cilium-limitations] |
Regarding DNS, with both the kubenet and Azure CNI plugins, DNS is provided by CoreDNS, a deployment running in AKS with its own autoscaler. For more information on CoreDNS on Kubernetes, see [Customizing DNS Service](https://kubernetes.io/docs/tasks/administer-cluster/dns-custom-nameservers/). By default, CoreDNS is configured to forward unknown domains to the DNS functionality of the Azure Virtual Network where the AKS cluster is deployed. As a result, Azure DNS and Private Zones work for pods running in AKS.
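For example, that default forwarding behavior can be extended through the `coredns-custom` ConfigMap that AKS's CoreDNS deployment imports from the `kube-system` namespace. The zone and server IP below are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom        # special name that AKS's CoreDNS deployment imports
  namespace: kube-system
data:
  example.server: |           # any key ending in .server is loaded as an extra server block
    corp.example.com:53 {
        forward . 10.0.0.10   # send queries for this zone to an internal DNS server
    }
```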
For more information on core Kubernetes and AKS concepts, see the following articles: