# Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS)
With Azure CNI Overlay, the cluster nodes are deployed into an Azure Virtual Network (VNet) subnet, while pods are assigned IP addresses from a private CIDR that is logically separate from the VNet hosting the nodes. Pod and node traffic within the cluster uses an overlay network, and Network Address Translation (using the node's IP address) is used to reach resources outside the cluster. This solution conserves a significant amount of VNet IP address space and enables you to seamlessly scale your cluster to very large sizes. An added advantage is that the private CIDR can be reused in different AKS clusters, truly extending the IP space available for containerized applications in AKS.
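Because the pod CIDR is used only inside the cluster and NAT hides it behind the node's VNet address, it must not overlap the node subnet within a cluster, yet the same range can be reused by other clusters. As a quick sketch of the address math (the ranges below are hypothetical example values, not ones AKS requires), Python's `ipaddress` module shows the two spaces are disjoint and how much larger the pod CIDR is:

```python
import ipaddress

# Hypothetical node subnet inside the cluster's VNet (example value).
node_subnet = ipaddress.ip_network("10.10.0.0/24")

# Hypothetical private pod CIDR, logically separate from the VNet (example value).
pod_cidr = ipaddress.ip_network("192.168.0.0/16")

# Within a single cluster the two ranges must not overlap.
print(pod_cidr.overlaps(node_subnet))  # False

# The pod CIDR provides far more addresses than the node subnet,
# which is what lets pods scale without consuming VNet IPs.
print(pod_cidr.num_addresses)   # 65536
print(node_subnet.num_addresses)  # 256
```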
> [!NOTE]
> Azure CNI Overlay is currently **_unavailable_** in the following regions:
>
> - East US 2
> - Central US
> - South Central US
> - West US
> - West US 2
> - West US 3
> - Southeast Asia
> - Sweden Central
> - France Central
> - Norway East
> - Switzerland North
> - Qatar Central
> - Jio India West
> - Jio India Central
> - UAE Central
> - UAE North
> - Brazil Southeast

## Overview of overlay networking
Like Azure CNI Overlay, Kubenet assigns IP addresses to pods from an address space logically separate from the VNet, but it has scaling and other limitations. The table below provides a detailed comparison between Kubenet and Azure CNI Overlay. If you don't want to assign VNet IP addresses to pods because of an IP address shortage, Azure CNI Overlay is the recommended solution.
| Area | Azure CNI Overlay | Kubenet |
| -- | -- | -- |
| Cluster scale | 1000 nodes and 250 pods/node | 400 nodes and 250 pods/node |
| Network configuration | Simple - no additional configuration required for pod networking | Complex - requires route tables and UDRs on cluster subnet for pod networking |
| Pod connectivity performance | Performance on par with VMs in a VNet | Additional hop adds minor latency |
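To try the overlay option from the comparison above, you create the cluster with the Azure network plugin in overlay mode. The sketch below uses hypothetical resource group, cluster name, region, and pod CIDR values; adjust them for your environment and region availability:

```shell
# Create a resource group (placeholder name and location).
az group create --name myResourceGroup --location westcentralus

# Create an AKS cluster using Azure CNI Overlay networking.
# The pod CIDR is a private range logically separate from the VNet.
az aks create \
    --resource-group myResourceGroup \
    --name myOverlayCluster \
    --network-plugin azure \
    --network-plugin-mode overlay \
    --pod-cidr 192.168.0.0/16
```

The `--pod-cidr` range only needs to avoid overlapping the node subnet of this cluster; it can be reused by other AKS clusters.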