Commit 08a3201

committed
Last set of changes before going live.
1 parent 649dd2c commit 08a3201

File tree

1 file changed (+4, -3 lines changed)


articles/aks/azure-cni-overlay.md

Lines changed: 4 additions & 3 deletions
@@ -11,7 +11,7 @@ ms.date: 08/29/2022
The traditional [Azure Container Networking Interface (CNI)](./configure-azure-cni.md) assigns a VNet IP address to every Pod either from a pre-reserved set of IPs on every node or from a separate subnet reserved for pods. This approach requires IP address planning and could lead to address exhaustion and difficulties in scaling your clusters as your application demands grow.

- With Azure CNI Overlay, the cluster nodes are deployed into an Azure Virtual Network subnet, whereas pods are assigned IP addresses from a private CIDR logically different from the VNet hosting the pods. Pod and node traffic within the cluster use an overlay network, and Network Address Translation (via the node's IP address) is used to reach resources outside the cluster. This solution saves a significant amount of VNet IP addresses and enables you to seamlessly scale your cluster to very large sizes. An added advantage is that the private CIDR can be reused in different AKS clusters, truly extending the IP space available for containerized applications in AKS.
+ With Azure CNI Overlay, the cluster nodes are deployed into an Azure Virtual Network subnet, whereas pods are assigned IP addresses from a private CIDR logically different from the VNet hosting the nodes. Pod and node traffic within the cluster use an overlay network, and Network Address Translation (via the node's IP address) is used to reach resources outside the cluster. This solution saves a significant amount of VNet IP addresses and enables you to seamlessly scale your cluster to very large sizes. An added advantage is that the private CIDR can be reused in different AKS clusters, truly extending the IP space available for containerized applications in AKS.

> [!NOTE]
> Azure CNI overlay is currently only available in US West Central region.
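To make the node/pod address separation concrete, here is a minimal sketch (not part of the original article) of how you could observe it on a running overlay cluster; the CIDR ranges in the comments are illustrative assumptions, not defaults.

```bash
# Node IPs come from the VNet subnet the node pool is deployed into
# (for example, 10.240.0.0/24 in a hypothetical setup).
kubectl get nodes -o wide

# Pod IPs come from the separate private pod CIDR supplied at cluster
# creation (for example, 192.168.0.0/16) rather than from the VNet.
kubectl get pods --all-namespaces -o wide
```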
@@ -22,6 +22,7 @@ In overlay networking, only the Kubernetes cluster nodes are assigned IPs from a
A separate routing domain is created in the Azure Networking stack for the pod's private CIDR space, which creates an overlay network for direct communication between pods. There is no need to provision custom routes on the cluster subnet or use an encapsulation method to tunnel traffic between pods. This provides connectivity performance between pods on par with VMs in a VNet.

+ :::image type="content" source="media/azure-cni-overlay/AzureCNI-Overlay.png" alt-text="A diagram showing two nodes with three pods each running in an overlay network. Pod traffic to endpoints outside the cluster is routed via NAT.":::
![Azure CNI Overlay network model with an AKS cluster](media/azure-cni-overlay/AzureCNI-Overlay.png)

Communication with endpoints outside the cluster, such as on-premises and peered VNets, happens using the node IP through Network Address Translation. Azure CNI translates the source IP (overlay IP of the pod) of the traffic to the primary IP address of the VM, which enables the Azure Networking stack to route the traffic to the destination. Endpoints outside the cluster can't connect to a pod directly. You will have to publish the pod's application as a Kubernetes Load Balancer service to make it reachable on the VNet.
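As a hedged illustration of that last point (the deployment name and ports are placeholders, not from the article), a pod-backed application can be published through a Kubernetes LoadBalancer service like this:

```bash
# Expose an existing deployment as a Service of type LoadBalancer;
# AKS provisions an Azure load balancer front end for it.
kubectl expose deployment my-app --name=my-app-svc --port=80 --target-port=8080 --type=LoadBalancer

# Watch for the service to receive a reachable IP address.
kubectl get service my-app-svc --watch
```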
@@ -46,7 +47,7 @@ Like Azure CNI Overlay, Kubenet assigns IP addresses to pods from an address spa
* **Cluster Nodes**: Cluster nodes go into a subnet in your VNet, so ensure that you have a subnet big enough to account for future scale. A simple `/24` subnet can host up to 251 nodes (the first three IP addresses in a subnet are reserved for management operations).

- * **Pods**: The overlay solution assigns a `/24 address space for pods on every node from the private CIDR that you specify during cluster creation. The \/24 size is fixed and can't be increased or decreased. You can run up to 250 pods on a node. When planning the pod address space, ensure that the private CIDR is large enough to provide \/24` address spaces for new nodes to support future cluster expansion.
+ * **Pods**: The overlay solution assigns a `/24` address space for pods on every node from the private CIDR that you specify during cluster creation. The `/24` size is fixed and can't be increased or decreased. You can run up to 250 pods on a node. When planning the pod address space, ensure that the private CIDR is large enough to provide `/24` address spaces for new nodes to support future cluster expansion.

The following are additional factors to consider when planning pod address space:
* Pod CIDR space must not overlap with the cluster subnet range.
* Pod CIDR space must not overlap with IP ranges used in on-premises networks and peered networks.
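As a worked example of the planning guidance above (the CIDR value is illustrative, not a recommendation): a pod CIDR of `10.244.0.0/16` contains 2^(24 - 16) = 256 node-level `/24` blocks, so it can support at most 256 nodes at up to 250 pods each, or roughly 64,000 pods in total. A cluster expected to grow beyond that would need a wider private CIDR, such as a `/12`.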
@@ -86,7 +87,7 @@ The overlay solution has the following limitations today
* Overlay can be enabled only for new clusters. Existing (already deployed) clusters can't be configured to use overlay.
* You can't use Application Gateway as an Ingress Controller (AGIC) for an overlay cluster.

- ## Steps to setup overlay clusters
+ ## Steps to set up overlay clusters

[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]
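For context on what the renamed section walks through, the following is a minimal sketch of creating an overlay cluster with the Azure CLI; the resource names and pod CIDR are placeholders, and at the time of this preview the `--network-plugin-mode` parameter may require the aks-preview CLI extension (the preview callout above covers any registration the feature needs).

```azurecli
# Create a resource group in the region where the preview is available
# (West Central US at the time of this commit).
az group create --name myResourceGroup --location westcentralus

# Create an AKS cluster that uses Azure CNI in overlay mode. --pod-cidr is
# the private range pods get addresses from, separate from the VNet.
az aks create \
  --resource-group myResourceGroup \
  --name myOverlayCluster \
  --network-plugin azure \
  --network-plugin-mode overlay \
  --pod-cidr 192.168.0.0/16
```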