Commit de1719b

Merge pull request #234685 from chasewilson/chase/newOverlayGa
update for GA information and formatting
2 parents 47228ca + a55e084 commit de1719b

2 files changed (+47, -38 lines)

articles/aks/TOC.yml

Lines changed: 1 addition & 1 deletion
@@ -403,7 +403,7 @@
 href: configure-azure-cni.md
 - name: Use Azure CNI for dynamic IP allocation and enhanced subnet support
 href: configure-azure-cni-dynamic-ip-allocation.md
-- name: Use Azure CNI Overlay (Preview)
+- name: Use Azure CNI Overlay
 href: azure-cni-overlay.md
 - name: DNS
 items:

articles/aks/azure-cni-overlay.md

Lines changed: 46 additions & 37 deletions
@@ -1,31 +1,31 @@
 ---
-title: Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS) (Preview)
+title: Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS)
 description: Learn how to configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS), including deploying an AKS cluster into an existing virtual network and subnet.
 author: asudbring
 ms.author: allensu
 ms.subservice: aks-networking
 ms.topic: how-to
-ms.custom: references_regions, devx-track-azurecli
-ms.date: 03/21/2023
+ms.custom: references_regions
+ms.date: 04/17/2023
 ---

 # Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS)

 The traditional [Azure Container Networking Interface (CNI)](./configure-azure-cni.md) assigns a VNet IP address to every Pod, either from a pre-reserved set of IPs on every node, or from a separate subnet reserved for pods. This approach requires planning IP addresses and could lead to address exhaustion, which introduces difficulties scaling your clusters as your application demands grow.

-With Azure CNI Overlay, the cluster nodes are deployed into an Azure Virtual Network (VNet) subnet, whereas pods are assigned IP addresses from a private CIDR logically different from the VNet hosting the nodes. Pod and node traffic within the cluster use an overlay network, and Network Address Translation (using the node's IP address) is used to reach resources outside the cluster. This solution saves a significant amount of VNet IP addresses and enables you to seamlessly scale your cluster to very large sizes. An added advantage is that the private CIDR can be reused in different AKS clusters, truly extending the IP space available for containerized applications in AKS.
+With Azure CNI Overlay, the cluster nodes are deployed into an Azure Virtual Network (VNet) subnet, whereas pods are assigned IP addresses from a private CIDR logically different from the VNet hosting the nodes. Pod and node traffic within the cluster use an Overlay network, and Network Address Translation (using the node's IP address) is used to reach resources outside the cluster. This solution saves a significant amount of VNet IP addresses and enables you to seamlessly scale your cluster to very large sizes. An added advantage is that the private CIDR can be reused in different AKS clusters, truly extending the IP space available for containerized applications in AKS.

-## Overview of overlay networking
+## Overview of Overlay networking

-In overlay networking, only the Kubernetes cluster nodes are assigned IPs from a subnet. Pods receive IPs from a private CIDR that is provided at the time of cluster creation. Each node is assigned a `/24` address space carved out from the same CIDR. Additional nodes that are created when you scale out a cluster automatically receive `/24` address spaces from the same CIDR. Azure CNI assigns IPs to pods from this `/24` space.
+In Overlay networking, only the Kubernetes cluster nodes are assigned IPs from a subnet. Pods receive IPs from a private CIDR that is provided at the time of cluster creation. Each node is assigned a `/24` address space carved out from the same CIDR. Additional nodes that are created when you scale out a cluster automatically receive `/24` address spaces from the same CIDR. Azure CNI assigns IPs to pods from this `/24` space.

-A separate routing domain is created in the Azure Networking stack for the pod's private CIDR space, which creates an overlay network for direct communication between pods. There is no need to provision custom routes on the cluster subnet or use an encapsulation method to tunnel traffic between pods. This provides connectivity performance between pods on par with VMs in a VNet.
+A separate routing domain is created in the Azure Networking stack for the pod's private CIDR space, which creates an Overlay network for direct communication between pods. There is no need to provision custom routes on the cluster subnet or use an encapsulation method to tunnel traffic between pods. This provides connectivity performance between pods on par with VMs in a VNet.

-:::image type="content" source="media/azure-cni-overlay/azure-cni-overlay.png" alt-text="A diagram showing two nodes with three pods each running in an overlay network. Pod traffic to endpoints outside the cluster is routed via NAT.":::
+:::image type="content" source="media/azure-cni-overlay/azure-cni-overlay.png" alt-text="A diagram showing two nodes with three pods each running in an Overlay network. Pod traffic to endpoints outside the cluster is routed via NAT.":::

-Communication with endpoints outside the cluster, such as on-premises and peered VNets, happens using the node IP through Network Address Translation. Azure CNI translates the source IP (overlay IP of the pod) of the traffic to the primary IP address of the VM, which enables the Azure Networking stack to route the traffic to the destination. Endpoints outside the cluster can't connect to a pod directly. You will have to publish the pod's application as a Kubernetes Load Balancer service to make it reachable on the VNet.
+Communication with endpoints outside the cluster, such as on-premises and peered VNets, happens using the node IP through Network Address Translation. Azure CNI translates the source IP (Overlay IP of the pod) of the traffic to the primary IP address of the VM, which enables the Azure Networking stack to route the traffic to the destination. Endpoints outside the cluster can't connect to a pod directly. You will have to publish the pod's application as a Kubernetes Load Balancer service to make it reachable on the VNet.

-Outbound (egress) connectivity to the internet for overlay pods can be provided using a [Standard SKU Load Balancer](./egress-outboundtype.md#outbound-type-of-loadbalancer) or [Managed NAT Gateway](./nat-gateway.md). You can also control egress traffic by directing it to a firewall using [User Defined Routes on the cluster subnet](./egress-outboundtype.md#outbound-type-of-userdefinedrouting).
+Outbound (egress) connectivity to the internet for Overlay pods can be provided using a [Standard SKU Load Balancer](./egress-outboundtype.md#outbound-type-of-loadbalancer) or [Managed NAT Gateway](./nat-gateway.md). You can also control egress traffic by directing it to a firewall using [User Defined Routes on the cluster subnet](./egress-outboundtype.md#outbound-type-of-userdefinedrouting).

 Ingress connectivity to the cluster can be achieved using an ingress controller such as Nginx or [HTTP application routing](./http-application-routing.md).

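The egress options in the paragraph above are selected at cluster creation time. As a minimal sketch (not taken from the article; it assumes the `--outbound-type managedNATGateway` and `--nat-gateway-managed-outbound-ip-count` parameters of `az aks create`, and the resource names are illustrative), an Overlay cluster that sends pod egress through a managed NAT gateway might be created like this:

```azurecli-interactive
# Illustrative sketch: Overlay cluster whose outbound (egress) traffic uses a managed
# NAT gateway rather than the default Standard Load Balancer. Names and counts are placeholders.
clusterName="myOverlayNatCluster"
resourceGroup="myResourceGroup"

az aks create -n $clusterName -g $resourceGroup \
  --network-plugin azure \
  --network-plugin-mode overlay \
  --pod-cidr 192.168.0.0/16 \
  --outbound-type managedNATGateway \
  --nat-gateway-managed-outbound-ip-count 2
```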
@@ -39,13 +39,13 @@ Like Azure CNI Overlay, Kubenet assigns IP addresses to pods from an address spa
 | Network configuration | Simple - no additional configuration required for pod networking | Complex - requires route tables and UDRs on cluster subnet for pod networking |
 | Pod connectivity performance | Performance on par with VMs in a VNet | Additional hop adds minor latency |
 | Kubernetes Network Policies | Azure Network Policies, Calico, Cilium | Calico |
-| OS platforms supported | Linux and Windows Server 2022 | Linux only |
+| OS platforms supported | Linux and Windows Server 2022 (Preview) | Linux only |

 ## IP address planning

-- **Cluster Nodes**: Cluster nodes go into a subnet in your VNet, so verify you have a subnet large enough to account for future scale. Cluster can't scale to another subnet but you can add new nodepools in another subnet within the same VNet for expansion. A simple `/24` subnet can host up to 251 nodes (the first three IP addresses in a subnet are reserved for management operations).
+- **Cluster Nodes**: When setting up your AKS cluster, make sure your VNet subnet has enough room to grow for future scaling. Keep in mind that clusters can't scale across subnets, but you can always add new node pools in another subnet within the same VNet for extra space. Note that a `/24` subnet can fit up to 251 nodes, since the first three IP addresses are reserved for management tasks.

-- **Pods**: The overlay solution assigns a `/24` address space for pods on every node from the private CIDR that you specify during cluster creation. The `/24` size is fixed and can't be increased or decreased. You can run up to 250 pods on a node. When planning the pod address space, ensure that the private CIDR is large enough to provide `/24` address spaces for new nodes to support future cluster expansion.
+- **Pods**: The Overlay solution assigns a `/24` address space for pods on every node from the private CIDR that you specify during cluster creation. The `/24` size is fixed and can't be increased or decreased. You can run up to 250 pods on a node. When planning the pod address space, ensure that the private CIDR is large enough to provide `/24` address spaces for new nodes to support future cluster expansion.

 The following are additional factors to consider when planning pods IP address space:

@@ -75,9 +75,9 @@ You can configure the maximum number of pods per node at the time of cluster cre
 
 ## Choosing a network model to use

-Azure CNI offers two IP addressing options for pods - the traditional configuration that assigns VNet IPs to pods, and overlay networking. The choice of which option to use for your AKS cluster is a balance between flexibility and advanced configuration needs. The following considerations help outline when each network model may be the most appropriate.
+Azure CNI offers two IP addressing options for pods - the traditional configuration that assigns VNet IPs to pods, and Overlay networking. The choice of which option to use for your AKS cluster is a balance between flexibility and advanced configuration needs. The following considerations help outline when each network model may be the most appropriate.

-Use overlay networking when:
+Use Overlay networking when:

 - You would like to scale to a large number of pods, but have limited IP address space in your VNet.
 - Most of the pod communication is within the cluster.
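The hunk header above references the article's note that the maximum number of pods per node is configured at cluster creation time. As a minimal sketch (illustrative names and values; it assumes the standard `--max-pods` parameter of `az aks create` and `az aks nodepool add`), the per-node limit, which caps how much of each node's `/24` Overlay block is actually used, could be set like this:

```azurecli-interactive
# Illustrative sketch: cap pods per node at cluster creation (250 is the Overlay maximum).
az aks create -n myOverlayCluster -g myResourceGroup \
  --network-plugin azure \
  --network-plugin-mode overlay \
  --max-pods 100

# The limit can also be set per node pool; the pool name here is hypothetical.
az aks nodepool add --cluster-name myOverlayCluster -g myResourceGroup \
  --name largepool --max-pods 150
```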
@@ -94,12 +94,30 @@ Use the traditional VNet option when:
 
 Azure CNI Overlay has the following limitations:

-- You can't use Application Gateway as an Ingress Controller (AGIC) for an overlay cluster.
-- Windows Server 2019 node pools are not supported for overlay.
-- Traffic from host network pods is not able to reach Windows overlay pods.
-- You can't use [DCsv2-series](/azure/virtual-machines/dcv2-series) virtual machines in node pools. In case you need Confidential Computing you must use [DCasv5 or DCadsv5-series confidential VMs](/azure/virtual-machines/dcasv5-dcadsv5-series).
+- You can't use Application Gateway as an Ingress Controller (AGIC) for an Overlay cluster.
+- Windows support is still in Preview.
+- Windows Server 2019 node pools are **not** supported for Overlay.
+- Traffic from host network pods is not able to reach Windows Overlay pods.
+- Sovereign Clouds are not supported.
+- Virtual Machine Availability Sets (VMAS) are not supported for Overlay.
+- Dual-stack networking is not supported in Overlay.
+- You can't use [DCsv2-series](/azure/virtual-machines/dcv2-series) virtual machines in node pools. To meet Confidential Computing requirements, consider using [DCasv5 or DCadsv5-series confidential VMs](/azure/virtual-machines/dcasv5-dcadsv5-series) instead.

-## Install the aks-preview Azure CLI extension
+## Set up Overlay clusters
+
+> [!NOTE]
+> You must have CLI version 2.47.0 or later to use the `--network-plugin-mode` argument. For Windows, you must have the latest aks-preview Azure CLI extension installed and can follow the instructions below.
+
+Create a cluster with Azure CNI Overlay. Use the argument `--network-plugin-mode` to specify that this is an Overlay cluster. If the pod CIDR isn't specified, AKS assigns a default space: 10.244.0.0/16. Replace the values for the variables `clusterName`, `resourceGroup`, and `location`.
+
+```azurecli-interactive
+clusterName="myOverlayCluster"
+resourceGroup="myResourceGroup"
+location="westcentralus"
+az aks create -n $clusterName -g $resourceGroup --location $location --network-plugin azure --network-plugin-mode overlay --pod-cidr 192.168.0.0/16
+```
+
+## Install the aks-preview Azure CLI extension - Windows only

 [!INCLUDE [preview features callout](includes/preview/preview-callout.md)]

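After running the `az aks create` command in the added "Set up Overlay clusters" section above, a quick way to confirm the cluster is in Overlay mode is to query its network profile. This is a minimal sketch, not part of the article; it assumes the `networkProfile.networkPluginMode` and `networkProfile.podCidr` properties returned by `az aks show`:

```azurecli-interactive
# Verify the plugin mode and pod CIDR of the new cluster (property names assumed from the AKS API).
az aks show -n $clusterName -g $resourceGroup \
  --query "{pluginMode:networkProfile.networkPluginMode, podCidr:networkProfile.podCidr}" \
  -o table
```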
@@ -135,19 +153,10 @@ When the status reflects *Registered*, refresh the registration of the *Microsof
 az provider register --namespace Microsoft.ContainerService
 ```

-## Set up overlay clusters
+## Upgrade an existing cluster to CNI Overlay - Preview

-Create a cluster with Azure CNI Overlay. Use the argument `--network-plugin-mode` to specify that this is an overlay cluster. If the pod CIDR is not specified then AKS assigns a default space, viz. 10.244.0.0/16. Replace the values for the variables `clusterName`, `resourceGroup`, and `location`.
-
-```azurecli-interactive
-clusterName="myOverlayCluster"
-resourceGroup="myResourceGroup"
-location="westcentralus"
-
-az aks create -n $clusterName -g $resourceGroup --location $location --network-plugin azure --network-plugin-mode overlay --pod-cidr 192.168.0.0/16
-```
-
-## Upgrade an existing cluster to CNI Overlay
+> [!NOTE]
+> The upgrade capability is still in preview and requires the preview AKS Azure CLI extension.

 You can update an existing Azure CNI cluster to Overlay if the cluster meets certain criteria. A cluster must:

@@ -156,17 +165,17 @@ You can update an existing Azure CNI cluster to Overlay if the cluster meets cer
 - **not** have network policies enabled
 - **not** be using any Windows node pools with docker as the container runtime

-The upgrade process will trigger each node pool to be re-imaged simultaneously (i.e. upgrading each node pool separately to overlay is not supported). Any disruptions to cluster networking will be similar to a node image upgrade or Kubernetes version upgrade where each node in a node pool is re-imaged.
+The upgrade process will trigger each node pool to be re-imaged simultaneously (i.e. upgrading each node pool separately to Overlay is not supported). Any disruptions to cluster networking will be similar to a node image upgrade or Kubernetes version upgrade where each node in a node pool is re-imaged.

 > [!WARNING]
-> Due to the limitation around Windows overlay pods incorrectly SNATing packets from host network pods, this has a more detrimental effect for clusters upgrading to overlay.
+> Due to the limitation around Windows Overlay pods incorrectly SNATing packets from host network pods, this has a more detrimental effect for clusters upgrading to Overlay.

-While nodes are being upgraded to use the CNI Overlay feature, pods that are on nodes which haven't been upgraded yet will not be able to communicate with pods on Windows nodes that have been upgraded to Overlay. In other words, overlay Windows pods will not be able to reply to any traffic from pods still running with an IP from the node subnet.
+While nodes are being upgraded to use the CNI Overlay feature, pods that are on nodes which haven't been upgraded yet will not be able to communicate with pods on Windows nodes that have been upgraded to Overlay. In other words, Overlay Windows pods will not be able to reply to any traffic from pods still running with an IP from the node subnet.

-This network disruption will only occur during the upgrade. Once the migration to overlay has completed for all node pools, all overlay pods will be able to communicate successfully with the Windows pods.
+This network disruption will only occur during the upgrade. Once the migration to Overlay has completed for all node pools, all Overlay pods will be able to communicate successfully with the Windows pods.

 > [!NOTE]
-> The upgrade completion doesn't change the existing limitation that host network pods **cannot** communicate with Windows overlay pods.
+> The upgrade completion doesn't change the existing limitation that host network pods **cannot** communicate with Windows Overlay pods.

 ## Next steps

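For the upgrade path covered in the last two hunks, a minimal sketch of the conversion command follows (not part of the article; it assumes the preview aks-preview Azure CLI extension noted above and that `az aks update` accepts `--network-plugin-mode`, and the resource names are illustrative):

```azurecli-interactive
# Illustrative sketch: convert an existing Azure CNI cluster to Overlay mode.
# This triggers a simultaneous re-image of all node pools, as described above.
az aks update -n myAzureCniCluster -g myResourceGroup \
  --network-plugin-mode overlay \
  --pod-cidr 192.168.0.0/16
```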