
Commit d18bf85

Merge pull request #217651 from MGoedtel/bug33281
fixed issues with Overlay CNI article

2 parents 0877ec1 + ff8fa44

1 file changed

articles/aks/azure-cni-overlay.md

Lines changed: 60 additions & 46 deletions
@@ -4,24 +4,25 @@ description: Learn how to configure Azure CNI Overlay networking in Azure Kubern
services: container-service
ms.topic: article
ms.custom: references_regions
ms.date: 11/08/2022
---

# Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS)

The traditional [Azure Container Networking Interface (CNI)](./configure-azure-cni.md) assigns a VNet IP address to every pod, either from a pre-reserved set of IPs on every node or from a separate subnet reserved for pods. This approach requires planning IP addresses up front and can lead to address exhaustion, which makes it difficult to scale your clusters as your application demands grow.

With Azure CNI Overlay, the cluster nodes are deployed into an Azure Virtual Network (VNet) subnet, whereas pods are assigned IP addresses from a private CIDR logically different from the VNet hosting the nodes. Pod and node traffic within the cluster uses an overlay network, and Network Address Translation (using the node's IP address) is used to reach resources outside the cluster. This solution saves a significant number of VNet IP addresses and enables you to seamlessly scale your cluster to very large sizes. An added advantage is that the private CIDR can be reused in different AKS clusters, truly extending the IP space available for containerized applications in AKS.

> [!NOTE]
> Azure CNI Overlay is currently available only in the following regions:
> - North Central US
> - West Central US

## Overview of overlay networking

In overlay networking, only the Kubernetes cluster nodes are assigned IPs from a subnet. Pods receive IPs from a private CIDR that you provide at the time of cluster creation. Each node is assigned a `/24` address space carved out from the same CIDR. Additional nodes created when you scale out a cluster automatically receive `/24` address spaces from the same CIDR; a `/16` pod CIDR, for example, can supply `/24` blocks for up to 256 nodes. Azure CNI assigns IPs to pods from this `/24` space.

A separate routing domain is created in the Azure Networking stack for the pod's private CIDR space, which creates an overlay network for direct communication between pods. Pods and nodes talk to each other over this network directly, without any SNAT requirement. There's no need to provision custom routes on the cluster subnet or use an encapsulation method to tunnel traffic between pods, which provides connectivity performance between pods on par with VMs in a VNet.

:::image type="content" source="media/azure-cni-overlay/azure-cni-overlay.png" alt-text="A diagram showing two nodes with three pods each running in an overlay network. Pod traffic to endpoints outside the cluster is routed via NAT.":::

@@ -45,29 +46,31 @@ Like Azure CNI Overlay, Kubenet assigns IP addresses to pods from an address spa

## IP address planning

* **Cluster Nodes**: Cluster nodes go into a subnet in your VNet, so verify that you have a subnet large enough to account for future scale. A simple `/24` subnet can host up to 251 nodes (the first three IP addresses in a subnet are reserved for management operations).

* **Pods**: The overlay solution assigns a `/24` address space for pods on every node from the private CIDR that you specify during cluster creation. The `/24` size is fixed and can't be increased or decreased. You can run up to 250 pods on a node. When planning the pod address space, ensure that the private CIDR is large enough to provide `/24` address spaces for new nodes to support future cluster expansion.

  The following are additional factors to consider when planning the pod IP address space:

  * Pod CIDR space must not overlap with the cluster subnet range.
  * Pod CIDR space must not overlap with IP ranges used in on-premises networks and peered networks.
  * The same pod CIDR space can be used on multiple independent AKS clusters in the same VNet.

* **Kubernetes service address range**: The size of the service address CIDR depends on the number of cluster services you plan to create. It must be smaller than `/12`. This range shouldn't overlap with the pod CIDR range, the cluster subnet range, or IP ranges used in peered VNets and on-premises networks.

* **Kubernetes DNS service IP address**: This is an IP address within the Kubernetes service address range that's used by cluster service discovery. Don't use the first IP address in your address range, as this address is used for the `kubernetes.default.svc.cluster.local` address. Both values can be set at cluster creation, as shown in the example after this list.
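
For example, the following `az aks create` fragment is a minimal sketch with illustrative values; adjust the ranges so that they don't overlap with your own VNet and pod CIDR planning:

```azurecli-interactive
# Illustrative values: the service CIDR is smaller than /12, and the DNS
# service IP sits inside it but isn't the first address in the range.
az aks create \
    --resource-group myResourceGroup \
    --name myOverlayCluster \
    --network-plugin azure \
    --network-plugin-mode overlay \
    --service-cidr 172.17.0.0/16 \
    --dns-service-ip 172.17.0.10
```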

## Maximum pods per node

You can configure the maximum number of pods per node at the time of cluster creation or when you add a new node pool. The default for Azure CNI Overlay is 30. The maximum value that you can specify in Azure CNI Overlay is 250, and the minimum value is 10. The maximum pods per node value configured during creation of a node pool applies to the nodes in that node pool only. The maximum scale in terms of nodes and pods per cluster is the same as the limits supported by AKS today.

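For example, the following minimal sketch (the pool name and value are illustrative) sets this limit with the `--max-pods` parameter when adding a node pool:

```azurecli-interactive
# Add a node pool whose nodes each allow up to 100 pods
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myOverlayCluster \
    --name mypool \
    --max-pods 100
```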

## Choosing a network model to use

Azure CNI offers two IP addressing options for pods: the traditional configuration that assigns VNet IPs to pods, and overlay networking. The choice of which option to use for your AKS cluster is a balance between flexibility and advanced configuration needs. The following considerations help outline when each network model may be the most appropriate.

Use overlay networking when:

* You want to scale to a large number of pods, but have limited IP address space in your VNet.
* Most of the pod communication is within the cluster.
* You don't need advanced AKS features, such as virtual nodes.

@@ -83,64 +86,75 @@ Use the traditional VNet option when:

The overlay solution has the following limitations today:

* Only available for Linux and not for Windows.
* You can't deploy multiple overlay clusters on the same subnet.
* Overlay can be enabled only for new clusters. Existing (already deployed) clusters can't be configured to use overlay.
* You can't use Application Gateway as an Ingress Controller (AGIC) for an overlay cluster.
* v5 VM SKUs are currently not supported.

## Install the aks-preview Azure CLI extension

[!INCLUDE [preview features callout](includes/preview/preview-callout.md)]

To install the aks-preview extension, run the following command:

```azurecli
az extension add --name aks-preview
```

Run the following command to update to the latest version of the extension:

```azurecli
az extension update --name aks-preview
```
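
Optionally, you can confirm which version of the extension is installed:

```azurecli
# Show the installed aks-preview extension version
az extension show --name aks-preview --query version
```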

## Register the 'AzureOverlayPreview' feature flag

Register the `AzureOverlayPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example:

```azurecli-interactive
az feature register --namespace "Microsoft.ContainerService" --name "AzureOverlayPreview"
```

It takes a few minutes for the status to show *Registered*. Verify the registration status by using the [az feature list][az-feature-list] command:

```azurecli-interactive
az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/AzureOverlayPreview')].{Name:name,State:properties.state}"
```

When ready, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command:

```azurecli-interactive
az provider register --namespace Microsoft.ContainerService
```

## Set up overlay clusters

The following steps create a new virtual network with a subnet for the cluster nodes and an AKS cluster that uses Azure CNI Overlay.

1. Create a virtual network with a subnet for the cluster nodes. Replace the values of the `resourceGroup`, `vnet`, and `location` variables with your own.

    ```azurecli-interactive
    resourceGroup="myResourceGroup"
    vnet="myVirtualNetwork"
    location="westcentralus"

    # Create the resource group
    az group create --name $resourceGroup --location $location

    # Create a VNet and a subnet for the cluster nodes
    az network vnet create -g $resourceGroup --location $location --name $vnet --address-prefixes 10.0.0.0/8 -o none
    az network vnet subnet create -g $resourceGroup --vnet-name $vnet --name nodesubnet --address-prefix 10.10.0.0/16 -o none
    ```

2. Create a cluster with Azure CNI Overlay. Use the `--network-plugin-mode` argument to specify that this is an overlay cluster. If you don't specify a pod CIDR, AKS assigns a default space of 10.244.0.0/16; the pod CIDR can only be set at the time of cluster creation. Replace the values of the `clusterName` and `subscription` variables with your own.

    ```azurecli-interactive
    clusterName="myOverlayCluster"
    subscription="aaaaaaa-aaaaa-aaaaaa-aaaa"

    az aks create -n $clusterName -g $resourceGroup --location $location --network-plugin azure --network-plugin-mode overlay --pod-cidr 192.168.0.0/16 --vnet-subnet-id /subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnet/subnets/nodesubnet
    ```
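
Once the cluster is created, you can pull the cluster credentials and confirm that pod IPs come from the overlay pod CIDR rather than the node subnet (a quick sanity check that assumes `kubectl` is installed locally):

```azurecli-interactive
# Fetch credentials for the new cluster
az aks get-credentials --resource-group $resourceGroup --name $clusterName

# Pod IPs should fall inside the 192.168.0.0/16 pod CIDR, while node IPs
# come from the 10.10.0.0/16 node subnet
kubectl get pods --all-namespaces -o wide
kubectl get nodes -o wide
```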

## Next steps

To learn how to use AKS with your own Container Network Interface (CNI) plugin, see [Bring your own Container Network Interface (CNI) plugin](use-byo-cni.md).
