Commit e600852

Merge pull request #263109 from denniszielke/main
Added documentation for multi-subnet support in AKS Overlay CNI
2 parents 787da19 + 5a9a8d3 commit e600852


articles/aks/azure-cni-overlay.md

Lines changed: 22 additions & 3 deletions
@@ -1,6 +1,6 @@
 ---
 title: Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS)
-description: Learn how to configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS), including deploying an AKS cluster into an existing virtual network and subnet.
+description: Learn how to configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS), including deploying an AKS cluster into an existing virtual network and subnets.
 author: asudbring
 ms.author: allensu
 ms.subservice: aks-networking
@@ -17,7 +17,7 @@ With Azure CNI Overlay, the cluster nodes are deployed into an Azure Virtual Net
 
 ## Overview of Overlay networking
 
-In Overlay networking, only the Kubernetes cluster nodes are assigned IPs from a subnet. Pods receive IPs from a private CIDR provided at the time of cluster creation. Each node is assigned a `/24` address space carved out from the same CIDR. Extra nodes created when you scale out a cluster automatically receive `/24` address spaces from the same CIDR. Azure CNI assigns IPs to pods from this `/24` space.
+In Overlay networking, only the Kubernetes cluster nodes are assigned IPs from subnets. Pods receive IPs from a private CIDR provided at the time of cluster creation. Each node is assigned a `/24` address space carved out from the same CIDR. Extra nodes created when you scale out a cluster automatically receive `/24` address spaces from the same CIDR. Azure CNI assigns IPs to pods from this `/24` space.
 
 A separate routing domain is created in the Azure Networking stack for the pod's private CIDR space, which creates an Overlay network for direct communication between pods. There's no need to provision custom routes on the cluster subnet or use an encapsulation method to tunnel traffic between pods, which provides connectivity performance between pods on par with VMs in a VNet. Workloads running within the pods are not even aware that network address manipulation is happening.
 
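For context on the changed paragraph above (this aside is not part of the diff): on an existing Overlay cluster you can read back the plugin mode and the private pod CIDR that the per-node `/24` ranges are carved from. A minimal sketch, assuming placeholder resource group and cluster names:

```azurecli-interactive
# Placeholder names; adjust to your environment.
resourceGroup="myResourceGroup"
clusterName="myOverlayCluster"

# Show the network plugin, the plugin mode, and the private pod CIDR.
az aks show -g $resourceGroup -n $clusterName \
  --query "networkProfile.{plugin:networkPlugin, mode:networkPluginMode, podCidr:podCidr}" -o table
```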
@@ -43,7 +43,7 @@ Like Azure CNI Overlay, Kubenet assigns IP addresses to pods from an address spa
 
 ## IP address planning
 
-- **Cluster Nodes**: When setting up your AKS cluster, make sure your VNet subnet has enough room to grow for future scaling. Keep in mind that clusters can't scale across subnets, but you can always add new node pools in another subnet within the same VNet for extra space. A `/24`subnet can fit up to 251 nodes since the first three IP addresses are reserved for management tasks.
+- **Cluster Nodes**: When setting up your AKS cluster, make sure your VNet subnets have enough room to grow for future scaling. You can assign each node pool to a dedicated subnet. A `/24` subnet can fit up to 251 nodes since the first three IP addresses are reserved for management tasks.
 - **Pods**: The Overlay solution assigns a `/24` address space for pods on every node from the private CIDR that you specify during cluster creation. The `/24` size is fixed and can't be increased or decreased. You can run up to 250 pods on a node. When planning the pod address space, ensure the private CIDR is large enough to provide `/24` address spaces for new nodes to support future cluster expansion.
 - When planning IP address space for pods, consider the following factors:
   - The same pod CIDR space can be used on multiple independent AKS clusters in the same VNet.
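As an illustrative aside to the planning guidance above (not part of this change), dedicated node subnets can be prepared by giving the VNet one `/24` per planned node pool. The names and address ranges below are assumptions:

```azurecli-interactive
# Placeholder names and ranges; adjust to your environment.
resourceGroup="myResourceGroup"
location="westcentralus"
vnetName="yourVnetName"

# VNet with a /24 node subnet for the first node pool (up to 251 usable node IPs).
az network vnet create -g $resourceGroup -l $location -n $vnetName \
  --address-prefixes 10.0.0.0/16 \
  --subnet-name nodesubnet1 --subnet-prefixes 10.0.1.0/24

# A second /24 subnet that a later node pool can be assigned to.
az network vnet subnet create -g $resourceGroup --vnet-name $vnetName \
  -n nodesubnet2 --address-prefixes 10.0.2.0/24
```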
@@ -112,6 +112,25 @@ az aks create -n $clusterName -g $resourceGroup \
 --pod-cidr 192.168.0.0/16
 ```
 
+## Add a new node pool to a dedicated subnet
+
+After you have created a cluster with Azure CNI Overlay, you can create another node pool and assign its nodes to a new subnet of the same VNet.
+This approach can be useful if you want to control the ingress or egress IPs of the hosts for traffic to and from targets in the same VNet or peered VNets.
+
+```azurecli-interactive
+clusterName="myOverlayCluster"
+resourceGroup="myResourceGroup"
+location="westcentralus"
+nodepoolName="newpool1"
+subscriptionId=$(az account show --query id -o tsv)
+vnetName="yourVnetName"
+subnetName="yourNewSubnetName"
+subnetResourceId="/subscriptions/$subscriptionId/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnetName/subnets/$subnetName"
+az aks nodepool add -g $resourceGroup --cluster-name $clusterName \
+  --name $nodepoolName --node-count 1 \
+  --mode system --vnet-subnet-id $subnetResourceId
+```
+
 ## Upgrade an existing cluster to CNI Overlay
 
 > [!NOTE]
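Beyond the diff itself, one way to confirm that a node pool created this way landed in the intended subnet is to read its `vnetSubnetId` back (reusing the variable names from the added snippet; illustrative only, not part of this commit):

```azurecli-interactive
# Returns the resource ID of the subnet the node pool's nodes are attached to.
az aks nodepool show -g $resourceGroup --cluster-name $clusterName \
  --name $nodepoolName --query vnetSubnetId -o tsv
```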
