Commit 23b74eb

Authored by Jill Grant

Merge pull request #231620 from chasewilson/chase/overlayUpgrade

Adds upgrade cluster to CNI Overlay Info

Parents: 4dea90d + 3ab2690

articles/aks/azure-cni-overlay.md

Lines changed: 42 additions & 20 deletions
@@ -6,7 +6,7 @@ ms.author: allensu
ms.subservice: aks-networking
ms.topic: how-to
ms.custom: references_regions
ms.date: 03/21/2023
---

# Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS)
@@ -43,27 +43,27 @@ Like Azure CNI Overlay, Kubenet assigns IP addresses to pods from an address spa

## IP address planning

- **Cluster Nodes**: Cluster nodes go into a subnet in your VNet, so verify you have a subnet large enough to account for future scale. A simple `/24` subnet can host up to 251 nodes (the first three IP addresses in a subnet are reserved for management operations).

- **Pods**: The overlay solution assigns a `/24` address space for pods on every node from the private CIDR that you specify during cluster creation. The `/24` size is fixed and can't be increased or decreased. You can run up to 250 pods on a node. When planning the pod address space, ensure that the private CIDR is large enough to provide `/24` address spaces for new nodes to support future cluster expansion.

The following are additional factors to consider when planning the pod IP address space:

- Pod CIDR space must not overlap with the cluster subnet range.
- Pod CIDR space must not overlap with IP ranges used in on-premises networks and peered networks.
- The same pod CIDR space can be used on multiple independent AKS clusters in the same VNet.

- **Kubernetes service address range**: The size of the service address CIDR depends on the number of cluster services you plan to create. It must be smaller than `/12`. This range must not overlap with the pod CIDR range, the cluster subnet range, or IP ranges used in peered VNets and on-premises networks.

- **Kubernetes DNS service IP address**: This is an IP address within the Kubernetes service address range that's used by cluster service discovery. Don't use the first IP address in your address range, as this address is used for the `kubernetes.default.svc.cluster.local` address.
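As a quick sanity check on sizing, the fixed `/24`-per-node allocation means the maximum node count a pod CIDR can support is 2^(24 − prefix). A minimal sketch, with the `/16` prefix chosen purely for illustration:

```shell
# Each node consumes one /24 from the pod CIDR, so a /16 yields
# 2^(24-16) = 256 node-sized blocks. The prefix value is an assumed example.
pod_cidr_prefix=16
node_blocks=$(( 2 ** (24 - pod_cidr_prefix) ))
echo "A /${pod_cidr_prefix} pod CIDR supports up to ${node_blocks} nodes"
```

A `/16` therefore comfortably covers a cluster at the default node limits, while a `/21` would cap the cluster at 8 nodes.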

6060
## Network security groups

Pod to pod traffic with Azure CNI Overlay isn't encapsulated, and subnet [network security group][nsg] rules are applied. If the subnet NSG contains deny rules that would impact the pod CIDR traffic, make sure the following rules are in place to ensure proper cluster functionality (in addition to all [AKS egress requirements][aks-egress]):

- Traffic from the node CIDR to the node CIDR on all ports and protocols
- Traffic from the node CIDR to the pod CIDR on all ports and protocols (required for service traffic routing)
- Traffic from the pod CIDR to the pod CIDR on all ports and protocols (required for pod to pod and pod to service traffic, including DNS)

Traffic from a pod to any destination outside of the pod CIDR block uses SNAT to set the source IP to the IP of the node where the pod is running.
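For illustration, an allow rule of the node-CIDR-to-pod-CIDR kind could be added with the Azure CLI. The resource group, NSG name, rule name, priority, and address prefixes below are hypothetical placeholders, not values from this article:

```shell
# Hypothetical names and ranges: substitute your own resource group, NSG,
# node subnet CIDR (source), and pod CIDR (destination).
az network nsg rule create \
  --resource-group myResourceGroup \
  --nsg-name myClusterSubnetNsg \
  --name AllowNodeToPod \
  --priority 100 \
  --direction Inbound \
  --access Allow \
  --protocol '*' \
  --source-address-prefixes 10.240.0.0/24 \
  --destination-address-prefixes 192.168.0.0/16 \
  --destination-port-ranges '*'
```

Equivalent rules for node-to-node and pod-to-pod traffic follow the same shape with the prefixes swapped.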

@@ -79,23 +79,24 @@ Azure CNI offers two IP addressing options for pods - the traditional configurat

Use overlay networking when:

- You would like to scale to a large number of pods, but have limited IP address space in your VNet.
- Most of the pod communication is within the cluster.
- You don't need advanced AKS features, such as virtual nodes.

Use the traditional VNet option when:

- You have available IP address space.
- Most of the pod communication is to resources outside of the cluster.
- Resources outside the cluster need to reach pods directly.
- You need advanced AKS features, such as virtual nodes.
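At creation time, the choice between the two options comes down to the `--network-plugin-mode` flag. A sketch, reusing the variable names from the creation example later in this article:

```shell
# Overlay: pods get IPs from a private CIDR; only nodes consume VNet IPs.
az aks create -n $clusterName -g $resourceGroup --location $location \
  --network-plugin azure --network-plugin-mode overlay --pod-cidr 192.168.0.0/16

# Traditional VNet option: omit --network-plugin-mode; pods receive
# VNet-routable IPs from the node subnet.
az aks create -n $clusterName -g $resourceGroup --location $location \
  --network-plugin azure
```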
9292

9393
## Limitations with Azure CNI Overlay

Azure CNI Overlay has the following limitations:

- You can't use Application Gateway as an Ingress Controller (AGIC) for an overlay cluster.
- Windows Server 2019 node pools aren't supported for overlay.
- Traffic from host network pods is not able to reach Windows overlay pods.

## Install the aks-preview Azure CLI extension
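The body of this section is elided in the hunk below; installing or refreshing the extension typically looks like the following standard Azure CLI commands:

```shell
# Install the aks-preview extension, then make sure it's up to date.
az extension add --name aks-preview
az extension update --name aks-preview
```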

@@ -145,6 +146,27 @@ location="westcentralus"
az aks create -n $clusterName -g $resourceGroup --location $location --network-plugin azure --network-plugin-mode overlay --pod-cidr 192.168.0.0/16
```

## Upgrade an existing cluster to CNI Overlay

You can update an existing Azure CNI cluster to Overlay if the cluster meets certain criteria. A cluster must:

- be on Kubernetes version 1.22+
- **not** be using the dynamic pod IP allocation feature
- **not** have network policies enabled
- **not** be using any Windows node pools with docker as the container runtime

The upgrade process triggers each node pool to be re-imaged simultaneously; upgrading each node pool separately to overlay isn't supported. Any disruptions to cluster networking are similar to a node image upgrade or Kubernetes version upgrade, where each node in a node pool is re-imaged.

> [!WARNING]
> Due to the limitation of Windows overlay pods incorrectly SNATing packets from host network pods, upgrading to overlay has a more detrimental effect on clusters with Windows node pools.

While nodes are being upgraded to use the CNI Overlay feature, pods on nodes that haven't been upgraded yet can't communicate with pods on Windows nodes that have already been upgraded to Overlay. In other words, overlay Windows pods can't reply to any traffic from pods still running with an IP from the node subnet.

This network disruption only occurs during the upgrade. Once the migration to overlay has completed for all node pools, all overlay pods can communicate successfully with the Windows pods.

> [!NOTE]
> Upgrade completion doesn't change the existing limitation that host network pods **cannot** communicate with Windows overlay pods.
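If the criteria above are met, the migration itself can be triggered with `az aks update`; the command shape below mirrors the creation example and should be verified against the current CLI:

```shell
# Trigger the in-place migration to overlay mode.
# This re-images every node pool in the cluster simultaneously.
az aks update -n $clusterName -g $resourceGroup --network-plugin-mode overlay
```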
## Next steps

To learn how to utilize AKS with your own Container Network Interface (CNI) plugin, see [Bring your own Container Network Interface (CNI) plugin](use-byo-cni.md).
