articles/aks/azure-cni-overlay.md
ms.author: allensu
ms.subservice: aks-networking
ms.topic: how-to
ms.custom: references_regions
ms.date: 03/21/2023
---

# Configure Azure CNI Overlay networking in Azure Kubernetes Service (AKS)
## IP address planning
- **Cluster Nodes**: Cluster nodes go into a subnet in your VNet, so verify you have a subnet large enough to account for future scale. A simple `/24` subnet can host up to 251 nodes (the first three IP addresses in a subnet are reserved for management operations).

- **Pods**: The overlay solution assigns a `/24` address space for pods on every node from the private CIDR that you specify during cluster creation. The `/24` size is fixed and can't be increased or decreased. You can run up to 250 pods on a node. When planning the pod address space, ensure that the private CIDR is large enough to provide `/24` address spaces for new nodes to support future cluster expansion.
  The following are additional factors to consider when planning the pod IP address space:

  - Pod CIDR space must not overlap with the cluster subnet range.
  - Pod CIDR space must not overlap with IP ranges used in on-premises networks and peered networks.
  - The same pod CIDR space can be used on multiple independent AKS clusters in the same VNet.
- **Kubernetes service address range**: The size of the service address CIDR depends on the number of cluster services you plan to create. It must be smaller than `/12`. This range must not overlap with the pod CIDR range, the cluster subnet range, or the IP ranges used in peered VNets and on-premises networks.

- **Kubernetes DNS service IP address**: This is an IP address within the Kubernetes service address range that's used by cluster service discovery. Don't use the first IP address in your address range, as this address is used for the `kubernetes.default.svc.cluster.local` address.
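As a quick sanity check on these sizes, the planning arithmetic can be sketched in shell. The values below are illustrative (a `/16` pod CIDR with the fixed per-node `/24`), not a recommendation:

```shell
#!/bin/bash
# Illustrative capacity math for an overlay pod CIDR (example values only).
pod_cidr_prefix=16   # e.g. --pod-cidr 192.168.0.0/16
per_node_prefix=24   # AKS carves a fixed /24 out of the pod CIDR per node
pods_per_node=250    # maximum pods per node with overlay

# Number of /24 blocks in the pod CIDR = 2^(24 - 16) = 256,
# so a /16 pod CIDR can supply addresses for at most 256 nodes.
max_nodes=$(( 1 << (per_node_prefix - pod_cidr_prefix) ))
max_pods=$(( max_nodes * pods_per_node ))

echo "max nodes: $max_nodes"   # max nodes: 256
echo "max pods: $max_pods"     # max pods: 64000
```

A larger pod CIDR (for example `/12`) raises the node ceiling accordingly; the per-node `/24` is the fixed term in the calculation.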
## Network security groups

Pod to pod traffic with Azure CNI Overlay isn't encapsulated, and subnet [network security group][nsg] rules are applied. If the subnet NSG contains deny rules that would impact the pod CIDR traffic, make sure the following rules are in place to ensure proper cluster functionality (in addition to all [AKS egress requirements][aks-egress]):
- Traffic from the node CIDR to the node CIDR on all ports and protocols
- Traffic from the node CIDR to the pod CIDR on all ports and protocols (required for service traffic routing)
- Traffic from the pod CIDR to the pod CIDR on all ports and protocols (required for pod to pod and pod to service traffic, including DNS)
68
68
Traffic from a pod to any destination outside of the pod CIDR block will utilize SNAT to set the source IP to the IP of the node where the pod is running.
69
69
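If your subnet NSG does contain broad deny rules, one way to add the required allowances is with `az network nsg rule create`. The resource group, NSG name, priorities, and CIDRs below are hypothetical placeholders for your environment:

```shell
#!/bin/bash
# Hypothetical values -- substitute your own resource group, NSG, and CIDRs.
resourceGroup="myResourceGroup"
nsgName="myClusterSubnetNsg"
nodeCidr="10.240.0.0/24"    # cluster node subnet
podCidr="192.168.0.0/16"    # overlay pod CIDR

# Allow node-to-pod traffic on all ports and protocols (service traffic routing).
az network nsg rule create \
  --resource-group "$resourceGroup" \
  --nsg-name "$nsgName" \
  --name AllowNodeToPod \
  --priority 200 \
  --direction Inbound \
  --access Allow \
  --protocol '*' \
  --source-address-prefixes "$nodeCidr" \
  --destination-address-prefixes "$podCidr" \
  --destination-port-ranges '*'

# Allow pod-to-pod traffic (pod to pod and pod to service, including DNS).
az network nsg rule create \
  --resource-group "$resourceGroup" \
  --nsg-name "$nsgName" \
  --name AllowPodToPod \
  --priority 201 \
  --direction Inbound \
  --access Allow \
  --protocol '*' \
  --source-address-prefixes "$podCidr" \
  --destination-address-prefixes "$podCidr" \
  --destination-port-ranges '*'
```

Pick priorities lower (numerically) than your deny rules so the allow rules are evaluated first; a matching node-to-node rule can be added the same way.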
Use overlay networking when:
- You would like to scale to a large number of pods, but have limited IP address space in your VNet.
- Most of the pod communication is within the cluster.
- You don't need advanced AKS features, such as virtual nodes.
Use the traditional VNet option when:
- You have available IP address space.
- Most of the pod communication is to resources outside of the cluster.
- Resources outside the cluster need to reach pods directly.
- You need AKS advanced features, such as virtual nodes.
## Limitations with Azure CNI Overlay

Azure CNI Overlay has the following limitations:
- You can't use Application Gateway as an Ingress Controller (AGIC) for an overlay cluster.
- Windows Server 2019 node pools aren't supported for overlay.
- Traffic from host network pods is not able to reach Windows overlay pods.
## Install the aks-preview Azure CLI extension
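The installation steps are elided in this diff, but the usual pattern for AKS preview features is to add the `aks-preview` extension, or update it if it's already installed:

```shell
# Install the aks-preview extension (or refresh it to the latest version).
az extension add --name aks-preview
az extension update --name aks-preview
```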
```
location="westcentralus"

az aks create -n $clusterName -g $resourceGroup --location $location --network-plugin azure --network-plugin-mode overlay --pod-cidr 192.168.0.0/16
```
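After creation, one way to confirm overlay addressing (a hypothetical check, assuming placeholder cluster and resource group names) is to pull credentials and compare addresses: pod IPs should come from the `192.168.0.0/16` pod CIDR, while node IPs remain in the VNet subnet.

```shell
#!/bin/bash
# Hypothetical names -- match whatever $clusterName and $resourceGroup were set to.
clusterName="myOverlayCluster"
resourceGroup="myResourceGroup"

az aks get-credentials --name "$clusterName" --resource-group "$resourceGroup"

# Pod IPs fall inside the pod CIDR (192.168.0.0/16); node IPs stay in the VNet.
kubectl get pods --all-namespaces -o wide
kubectl get nodes -o wide
```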
## Upgrade an existing cluster to CNI Overlay

You can update an existing Azure CNI cluster to Overlay if the cluster meets certain criteria. A cluster must:

- be on Kubernetes version 1.22+
- **not** be using the dynamic pod IP allocation feature
- **not** have network policies enabled
- **not** be using any Windows node pools with docker as the container runtime
The upgrade process triggers each node pool to be re-imaged simultaneously; upgrading each node pool separately to Overlay isn't supported. Any disruptions to cluster networking are similar to a node image upgrade or Kubernetes version upgrade where each node in a node pool is re-imaged.

> [!WARNING]
> Because Windows overlay pods incorrectly SNAT packets from host network pods, this limitation has a more detrimental effect on clusters upgrading to Overlay.

While nodes are being upgraded to use the CNI Overlay feature, pods on nodes that haven't been upgraded yet can't communicate with pods on Windows nodes that have been upgraded to Overlay. In other words, overlay Windows pods can't reply to any traffic from pods still running with an IP from the node subnet.

This network disruption only occurs during the upgrade. Once the migration to Overlay has completed for all node pools, all overlay pods can communicate successfully with the Windows pods.

> [!NOTE]
> Completing the upgrade doesn't change the existing limitation that host network pods **can't** communicate with Windows overlay pods.
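The upgrade itself is driven by `az aks update` with the overlay plugin mode. A hedged sketch, assuming the placeholder cluster name, resource group, and pod CIDR from the creation example:

```shell
#!/bin/bash
# Assumed names -- substitute the values used when the cluster was created.
clusterName="myOverlayCluster"
resourceGroup="myResourceGroup"
podCidr="192.168.0.0/16"

# Trigger the migration; every node pool is re-imaged as part of this operation.
az aks update \
  --name "$clusterName" \
  --resource-group "$resourceGroup" \
  --network-plugin-mode overlay \
  --pod-cidr "$podCidr"
```

Schedule this like a node image upgrade, since every node in the cluster is re-imaged during the operation.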
## Next steps

To learn how to utilize AKS with your own Container Network Interface (CNI) plugin, see [Bring your own Container Network Interface (CNI) plugin](use-byo-cni.md).