**articles/aks/concepts-network-azure-cni-overlay.md** (3 additions, 4 deletions)

```diff
@@ -28,11 +28,11 @@ You can provide outbound (egress) connectivity to the internet for Overlay pods
 You can configure ingress connectivity to the cluster using an ingress controller, such as Nginx or [HTTP application routing](./http-application-routing.md). You cannot configure ingress connectivity using Azure App Gateway. For details see [Limitations with Azure CNI Overlay](#limitations-with-azure-cni-overlay).

-## Differences between Kubenet and Azure CNI Overlay
+## Differences between kubenet and Azure CNI Overlay

-The following table provides a detailed comparison between Kubenet and Azure CNI Overlay:
+The following table provides a detailed comparison between kubenet and Azure CNI Overlay:
 | Cluster scale | 5000 nodes and 250 pods/node | 400 nodes and 250 pods/node |
 | Network configuration | Simple - no extra configurations required for pod networking | Complex - requires route tables and UDRs on cluster subnet for pod networking |
@@ -110,7 +110,6 @@ Azure CNI Overlay has the following limitations:
 - If you're using your own subnet to deploy the cluster, the names of the subnet, VNet, and resource group containing the VNet, must be 63 characters or less. These names will be used as labels in AKS worker nodes and are subject to [Kubernetes label syntax rules](https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#syntax-and-character-set).
```
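The 63-character limit in the last hunk exists because these resource names become Kubernetes label values on the worker nodes. A minimal pre-flight check could validate names before cluster creation; the function name and sample names below are illustrative, not part of any AKS tooling:

```python
import re

# Kubernetes label values: at most 63 characters, alphanumeric at both ends,
# with dashes, underscores, and dots allowed in between (empty is also valid).
LABEL_VALUE = re.compile(r"^(?:[A-Za-z0-9](?:[A-Za-z0-9_.-]{0,61}[A-Za-z0-9])?)?$")

def valid_as_node_label(name: str) -> bool:
    """Return True if `name` can be used verbatim as a node label value."""
    return bool(LABEL_VALUE.match(name))

print(valid_as_node_label("aks-nodes-subnet"))  # True
print(valid_as_node_label("a" * 64))            # False: longer than 63 characters
```

The same check applies to the VNet and resource group names, since all three surface as node labels.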
**articles/aks/concepts-network-azure-cni-pod-subnet.md** (9 additions, 8 deletions)

```diff
@@ -7,11 +7,12 @@ author: schaffererin
 ms.author: schaffererin
 ms.custom: fasttrack-edit
+ms.custom: references_regions
 ---

 # Azure Container Networking Interface (CNI) Pod Subnet

-Azure CNI Pod Subnet assigns IP addresses to pods from a separate subnet from your cluster Nodes. This feature is available in two modes: Dynamic IP Allocation and Static Block Allocation(Preview).
+Azure CNI Pod Subnet assigns IP addresses to pods from a separate subnet from your cluster Nodes. This feature is available in two modes: Dynamic IP Allocation and Static Block Allocation(Preview).

 ## Prerequisites
@@ -37,15 +38,15 @@ The dynamic IP allocation mode offers the following benefits:
 - **Better IP utilization**: IPs are dynamically allocated to cluster Pods from the Pod subnet. This leads to better utilization of IPs in the cluster compared to the traditional CNI solution, which does static allocation of IPs for every node.
 - **Scalable and flexible**: Node and pod subnets can be scaled independently. A single pod subnet can be shared across multiple node pools of a cluster or across multiple AKS clusters deployed in the same VNet. You can also configure a separate pod subnet for a node pool.
-- **High performance**: Since pod are assigned VNet IPs, they have direct connectivity to other cluster pod and resources in the VNet. The solution supports very large clusters without any degradation in performance.
+- **High performance**: Since pods are assigned VNet IPs, they have direct connectivity to other cluster pods and resources in the VNet. The solution supports very large clusters without any degradation in performance.
 - **Separate VNet policies for pods**: Since pods have a separate subnet, you can configure separate VNet policies for them that are different from node policies. This enables many useful scenarios, such as allowing internet connectivity only for pods and not for nodes, fixing the source IP for pod in a node pool using an Azure NAT Gateway, and using network security groups (NSGs) to filter traffic between node pools.
 - **Kubernetes network policies**: Both the Azure Network Policies and Calico work with this mode.

 ### Plan IP addressing

 With dynamic IP allocation, nodes and pods scale independently, so you can plan their address spaces separately. Since pod subnets can be configured to the granularity of a node pool, you can always add a new subnet when you add a node pool. The system pods in a cluster/node pool also receive IPs from the pod subnet, so this behavior needs to be accounted for.
-IPs are allocated to nodes in batches of 16. Pod subnet IP allocation should be planned with a minimum of 16 IPs per node in the cluster, as the nodes request 16 IPs on startup and request another batch of 16 any time there are <8 IPs unallocated in their allotment.
+IPs are allocated to nodes in batches of 16. Pod subnet IP allocation should be planned with a minimum of 16 IPs per node in the cluster, as the nodes request 16 IPs on startup and request another batch of 16 anytime there are <8 IPs unallocated in their allotment.

 IP address planning for Kubernetes services and Docker Bridge remain unchanged.
@@ -66,13 +67,13 @@ The static block allocation mode offers the following benefits:
 Below are some of the limitations of using Azure CNI Static Block allocation:
 - Minimum Kubernetes Version required is 1.28
 - Maximum subnet size supported is x.x.x.x/12 ~ 1 million IPs
-- Not supported for Windows node pools (Windows support coming soon)
-- Not supported for Cilium Data Plane (support coming soon)
+- Not supported for Windows node pools
+- Not supported for Cilium Data Plane
 - Only a single mode of operation can be used per subnet. If a subnet uses Static Block allocation mode, it cannot be use Dynamic IP allocation mode in a different cluster or node pool with the same subnet and vice versa.
 - Only supported in new clusters or when adding node pools with a different subnet to existing clusters. Migrating or updating existing clusters or node pools is not supported.
-- Across all the CIDR blocks assigned to a node in the node pool, one IP will be selected as the primary IP of the node. Thus, for network administrators selecting the `--max-pods` value try to use the calculation below to best serve your needs and have optimal usage of IPs in the subnet:
-`max_pods` = (N * 16) - 1`
-where N is any positive integer and N > 0
+- Across all the CIDR blocks assigned to a node in the node pool, one IP will be selected as the primary IP of the node. Thus, for network administrators selecting the `--max-pods` value try to use the calculation below to best serve your needs and have optimal usage of IPs in the subnet:
+
+`max_pods = (N * 16) - 1` where `N` is any positive integer and `N` > 0
```
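The batch-of-16 behavior and the `max_pods = (N * 16) - 1` rule in the diff above translate into a small planning calculation. The sketch below is an interpretation of the documented behavior, not an AKS API; in particular, `peak_ips_per_node` assumes a node keeps requesting batches until at least 8 IPs remain free at its maximum pod count:

```python
import math

def peak_ips_per_node(max_pods: int) -> int:
    """Dynamic IP allocation: IPs arrive in batches of 16, and a node asks
    for another batch whenever fewer than 8 of its IPs are unallocated.
    At max_pods running, the node therefore holds enough whole batches to
    keep at least 8 IPs free (an interpretation of the documented trigger)."""
    return math.ceil((max_pods + 8) / 16) * 16

def static_block_max_pods(n_blocks: int) -> int:
    """Static Block allocation: N CIDR blocks give N * 16 IPs, and one IP
    is reserved as the node's primary IP, so max_pods = (N * 16) - 1."""
    if n_blocks < 1:
        raise ValueError("N must be a positive integer")
    return n_blocks * 16 - 1

print(peak_ips_per_node(30))     # 48: three batches of 16 keep >= 8 IPs free at 30 pods
print(static_block_max_pods(2))  # 31
```

Under this reading, a node pool with the default 30 pods per node should budget 48 pod-subnet IPs per node rather than just two batches of 16.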
**articles/aks/concepts-network-cni-overview.md** (8 additions, 9 deletions)

```diff
@@ -38,14 +38,14 @@ Azure Kubernetes Service provides the following CNI plugins for overlay networki
 ### Flat networks

-Unlike an overlay network, a flat network model in AKS assigns IP addresses to pods from a subnet from the same Azure VNet as the AKS nodes. This means that traffic leaving you clusters is not SNAT'd, and the pod IP address is directly exposed to the destination. This can be useful for some scenarios, such as when you need to expose pod IP addresses to external services.
+Unlike an overlay network, a flat network model in AKS assigns IP addresses to pods from a subnet from the same Azure VNet as the AKS nodes. This means that traffic leaving your clusters is not SNAT'd, and the pod IP address is directly exposed to the destination. This can be useful for some scenarios, such as when you need to expose pod IP addresses to external services.

-:::image type="content" source="media/networking-overview/advanced-networking-diagram-01.png" alt-text="{A diagram showing two nodes with three pods each running in a flat network model}":::
+:::image type="content" source="media/networking-overview/advanced-networking-diagram-01.png" alt-text="A diagram showing two nodes with three pods each running in a flat network model.":::

 Azure Kubernetes Service provides two CNI plugins for flat networking. This article doesn't go into depth for each plugin option. For more information, see the linked documentation:

--[Azure CNI Pod Subnet][azure-cni-podsubnet], the recommended CNI plugin for flat networking scenarios.
--[Azure CNI Node Subnet][azure-cni-nodesubnet], a legacy flat network model CNI generally only recommends you use if you _**need**_ a managed VNet for your cluster.
+-[Azure CNI Pod Subnet][azure-cni-pod-subnet], the recommended CNI plugin for flat networking scenarios.
+-[Azure CNI Node Subnet][azure-cni-node-subnet], a legacy flat network model CNI generally only recommends you use if you _**need**_ a managed VNet for your cluster.
@@ -96,7 +96,7 @@ You might also want to compare the features of each CNI plugin. The following ta
 Depending on the CNI you use, your cluster virtual network resources can be deployed in one of the following ways:

-- The Azure platform can automatically create and configure the virtual network resources when you create an AKS cluster. like in Azure CNI Overlay, Azure CNI Nodesubnet, and Kubenet.
+- The Azure platform can automatically create and configure the virtual network resources when you create an AKS cluster. like in Azure CNI Overlay, Azure CNI Node subnet, and Kubenet.
 - You can manually create and configure the virtual network resources and attach to those resources when you create your AKS cluster.

 Although capabilities like service endpoints or UDRs are supported, the [support policies for AKS][support-policies] define what changes you can make. For example:
@@ -119,9 +119,8 @@ There are several requirements and considerations to keep in mind when planning
 ## Next Steps

-### CNI plugin documentation:
 -[Azure CNI Overlay][azure-cni-overlay]
--[Azure CNI Pod Subnet][azure-cni-podsubnet]
+
+-[Azure CNI Pod Subnet][azure-cni-pod-subnet]
 -[Legacy CNI Options][legacy-cni-options]
 -[IP Address Planning for your clusters][ip-address-planning]
@@ -131,9 +130,9 @@ There are several requirements and considerations to keep in mind when planning
```
**articles/aks/concepts-network-ip-address-planning.md** (10 additions, 9 deletions)

```diff
@@ -27,7 +27,7 @@ It's important to ensure you allocate enough space in your private CIDR range fo
 ### Flat networks

-Flat networks, like [Azure CNI Pod Subnet][azure-cni-podsubnet], require a large enough subnet to accommodate both nodes _and_ pods. Since nodes and pods receive IPs from your VNet, you need to plan for the maximum number of nodes and pods you expect to run. Azure CNI Pod Subnet uses a subnet for your nodes and a separate subnet for your pods, so you need to plan for both.
+Flat networks, like [Azure CNI Pod Subnet][azure-cni-pod-subnet], require a large enough subnet to accommodate both nodes _and_ pods. Since nodes and pods receive IPs from your VNet, you need to plan for the maximum number of nodes and pods you expect to run. Azure CNI Pod Subnet uses a subnet for your nodes and a separate subnet for your pods, so you need to plan for both.
@@ -39,13 +39,13 @@ When you **upgrade** your AKS cluster, a new node is deployed in the cluster. Se
 When you **scale** an AKS cluster, a new node is deployed in the cluster. Services and workloads begin to run on the new node. Your IP address range needs to take into considerations how you want to scale up the number of nodes and pods your cluster can support. One additional node for upgrade operations should also be included. Your node count is then `n + number-of-additional-scaled-nodes-you-anticipate + max surge`.

-If you're using [Azure CNI Pod Subnet][azure-cni-podsubnet] and you expect your nodes to run the maximum number of pods and you regularly destroy and deploy pods, you should also factor in extra IP addresses per node. There can be few seconds latency required to delete a service and release its IP address for a new service to be deployed and acquire the address. The extra IP addresses account for this possibility.
+If you're using [Azure CNI Pod Subnet][azure-cni-pod-subnet] and you expect your nodes to run the maximum number of pods and you regularly destroy and deploy pods, you should also factor in extra IP addresses per node. There can be few seconds latency required to delete a service and release its IP address for a new service to be deployed and acquire the address. The extra IP addresses account for this possibility.

 The IP address plan for an AKS cluster consists of a virtual network, at least one subnet for nodes and pods, and a Kubernetes service address range.

 | Azure Resource | Address Range | Limits and Sizing |
-| Azure Virtual Network | Max size /8. 65,536 configured IP address limit. See [Azure CNI Pod Subnet Static Block Allocation][podsubnet-static-block-allocation] for exception| Overlapping address spaces within your network can cause issues. |
+| Azure Virtual Network | Max size /8. 65,536 configured IP address limit. See [Azure CNI Pod Subnet Static Block Allocation][pod-subnet-static-block-allocation] for exception| Overlapping address spaces within your network can cause issues. |
 | Subnet | Must be large enough to accommodate nodes, pods, and all Kubernetes and Azure resources in your cluster. For instance, if you deploy an internal Azure Load Balancer, its front-end IPs are allocated from the cluster subnet, not public IPs. | Subnet size should also account for upgrade operations and future scaling needs. <p/> Use the following equation to calculate the minimum subnet size, including an extra node for upgrade operations: `(number of nodes + 1) + ((number of nodes + 1) * maximum pods per node that you configure)` <p/> Example for a 50-node cluster: `(51) + (51 * 30 (default)) = 1,581` (/21 or larger) <p/> Example for a 50-node cluster, preparing to scale up an extra 10 nodes: `(61) + (61 * 30 (default)) = 1,891` (/21 or larger) <p/> If you don't specify a maximum number of pods per node when you create your cluster, the maximum number of pods per node is set to 30. The minimum number of IP addresses required is based on that value. If you calculate your minimum IP address requirements on a different maximum value, see [Maximum pods per node](#maximum-pods-per-node) to set this value when you deploy your cluster. |
 | Kubernetes Service Address Range | Any network element on or connected to this virtual network must not use this range. | The service address CIDR must be smaller than /12. You can reuse this range across different AKS clusters. |
 | Kubernetes DNS Service IP Address | IP address within the Kubernetes service address range used by cluster service discovery. | Don't use the first IP address in your address range. The first address in your subnet range is used for the _kubernetes.default.svc.cluster.local_ address. |
```
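The subnet-sizing equation in the table above, `(number of nodes + 1) + ((number of nodes + 1) * maximum pods per node)`, is easy to sanity-check. A sketch follows; the helper names are illustrative, and the prefix calculation ignores the handful of addresses Azure reserves in every subnet:

```python
import math

def min_subnet_ips(nodes: int, max_pods_per_node: int = 30) -> int:
    """Minimum IP count for the cluster subnet, including one extra
    node for upgrade operations, per the equation in the table."""
    return (nodes + 1) + (nodes + 1) * max_pods_per_node

def smallest_prefix(ip_count: int) -> int:
    """Smallest /prefix whose address space covers ip_count IPs."""
    return 32 - math.ceil(math.log2(ip_count))

print(min_subnet_ips(50), smallest_prefix(min_subnet_ips(50)))  # 1581 21: the doc's 50-node example
print(min_subnet_ips(60), smallest_prefix(min_subnet_ips(60)))  # 1891 21: 50 nodes plus 10 scaled nodes
```

Both worked examples from the table land in a /21, matching the documented "(/21 or larger)" guidance.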
```diff
@@ -57,7 +57,7 @@ The maximum number of pods per node in an AKS cluster is 250. The _default_ maxi
 | CNI | Default max pods | Configurable at deployment |
@@ -73,7 +73,7 @@ A minimum value for maximum pods per node is enforced to guarantee space for sys
 | Kubenet | 10 | 250 |

 > [!NOTE]
-> The minimum value in the previous table is strictly enforced by the AKS service. You can not set a value for _maxPods_ that is lower than the minimum shown, as doing so can prevent the cluster from starting.
+> The minimum value in the previous table is strictly enforced by the AKS service. You cannot set a value for _maxPods_ that is lower than the minimum shown, as doing so can prevent the cluster from starting.

 ### New clusters
@@ -85,14 +85,15 @@ You can define maximum pods per node when you create a new cluster using one of
 ### Existing clusters

-You can define maximum pods per node when you create a new node pool. If you need to increase the _maxPods_ setting on an existing cluster, add a new node pool with the new desired _maxPods_ count. After migrating your pods to the new pool, delete the node older pool. Make sure you're setting node pool modes as defined in the [system node pools document][system-node-pools].
+You can define maximum pods per node when you create a new node pool. If you need to increase the _maxPods_ setting on an existing cluster, add a new node pool with the new desired _maxPods_ count. After migrating your pods to the new pool, delete the node older pool.
```