
Commit 6b854cc

updates for PR comments
1 parent: c04be8c

File tree

4 files changed (+7, −24 lines)


articles/aks/concepts-network-azure-cni-podsubnet.md

Lines changed: 0 additions & 4 deletions
@@ -41,8 +41,6 @@ The dynamic IP allocation mode offers the following benefits:
 - **Separate VNet policies for pods**: Since pods have a separate subnet, you can configure separate VNet policies for them that are different from node policies. This enables many useful scenarios, such as allowing internet connectivity only for pods and not for nodes, fixing the source IP for pods in a node pool using an Azure NAT Gateway, and using network security groups (NSGs) to filter traffic between node pools.
 - **Kubernetes network policies**: Both the Azure Network Policies and Calico work with this mode.
 
-This article shows you how to use Azure CNI networking for dynamic allocation of IPs and enhanced subnet support in AKS.
-
 ### Plan IP addressing
 
 With dynamic IP allocation, nodes and pods scale independently, so you can plan their address spaces separately. Since pod subnets can be configured to the granularity of a node pool, you can always add a new subnet when you add a node pool. The system pods in a cluster/node pool also receive IPs from the pod subnet, so this behavior needs to be accounted for.
@@ -63,8 +61,6 @@ The static block allocation mode offers the following benefits:
 - **Separate VNet policies for pods**: Since pods have a separate subnet, you can configure separate VNet policies for them that are different from node policies. This enables many useful scenarios such as allowing internet connectivity only for pods and not for nodes, fixing the source IP for pods in a node pool using an Azure NAT Gateway, and using NSGs to filter traffic between node pools.
 - **Kubernetes network policies**: Cilium, Azure NPM, and Calico work with this solution.
 
-This article shows you how to use Azure CNI Networking for static allocation of CIDRs and enhanced subnet support in AKS.
-
 ### Limitations
 
 The following are some limitations of using Azure CNI Static Block allocation:

articles/aks/concepts-network-cni-overview.md

Lines changed: 1 addition & 1 deletion
@@ -2,7 +2,7 @@
 title: Concepts - CNI networking in AKS
 description: Learn about CNI networking options in Azure Kubernetes Service (AKS)
 ms.topic: conceptual
-ms.date: 05/21/2024
+ms.date: 05/28/2024
 author: schaffererin
 ms.author: schaffererin

articles/aks/concepts-network-ip-address-planning.md

Lines changed: 2 additions & 13 deletions
@@ -2,7 +2,7 @@
 title: Concepts - IP address planning in Azure Kubernetes Service (AKS)
 description: Learn about IP address planning in Azure Kubernetes Service (AKS).
 ms.topic: conceptual
-ms.date: 05/21/2024
+ms.date: 05/28/2024
 author: schaffererin
 ms.author: schaffererin

@@ -43,24 +43,13 @@ If you're using [Azure CNI Pod Subnet][azure-cni-podsubnet] and you expect your
 
 The IP address plan for an AKS cluster consists of a virtual network, at least one subnet for nodes and pods, and a Kubernetes service address range.
 
-** TODO: Update Table to reflect all CNI's**
-
 | Azure Resource | Address Range | Limits and Sizing |
 | -------------- | -------------- | ----------------- |
 | Azure Virtual Network | Max size /8. 65,536 configured IP address limit. See [Azure CNI Pod Subnet Static Block Allocation][podsubnet-static-block-allocation] for the exception. | Overlapping address spaces within your network can cause issues. |
 | Subnet | Must be large enough to accommodate nodes, pods, and all Kubernetes and Azure resources in your cluster. For instance, if you deploy an internal Azure Load Balancer, its front-end IPs are allocated from the cluster subnet, not public IPs. | Subnet size should also account for upgrade operations and future scaling needs. <p/> Use the following equation to calculate the minimum subnet size, including an extra node for upgrade operations: `(number of nodes + 1) + ((number of nodes + 1) * maximum pods per node that you configure)` <p/> Example for a 50-node cluster: `(51) + (51 * 30 (default)) = 1,581` (/21 or larger) <p/> Example for a 50-node cluster, preparing to scale up an extra 10 nodes: `(61) + (61 * 30 (default)) = 1,891` (/21 or larger) <p/> If you don't specify a maximum number of pods per node when you create your cluster, the maximum number of pods per node is set to 30. The minimum number of IP addresses required is based on that value. If you calculate your minimum IP address requirements on a different maximum value, see [Maximum pods per node](#maximum-pods-per-node) to set this value when you deploy your cluster. |
 | Kubernetes Service Address Range | Any network element on or connected to this virtual network must not use this range. | The service address CIDR must be smaller than /12. You can reuse this range across different AKS clusters. |
 | Kubernetes DNS Service IP Address | IP address within the Kubernetes service address range used by cluster service discovery. | Don't use the first IP address in your address range. The first address in your subnet range is used for the _kubernetes.default.svc.cluster.local_ address. |
 
-
-
-| Address range | Azure resource | Limits and sizing |
-| ------------- | -------------- | ----------------- |
-| Virtual network | The Azure virtual network can be as large as /8, but is limited to 65,536 configured IP addresses. Consider all your networking needs, including communicating with services in other virtual networks, before configuring your address space. For example, if you configure too large of an address space, you might run into issues with overlapping other address spaces within your network.|
-| Subnet | Must be large enough to accommodate the nodes, pods, and all Kubernetes and Azure resources that might be provisioned in your cluster. For example, if you deploy an internal Azure Load Balancer, its front-end IPs are allocated from the cluster subnet, not public IPs. The subnet size should also take into account upgrade operations or future scaling needs.<p/> Use the following equation to calculate the _minimum_ subnet size including an extra node for upgrade operations: `(number of nodes + 1) + ((number of nodes + 1) * maximum pods per node that you configure)`<p/> Example for a 50 node cluster: `(51) + (51 * 30 (default)) = 1,581` (/21 or larger)<p/>Example for a 50 node cluster that also includes preparation to scale up an extra 10 nodes: `(61) + (61 * 30 (default)) = 1,891` (/21 or larger)<p>If you don't specify a maximum number of pods per node when you create your cluster, the maximum number of pods per node is set to _30_. The minimum number of IP addresses required is based on that value. If you calculate your minimum IP address requirements on a different maximum value, see [Maximum pods per node](#maximum-pods-per-node) to set this value when you deploy your cluster. |
-| Kubernetes service address range | Any network element on or connected to this virtual network must not use this range. Service address CIDR must be smaller than /12. You can reuse this range across different AKS clusters. |
-| Kubernetes DNS service IP address | IP address within the Kubernetes service address range that is used by cluster service discovery. Don't use the first IP address in your address range. The first address in your subnet range is used for the _kubernetes.default.svc.cluster.local_ address. |
-
 ## Maximum pods per node
 
 The maximum number of pods per node in an AKS cluster is 250. The _default_ maximum number of pods per node varies between _kubenet_ and _Azure CNI_ networking, and the method of cluster deployment.
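The subnet-sizing equation in the Subnet row above can be checked with plain shell arithmetic. This is a minimal sketch: the node count and pods-per-node values are the worked example from the table, not values read from a live cluster.

```shell
# Minimum subnet size for the 50-node example above:
# (nodes + 1 upgrade node) node IPs, plus max-pods IPs reserved per node.
nodes=50
max_pods=30   # AKS default when --max-pods isn't specified
min_ips=$(( (nodes + 1) + (nodes + 1) * max_pods ))
echo "Minimum IPs required: ${min_ips}"   # prints 1581; fits in a /21 (2,048 addresses)
```

Running the same calculation with 61 nodes gives 1,891 IPs, matching the scale-up example in the table.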
@@ -90,7 +79,7 @@ A minimum value for maximum pods per node is enforced to guarantee space for sys
 
 You can define maximum pods per node when you create a new cluster using one of the following methods:
 
-* **Azure CLI**: Specify the `--max-pods` argument when you deploy a cluster with the [`az aks create`][az-aks-create] command.
+- **Azure CLI**: Specify the `--max-pods` argument when you deploy a cluster with the [`az aks create`][az-aks-create] command.
 - **Azure Resource Manager template**: Specify the `maxPods` property in the [ManagedClusterAgentPoolProfile] object when you deploy a cluster with an Azure Resource Manager template.
 - **Azure portal**: Change the `Max pods per node` field in the node pool settings when creating a cluster or adding a new node pool.
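The Azure CLI method in the list above can be sketched as a command string. Note that `myResourceGroup`, `myAKSCluster`, and `--max-pods 50` are placeholder values, not values taken from this commit; the command is only assembled and printed here, since actually running it requires an Azure subscription.

```shell
# Assemble the az aks create invocation described in the Azure CLI bullet.
# Placeholder resource names; not executed against Azure in this sketch.
max_pods=50
cmd="az aks create --resource-group myResourceGroup --name myAKSCluster --network-plugin azure --max-pods ${max_pods}"
echo "${cmd}"
```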

articles/aks/concepts-network-legacy-cni.md

Lines changed: 4 additions & 6 deletions
@@ -2,7 +2,7 @@
 title: Concepts - AKS Legacy Container Networking Interfaces (CNI)
 description: Learn about legacy CNI networking options in Azure Kubernetes Service (AKS)
 ms.topic: conceptual
-ms.date: 05/21/2024
+ms.date: 05/29/2024
 author: schaffererin
 ms.author: schaffererin

@@ -31,7 +31,9 @@ The following prerequisites are required for Azure CNI Node Subnet and kubenet:
 
 ## Azure CNI Node Subnet
 
-With [Azure Container Networking Interface (CNI)][cni-networking], every pod gets an IP address from the subnet and can be accessed directly. Systems in the same virtual network as the AKS cluster see the pod IP as the source address for any traffic from the pod. Systems outside the AKS cluster virtual network see the node IP as the source address for any traffic from the pod. These IP addresses must be unique across your network space and must be planned in advance. Each node has a configuration parameter for the maximum number of pods that it supports. The equivalent number of IP addresses per node are then reserved up front for that node. This approach requires more planning, and often leads to IP address exhaustion or the need to rebuild clusters in a larger subnet as your application demands grow.
+With [Azure Container Networking Interface (CNI)][cni-networking], every pod gets an IP address from the subnet and can be accessed directly. Systems in the same virtual network as the AKS cluster see the pod IP as the source address for any traffic from the pod. Systems outside the AKS cluster virtual network see the node IP as the source address for any traffic from the pod. These IP addresses must be unique across your network space and must be planned in advance. Each node has a configuration parameter for the maximum number of pods that it supports. The equivalent number of IP addresses per node are then reserved up front for that node. This approach requires more planning, and often leads to IP address exhaustion or the need to rebuild clusters in a larger subnet as your application demands grow.
+
+With Azure CNI Node Subnet, each pod receives an IP address in the IP subnet and can communicate directly with other pods and services. Your clusters can be as large as the IP address range you specify. However, you must plan the IP address range in advance, and all the IP addresses are consumed by the AKS nodes based on the maximum number of pods they can support. Advanced network features and scenarios such as [virtual nodes][virtual-nodes] or Network Policies (either Azure or Calico) are supported with Azure CNI.
 
 ### Deployment parameters
 
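The up-front reservation described in the hunk above amounts to simple arithmetic. As a sketch, with an illustrative three-node pool (the node count is not from the article; the pods-per-node value is the stated AKS default):

```shell
# Azure CNI Node Subnet reserves IPs per node up front:
# one address for the node itself plus max-pods addresses for its pods.
nodes=3
max_pods=30   # AKS default when --max-pods isn't specified
reserved=$(( nodes * (1 + max_pods) ))
echo "IPs consumed up front: ${reserved}"   # prints 93 for this 3-node pool
```

Because these addresses are claimed whether or not pods are running, undersized subnets show up as exhaustion long before the cluster is "full" of pods.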

@@ -71,8 +73,6 @@ With _kubenet_, only the nodes receive an IP address in the virtual network subn
 
 Azure supports a maximum of _400_ routes in a UDR, so you can't have an AKS cluster larger than 400 nodes. AKS [virtual nodes][virtual-nodes] and Azure Network Policies aren't supported with _kubenet_. [Calico Network Policies][calico-network-policies] are supported.
 
-With _Azure CNI_, each pod receives an IP address in the IP subnet and can communicate directly with other pods and services. Your clusters can be as large as the IP address range you specify. However, you must plan the IP address range in advance, and all the IP addresses are consumed by the AKS nodes based on the maximum number of pods they can support. Advanced network features and scenarios such as [virtual nodes][virtual-nodes] or Network Policies (either Azure or Calico) are supported with _Azure CNI_.
-
 ### Limitations & considerations for kubenet
 
 - An additional hop is required in the design of kubenet, which adds minor latency to pod communication.
@@ -145,8 +145,6 @@ With [Azure Container Networking Interface (CNI)][cni-networking], every pod get
 It's not recommended, but this configuration is possible. The service address range is a set of virtual IPs (VIPs) that Kubernetes assigns to internal services in your cluster. Azure Networking has no visibility into the service IP range of the Kubernetes cluster. The lack of visibility into the cluster's service address range can lead to issues. It's possible to later create a new subnet in the cluster virtual network that overlaps with the service address range. If such an overlap occurs, Kubernetes could assign a service an IP that's already in use by another resource in the subnet, causing unpredictable behavior or failures. By ensuring you use an address range outside the cluster's virtual network, you can avoid this overlap risk.
 Yes, when you deploy a cluster with the Azure CLI or a Resource Manager template. See [Maximum pods per node][max-pods].
 
-
-
 - **Can I use a different subnet within my cluster virtual network for the *Kubernetes service address range*?**
 
 It's not recommended, but this configuration is possible. The service address range is a set of virtual IPs (VIPs) that Kubernetes assigns to internal services in your cluster. Azure Networking has no visibility into the service IP range of the Kubernetes cluster. The lack of visibility into the cluster's service address range can lead to issues. It's possible to later create a new subnet in the cluster virtual network that overlaps with the service address range. If such an overlap occurs, Kubernetes could assign a service an IP that's already in use by another resource in the subnet, causing unpredictable behavior or failures. By ensuring you use an address range outside the cluster's virtual network, you can avoid this overlap risk.
