Commit 9e823b2

Merge pull request #196783 from wedaly/aks-multipool-subnet-updates
Update limitations for AKS multipool subnets
2 parents 24ef768 + 5e47757 commit 9e823b2

File tree

1 file changed (+14, -12 lines)


articles/aks/use-multiple-node-pools.md

Lines changed: 14 additions & 12 deletions
@@ -27,7 +27,7 @@ The following limitations apply when you create and manage AKS clusters that sup
 * See [Quotas, virtual machine size restrictions, and region availability in Azure Kubernetes Service (AKS)][quotas-skus-regions].
 * You can delete system node pools, provided you have another system node pool to take its place in the AKS cluster.
 * System pools must contain at least one node, and user node pools may contain zero or more nodes.
-* The AKS cluster must use the Standard SKU load balancer to use multiple node pools, the feature is not supported with Basic SKU load balancers.
+* The AKS cluster must use the Standard SKU load balancer to use multiple node pools, the feature isn't supported with Basic SKU load balancers.
 * The AKS cluster must use virtual machine scale sets for the nodes.
 * You can't change the VM size of a node pool after you create it.
 * The name of a node pool may only contain lowercase alphanumeric characters and must begin with a lowercase letter. For Linux node pools the length must be between 1 and 12 characters, for Windows node pools the length must be between 1 and 6 characters.
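For reference, a minimal sketch of a cluster create command that meets the Standard SKU load balancer and virtual machine scale set requirements listed in this hunk; the resource group, cluster name, and node count are illustrative placeholders, not values from this commit:

```azurecli-interactive
# Illustrative placeholders: create a cluster with a Standard SKU load balancer
# and virtual machine scale set nodes, which multiple node pools require.
az aks create \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --load-balancer-sku standard \
    --vm-set-type VirtualMachineScaleSets \
    --node-count 2
```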
@@ -129,8 +129,9 @@ A workload may require splitting a cluster's nodes into separate pools for logic
 * All subnets assigned to nodepools must belong to the same virtual network.
 * System pods must have access to all nodes/pods in the cluster to provide critical functionality such as DNS resolution and tunneling kubectl logs/exec/port-forward proxy.
 * If you expand your VNET after creating the cluster you must update your cluster (perform any managed cluster operation but node pool operations don't count) before adding a subnet outside the original cidr. AKS will error out on the agent pool add now though we originally allowed it. If you don't know how to reconcile your cluster file a support ticket.
-* Azure Network Policy is not supported.
-* Kube-proxy is designed for a single contiguous CIDR and optimizes rules based on that value. When using multiple non-contiguous ranges, these optimizations cannot occur. See this [K.E.P.](https://github.com/kubernetes/enhancements/tree/master/keps/sig-network/2450-Remove-knowledge-of-pod-cluster-CIDR-from-iptables-rules) and the documentation for the [`--cluster-cidr` `kube-proxy` argument](https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/) for more details. In clusters configured with Azure CNI, `kube-proxy` will be configured with the subnet of the first node pool at cluster creation.
+* In clusters with Kubernetes version < 1.23.3, kube-proxy will SNAT traffic from new subnets, which can cause Azure Network Policy to drop the packets.
+* Windows nodes will SNAT traffic to the new subnets until the nodepool is reimaged.
+* Internal load balancers default to one of the node pool subnets (usually the first subnet of the node pool at cluster creation). To override this behavior, you can [specify the load balancer's subnet explicitly using an annotation][internal-lb-different-subnet].
 
 To create a node pool with a dedicated subnet, pass the subnet resource ID as an additional parameter when creating a node pool.
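For example, passing the subnet resource ID with the `--vnet-subnet-id` parameter might look like the following sketch; the resource group, cluster, pool name, and subnet ID are placeholders:

```azurecli-interactive
# Illustrative placeholders: add a node pool whose nodes use a dedicated subnet.
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name mynodepool \
    --node-count 3 \
    --vnet-subnet-id "/subscriptions/<subscription-id>/resourceGroups/myVnetRG/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/nodepool-subnet"
```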

@@ -234,12 +235,12 @@ The valid Kubernetes upgrades for a cluster's control plane and node pools are v
 * Rules for valid versions to upgrade node pools:
   * The node pool version must have the same *major* version as the control plane.
   * The node pool *minor* version must be within two *minor* versions of the control plane version.
-  * The node pool version cannot be greater than the control `major.minor.patch` version.
+  * The node pool version can't be greater than the control `major.minor.patch` version.
 
 * Rules for submitting an upgrade operation:
-  * You cannot downgrade the control plane or a node pool Kubernetes version.
-  * If a node pool Kubernetes version is not specified, behavior depends on the client being used. Declaration in Resource Manager templates falls back to the existing version defined for the node pool if used, if none is set the control plane version is used to fall back on.
-  * You can either upgrade or scale a control plane or a node pool at a given time, you cannot submit multiple operations on a single control plane or node pool resource simultaneously.
+  * You can't downgrade the control plane or a node pool Kubernetes version.
+  * If a node pool Kubernetes version isn't specified, behavior depends on the client being used. Declaration in Resource Manager templates falls back to the existing version defined for the node pool if used, if none is set the control plane version is used to fall back on.
+  * You can either upgrade or scale a control plane or a node pool at a given time, you can't submit multiple operations on a single control plane or node pool resource simultaneously.
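As a hedged illustration of submitting an upgrade operation against a single node pool (the cluster and pool names are placeholders, and the target version must satisfy the rules above):

```azurecli-interactive
# Illustrative placeholders: upgrade one node pool to a specific Kubernetes version.
# Set KUBERNETES_VERSION to a version allowed by the rules described above.
az aks nodepool upgrade \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name mynodepool \
    --kubernetes-version $KUBERNETES_VERSION \
    --no-wait
```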
 
 ## Scale a node pool manually
 
@@ -360,7 +361,7 @@ Associating a node pool with an existing capacity reservation group can be done
 ```azurecli-interactive
 az aks nodepool add -g MyRG --cluster-name MyMC -n myAP --capacityReservationGroup myCRG
 ```
-Associating a system node pool with an existing capacity reservation group can be done using [az aks create][az-aks-create] command. If the capacity reservation group specified does not exist, then a warning is issued and the cluster gets created without any capacity reservation group association.
+Associating a system node pool with an existing capacity reservation group can be done using [az aks create][az-aks-create] command. If the capacity reservation group specified doesn't exist, then a warning is issued and the cluster gets created without any capacity reservation group association.
 
 ```azurecli-interactive
 az aks create -g MyRG --cluster-name MyMC --capacityReservationGroup myCRG
@@ -562,7 +563,7 @@ FIPS-enabled node pools have the following limitations:
 * Currently, you can only have FIPS-enabled Linux-based node pools running on Ubuntu 18.04.
 * FIPS-enabled node pools require Kubernetes version 1.19 and greater.
 * To update the underlying packages or modules used for FIPS, you must use [Node Image Upgrade][node-image-upgrade].
-* Container Images on the FIPS nodes have not been assessed for FIPS compliance.
+* Container Images on the FIPS nodes haven't been assessed for FIPS compliance.
 
 > [!IMPORTANT]
 > The FIPS-enabled Linux image is a different image than the default Linux image used for Linux-based node pools. To enable FIPS on a node pool, you must create a new Linux-based node pool. You can't enable FIPS on existing node pools.
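A minimal sketch of creating a new FIPS-enabled Linux node pool, since FIPS can't be enabled on an existing pool; the resource and pool names are placeholders, and the example assumes the `--enable-fips-image` flag of `az aks nodepool add`:

```azurecli-interactive
# Illustrative placeholders: FIPS must be enabled when the node pool is created.
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name fipsnp \
    --enable-fips-image
```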
@@ -588,7 +589,7 @@ To verify your node pool is FIPS-enabled, use [az aks show][az-aks-show] to chec
 az aks show --resource-group myResourceGroup --cluster-name myAKSCluster --query="agentPoolProfiles[].{Name:name enableFips:enableFips}" -o table
 ```
 
-The following example output shows the *fipsnp* node pool is FIPS-enabled and *nodepool1* is not.
+The following example output shows the *fipsnp* node pool is FIPS-enabled and *nodepool1* isn't.
 
 ```output
 Name enableFips
@@ -702,7 +703,7 @@ Edit these values as need to update, add, or delete node pools as needed:
 }
 ```
 
-Deploy this template using the [az deployment group create][az-deployment-group-create] command, as shown in the following example. You are prompted for the existing AKS cluster name and location:
+Deploy this template using the [az deployment group create][az-deployment-group-create] command, as shown in the following example. You're prompted for the existing AKS cluster name and location:
 
 ```azurecli-interactive
 az deployment group create \
@@ -733,7 +734,7 @@ It may take a few minutes to update your AKS cluster depending on the node pool
 
 ## Assign a public IP per node for your node pools
 
-AKS nodes do not require their own public IP addresses for communication. However, scenarios may require nodes in a node pool to receive their own dedicated public IP addresses. A common scenario is for gaming workloads, where a console needs to make a direct connection to a cloud virtual machine to minimize hops. This scenario can be achieved on AKS by using Node Public IP.
+AKS nodes don't require their own public IP addresses for communication. However, scenarios may require nodes in a node pool to receive their own dedicated public IP addresses. A common scenario is for gaming workloads, where a console needs to make a direct connection to a cloud virtual machine to minimize hops. This scenario can be achieved on AKS by using Node Public IP.
 
 First, create a new resource group.
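For example, that resource group might be created as follows; the name and region are illustrative placeholders:

```azurecli-interactive
# Illustrative placeholders: a resource group to hold the node public IP example resources.
az group create --name myResourceGroup2 --location eastus
```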

@@ -877,3 +878,4 @@ Use [proximity placement groups][reduce-latency-ppg] to reduce latency for your
 [use-tags]: use-tags.md
 [use-labels]: use-labels.md
 [cordon-and-drain]: resize-node-pool.md#cordon-the-existing-nodes
+[internal-lb-different-subnet]: internal-lb.md#specify-a-different-subnet
