articles/aks/use-multiple-node-pools.md: 28 additions & 22 deletions
@@ -32,7 +32,7 @@ The following limitations apply when you create and manage AKS clusters that sup
* The AKS cluster must use the Standard SKU load balancer to use multiple node pools, the feature is not supported with Basic SKU load balancers.
* The AKS cluster must use virtual machine scale sets for the nodes.
* You can't add or delete node pools using an existing Resource Manager template as with most operations. Instead, [use a separate Resource Manager template](#manage-node-pools-using-a-resource-manager-template) to make changes to node pools in an AKS cluster.
- * The name of a node pool must start with a lowercase letter and can only contain alphanumeric characters. For Linux node pools the length must be between 1 and 12 characters, for Windows node pools the length must be between 1 and 6 characters.
+ * The name of a node pool may only contain lowercase alphanumeric characters and must begin with a lowercase letter. For Linux node pools, the length must be between 1 and 12 characters; for Windows node pools, the length must be between 1 and 6 characters.
* The AKS cluster can have a maximum of eight node pools.
* The AKS cluster can have a maximum of 400 nodes across those eight node pools.
* All node pools must reside in the same subnet.
@@ -42,7 +42,7 @@ The following limitations apply when you create and manage AKS clusters that sup
To get started, create an AKS cluster with a single node pool. The following example uses the [az group create][az-group-create] command to create a resource group named *myResourceGroup* in the *eastus* region. An AKS cluster named *myAKSCluster* is then created using the [az aks create][az-aks-create] command. A *--kubernetes-version* of *1.13.10* is used to show how to update a node pool in a following step. You can specify any [supported Kubernetes version][supported-versions].
> [!NOTE]
- > The *Basic* load balanacer SKU is not supported when using multiple node pools. By default, AKS clusters are created with the *Standard* load balancer SKU from Azure CLI and Azure portal.
+ > The *Basic* load balancer SKU is **not supported** when using multiple node pools. By default, AKS clusters are created with the *Standard* load balancer SKU from Azure CLI and Azure portal.
```azurecli-interactive
# Create a resource group in East US
@@ -187,28 +187,34 @@ As a best practice, you should upgrade all node pools in an AKS cluster to the s
## Upgrade a cluster control plane with multiple node pools
> [!NOTE]
- > Kubernetes uses the standard [Semantic Versioning](https://semver.org/) versioning scheme. The version number is expressed as *x.y.z*, where *x* is the major version, *y* is the minor version, and *z* is the patch version. For example, in version *1.12.6*, 1 is the major version, 12 is the minor version and 6 is the patch version. The Kubernetes version of the control plane as well as the initial node pool is set during cluster creation. All additional node pools have their Kubernetes version set when they are added to the cluster. The Kubernetes versions may differ between node pools as well as between a node pool and the control plane, but the follow restrictions apply:
- >
- > * The node pool version must have the same major version as the control plane.
- > * The node pool version may be one minor version less than the control plane version.
- > * The node pool version may be any patch version as long as the other two constraints are followed.
+ > Kubernetes uses the standard [Semantic Versioning](https://semver.org/) versioning scheme. The version number is expressed as *x.y.z*, where *x* is the major version, *y* is the minor version, and *z* is the patch version. For example, in version *1.12.6*, 1 is the major version, 12 is the minor version, and 6 is the patch version. The Kubernetes version of the control plane and the initial node pool are set during cluster creation. All additional node pools have their Kubernetes version set when they are added to the cluster. The Kubernetes versions may differ between node pools as well as between a node pool and the control plane.
- An AKS cluster has two cluster resource objects with Kubernetes versions associated. The first is a control plane Kubernetes version. The second is an agent pool with a Kubernetes version. A control plane maps to one or many node pools. The behavior of an upgrade operation depends on which Azure CLI command is used.
+ An AKS cluster has two cluster resource objects with Kubernetes versions associated.
- * Upgrading the control plane requires using `az aks upgrade`
-   * This upgrades the control plane version and all node pools in the cluster
-   * By passing `az aks upgrade` with the `--control-plane-only` flag only the cluster control plane gets upgraded and none of the associated node pools are changed.
- * Upgrading individual node pools requires using `az aks nodepool upgrade`
-   * This upgrades only the target node pool with the specified Kubernetes version
+ 1. A cluster control plane Kubernetes version.
+ 2. A node pool with a Kubernetes version.
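As an illustrative sketch, both versions can be inspected with the Azure CLI; the *myResourceGroup* and *myAKSCluster* names are the examples used earlier in this article:

```azurecli-interactive
# Show the control plane Kubernetes version
az aks show \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --query kubernetesVersion \
    --output tsv

# List each node pool with its own Kubernetes version
az aks nodepool list \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --query "[].{Name:name, K8sVersion:orchestratorVersion}" \
    --output table
```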
- The relationship between Kubernetes versions held by node pools must also follow a set of rules.
+ A control plane maps to one or many node pools. The behavior of an upgrade operation depends on which Azure CLI command is used.
- * You cannot downgrade the control plane nor a node pool Kubernetes version.
- * If a node pool Kubernetes version is not specified, behavior depends on the client being used. For declaration in Resource Manager template the existing version defined for the node pool is used, if none is set the control plane version is used.
- * You can either upgrade or scale a control plane or node pool at a given time, you cannot submit both operations simultaneously.
- * A node pool Kubernetes version must be the same major version as the control plane.
- * A node pool Kubernetes version can be at most two (2) minor versions less than the control plane, never greater.
- * A node pool can be any Kubernetes patch version less than or equal to the control plane, never greater.
+ Upgrading an AKS control plane requires using `az aks upgrade`. This upgrades the control plane version and all node pools in the cluster.
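For example, a minimal sketch of a full cluster upgrade, assuming the *myResourceGroup* and *myAKSCluster* names from earlier in this article and a placeholder target version:

```azurecli-interactive
# Upgrade the control plane and all node pools to the specified version
# (1.14.6 is a placeholder; choose a version returned by `az aks get-upgrades`)
az aks upgrade \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --kubernetes-version 1.14.6
```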
+ Issuing the `az aks upgrade` command with the `--control-plane-only` flag upgrades only the cluster control plane. None of the associated node pools in the cluster are changed.
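A sketch of the control-plane-only variant, under the same assumptions:

```azurecli-interactive
# Upgrade only the control plane; all node pools keep their current versions
az aks upgrade \
    --resource-group myResourceGroup \
    --name myAKSCluster \
    --kubernetes-version 1.14.6 \
    --control-plane-only
```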
+ Upgrading individual node pools requires using `az aks nodepool upgrade`. This upgrades only the target node pool with the specified Kubernetes version.
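A sketch of a single node pool upgrade; *mynodepool* is a placeholder for one of your node pool names:

```azurecli-interactive
# Upgrade one node pool; the control plane and other node pools are left unchanged
az aks nodepool upgrade \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name mynodepool \
    --kubernetes-version 1.14.6
```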
+ ### Validation rules for upgrades
+ The valid upgrades for Kubernetes versions held by a cluster's control plane or node pools are validated by the following sets of rules.
+ * Rules for valid versions to upgrade to:
+   * The node pool version must have the same *major* version as the control plane.
+   * The node pool version may be up to two *minor* versions less than the control plane version.
+   * The node pool version may be up to two *patch* versions less than the control plane version.
+ * Rules for submitting an upgrade operation:
+   * You cannot downgrade the control plane or a node pool Kubernetes version.
+   * If a node pool Kubernetes version is not specified, behavior depends on the client being used. Declarations in Resource Manager templates fall back to the existing version defined for the node pool; if none is set, the control plane version is used as the fallback.
+   * You can either upgrade or scale a control plane or a node pool at a given time; you cannot submit multiple operations on a single control plane or node pool resource simultaneously.
## Scale a node pool manually
@@ -446,11 +452,11 @@ Only pods that have this taint applied can be scheduled on nodes in *gpunodepool
## Manage node pools using a Resource Manager template
- When you use an Azure Resource Manager template to create and managed resources, you can typically update the settings in your template and redeploy to update the resource. With node pools in AKS, the initial node pool profile can't be updated once the AKS cluster has been created. This behavior means that you can't update an existing Resource Manager template, make a change to the node pools, and redeploy. Instead, you must create a separate Resource Manager template that updates only the agent pools for an existing AKS cluster.
+ When you use an Azure Resource Manager template to create and manage resources, you can typically update the settings in your template and redeploy to update the resource. With node pools in AKS, the initial node pool profile can't be updated once the AKS cluster has been created. This behavior means that you can't update an existing Resource Manager template, make a change to the node pools, and redeploy. Instead, you must create a separate Resource Manager template that updates only the node pools for an existing AKS cluster.
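For illustration, deploying such a node-pool-only template (using the `aks-agentpools.json` file name introduced below) can be done as an ordinary template deployment; this sketch assumes the *myResourceGroup* resource group from earlier in this article:

```azurecli-interactive
# Apply the node-pool-only template against the existing cluster's resource group
az deployment group create \
    --resource-group myResourceGroup \
    --template-file aks-agentpools.json
```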
Create a template such as `aks-agentpools.json` and paste the following example manifest. This example template configures the following settings:
- * Updates the *Linux* agent pool named *myagentpool* to run three nodes.
+ * Updates the *Linux* node pool named *myagentpool* to run three nodes.
* Sets the nodes in the node pool to run Kubernetes version *1.13.10*.