articles/aks/resize-node-pool.md (25 additions & 27 deletions)
@@ -13,16 +13,16 @@ Due to an increasing number of deployments or to run a larger workload, you may
> AKS agent nodes appear in the Azure portal as regular Azure IaaS resources. But these virtual machines are deployed into a custom Azure resource group (usually prefixed with MC_*). You cannot do any direct customizations to these nodes using the IaaS APIs or resources. Any custom changes that are not done via the AKS API will not persist through an upgrade, scale, update or reboot.
-This also includes the resize operation, thus, resizing AKS instances is in this manner is not supported. In this how-to guide, you'll learn the recommended method to address this scenario.
+This lack of persistence also applies to the resize operation, so resizing AKS instances in this manner isn't supported. In this how-to guide, you'll learn the recommended method to address this scenario.
> [!IMPORTANT]
> This method is specific to virtual machine scale set-based AKS clusters. When using virtual machine availability sets, you are limited to only one node pool per cluster.
## Example resources
-Suppose you want to resize an existing node pool, called `nodepool1`, from SKU size Standard_DS2_v2 to Standard_DS3_v2. To accomplish this, you will need to create a new node pool using Standard_DS3_v2, move workloads from `nodepool1` to the new node pool, and remove `nodepool1`. In this example, we will call this new node pool `mynodepool`.
+Suppose you want to resize an existing node pool, called `nodepool1`, from SKU size Standard_DS2_v2 to Standard_DS3_v2. To accomplish this task, you'll need to create a new node pool using Standard_DS3_v2, move workloads from `nodepool1` to the new node pool, and remove `nodepool1`. In this example, we'll call this new node pool `mynodepool`.
-:::image type="content" source="./media/resize-node-pool/node-pool-ds2.png" alt-text="The Azure Portal page for the cluster, navigated to Settings > Node pools. One node pool, named nodepool1 is shown.":::
+:::image type="content" source="./media/resize-node-pool/node-pool-ds2.png" alt-text="The Azure portal page for the cluster, navigated to Settings > Node pools. One node pool, named nodepool1 is shown.":::
-Use the [az aks nodepool add][az-aks-nodepool-add] command to create a new node pool called `mynodepool` with 3 nodes using the `Standard_DS3_v2` VM SKU:
+Use the [az aks nodepool add][az-aks-nodepool-add] command to create a new node pool called `mynodepool` with three nodes using the `Standard_DS3_v2` VM SKU:
```azurecli-interactive
az aks nodepool add \
@@ -77,11 +77,11 @@ az aks nodepool add \
> [!NOTE]
> Every AKS cluster must contain at least one system node pool with at least one node. In the below example, we are using a `--mode` of `System`, as the cluster is assumed to have only one node pool, necessitating a `System` node pool to replace it. A node pool's mode can be [updated at any time][update-node-pool-mode].
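As a side note on that mode link: if the cluster later has more than one pool, changing which pool is the system pool doesn't require recreating it. A minimal sketch, assuming the pool names used in this article:

```azurecli-interactive
# Sketch: switch an existing pool's mode to System (pool name is assumed).
az aks nodepool update \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name mynodepool \
    --mode System
```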
-When resizing, be sure to consider other requirements and configure your node pool accordingly. You may need to modify the above command. For a full list of the configuration options, please see the [az aks nodepool add][az-aks-nodepool-add] reference page.
+When resizing, be sure to consider other requirements and configure your node pool accordingly. You may need to modify the above command. For a full list of the configuration options, see the [az aks nodepool add][az-aks-nodepool-add] reference page.
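For instance, if the old pool used availability zones or node labels, the replacement pool should be created with matching settings. A hedged sketch of such a modified command follows; the `--zones` and `--labels` values are illustrative assumptions, not taken from the original article:

```azurecli-interactive
# Sketch: the same add command with optional settings carried over from the old pool.
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name mynodepool \
    --node-count 3 \
    --node-vm-size Standard_DS3_v2 \
    --mode System \
    --zones 1 2 3 \
    --labels workload=general \
    --no-wait
```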
After a few minutes, the new node pool has been created:
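Besides the portal view below, the new pool can also be confirmed from the CLI, for example:

```azurecli-interactive
# List the cluster's node pools and their VM sizes to confirm the new pool exists.
az aks nodepool list \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --output table
```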
-:::image type="content" source="./media/resize-node-pool/node-pool-both.png" alt-text="The Azure Portal page for the cluster, navigated to Settings > Node pools. Two node pools, named nodepool1 and mynodepool, respectively, are shown.":::
+:::image type="content" source="./media/resize-node-pool/node-pool-both.png" alt-text="The Azure portal page for the cluster, navigated to Settings > Node pools. Two node pools, named nodepool1 and mynodepool, respectively, are shown.":::
@@ -132,26 +130,13 @@ Draining nodes will cause pods running on them to be evicted and recreated on th
To drain nodes, use `kubectl drain <node-names> --ignore-daemonsets --delete-emptydir-data`, again using a space-separated list of node names:
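For example, with the sample node names used in the cordon step of this article (a sketch; substitute your own space-separated node names):

```bash
# Drain all three cordoned nodes; DaemonSet-managed pods are skipped and
# emptyDir data is deleted so coredns and metrics-server can be evicted.
kubectl drain aks-nodepool1-31721111-vmss000000 aks-nodepool1-31721111-vmss000001 aks-nodepool1-31721111-vmss000002 \
    --ignore-daemonsets --delete-emptydir-data
```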
> [!IMPORTANT]
-> Using `--delete-emptydir-data` is required to evict the AKS-created `coredns` and `metrics-server` pods. If this flag isn't used, an error is expected. Please see the [documentation on emptydir][empty-dir] for more information.
+> Using `--delete-emptydir-data` is required to evict the AKS-created `coredns` and `metrics-server` pods. If this flag isn't used, an error is expected. For more information, see the [documentation on emptydir][empty-dir].
-> error when evicting pods/<podname> -n <namespace> (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
-
-By default, your cluster has AKS_managed pod disruption budgets (such as `coredns-pdb` or `konnectivity-agent`) with a `MinAvailable` of 1. If, for example, there are two `coredns` pods running, while one of them is getting recreated and is unavailable, the other is unable to be affected due to the pod disruption budget. This resolves itself after the initial `coredns` pod is scheduled and running, allowing the second pod to be properly evicted and recreated.
-
-> [!TIP]
-> Consider draining nodes one-by-one for a smoother eviction experience and to avoid throttling. For more information, see:
-> *[Plan for availability using a pod disruption budget][pod-disruption-budget].
-> *[Specifying a Disruption Budget for your Application][specify-disruption-budget]
-> *[Disruptions][disruptions]
-
-After the drain operation finishes, all pods other than those controlled by daemonsets are running on the new nodepool:
+After the drain operation finishes, all pods other than those controlled by daemon sets are running on the new node pool:
+> Error when evicting pods/[podname] -n [namespace] (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
+
+By default, your cluster has AKS_managed pod disruption budgets (such as `coredns-pdb` or `konnectivity-agent`) with a `MinAvailable` of 1. If, for example, there are two `coredns` pods running, while one of them is getting recreated and is unavailable, the other is unable to be affected due to the pod disruption budget. This resolves itself after the initial `coredns` pod is scheduled and running, allowing the second pod to be properly evicted and recreated.
+
+> [!TIP]
+> Consider draining nodes one-by-one for a smoother eviction experience and to avoid throttling. For more information, see:
+> * [Plan for availability using a pod disruption budget][pod-disruption-budget]
+> * [Specifying a Disruption Budget for your Application][specify-disruption-budget]
+> * [Disruptions][disruptions]
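Before draining, it can help to confirm that every pod disruption budget allows at least one disruption; a rough sketch of that check, followed by draining a single node at a time, might be:

```bash
# Verify every PDB reports ALLOWED DISRUPTIONS of at least 1.
kubectl get pdb -A

# Drain one node at a time to stay within the disruption budgets (example node name).
kubectl drain aks-nodepool1-31721111-vmss000000 --ignore-daemonsets --delete-emptydir-data
```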
## Remove the existing node pool
-To delete the existing node pool, use the Azure Portal or the [az aks delete][az-aks-delete] command:
+To delete the existing node pool, use the Azure portal or the [az aks nodepool delete][az-aks-nodepool-delete] command:
```azurecli-interactive
az aks nodepool delete \
@@ -204,7 +202,7 @@ kubectl delete nodepool /
After completion, the final result is the AKS cluster having a single, new node pool with the new, desired SKU size and all the applications and pods properly running:
-:::image type="content" source="./media/resize-node-pool/node-pool-ds3.png" alt-text="The Azure Portal page for the cluster, navigated to Settings > Node pools. One node pool, named mynodepool is shown.":::
+:::image type="content" source="./media/resize-node-pool/node-pool-ds3.png" alt-text="The Azure portal page for the cluster, navigated to Settings > Node pools. One node pool, named mynodepool is shown.":::
```bash
kubectl get nodes
@@ -222,7 +220,7 @@ After resizing a node pool by cordoning and draining, learn more about [using mu
articles/aks/use-multiple-node-pools.md (0 additions & 96 deletions)
@@ -299,99 +299,6 @@ It takes a few minutes for the scale operation to complete.
AKS offers a separate feature to automatically scale node pools with a feature called the [cluster autoscaler](cluster-autoscaler.md). This feature can be enabled per node pool with unique minimum and maximum scale counts per node pool. Learn how to [use the cluster autoscaler per node pool](cluster-autoscaler.md#use-the-cluster-autoscaler-with-multiple-node-pools-enabled).
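A hedged sketch of enabling the cluster autoscaler for one pool follows; the pool name and count limits are illustrative assumptions:

```azurecli-interactive
# Enable the cluster autoscaler on a single node pool with example scale limits.
az aks nodepool update \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name mynodepool \
    --enable-cluster-autoscaler \
    --min-count 1 \
    --max-count 5
```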
-## Resize a node pool
-
-To increase of number of deployments or run a larger workload, you may want to change the virtual machine scale set plan or resize AKS instances. However, you should not do any direct customizations to these nodes using the IaaS APIs or resources, as any custom changes that are not done via the AKS API will not persist through an upgrade, scale, update or reboot. This means resizing your AKS instances in this manner is not supported.
-
-The recommended method to resize a node pool to the desired SKU size is as follows:
-
-* Create a new node pool with the new SKU size
-* Cordon and drain the nodes in the old node pool in order to move workloads to the new nodes
-* Remove the old node pool.
-
-> [!IMPORTANT]
-> This method is specific to virtual machine scale set-based AKS clusters. When using virtual machine availability sets, you are limited to only one node pool per cluster.
-
-### Create a new node pool with the desired SKU
-
-The following command creates a new node pool with 2 nodes using the `Standard_DS3_v2` VM SKU:
-
-> [!NOTE]
-> Every AKS cluster must contain at least one system node pool with at least one node. In the below example, we are using a `--mode` of `System`, as the cluster is assumed to have only one node pool, necessitating a `System` node pool to replace it. A node pool's mode can be [updated at any time][update-node-pool-mode].
-
-```azurecli-interactive
-az aks nodepool add \
-    --resource-group myResourceGroup \
-    --cluster-name myAKSCluster \
-    --name mynodepool \
-    --node-count 2 \
-    --node-vm-size Standard_DS3_v2 \
-    --mode System \
-    --no-wait
-```
-
-Be sure to consider other requirements and configure your node pool accordingly. You may need to modify the above command. For a full list of the configuration options, please see the [az aks nodepool add][az-aks-nodepool-add] reference page.
-
-### Cordon the existing nodes
-
-Cordoning marks specified nodes as unschedulable and prevents any additional pods from being added to the nodes.
-
-First, obtain the names of the nodes you'd like to cordon with `kubectl get nodes`. Your output should look similar to the following:
-If succesful, your output should look similar to the following:
-
-```bash
-node/aks-nodepool1-31721111-vmss000000 cordoned
-node/aks-nodepool1-31721111-vmss000001 cordoned
-node/aks-nodepool1-31721111-vmss000002 cordoned
-```
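The cordon command itself falls outside the lines shown in this hunk; as a sketch using the example node names from the output above (substitute your own), it would look something like:

```bash
# Mark the old nodes unschedulable so no new pods land on them.
kubectl cordon aks-nodepool1-31721111-vmss000000 aks-nodepool1-31721111-vmss000001 aks-nodepool1-31721111-vmss000002
```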
-
-### Drain the existing nodes
-
-> [!IMPORTANT]
-> To successfully drain nodes and evict running pods, ensure that any PodDisruptionBudgets (PDBs) allow for at least 1 pod replica to be moved at a time, otherwise the drain/evict operation will fail. To check this, you can run `kubectl get pdb -A` and make sure `ALLOWED DISRUPTIONS` is at least 1 or higher.
-
-Draining nodes will cause pods running on them to be evicted and recreated on the other, schedulable nodes.
-
-To drain nodes, use `kubectl drain <node-names> --ignore-daemonsets --delete-emptydir-data`, again using a space-separated list of node names:
-
-> [!IMPORTANT]
-> Using `--delete-emptydir-data` is required to evict the AKS-created `coredns` and `metrics-server` pods. If this flag isn't used, an error is expected. Please see the [documentation on emptydir][empty-dir] for more information.
-> By default, your cluster has AKS_managed pod disruption budgets (such as `coredns-pdb` or `konnectivity-agent`) with a `MinAvailable` of 1. If, for example, there are two `coredns` pods running, while one of them is getting recreated and is unavailable, the other is unable to be affected due to the pod disruption budget. This resolves itself after the initial `coredns` pod is scheduled and running, allowing the second pod to be properly evicted and recreated.
->
-> Consider draining nodes one-by-one for a smoother eviction experience and to avoid throttling. For more information, see [plan for availability using a pod disruption budget][pod-disruption-budget].
-
-After the drain operation finishes, verify pods are running on the new nodepool:
-
-```bash
-kubectl get pods -o wide -A
-```
-
-### Remove the existing node pool
-
-To delete the existing node pool, see the section on [Deleting a node pool](#delete-a-node-pool).
-
-After completion, the final result is the AKS cluster having a single, new node pool with the new, desired SKU size and all the applications and pods properly running.
-
## Delete a node pool
If you no longer need a pool, you can delete it and remove the underlying VM nodes. To delete a node pool, use the [az aks nodepool delete][az-aks-nodepool-delete] command and specify the node pool name. The following example deletes the *mynodepool* created in the previous steps:
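The command block that follows this sentence isn't shown in the hunk; a sketch of such a call, reusing the resource names from the earlier steps, might be:

```azurecli-interactive
# Delete the node pool and its underlying virtual machine scale set nodes.
az aks nodepool delete \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name mynodepool \
    --no-wait
```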
@@ -968,7 +875,6 @@ Use [proximity placement groups][reduce-latency-ppg] to reduce latency for your