articles/aks/use-multiple-node-pools.md
@@ -348,6 +348,35 @@ az aks nodepool list -g myResourceGroup --cluster-name myAKSCluster
It takes a few minutes to delete the nodes and the node pool.

## Associate capacity reservation groups to node pools (preview)

[!INCLUDE [preview features callout](./includes/preview/preview-callout.md)]

As your application workloads demand, you can associate node pools with capacity reservation groups that you created beforehand. This ensures that guaranteed capacity is allocated for your node pools.
For more information on capacity reservation groups, see [Capacity Reservation Groups][capacity-reservation-groups].
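
If you don't have a capacity reservation group yet, you can create one and then reserve capacity in it for a specific VM SKU. The following is a minimal sketch, assuming the resource group `MyRG` already exists and that `Standard_DS2_v2` is available in your region; the group, reservation, and SKU names here are illustrative:

```azurecli-interactive
# Create a capacity reservation group in an existing resource group.
az capacity reservation group create -g MyRG -n myCRG

# Reserve capacity for a specific VM SKU. A node pool can generally only
# consume this reservation if its VM size matches the reserved SKU.
az capacity reservation create -g MyRG \
    --capacity-reservation-group myCRG \
    -n myReservation \
    --sku Standard_DS2_v2 \
    --capacity 3
```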
To associate a node pool with an existing capacity reservation group, use the [az aks nodepool add][az-aks-nodepool-add] command and specify the capacity reservation group with the `--capacityReservationGroup` flag. The capacity reservation group should already exist; otherwise, the node pool is added to the cluster with a warning, and no capacity reservation group is associated.
```azurecli-interactive
az aks nodepool add -g MyRG --cluster-name MyMC -n myAP --capacityReservationGroup myCRG
```
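
To check whether the association took effect, you can inspect the node pool's properties. A quick sketch, assuming the association is surfaced in a `capacityReservationGroupId` field; the exact property name is an assumption, so inspect the full output if it differs:

```azurecli-interactive
# Show the node pool and query the capacity reservation group association.
# capacityReservationGroupId is assumed; check the unfiltered output if empty.
az aks nodepool show -g MyRG --cluster-name MyMC -n myAP \
    --query capacityReservationGroupId -o tsv
```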
To associate a system node pool with an existing capacity reservation group, use the [az aks create][az-aks-create] command. If the specified capacity reservation group doesn't exist, a warning is issued and the cluster is created without a capacity reservation group association.
```azurecli-interactive
az aks create -g MyRG -n MyMC --capacityReservationGroup myCRG
```
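
Because capacity is reserved per VM SKU, the node pool's VM size generally needs to match a SKU reserved in the group for the reservation to be consumed. A fuller sketch, assuming a reservation for `Standard_DS2_v2` as in the earlier example:

```azurecli-interactive
# Create the cluster with a system node pool whose VM size matches the
# SKU reserved in myCRG (Standard_DS2_v2 is assumed from the example above).
az aks create -g MyRG -n MyMC \
    --node-vm-size Standard_DS2_v2 \
    --node-count 3 \
    --capacityReservationGroup myCRG
```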
Deleting a node pool implicitly dissociates the node pool from any associated capacity reservation group before the node pool is deleted.
```azurecli-interactive
az aks nodepool delete -g MyRG --cluster-name MyMC -n myAP
```
Deleting a cluster implicitly dissociates all node pools in the cluster from their associated capacity reservation groups.
```azurecli-interactive
az aks delete -g MyRG -n MyMC
```
## Specify a VM size for a node pool
In the previous examples to create a node pool, a default VM size was used for the nodes created in the cluster. A more common scenario is to create node pools with different VM sizes and capabilities. For example, you might create a node pool that contains nodes with large amounts of CPU or memory, or a node pool that provides GPU support, as in the sketch below. In the next step, you [use taints and tolerations](#setting-nodepool-taints) to tell the Kubernetes scheduler how to limit access to pods that can run on these nodes.
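
For instance, a GPU-capable node pool might be added like this. This is a minimal sketch, assuming the `Standard_NC6s_v3` size is available in your region and subscription; the pool name and VM size are illustrative:

```azurecli-interactive
# Add a GPU-capable node pool; the VM size is illustrative and must be
# available in your region and subscription.
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name gpunodepool \
    --node-vm-size Standard_NC6s_v3 \
    --node-count 1
```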
@@ -843,6 +872,7 @@ Use [proximity placement groups][reduce-latency-ppg] to reduce latency for your