---
title: Add an Azure Spot node pool to an Azure Kubernetes Service (AKS) cluster
description: Learn how to add an Azure Spot node pool to an Azure Kubernetes Service (AKS) cluster.
ms.topic: article
ms.date: 03/29/2023
#Customer intent: As a cluster operator or developer, I want to learn how to add an Azure Spot node pool to an AKS Cluster.
---
# Add an Azure Spot node pool to an Azure Kubernetes Service (AKS) cluster
A Spot node pool is a node pool backed by an [Azure Spot Virtual Machine Scale Set][vmss-spot]. With Spot VMs in your AKS cluster, you can take advantage of unutilized Azure capacity at significant cost savings. The amount of available unutilized capacity varies based on many factors, such as node size, region, and time of day.

When you deploy a Spot node pool, Azure allocates the Spot nodes if there's capacity available and deploys a Spot scale set that backs the Spot node pool in a single fault domain. There's no SLA for the Spot nodes and no high availability guarantees. If Azure needs the capacity back, the Azure infrastructure evicts the Spot nodes.

Spot nodes are great for workloads that can handle interruptions, early terminations, or evictions. For example, workloads such as batch processing jobs, development and testing environments, and large compute workloads may be good candidates to schedule on a Spot node pool.

In this article, you add a secondary Spot node pool to an existing Azure Kubernetes Service (AKS) cluster.

## Before you begin
* This article assumes a basic understanding of Kubernetes and Azure Load Balancer concepts. For more information, see [Kubernetes core concepts for Azure Kubernetes Service (AKS)][kubernetes-concepts].
* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
* When you create a cluster to use a Spot node pool, the cluster must use Virtual Machine Scale Sets for node pools and the *Standard* SKU load balancer. You must also add another node pool after you create your cluster, which is covered later in this article.
* This article requires Azure CLI version 2.14 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].

### Limitations
The following limitations apply when you create and manage AKS clusters with a Spot node pool:
* A Spot node pool can't be the cluster's default node pool; it can only be used as a secondary pool.
* The control plane and node pools can't be upgraded at the same time. You must upgrade them separately or remove the Spot node pool to upgrade the control plane and remaining node pools at the same time.
* A Spot node pool must use Virtual Machine Scale Sets.
* You can't change `ScaleSetPriority` or `SpotMaxPrice` after creation.
* When you set `SpotMaxPrice`, the value must be *-1* or a positive value with up to five decimal places.
* A Spot node pool has the `kubernetes.azure.com/scalesetpriority:spot` label and the `kubernetes.azure.com/scalesetpriority=spot:NoSchedule` taint, and system pods have anti-affinity.
* You must add a [corresponding toleration][spot-toleration] and affinity to schedule workloads on a Spot node pool.
## Add a Spot node pool to an AKS cluster
You can only add a Spot node pool to an existing cluster that has multiple node pools enabled. When you create an AKS cluster with multiple node pools, you create a node pool with a `priority` of `Regular` by default. To add a Spot node pool, you must specify `Spot` as the value for `priority`. For more details on creating an AKS cluster with multiple node pools, see [use multiple node pools][use-multiple-node-pools].

* Create a node pool with a `priority` of `Spot` using the [az aks nodepool add][az-aks-nodepool-add] command.

```azurecli-interactive
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name spotnodepool \
    --priority Spot \
    --eviction-policy Delete \
    --spot-max-price -1 \
    --enable-cluster-autoscaler \
    --min-count 1 \
    --max-count 3 \
    --no-wait
```
In the previous command, the `priority` of `Spot` makes the node pool a Spot node pool. The `eviction-policy` parameter is set to `Delete`, which is the default value. When you set the [eviction policy][eviction-policy] to `Delete`, nodes in the underlying scale set of the node pool are deleted when they're evicted.

You can also set the eviction policy to `Deallocate`, which means the nodes in the underlying scale set are set to the *stopped-deallocated* state upon eviction. Nodes in the *stopped-deallocated* state count against your compute quota and can cause issues with cluster scaling or upgrading. You can only set the `priority` and `eviction-policy` values during node pool creation; you can't update them later.

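For illustration, here's a minimal sketch of creating a Spot node pool with the `Deallocate` policy instead; the pool name *spotdeallocate* is a hypothetical example, and the other values mirror the earlier command:

```azurecli-interactive
# Sketch: evicted nodes are stopped-deallocated rather than deleted,
# so they keep counting against your compute quota.
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name spotdeallocate \
    --priority Spot \
    --eviction-policy Deallocate \
    --spot-max-price -1 \
    --no-wait
```
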
The previous command also enables the [cluster autoscaler][cluster-autoscaler], which we recommend using with Spot node pools. Based on the workloads running in your cluster, the cluster autoscaler scales the number of nodes up and down. For Spot node pools, the cluster autoscaler scales up the number of nodes after an eviction if more nodes are still needed. If you change the maximum number of nodes a node pool can have, you also need to adjust the `maxCount` value associated with the cluster autoscaler. If you don't use a cluster autoscaler, the Spot pool eventually shrinks to zero upon eviction and requires a manual operation to receive any additional Spot nodes.

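If you raise the pool's node ceiling later, one way to keep the autoscaler bounds in sync is `az aks nodepool update`; a minimal sketch, assuming the pool created earlier and a new maximum of *5* (an illustrative value):

```azurecli-interactive
# Sketch: raise the cluster autoscaler's maximum node count for the Spot pool.
az aks nodepool update \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name spotnodepool \
    --update-cluster-autoscaler \
    --min-count 1 \
    --max-count 5
```
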
> [!IMPORTANT]
> Schedule only workloads that can handle interruptions, such as batch processing jobs and testing environments, on Spot node pools. We recommend you set up [taints and tolerations][taints-tolerations] on your Spot node pool to ensure that only workloads that can handle node evictions are scheduled on it. For example, the previous command adds a taint of `kubernetes.azure.com/scalesetpriority=spot:NoSchedule`, so only pods with a corresponding toleration are scheduled on its nodes.

### Verify the Spot node pool

* Verify your node pool was added using the [`az aks nodepool show`][az-aks-nodepool-show] command and confirm that the `scaleSetPriority` is `Spot`.

```azurecli
az aks nodepool show --resource-group myResourceGroup --cluster-name myAKSCluster --name spotnodepool
```
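To check just that field, you can also use the Azure CLI's built-in JMESPath `--query` option; a minimal sketch, assuming the same resource group, cluster, and node pool names as above:

```azurecli
# Sketch: print only the scaleSetPriority field; a Spot node pool returns "Spot".
az aks nodepool show \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name spotnodepool \
    --query scaleSetPriority \
    --output tsv
```
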
### Schedule a pod to run on the Spot node

To schedule a pod to run on a Spot node, add a toleration and node affinity that correspond to the taint applied to your Spot node.

The following example shows a portion of a YAML file that defines a toleration corresponding to the `kubernetes.azure.com/scalesetpriority=spot:NoSchedule` taint and a node affinity corresponding to the `kubernetes.azure.com/scalesetpriority=spot` label used in the previous step.

```yaml
spec:
...
```
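The middle of the preceding snippet is elided here. For reference, the following is a minimal, self-contained sketch of a pod spec that combines such a toleration and node affinity; the pod name, container image, and the choice of `requiredDuringSchedulingIgnoredDuringExecution` are illustrative assumptions, not part of the original snippet:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: spot-example            # hypothetical pod name, for illustration
spec:
  containers:
  - name: app
    image: nginx                # assumed placeholder image
  # Tolerate the taint AKS applies to Spot nodes so the pod may be scheduled there.
  tolerations:
  - key: kubernetes.azure.com/scalesetpriority
    operator: Equal
    value: spot
    effect: NoSchedule
  # Require nodes that carry the Spot label from the previous step.
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.azure.com/scalesetpriority
            operator: In
            values:
            - spot
```
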
When you deploy a pod with this toleration and node affinity, Kubernetes will successfully schedule the pod on the nodes with the taint and label applied.

## Upgrade a Spot node pool
When you upgrade a Spot node pool, AKS internally issues a cordon and an eviction notice, but no drain is applied. There are no surge nodes available for Spot node pool upgrades. Outside of these changes, the behavior when upgrading Spot node pools is consistent with that of other node pool types.

For more information on upgrading, see [Upgrade an AKS cluster][upgrade-cluster].

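For illustration only, upgrading just the Spot node pool might look like the following sketch; the Kubernetes version is a placeholder, and `az aks nodepool upgrade` targets a single pool rather than the whole cluster:

```azurecli
# Sketch: upgrade only the Spot node pool (version shown is a placeholder).
az aks nodepool upgrade \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name spotnodepool \
    --kubernetes-version 1.26.3 \
    --no-wait
```
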
## Max price for a Spot pool
[Pricing for Spot instances is variable][pricing-spot], based on region and SKU. For more information, see pricing information for [Linux][pricing-linux] and [Windows][pricing-windows].

With variable pricing, you have the option to set a max price, in US dollars (USD), using up to five decimal places. For example, the value *0.98765* would be a max price of $0.98765 USD per hour. If you set the max price to *-1*, the instance won't be evicted based on price. As long as there's capacity and quota available, the price for the instance is the current Spot price or the price for a standard instance, whichever is lower.

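For example, creating a Spot pool capped at that price could look like the following sketch; the pool name *spotcapped* and the cap itself are illustrative, and the remaining parameters mirror the earlier `az aks nodepool add` command:

```azurecli-interactive
# Sketch: cap the Spot price at $0.98765 USD per hour instead of -1 (no price-based eviction).
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name spotcapped \
    --priority Spot \
    --eviction-policy Delete \
    --spot-max-price 0.98765 \
    --no-wait
```
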
## Next steps
In this article, you learned how to add a Spot node pool to an AKS cluster. For more information about how to control pods across node pools, see [Best practices for advanced scheduler features in AKS][operator-best-practices-advanced-scheduler].