Commit 0ff80a7

Update cluster-autoscaler-overview.md
1 parent 80e4774 commit 0ff80a7

1 file changed (+3, -2)

articles/aks/cluster-autoscaler-overview.md

Lines changed: 3 additions & 2 deletions
@@ -31,15 +31,16 @@ It's a common practice to enable cluster autoscaler for nodes and either the Ver
 * To **effectively run workloads concurrently on both Spot and Fixed node pools**, consider using [*priority expanders*](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-expanders). This approach allows you to schedule pods based on the priority of the node pool.
 * Exercise caution when **assigning CPU/Memory requests on pods**. The cluster autoscaler scales up based on pending pods rather than CPU/Memory pressure on nodes.
 * For **clusters concurrently hosting both long-running workloads, like web apps, and short/bursty job workloads**, we recommend separating them into distinct node pools with [Affinity Rules](./operator-best-practices-advanced-scheduler.md#node-affinity)/[expanders](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-expanders) or using [PriorityClass](https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass) to help prevent unnecessary node drain or scale down operations.
-* **Do not make direct changes to nodes in autoscaled node pools**. All nodes in the same node group should have uniform capacity, labels, taints and system pods running on them.
-* Scale down nodes by removing workloads, instead of manually reducing the node count in an autoscaler-enabled node pool. This becomes especially problematic when the node pool is already at its maximum count or when there are existing workloads running on the nodes, potentially causing unexpected behaviour by the cluster autoscaler.
+* In an autoscaler-enabled node pool, scale down nodes by removing workloads, instead of manually reducing the node count. This can be problematic if the node pool is already at maximum capacity or if there are active workloads running on the nodes, potentially causing unexpected behavior by the cluster autoscaler.
 * Nodes don't scale up if pods have a PriorityClass value below -10. Priority -10 is reserved for [overprovisioning pods](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-can-i-configure-overprovisioning-with-cluster-autoscaler). For more information, see [Using the cluster autoscaler with Pod Priority and Preemption](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-cluster-autoscaler-work-with-pod-priority-and-preemption).
 * **Don't combine other node autoscaling mechanisms**, such as Virtual Machine Scale Set autoscalers, with the cluster autoscaler.
 * The cluster autoscaler **might be unable to scale down if pods can't move, such as in the following situations**:
   * A directly created pod not backed by a controller object, such as a Deployment or ReplicaSet.
   * A pod disruption budget (PDB) that's too restrictive and doesn't allow the number of pods to fall below a certain threshold.
   * A pod uses node selectors or anti-affinity that can't be honored if scheduled on a different node.
 For more information, see [What types of pods can prevent the cluster autoscaler from removing a node?](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-types-of-pods-can-prevent-ca-from-removing-a-node).
+> [!IMPORTANT]
+> **Do not make changes to individual nodes within the autoscaled node pools**. All nodes in the same node group should have uniform capacity, labels, taints and system pods running on them.

 ## Cluster autoscaler profile
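To make the *priority expanders* bullet in the hunk above more concrete, here is a minimal sketch of the `cluster-autoscaler-priority-expander` ConfigMap that the upstream priority expander reads from `kube-system`. The pool-name regexes (`.*spot.*`, `.*fixed.*`) are placeholders, assuming your Spot and fixed node pools are named accordingly; on AKS the expander itself is typically selected through the cluster autoscaler profile (for example, `expander=priority`), which the section touched at the end of this hunk covers.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-autoscaler-priority-expander   # name required by the priority expander
  namespace: kube-system
data:
  priorities: |-
    # Higher number = tried first. Each entry lists regexes matched against node group names.
    # The patterns below are placeholders for pools whose names contain "spot" or "fixed".
    50:
      - .*spot.*
    10:
      - .*fixed.*
```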

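As a sketch of the PriorityClass approach mentioned for clusters that mix long-running web apps with short/bursty jobs, the two classes below use hypothetical names and values; workloads opt in through `spec.template.spec.priorityClassName`.

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: web-steady            # hypothetical class for long-running web workloads
value: 100000                 # higher value = scheduled and retained ahead of lower-priority pods
globalDefault: false
description: "Long-running web workloads."
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: jobs-bursty           # hypothetical class for short or bursty job workloads
value: 1000
preemptionPolicy: Never       # bursty jobs wait for capacity rather than evicting web pods
globalDefault: false
description: "Short or bursty job workloads."
```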

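The "PDB that's too restrictive" situation listed in the hunk can be avoided with a budget that always leaves room for voluntary evictions. A minimal sketch, assuming a hypothetical Deployment labeled `app: web`:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb               # hypothetical name
spec:
  maxUnavailable: 1           # always permits one voluntary eviction, so nodes can still be drained
  selector:
    matchLabels:
      app: web
# By contrast, a minAvailable equal to the replica count blocks every voluntary eviction
# and prevents the cluster autoscaler from removing the nodes these pods run on.
```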