Commit b190c1c

Update cluster-autoscaler-overview.md
1 parent 92bed76

File tree

1 file changed (+2, -2 lines)

articles/aks/cluster-autoscaler-overview.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -31,8 +31,8 @@ It's a common practice to enable cluster autoscaler for nodes and either the Ver
 * To **effectively run workloads concurrently on both Spot and Fixed node pools**, consider using [*priority expanders*](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-expanders). This approach allows you to schedule pods based on the priority of the node pool.
 * Exercise caution when **assigning CPU/Memory requests on pods**. The cluster autoscaler scales up based on pending pods rather than CPU/Memory pressure on nodes.
 * For **clusters concurrently hosting both long-running workloads, like web apps, and short/bursty job workloads**, we recommend separating them into distinct node pools with [Affinity Rules](./operator-best-practices-advanced-scheduler.md#node-affinity)/[expanders](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-expanders) or using [PriorityClass](https://kubernetes.io/docs/concepts/scheduling-eviction/pod-priority-preemption/#priorityclass) to help prevent unnecessary node drain or scale down operations.
-* We **don't recommend making direct changes to nodes in autoscaled node pools**. All nodes in the same node group should have uniform capacity, labels, taints and system pods running on them.
-* It is recommended to initiate the scaling down of nodes by removing workloads, instead of manually reducing the node count in an autoscaler-enabled node pool. This becomes especially problematic when the node pool is already at its maximum count or when there are existing workloads running on the nodes.
+* **Do not make direct changes to nodes in autoscaled node pools**. All nodes in the same node group should have uniform capacity, labels, taints, and system pods running on them.
+* Scale down nodes by removing workloads instead of manually reducing the node count in an autoscaler-enabled node pool. Manually reducing the count is especially problematic when the node pool is already at its maximum count or when workloads are running on the nodes, and can cause unexpected cluster autoscaler behavior.
 * Nodes don't scale up if pods have a PriorityClass value below -10. Priority -10 is reserved for [overprovisioning pods](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-can-i-configure-overprovisioning-with-cluster-autoscaler). For more information, see [Using the cluster autoscaler with Pod Priority and Preemption](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-cluster-autoscaler-work-with-pod-priority-and-preemption).
 * **Don't combine other node autoscaling mechanisms**, such as Virtual Machine Scale Set autoscalers, with the cluster autoscaler.
 * The cluster autoscaler **might be unable to scale down if pods can't move, such as in the following situations**:
```
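Since the autoscaler acts on pending pods rather than CPU/Memory pressure (per the second context bullet above), resource requests are what drive scale-up. A minimal sketch, with a hypothetical pod name; the request sizes are illustrative:

```bash
# Scale-up is triggered when a pod stays Pending because no existing node
# can satisfy its requests; limits and live CPU/memory usage play no part.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: demo-worker                 # hypothetical name
spec:
  containers:
  - name: worker
    image: registry.k8s.io/pause:3.9
    resources:
      requests:
        cpu: "500m"                 # these requests, not actual usage,
        memory: "256Mi"             # are what the autoscaler evaluates
EOF
```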
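To illustrate the revised guidance in this commit, here's a hedged sketch of scaling down by removing workload demand rather than touching the node count; the deployment name is hypothetical:

```bash
# Preferred: shrink the workload itself ("my-app" is a hypothetical
# deployment). Once its pods are gone, the cluster autoscaler detects the
# underutilized nodes and drains and removes them on its own schedule.
kubectl scale deployment my-app --replicas=2

# Optional: observe the autoscaler removing nodes as utilization drops;
# no manual change to the node pool's count is needed.
kubectl get nodes --watch
```

This keeps a single actor, the autoscaler, in charge of the node count, which is exactly what the new bullet's warning about "unexpected behavior" is guarding against.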
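The PriorityClass bullet in the surrounding context can also be made concrete. A minimal sketch, assuming the -10 cutoff the bullet describes; the class name is hypothetical:

```bash
# Pods with a priority below -10 never trigger scale-up; -10 itself is
# reserved for overprovisioning placeholder pods, as the bullet notes.
kubectl apply -f - <<'EOF'
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: overprovisioning            # hypothetical name
value: -10                          # reserved for overprovisioning pods
globalDefault: false
description: "Placeholder pods that keep spare capacity warm."
EOF
```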
