
Commit fb83afe

Clean up all Acrolinx issues.
1 parent a9fabb8 commit fb83afe


articles/aks/cluster-autoscaler.md

Lines changed: 14 additions & 14 deletions
@@ -24,13 +24,13 @@ To adjust to changing application demands, such as between workdays and evenings

![The cluster autoscaler and horizontal pod autoscaler often work together to support the required application demands](media/autoscaler/cluster-autoscaler.png)

-Both the horizontal pod autoscaler and cluster autoscaler can decrease the number of pods and nodes as needed. The cluster autoscaler decreases the number of nodes when there has been unused capacity for a period of time. Any pods on a node to be removed by the cluster autoscaler are safely scheduled elsewhere in the cluster.
+Both the horizontal pod autoscaler and cluster autoscaler can decrease the number of pods and nodes as needed. The cluster autoscaler decreases the number of nodes when there has been unused capacity for a period of time. Any pods on a node removed by the cluster autoscaler are safely scheduled elsewhere in the cluster.

-If the current node pool size is lower than the specified minimum or greater than the specified maximum when you enable autoscaling, the autoscaler waits to take effect until a new node is needed in the node pool or until a node can be safely deleted from the node pool. For more information, see [How does scale-down work?](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-down-work)
+With autoscaling enabled, when the node pool size is lower than the specified minimum or greater than the specified maximum, the autoscaler waits to apply the scaling rules until a new node is needed in the node pool or until a node can be safely deleted from the node pool. For more information, see [How does scale-down work?](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-down-work)

The cluster autoscaler may be unable to scale down if pods can't move, such as in the following situations:

-* A pod is directly created and isn't backed by a controller object, such as a deployment or replica set.
+* A pod is created directly and isn't backed by a controller object, such as a deployment or replica set.
* A pod disruption budget (PDB) is too restrictive and doesn't allow the number of pods to fall below a certain threshold.
* A pod uses node selectors or anti-affinity that can't be honored if scheduled on a different node.
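The paragraphs above assume the autoscaler is already enabled with minimum and maximum node counts. As a hedged sketch of that setup, enabling the cluster autoscaler on an existing cluster might look like the following, where the resource group and cluster names reuse the `learn-aks-cluster-scalability` placeholders from the profile example later in this diff, and the one/three bounds match the counts the article references:

```azurecli-interactive
# Sketch only: enable the cluster autoscaler on an existing cluster with the
# one-node minimum and three-node maximum this article references.
# The resource group and cluster names are placeholders.
az aks update \
  -g learn-aks-cluster-scalability \
  -n learn-aks-cluster-scalability \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 3
```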

@@ -115,7 +115,7 @@ You can re-enable the cluster autoscaler on an existing cluster using the [`az a
> [!IMPORTANT]
> If you have multiple node pools in your AKS cluster, skip to the [autoscale with multiple agent pools section](#use-the-cluster-autoscaler-with-multiple-node-pools-enabled). Clusters with multiple agent pools require the `az aks nodepool` command instead of `az aks`.
-In the previous step to create an AKS cluster or update an existing node pool, the cluster autoscaler minimum node count was set to one and the maximum node count was set to three. As your application demands change, you may need to adjust the cluster autoscaler node count.
+In the previous example to enable the cluster autoscaler, the minimum node count was set to one and the maximum node count was set to three. As your application demands change, you may need to adjust the cluster autoscaler node count to scale efficiently.
* Change the node count using the [`az aks update`][az-aks-update] command and update the cluster autoscaler using the `--update-cluster-autoscaler` parameter and specifying your updated node `--min-count` and `--max-count`.
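As a concrete sketch of that command, assuming the same placeholder resource group and cluster names used elsewhere in this article, raising the maximum from three to five nodes could look like this; the new bound is illustrative, not a recommendation:

```azurecli-interactive
# Sketch only: update the autoscaler bounds on an existing cluster.
# The five-node maximum is an illustrative value.
az aks update \
  -g learn-aks-cluster-scalability \
  -n learn-aks-cluster-scalability \
  --update-cluster-autoscaler \
  --min-count 1 \
  --max-count 5
```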
@@ -137,16 +137,16 @@ Monitor the performance of your applications and services, and adjust the cluste
You can also configure more granular details of the cluster autoscaler by changing the default values in the cluster-wide autoscaler profile. For example, a scale-down event happens after nodes are under-utilized for 10 minutes. If you have workloads that run every 15 minutes, you may want to change the autoscaler profile to scale down under-utilized nodes after 15 or 20 minutes. When you enable the cluster autoscaler, a default profile is used unless you specify different settings. The cluster autoscaler profile has the following settings you can update:
-Example profile update that scales after every 15 minutes and change after 10 minutes of non-use.
+* Example profile update that sets a 5-second scan interval, delays scale-down for 15 minutes after a scale-up, and scales down unready nodes after 10 minutes.
-```azurecli-interactive
-az aks update \
--g learn-aks-cluster-scalability \
--n learn-aks-cluster-scalability \
---cluster-autoscaler-profile scan-interval=5s \
-scale-down-unready-time=10m \
-scale-down-delay-after-add=15m
-```
+    ```azurecli-interactive
+    az aks update \
+    -g learn-aks-cluster-scalability \
+    -n learn-aks-cluster-scalability \
+    --cluster-autoscaler-profile scan-interval=5s \
+    scale-down-unready-time=10m \
+    scale-down-delay-after-add=15m
+    ```
| Setting | Description | Default value |
|----------------------------------|------------------------------------------------------------------------------------------|---------------|
@@ -156,7 +156,7 @@ az aks update \
| scale-down-delay-after-failure | How long after scale down failure that scale down evaluation resumes | 3 minutes |
| scale-down-unneeded-time | How long a node should be unneeded before it's eligible for scale down | 10 minutes |
| scale-down-unready-time | How long an unready node should be unneeded before it's eligible for scale down | 20 minutes |
-| scale-down-utilization-threshold | Node utilization level, defined as sum of requested resources divided by capacity, below which a node can be considered for scale down | 0.5 |
+| scale-down-utilization-threshold | Node utilization level, defined as the sum of requested resources divided by capacity, below which a node can be considered for scale down | 0.5 |
| max-graceful-termination-sec | Maximum number of seconds the cluster autoscaler waits for pod termination when trying to scale down a node | 600 seconds |
| balance-similar-node-groups | Detects similar node pools and balances the number of nodes between them | false |
| expander | Type of node pool [expander](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-expanders) to be used in scale up. Possible values: `most-pods`, `random`, `least-waste`, `priority` | random |
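To illustrate the table, here's a minimal sketch that sets two of the listed keys, `expander` and `balance-similar-node-groups`; the values are illustrative, and multiple settings can be combined in one space-separated key=value list as in the earlier profile example:

```azurecli-interactive
# Sketch only: use the least-waste expander for scale up and balance node
# counts across similar node pools. Values are illustrative, and the
# resource group and cluster names are placeholders.
az aks update \
  -g learn-aks-cluster-scalability \
  -n learn-aks-cluster-scalability \
  --cluster-autoscaler-profile expander=least-waste \
    balance-similar-node-groups=true
```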
