# Vertical Pod Autoscaling (preview) in Azure Kubernetes Service (AKS)
This article provides an overview of Vertical Pod Autoscaler (VPA) (preview) in Azure Kubernetes Service (AKS), which is based on the open source [Kubernetes](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler) version. When configured, VPA automatically sets resource requests and limits on containers per workload based on past usage, ensuring that pods are scheduled onto nodes with the required CPU and memory resources.
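As an illustration, a minimal `VerticalPodAutoscaler` object might look like the following sketch (the workload name is hypothetical; the `autoscaling.k8s.io/v1` API is from the open source project):

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa            # hypothetical name
spec:
  targetRef:                  # workload whose pods VPA manages
    apiVersion: "apps/v1"
    kind: Deployment
    name: my-app              # hypothetical Deployment
  updatePolicy:
    updateMode: "Auto"        # evict and relaunch pods with updated requests
```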
## Benefits
In this example, the pod has 100 millicpu and 50 mebibytes of memory reserved. For this sample application, the pod needs more than 100 millicpu to run, so no spare CPU capacity is available. The pod also reserves much less memory than it needs. The Vertical Pod Autoscaler *vpa-recommender* deployment analyzes the pods hosting the hamster application to determine whether the CPU and memory requirements are appropriate. If adjustments are needed, the vpa-updater relaunches the pods with updated values.
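Based on the values stated above, the container's resource section in the deployment would look something like this sketch:

```yaml
resources:
  requests:
    cpu: 100m     # 100 millicpu reserved
    memory: 50Mi  # 50 mebibytes reserved
```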
1. Wait for the vpa-updater to launch a new hamster pod, which should take a few minutes. You can monitor the pods using the [kubectl get][kubectl-get] command.
    ```bash
    kubectl get --watch pods -l app=hamster
    ```
## Metrics server VPA throttling
With AKS clusters version 1.24 and higher, Vertical Pod Autoscaler is enabled for the metrics server. VPA enables you to adjust the resource limits when the metrics server experiences consistent CPU and memory resource constraints.
If the metrics server throttling rate is high and the memory usage of its two pods is unbalanced, the metrics server requires more resources than the default values specify.
To update the coefficient values, create a ConfigMap in the overlay *kube-system* namespace to override the values in the metrics server specification. Perform the following steps to update the metrics server.
This example ConfigMap changes the resource limit and request to the following:
* cpu: (100+1n) millicore
* memory: (100+8n) mebibyte

Where *n* is the number of nodes.
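A sketch of such a ConfigMap, assuming the addon-resizer (*nanny*) configuration format used by the metrics server, might look like the following (the ConfigMap name is assumed from the `kubectl apply` command in this article):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: metrics-server-config   # assumed name
  namespace: kube-system
data:
  NannyConfiguration: |-
    apiVersion: nanny.config.k8s.io/v1alpha1
    kind: NannyConfiguration
    baseCPU: 100m       # base CPU; total = baseCPU + cpuPerNode * n
    cpuPerNode: 1m      # additional CPU per node
    baseMemory: 100Mi   # base memory; total = baseMemory + memoryPerNode * n
    memoryPerNode: 8Mi  # additional memory per node
```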
    ```bash
    kubectl apply -f metrics-server-config.yaml
    ```
Be cautious when setting *baseCPU*, *cpuPerNode*, *baseMemory*, and *memoryPerNode*, because AKS doesn't validate the ConfigMap. As a recommended practice, increase the values gradually to avoid unnecessary resource consumption. Proactively monitor resource usage when updating or creating the ConfigMap. A large number of resource requests could negatively impact the node.