---
title: Vertical Pod Autoscaling (preview) in Azure Kubernetes Service (AKS)
description: Learn how to vertically autoscale your pod on an Azure Kubernetes Service (AKS) cluster.
-services: container-service
ms.topic: article
-ms.date: 09/30/2022
+ms.date: 01/12/2023
---

# Vertical Pod Autoscaling (preview) in Azure Kubernetes Service (AKS)

-This article provides an overview of Vertical Pod Autoscaler (VPA) (preview) in Azure Kubernetes Service (AKS), which is based on the open source [Kubernetes](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler) version. When configured, it automatically sets resource requests and limits on containers per workload based on past usage. This ensures pods are scheduled onto nodes that have the required CPU and memory resources.
+This article provides an overview of Vertical Pod Autoscaler (VPA) (preview) in Azure Kubernetes Service (AKS), which is based on the open source [Kubernetes](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler) version. When configured, it automatically sets resource requests and limits on containers per workload based on past usage. VPA ensures pods are scheduled onto nodes that have the required CPU and memory resources.
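As a sketch of how a workload opts in, the open-source project exposes a `VerticalPodAutoscaler` custom resource that points at the workload to right-size; the deployment name `my-app` below is only an illustrative placeholder:

```yml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  # The workload whose pods VPA should right-size (illustrative name).
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  updatePolicy:
    # In Auto mode, VPA evicts pods and recreates them with updated requests.
    updateMode: "Auto"
```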

## Benefits

@@ -181,7 +180,7 @@ The following steps create a deployment with two pods, each running a single con

The pod has 100 millicpu and 50 mebibytes of memory reserved in this example. For this sample application, the pod needs more than 100 millicpu to run, so there's no spare CPU capacity available. The pod also reserves much less memory than it needs. The Vertical Pod Autoscaler *vpa-recommender* deployment analyzes the pods hosting the hamster application to see if the CPU and memory requirements are appropriate. If adjustments are needed, the vpa-updater relaunches the pods with updated values.

-1. Wait for the vpa-updater to launch a new hamster pod. This should take a few minutes. You can monitor the pods using the [kubectl get][kubectl-get] command.
+1. Wait for the vpa-updater to launch a new hamster pod, which should take a few minutes. You can monitor the pods using the [kubectl get][kubectl-get] command.

    ```bash
    kubectl get --watch pods -l app=hamster
    ```
@@ -394,6 +393,50 @@ Vertical Pod autoscaling uses the `VerticalPodAutoscaler` object to automaticall

The Vertical Pod Autoscaler uses the `lowerBound` and `upperBound` attributes to decide whether to delete a pod and replace it with a new pod. If a pod has requests less than the lower bound or greater than the upper bound, the Vertical Pod Autoscaler deletes the pod and replaces it with a pod that meets the target attribute.
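For illustration, a recommendation produced by the recommender (visible in the object's status, for example via `kubectl describe vpa`) exposes these bounds; the container name and all numbers below are hypothetical:

```yml
recommendation:
  containerRecommendations:
  - containerName: hamster
    lowerBound:
      cpu: 404m
      memory: 262144k
    target:
      cpu: 587m
      memory: 262144k
    upperBound:
      cpu: "1"
      memory: 500Mi
```

In this hypothetical output, a pod requesting 100 millicpu falls below `lowerBound`, so the updater would evict it and recreate it with the `target` values.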
+## Metrics server VPA throttling
+
+With AKS clusters version 1.24 and higher, vertical pod autoscaling is enabled for the metrics server. VPA enables you to adjust the resource limits when the metrics server is experiencing consistent CPU and memory resource constraints.
+
+If the metrics server throttling rate is high and the memory usage of its two pods is unbalanced, it indicates that the metrics server requires more resources than the default values specify.
+
+To update the coefficient values, create a ConfigMap in the overlay *kube-system* namespace to override the values in the metrics server specification. Perform the following steps to update the metrics server.
+
+1. Create a ConfigMap file named *metrics-server-config.yaml* and copy in the following manifest.
+
+    ```yml
+    apiVersion: v1
+    kind: ConfigMap
+    metadata:
+      name: metrics-server-config
+      namespace: kube-system
+      labels:
+        kubernetes.io/cluster-service: "true"
+        addonmanager.kubernetes.io/mode: EnsureExists
+    data:
+      NannyConfiguration: |-
+        apiVersion: nannyconfig/v1alpha1
+        kind: NannyConfiguration
+        baseCPU: 100m
+        cpuPerNode: 1m
+        baseMemory: 100Mi
+        memoryPerNode: 8Mi
+    ```
+
+    This ConfigMap example changes the resource limit and request to the following:
+
+    * cpu: (100 + 1n) millicore
+    * memory: (100 + 8n) mebibyte
+
+    Where *n* is the number of nodes.
+
+2. Create the ConfigMap using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
+
+    ```bash
+    kubectl apply -f metrics-server-config.yaml
+    ```
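As a quick sanity check of the base-plus-per-node formula above, the resulting values can be computed in the shell; the node count of 10 is only an example:

```shell
# Compute the metrics server CPU and memory values produced by the
# baseCPU/cpuPerNode and baseMemory/memoryPerNode coefficients above,
# for an example cluster of 10 nodes.
nodes=10
echo "cpu: $((100 + 1 * nodes))m"      # prints: cpu: 110m
echo "memory: $((100 + 8 * nodes))Mi"  # prints: memory: 180Mi
```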
+
+Be cautious with the *baseCPU*, *cpuPerNode*, *baseMemory*, and *memoryPerNode* values, as AKS doesn't validate the ConfigMap. As a recommended practice, increase the values gradually to avoid unnecessary resource consumption, and proactively monitor resource usage when you update or create the ConfigMap. A large number of resource requests could negatively impact the node.
+
## Next steps

This article showed you how to automatically scale resource utilization, such as CPU and memory, of your pods to match application requirements. You can also use the horizontal pod autoscaler to automatically adjust the number of pods that run your application. For steps on using the horizontal pod autoscaler, see [Scale applications in AKS][scale-applications-in-aks].