title: Resource management best practices for Azure Kubernetes Service (AKS)
titleSuffix: Azure Kubernetes Service
description: Learn the application developer best practices for resource management in Azure Kubernetes Service (AKS).
ms.topic: conceptual
ms.date: 05/25/2023
---
# Best practices for application developers to manage resources in Azure Kubernetes Service (AKS)
As you develop and run applications in Azure Kubernetes Service (AKS), there are a few key areas to consider. The way you manage your application deployments can negatively impact the end-user experience of services you provide.
This article focuses on running your clusters and workloads from an application developer perspective. For information about administrative best practices, see [Cluster operator best practices for isolation and resource management in Azure Kubernetes Service (AKS)][operator-best-practices-isolation].
This article covers the following topics:
> [!div class="checklist"]
>
> * Pod resource requests and limits.
> * Ways to develop, debug, and deploy applications with Bridge to Kubernetes and Visual Studio Code.
## Define pod resource requests and limits
> **Best practice guidance**
>
> Set pod requests and limits on all pods in your YAML manifests. If the AKS cluster uses *resource quotas* and you don't define these values, your deployment may be rejected.
Use pod requests and limits to manage compute resources within an AKS cluster. Pod requests and limits inform the Kubernetes scheduler of the compute resources to assign to a pod.
### Pod CPU/Memory requests
*Pod requests* define a set amount of CPU and memory the pod needs regularly.
In your pod specifications, it's important that you define these requests and limits based on the above information. If you don't include these values, the Kubernetes scheduler can't consider the resources your applications require to help with scheduling decisions.
Monitor the performance of your application to adjust pod requests. If you underestimate pod requests, your application may experience degraded performance due to an over-scheduled node. If requests are overestimated, your application may have increased scheduling difficulty.
### Pod CPU/Memory limits
*Pod limits* set the maximum amount of CPU and memory a pod can use. *Memory limits* define which pods should be removed when nodes are unstable due to insufficient resources. Without proper limits set, pods are removed until resource pressure is lifted. While a pod may exceed the *CPU limit* periodically, the pod isn't removed for exceeding the CPU limit.
Pod limits define when a pod loses control of resource consumption. When it exceeds the limit, the pod is marked for removal. This behavior maintains node health and minimizes impact to pods sharing the node. If you don't set a pod limit, it defaults to the highest available value on a given node.
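To avoid relying on that default behavior, cluster operators can supply namespace-level defaults with a Kubernetes `LimitRange`. The following is an illustrative sketch (the object name and values are examples, not taken from this article):

```yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: default-limits   # illustrative name
spec:
  limits:
  - type: Container
    defaultRequest:      # applied when a container omits resource requests
      cpu: 100m
      memory: 128Mi
    default:             # applied when a container omits resource limits
      cpu: 250m
      memory: 256Mi
```

Containers created in that namespace without explicit requests or limits inherit these defaults instead of defaulting to the highest available value on the node.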
Avoid setting a pod limit higher than your nodes can support. Each AKS node reserves a set amount of CPU and memory for the core Kubernetes components. Your application may try to consume too many resources on the node for other pods to successfully run.
Monitor the performance of your application at different times during the day or week. Determine peak demand times and align the pod limits to the resources required to meet maximum needs.
> [!IMPORTANT]
>
> In your pod specifications, define these requests and limits based on the above information. Failing to include these values prevents the Kubernetes scheduler from accounting for the resources your applications require to help with scheduling decisions.
If the scheduler places a pod on a node with insufficient resources, application performance is degraded. Cluster administrators **must** set *resource quotas* on a namespace that requires you to set resource requests and limits. For more information, see [resource quotas on AKS clusters][resource-quotas].
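As a sketch of what such a quota can look like (the object name, namespace, and values here are illustrative, not from this article):

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota       # illustrative name
  namespace: dev         # illustrative namespace
spec:
  hard:
    requests.cpu: "4"    # total CPU all pods in the namespace may request
    requests.memory: 4Gi
    limits.cpu: "8"      # total CPU limit across all pods in the namespace
    limits.memory: 8Gi
```

With a quota like this in place on a namespace, pods submitted without resource requests and limits are rejected.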
When you define a CPU request or limit, the value is measured in CPU units.
* *1.0* CPU equates to one underlying virtual CPU core on the node.
* The same measurement is used for GPUs.
* You can define fractions measured in millicores. For example, *100m* is *0.1* of an underlying vCPU core.
In the following basic example for a single NGINX pod, the pod requests *100m* of CPU time and *128Mi* of memory. The resource limits for the pod are set to *250m* CPU and *256Mi* memory.
```yaml
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
  - name: mypod
    image: nginx:1.25  # any NGINX image works for this example
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 250m
        memory: 256Mi
```

For more information about resource measurements and assignments, see [Managing resources for containers](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) in the Kubernetes documentation.
## Develop and debug applications against an AKS cluster
> **Best practice guidance**
>
> Development teams should deploy and debug against an AKS cluster using Bridge to Kubernetes.
With Bridge to Kubernetes, you can develop, debug, and test applications directly against an AKS cluster. Developers within a team collaborate to build and test throughout the application lifecycle. You can continue to use existing tools such as Visual Studio or Visual Studio Code with the Bridge to Kubernetes extension.
Using an integrated development and test process with Bridge to Kubernetes reduces the need for local test environments like [minikube][minikube]. Instead, you develop and test against an AKS cluster, including secured and isolated clusters.
> [!NOTE]
> Bridge to Kubernetes is intended for use with applications running on Linux pods and nodes.
## Use the Visual Studio Code (VS Code) extension for Kubernetes
> **Best practice guidance**
>
> Install and use the VS Code extension for Kubernetes when you write YAML manifests. You can also use the extension as an integrated deployment solution, which may help application owners who infrequently interact with the AKS cluster.
The [Visual Studio Code extension for Kubernetes][vscode-kubernetes] helps you develop and deploy applications to AKS. The extension provides the following features:
* IntelliSense for Kubernetes resources, Helm charts, and templates.
* The ability to browse, deploy, and edit Kubernetes resources from within VS Code.
* IntelliSense checks for resource requests or limits being set in pod specifications.
## Next steps
This article focused on how to run your cluster and workloads from an application developer perspective. For information about administrative best practices, see [Cluster operator best practices for isolation and resource management in Azure Kubernetes Service (AKS)][operator-best-practices-isolation].
To implement some of these best practices, see [Develop with Bridge to Kubernetes][btk].