support/azure/azure-kubernetes/availability-performance/identify-memory-saturation-aks.md (7 additions, 7 deletions)
@@ -1,9 +1,9 @@
---
title: Troubleshoot Memory Saturation in AKS Clusters
description: Troubleshoot memory saturation in Azure Kubernetes Service (AKS) clusters across namespaces and containers. Learn how to identify the hosting node.
ms.custom: sap:Node/node pool availability and performance
---
@@ -61,9 +61,9 @@ Container Insights is a feature within AKS that monitors container workload perf
1. Because the first node has the highest memory usage, select that node to investigate the memory usage of the pods that are running on the node.

:::image type="complex" source="./media/identify-memory-saturation-aks/containers-containerinsights-memorypressure.png" alt-text="Azure portal screenshot of a node's containers under the Nodes view in Container Insights within an Azure Kubernetes Service (AKS) cluster." lightbox="./media/identify-memory-saturation-aks/containers-containerinsights-memorypressure.png":::

The Azure portal screenshot shows a table of nodes. The first node is expanded to display an **Other processes** heading and a sublist of processes that are running within the first node. As for the nodes themselves, the table column values for the processes include **Name**, **Status**, **Max %** (the percentage of memory capacity that's used), **Max** (memory usage), **Containers**, **UpTime**, **Controller**, and **Trend Max % (1 bar = 15m)**. The processes also have an expand/collapse arrow icon next to their names.

Nine processes are listed under the node. The statuses are all **Ok**, the maximum percentage of memory used for the processes ranges from 16 to 0.3 percent, the maximum memory used is from 0.7 mc to 22 mc, the number of containers used is 1 to 3, and the uptime is 3 to 4 days. Unlike for the node, the processes all have a corresponding controller listed. In this screenshot, the controller names are prefixes of the process names, and they're hyperlinked.

:::image-end:::
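If you'd rather do this step from the command line instead of the portal, a rough CLI equivalent is the following sketch. It assumes the metrics server that backs `kubectl top` is running in the cluster (it is by default on AKS).

```bash
# Rank nodes by current memory usage to find the most saturated node.
kubectl top nodes --sort-by=memory
```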
@@ -91,7 +91,7 @@ This procedure uses the kubectl commands in a console. It displays only the curr
2. Get the list of pods that are running on the node and their memory usage by running the [kubectl get pods](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get) and [kubectl top pods](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#-em-pod-em-) commands:

```bash
kubectl get pods --all-namespaces --output wide \
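    --field-selector spec.nodeName=<node-name>   # hedged continuation: the diff hunk ends
                                                 # mid-command, so the exact flags in the
                                                 # article may differ; <node-name> is the
                                                 # node identified in the previous step

# Then check current per-pod memory usage (sorted here for convenience).
kubectl top pods --all-namespaces --sort-by=memory
```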
@@ -205,7 +205,7 @@ For advanced process level memory analysis, use [Inspektor Gadget](https://go.mi
You can use this output to identify the processes that are consuming the most memory on the node. The output can include the node name, namespace, pod name, container name, process ID (PID), command name (COMM), CPU, and memory usage. For more details, see [the documentation](https://aka.ms/igtopprocess).
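For orientation, here's a minimal sketch of collecting that per-process view with Inspektor Gadget's `top process` gadget. It assumes the `kubectl gadget` plugin is installed on your workstation; gadget and flag names can vary between Inspektor Gadget versions, so check `kubectl gadget top process --help` for the exact syntax.

```bash
# Illustrative only: deploy the gadgets to the cluster once, then stream
# per-process resource usage limited to the node that's under memory pressure.
kubectl gadget deploy
kubectl gadget top process --node <node-name>
```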
@@ -215,7 +215,7 @@ Review the following table to learn how to implement best practices for avoiding

| Best practice | Description |
|---|---|
| Use memory [requests and limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits) | Kubernetes provides options to specify the minimum memory size (_request_) and the maximum memory size (_limit_) for a container. By configuring limits on pods, you can avoid memory pressure on the node. Make sure that the aggregate limits for all pods that are running don't exceed the node's available memory. This situation is called _overcommitting_. The Kubernetes scheduler allocates resources based on set requests and limits through [Quality of Service](https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/) (QoS). Without appropriate limits, the scheduler might schedule too many pods on a single node. This situation might eventually bring down the node. Additionally, while the kubelet is evicting pods, it prioritizes pods in which the memory usage exceeds their defined requests. We recommend that you set the memory request close to the actual usage (see the example after this table). |
| Enable the [horizontal pod autoscaler](/azure/aks/tutorial-kubernetes-scale?tabs=azure-cli#autoscale-pods) | By scaling the cluster, you can balance the requests across many pods to prevent memory saturation. This technique can reduce the memory footprint on the specific node. |
| Use [anti-affinity tags](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity) | For scenarios in which memory is unbounded by design, you can use node selectors and affinity or anti-affinity tags, which can isolate the workload to specific nodes. By using anti-affinity tags, you can prevent other workloads from scheduling pods on these nodes and reduce the memory saturation problem. |
| Choose [higher SKU VMs](https://azure.microsoft.com/pricing/details/virtual-machines/linux/) | VMs that have more random-access memory (RAM) are better suited to handle high memory usage. To use this option, you must create a new node pool, cordon the nodes (make them unschedulable), and drain the existing node pool. |
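As a concrete illustration of the first best practice, the following commands show one way to apply memory requests and limits to an existing deployment. The deployment name, namespace, and sizes are placeholders; pick values based on the workload's observed usage.

```bash
# Set a memory request close to typical usage and a limit that caps worst-case growth.
# The values here are only examples.
kubectl set resources deployment <deployment-name> -n <namespace> \
    --requests=memory=256Mi \
    --limits=memory=512Mi

# Confirm the change and watch actual usage against the new limit.
kubectl describe deployment <deployment-name> -n <namespace>
kubectl top pods -n <namespace>
```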
You can [review the kubelet logs](/azure/aks/kubelet-logs) on the node to see whether there are messages indicating that the OOM killer was triggered at the time of the issue and that the pod's memory usage reached its limit.

Alternatively, you can [SSH into the node](/azure/aks/node-access) where the pod was running and check the kernel logs for any OOM messages. The following commands display which processes the OOM killer terminated:

`chroot /host # access the node session`

`grep -i "Memory cgroup out of memory" /var/log/syslog`
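Once you have a shell on the node (through SSH or a node debug pod), the same check can be scripted. This is a sketch, with the caveat that log locations differ by node OS image:

```bash
# Enter the host filesystem if you're in a debug pod rather than a direct SSH session.
chroot /host

# Search the system log for cgroup OOM kills.
grep -i "Memory cgroup out of memory" /var/log/syslog

# On images that don't write /var/log/syslog, the kernel ring buffer is an alternative.
dmesg | grep -i "out of memory"
```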
### Events

- Use `kubectl get events --sort-by=.lastTimestamp -n <namespace>` to find OOMKilled pods.

- Use the events section from the pod description to look for OOM-related messages:

  `kubectl describe pod <pod-name> -n <namespace>`
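To widen the search beyond a single namespace, one hedged approach is to scan events and container termination reasons cluster-wide. Event reasons vary somewhat by Kubernetes version, so treat the grep patterns as illustrative:

```bash
# Surface recent OOM- and eviction-related events across all namespaces.
kubectl get events --all-namespaces --sort-by=.lastTimestamp | grep -i -E "oom|evict"

# List containers whose last termination reason was OOMKilled.
kubectl get pods --all-namespaces \
    -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.status.containerStatuses[*].lastState.terminated.reason}{"\n"}{end}' \
    | grep OOMKilled
```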
## Handling OOMKilled for system pods
@@ -186,8 +186,8 @@ restart.
To solve this issue, review the requests and limits documentation to understand how to modify your deployment accordingly. For more information, see [Resource Management for Pods and Containers](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits).

`kubectl set resources deployment <deployment-name>`
This helps to confirm whether the pod is approaching or exceeding its memory limits.
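For example, a quick way to put current usage next to the configured limit (pod name and namespace are placeholders):

```bash
# Current memory usage of the pod (requires the metrics server).
kubectl top pod <pod-name> -n <namespace>

# Memory limits configured on each container in the pod, for comparison.
kubectl get pod <pod-name> -n <namespace> \
    -o jsonpath='{range .spec.containers[*]}{.name}{"\t"}{.resources.limits.memory}{"\n"}{end}'
```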
- Check for OOMKilled events:

  `kubectl get events --sort-by='.lastTimestamp' -n <namespace>`
To resolve this issue, engage the application vendor. If the app is from a third party, check whether they have known issues or memory-tuning guides. Also, depending on the application framework, ask the vendor to verify that they're using the latest version of Java or .NET, as recommended in [Memory saturation occurs in pods after cluster upgrade to Kubernetes 1.25](../create-upgrade-delete/aks-memory-saturation-after-upgrade.md).