Commit f81c150

Update identify-memory-saturation-aks.md
Edit review per CI 7058
1 parent c1d9bde commit f81c150

File tree

1 file changed: +7 -8 lines changed

support/azure/azure-kubernetes/availability-performance/identify-memory-saturation-aks.md

Lines changed: 7 additions & 8 deletions
@@ -22,7 +22,7 @@ The following table outlines the common symptoms of memory saturation.

| Symptom | Description |
|---|---|
-| Unschedulable pods | Additional pods can't be scheduled if the node is close to its set memory limit. |
+| Unschedulable pods | More pods can't be scheduled if the node is close to its set memory limit. |
| Pod eviction | If a node is running out of memory, the kubelet can evict pods. Although the control plane tries to reschedule the evicted pods on other nodes that have resources, there's no guarantee that other nodes have sufficient memory to run these pods. |
| Node not ready | Memory saturation can cause `kubelet` and `containerd` to become unresponsive, eventually causing node readiness issues. |
| Out-of-memory (OOM) kill | An OOM problem occurs if the pod eviction can't prevent a node issue. For more information, see [Troubleshoot OOMkilled in AKS clusters](./troubleshoot-oomkilled-aks-clusters.md).|
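As a quick console check for these symptoms, a minimal sketch using standard kubectl commands (the filters and grep pattern are only illustrative) might look like this:

```console
# Pods that can't be scheduled stay in the Pending phase.
kubectl get pods --all-namespaces --field-selector=status.phase=Pending

# Evicted pods are reported in the pod status.
kubectl get pods --all-namespaces | grep -i evicted

# Nodes under memory pressure can report a NotReady status.
kubectl get nodes
```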
@@ -157,11 +157,11 @@ This procedure uses the kubectl commands in a console. It displays only the curr
```

> [!NOTE]
-> The percentage of CPU or memory usage for the node is based on the allocatable resources on the node rather than the actual node capacity.
+> The percentage of CPU or memory usage for the node is based on the allocatable resources on the node instead of on the actual node capacity.

---

-Now that you've identified the pods that are using high memory, you can identify the applications that are running on the pod or identify processes that may be consuming excess memory.
+Now that you've identified the pods that are using high memory, you can identify the applications that are running on the pod or identify processes that might be consuming excess memory.

### Step 2: Identify process level memory usage

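Regarding the note on allocatable resources above: a minimal sketch for comparing a node's total capacity with its allocatable resources, where `<node-name>` is a placeholder for one of your nodes, might look like this:

```console
# Capacity is the node's total resources; Allocatable is what remains after
# system and Kubernetes reservations, and it's the basis for the usage percentages.
kubectl describe node <node-name> | grep -A 6 -E "Capacity|Allocatable"

# Current usage, reported against allocatable resources.
kubectl top node <node-name>
```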
@@ -195,7 +195,7 @@ For advanced process level memory analysis, use [Inspektor Gadget](https://go.mi
kubectl gadget run top_process --sort -memoryRelative --filter k8s.podName==<pod-name>
```

-The output of the Inspektor Gadget `top_process` command resembles the following:
+The output of the Inspektor Gadget `top_process` command resembles the following output:

```output
@@ -207,18 +207,17 @@ For advanced process level memory analysis, use [Inspektor Gadget](https://go.mi
```

-You can use this output to identify the processes that are consuming the most memory on the node. The output can include the node name, namespace, pod name, container name, process ID (PID), command name (COMM), CPU and memory usage, check [the documentation](https://aka.ms/igtopprocess) for more details.
-
+You can use this output to identify the processes that are consuming the most memory on the node. The output can include the node name, namespace, pod name, container name, process ID (PID), command name (COMM), CPU, and memory usage. For more details, see [the documentation](https://aka.ms/igtopprocess).

### Step 3: Review best practices to avoid memory saturation

Review the following table to learn how to implement best practices for avoiding memory saturation.

| Best practice | Description |
|---|---|
-| Use memory [requests and limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits) | Kubernetes provides options to specify the minimum memory size (*request*) and the maximum memory size (*limit*) for a container. By configuring limits on pods, you can avoid memory pressure on the node. Make sure that the aggregate limits for all pods that are running doesn't exceed the node's available memory. This situation is called *overcommitting*. The Kubernetes scheduler allocates resources based on set requests and limits through [Quality of Service](https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/) (QoS). Without appropriate limits, the scheduler might schedule too many pods on a single node. This might eventually bring down the node. Additionally, while the kubelet is evicting pods, it prioritizes pods in which the memory usage exceeds their defined requests. We recommend that you set the memory request close to the actual usage. |
+| Use memory [requests and limits](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits) | Kubernetes provides options to specify the minimum memory size (*request*) and the maximum memory size (*limit*) for a container. By configuring limits on pods, you can avoid memory pressure on the node. Make sure that the aggregate limits for all pods that are running doesn't exceed the node's available memory. This situation is called *overcommitting*. The Kubernetes scheduler allocates resources based on set requests and limits through [Quality of Service](https://kubernetes.io/docs/tasks/configure-pod-container/quality-service-pod/) (QoS). Without appropriate limits, the scheduler might schedule too many pods on a single node. This situation might eventually bring down the node. Additionally, while the kubelet is evicting pods, it prioritizes pods in which the memory usage exceeds their defined requests. We recommend that you set the memory request close to the actual usage. |
| Enable the [horizontal pod autoscaler](/azure/aks/tutorial-kubernetes-scale?tabs=azure-cli#autoscale-pods) | By scaling the cluster, you can balance the requests across many pods to prevent memory saturation. This technique can reduce the memory footprint on the specific node. |
-| Use [anti-affinity tags](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity) | For scenarios in which memory is unbounded by design, you can use node selectors and affinity or anti-affinity tags, which can isolate the workload to specific nodes. By using anti-affinity tags, you can prevent other workloads from scheduling pods on these nodes. This reduces the memory saturation problem. |
+| Use [anti-affinity tags](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity) | For scenarios in which memory is unbounded by design, you can use node selectors and affinity or anti-affinity tags, which can isolate the workload to specific nodes. By using anti-affinity tags, you can prevent other workloads from scheduling pods on these nodes and reduce the memory saturation problem. |
| Choose [higher SKU VMs](https://azure.microsoft.com/pricing/details/virtual-machines/linux/) | VMs that have more random-access memory (RAM) are better suited to handle high memory usage. To use this option, you must create a new node pool, cordon the nodes (make them unschedulable), and drain the existing node pool. |
| Isolate [system and user workloads](/azure/aks/use-system-pools#system-and-user-node-pools) | We recommend that you run your applications on a user node pool. This configuration makes sure that you can isolate the Kubernetes-specific pods to the system node pool and maintain the cluster performance. |

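For the requests-and-limits and autoscaler rows in the preceding table, a minimal sketch of applying them from a console might look like the following; the deployment name and resource values are placeholders rather than recommendations:

```console
# Set a memory request close to actual usage and a limit to cap the container.
kubectl set resources deployment <deployment-name> --requests=memory=256Mi --limits=memory=512Mi

# Create a horizontal pod autoscaler so load is spread across more replicas.
kubectl autoscale deployment <deployment-name> --min=2 --max=10 --cpu-percent=70
```

Note that `kubectl autoscale` scales on CPU utilization; scaling on memory requires a HorizontalPodAutoscaler manifest that specifies a memory metric.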