
Commit 17842d7

editorial changes 2
1 parent 6f38ae1 commit 17842d7


2 files changed: +35 -34 lines changed


support/azure/azure-kubernetes/availability-performance/high-memory-consumption-disk-intensive-applications.md

Lines changed: 34 additions & 33 deletions
@@ -10,55 +10,60 @@ ms.custom: sap:Node/node pool availability and performance

Disk input and output operations are costly, and most operating systems implement caching strategies for reading and writing data to the filesystem. [Linux kernel](https://www.kernel.org/doc) usually uses strategies such as the [page cache](https://www.kernel.org/doc/gorman/html/understand/understand013.html) to improve the overall performance. The primary goal of the page cache is to store data that's read from the filesystem in cache, making it available in memory for future read operations.
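As a rough illustration of the page cache in action (the file path below is a placeholder and the exact numbers vary by machine), reading a large file grows the "Cached" value in `/proc/meminfo` while free memory stays roughly flat:

```console
$ grep -E 'MemFree|^Cached' /proc/meminfo      # "Cached" reports the current page cache size
$ cat /var/log/large-example.log > /dev/null   # placeholder file; reading it populates the page cache
$ grep -E 'MemFree|^Cached' /proc/meminfo      # "Cached" grows, so a repeated read is served from memory
```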

-When disk-intensive applications perform frequent filesystem operations, high memory consumption might occur. This article helps you to identity and resolve this issue due to Linux kernel behaviors on Kubernetes pods.
+This article helps you to identify and avoid high memory consumption caused by disk-intensive applications due to Linux kernel behaviors on Kubernetes pods.

## Prerequisites

-- A tool to connect to the Kubernetes cluster, such as the kubectl tool. To install kubectl using the [Azure CLI](/cli/azure/install-azure-cli), run the [az aks install-cli](/cli/azure/aks#az-aks-install-cli) command.
+- A tool to connect to the Kubernetes cluster, such as the `kubectl` tool. To install `kubectl` using the [Azure CLI](/cli/azure/install-azure-cli), run the [az aks install-cli](/cli/azure/aks#az-aks-install-cli) command.

## Symptoms

+When a disk-intensive application running on a pod performs frequent filesystem operations, high memory consumption might occur.
+
The following table outlines the common symptoms of memory saturation:

| Symptom | Description |
| --- | --- |
-| [Working set](https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/#memory) metric too high | This issue occurs when there is a significant difference between the working_set metric reported by the [Kubernetes Metrics API](https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/#metrics-server) and the actual memory consumed by an application. |
+| The [working set](https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/#memory) metric is too high | This issue occurs when there is a significant difference between the [working set](https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/#memory) metric reported by the [Kubernetes Metrics API](https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/#metrics-server) and the actual memory consumed by an application. |
| Out-of-memory (OOM) kill | This issue indicates memory issues exist on your pod. |

## Troubleshooting checklist

### Step 1: Inspect pod working set

-1. Identify which pod is consuming excessive memory by following the guide[Troubleshoot memory saturation in AKS clusters](identify-memory-saturation-aks.md).
-2. Use the following [kubectl top pods](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_top/) command to show the actual [Working_Set](https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/#memory) reported by the [Kubernetes metrics API](https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/#metrics-server):
+To inspect the working set of pods reported by the Kubernetes Metrics API, run the following [kubectl top pods](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_top/) command:

-```console
-$ kubectl top pods -A | grep -i "<DEPLOYMENT_NAME>"
-NAME CPU(cores) MEMORY(bytes)
-my-deployment-fc94b7f98-m9z2l 1m 344Mi
-```
+```console
+$ kubectl top pods -A | grep -i "<DEPLOYMENT_NAME>"
+NAME CPU(cores) MEMORY(bytes)
+my-deployment-fc94b7f98-m9z2l 1m 344Mi
+```
+
+For detailed steps about how to identify which pod is consuming excessive memory, see [Troubleshoot memory saturation in AKS clusters](identify-memory-saturation-aks.md#step-1-identify-nodes-that-have-memory-saturation).

### Step 2: Inspect pod memory statistics

-Inspect the memory statistics of the [cgroup](https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html) of the pod by following these steps:
+To inspect the memory statistics of the [cgroup](https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html) on the pod that's consuming excessive memory, follow these steps:

1. Connect to the pod:

```console
$ kubectl exec <POD_NAME> -it -- bash
```

-2. Navigate to the cgroup statistics directory and list memory-related files:
+2. Navigate to the `cgroup` statistics directory and list memory-related files:

```console
$ ls /sys/fs/cgroup | grep -e memory.stat -e memory.current
memory.current memory.stat
```

-- `memory.current`: Total memory currently used by the cgroup and its descendants.
-- `memory.stat`: This breaks down the cgroup's memory footprint into different types of memory, type-specific details, and other information on the state and past events of the memory management system.
+- `memory.current`: Total memory currently used by the `cgroup` and its descendants.
+- `memory.stat`: This breaks down the cgroup's memory footprint into different types of memory, type-specific details, and other information about the state and past events of the memory management system.
+
+All the values listed in those files are in bytes.

-3. All the values listed on those files are in bytes. Get an overview of how the memory consumption is distributed on the `pod`:
+3. Get an overview of how the memory consumption is distributed on the pod:

```console
$ cat /sys/fs/cgroup/memory.current
@@ -78,29 +83,25 @@ Inspect the memory statistics of the [cgroup](https://www.kernel.org/doc/html/la
...
```

-`cAdvisor` uses `memory.current` and `inactive_file` to compute the working set metric. You can replicate the calculation using the following formula:
+`cAdvisor` uses `memory.current` and `inactive_file` to compute the working set metric. You can replicate the calculation using the following formula:

-```sh
-working_set = (memory.current - inactive_file) / 1048576
-= (10645012480 - 10256207872) / 1048576
-= 370 MB
-```
+working_set = (memory.current - inactive_file) / 1048576
+            = (10645012480 - 10256207872) / 1048576
+            = 370 MB
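As a minimal sketch, the same calculation can be reproduced inside the pod from the cgroup v2 files, assuming `awk` is available in the container image:

```console
$ current=$(cat /sys/fs/cgroup/memory.current)
$ awk -v current="$current" '/^inactive_file / { printf "working_set = %d MiB\n", (current - $2) / 1048576 }' /sys/fs/cgroup/memory.stat
working_set = 370 MiB
```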

-### Step 3: Determine kernel vs. application memory consumption
+### Step 3: Determine kernel and application memory consumption

The following table describes some memory segments:

| Segment | Description |
|---|---|
-| anon | Amount of memory used in anonymous mappings. The majority languages use this segment to allocate memory. |
+| `anon` | Amount of memory used in anonymous mappings. Most languages use this segment to allocate memory. |
| file | Amount of memory used to cache filesystem data, including tmpfs and shared memory. |
-| slab | Amount of memory used for storing in-kernel data structures. |
+| `slab` | Amount of memory used for storing data structures in the Linux kernel. |

-The majority of languages use the anon memory segment to allocate resources. In this case, the `anon` represents 5197824 bytes which is not even close to the total amount reported by the working set metric.
+In this case, `anon` represents 5197824 bytes, which isn't even close to the total amount reported by the working set metric. The `slab` memory segment used by the Linux kernel represents 354682456 bytes, which is almost all the memory reported by the working set metric on the pod.
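As a quick sketch, both segments can be read directly from `memory.stat`; the values shown here are the ones quoted above:

```console
$ grep -E '^(anon|slab) ' /sys/fs/cgroup/memory.stat
anon 5197824
slab 354682456
```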

-On the other hand, there is one of the segments that Kernel uses the `slab` representing 354682456 bytes, which is almost all the memory reported by working set metric on this pod.
-
-### Step 4: Run a node drop cache
+### Step 4: Drop the kernel cache on a debugger pod

> [!NOTE]
> This step might lead to availability and performance issues. Avoid running it in a production environment.
@@ -126,7 +127,7 @@ On the other hand, there is one of the segments that Kernel uses the `slab` repr
echo 1 > /proc/sys/vm/drop_caches
```
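As a side note, `drop_caches` accepts the values 1 (free the page cache), 2 (free reclaimable slab objects such as dentries and inodes), and 3 (free both). Running `sync` first flushes dirty pages so that more cache can be reclaimed:

```console
$ sync && echo 1 > /proc/sys/vm/drop_caches
```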

-4. Verify if the command in the previous step causes the effect by repeating [Step 1](#step-1-inspect-pod-working-set) and [Step 2](#step-2-inspect-pod-memory-statistics):
+4. Verify whether the command in the previous step has any effect by repeating [Step 1](#step-1-inspect-pod-working-set) and [Step 2](#step-2-inspect-pod-memory-statistics):

```console
$ kubectl top pods -A | grep -i "<DEPLOYMENT_NAME>"
@@ -142,9 +143,9 @@ On the other hand, there is one of the segments that Kernel uses the `slab` repr
slab 392768
```

-If you observe a significant decrease in both working set and slab memory segment, you are experiencing the issue where a great amount of pod's memory is used by the Kernel.
+If you observe a significant decrease in both the working set and the `slab` memory segment, you're experiencing the issue in which a large amount of the pod's memory is used by the Linux kernel.

-## Workaround: Set appropriate memory limits and requests
+## Workaround: Configure appropriate memory limits and requests

The only effective workaround for high memory consumption on Kubernetes pods is to set realistic resource [limits and requests](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits). For example:

@@ -158,8 +159,8 @@ resources:

By configuring appropriate memory limits and requests in the Kubernetes pod specification, you can ensure that Kubernetes manages memory allocation more efficiently, mitigating the impact of excessive kernel-level caching on pod memory usage.
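As a hedged sketch, the same requests and limits can also be applied imperatively; the deployment name and the memory values below are placeholders:

```console
$ kubectl set resources deployment <DEPLOYMENT_NAME> --requests=memory=256Mi --limits=memory=512Mi
```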

-> [!NOTE]
-> Misconfigured pod memory limits can lead to problems such as OOM-Killed errors.
+> [!CAUTION]
+> Misconfigured pod memory limits can lead to problems such as OOMKilled errors.
## References

support/azure/azure-kubernetes/toc.yml

Lines changed: 1 addition & 1 deletion
@@ -168,7 +168,7 @@
href: availability-performance/identify-high-cpu-consuming-containers-aks.md
- name: Identify memory saturation in AKS clusters
href: availability-performance/identify-memory-saturation-aks.md
-- name: Troubleshoot high memory consumption in disk-intensive applications
+- name: Troubleshoot high memory consumption due to Linux kernel behaviors
href: availability-performance/high-memory-consumption-disk-intensive-applications.md
- name: Troubleshoot cluster service health probe mode issues
href: availability-performance/cluster-service-health-probe-mode-issues.md
