
Commit 75f8db3

Updates from editor
1 parent 46e9402 commit 75f8db3


1 file changed (+21, -21 lines)


support/azure/azure-kubernetes/availability-performance/high-memory-consumption-disk-intensive-applications.md

Lines changed: 21 additions & 21 deletions
@@ -1,37 +1,37 @@
---
title: Troubleshoot High Memory Consumption in Disk-Intensive Applications
description: Helps identify and resolve excessive memory usage due to Linux kernel behaviors on Kubernetes pods.
-ms.date: 04/28/2025
+ms.date: 04/29/2025
ms.reviewer: claudiogodoy, v-weizhu
ms.service: azure-kubernetes-service
ms.custom: sap:Node/node pool availability and performance
---
# Troubleshoot high memory consumption in disk-intensive applications

-Disk input and output operations are costly, and most operating systems implement caching strategies for reading and writing data to the filesystem. [Linux kernel](https://www.kernel.org/doc) usually uses strategies such as the [page cache](https://www.kernel.org/doc/gorman/html/understand/understand013.html) to improve the overall performance. The primary goal of the page cache is to store data that's read from the filesystem in cache, making it available in memory for future read operations.
+Disk input and output operations are costly, and most operating systems implement caching strategies for reading and writing data to the filesystem. The [Linux kernel](https://www.kernel.org/doc) usually uses strategies such as the [page cache](https://www.kernel.org/doc/gorman/html/understand/understand013.html) to improve overall performance. The primary goal of the page cache is to store data read from the filesystem in the cache, making it available in memory for future read operations.

-This article helps you to identity and avoid high memory consumed by disk-intensive applications due to Linux kernel behaviors on Kubernetes pods.
+This article helps you identify and avoid the high memory consumption caused by disk-intensive applications due to Linux kernel behaviors on Kubernetes pods.

## Prerequisites

-- A tool to connect to the Kubernetes cluster, such as the `kubectl` tool. To install `kubectl` using the [Azure CLI](/cli/azure/install-azure-cli), run the [az aks install-cli](/cli/azure/aks#az-aks-install-cli) command.
+A tool to connect to the Kubernetes cluster, such as the `kubectl` tool. To install `kubectl` using the [Azure CLI](/cli/azure/install-azure-cli), run the [az aks install-cli](/cli/azure/aks#az-aks-install-cli) command.
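
If `kubectl` isn't installed yet, you can install it through the Azure CLI; for example:

```console
$ az aks install-cli
```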

## Symptoms

-When a disk-intensive application running on a pod perform frequent filesystem operations, high memory consumption might occur.
+When a disk-intensive application running on a pod performs frequent filesystem operations, high memory consumption might occur.

-The following table outlines the common symptoms of high memory consumption:
+The following table outlines common symptoms of high memory consumption:

| Symptom | Description |
| --- | --- |
-| The [working set](https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/#memory) metric too high | This issue occurs when there is a significant difference between the [working set](https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/#memory) metric reported by the [Kubernetes Metrics API](https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/#metrics-server) and the actual memory consumed by an application. |
-| Out-of-memory (OOM) kill | This issue indicates memory issues exist on your pod. |
-| Increased memory usage after heavy disk activity | After operations such as backups, large file reads/writes, or data imports, memory consumption rises. |
-| Memory usage grows indefinitely | The pod's memory consumption increases over time without reducing, like a memory leak, even if the application itself isnt leaking memory.|
+| The [working set](https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/#memory) metric is too high. | This issue occurs when there's a significant difference between the [working set](https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/#memory) metric reported by the [Kubernetes Metrics API](https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/#metrics-server) and the actual memory consumed by an application. |
+| Out-of-memory (OOM) kill. | This issue indicates that memory issues exist on your pod. |
+| Increased memory usage after heavy disk activity. | After operations such as backups, large file reads/writes, or data imports, memory consumption rises. |
+| Memory usage grows indefinitely. | The pod's memory consumption increases over time without reducing, like a memory leak, even if the application itself isn't leaking memory. |

## Troubleshooting checklist

-### Step 1: Inspect pod working set
+### Step 1: Inspect the pod working set

To inspect the working set of pods reported by the Kubernetes Metrics API, run the following [kubectl top pods](https://kubernetes.io/docs/reference/kubectl/generated/kubectl_top/) command:
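
As an illustration of what that command returns, a typical invocation looks like the following (the pod name and the values shown are hypothetical):

```console
$ kubectl top pods
NAME          CPU(cores)   MEMORY(bytes)
<POD_NAME>    15m          350Mi
```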

@@ -53,7 +53,7 @@ To inspect the memory statistics of the [cgroup](https://www.kernel.org/doc/html
$ kubectl exec <POD_NAME> -it -- bash
```

-2. Navigate to the `cgroup` statistics directory and list memory-related files:
+2. Navigate to the `cgroup` statistics directory and list the memory-related files:

```console
$ ls /sys/fs/cgroup | grep -e memory.stat -e memory.current
@@ -63,9 +63,9 @@ To inspect the memory statistics of the [cgroup](https://www.kernel.org/doc/html
- `memory.current`: Total memory currently used by the `cgroup` and its descendants.
- `memory.stat`: This breaks down the cgroup's memory footprint into different types of memory, type-specific details, and other information about the state and past events of the memory management system.

-All the values listed on those files are in bytes.
+All the values listed in those files are in bytes.

-3. Get an overview about how the memory consumption is distributed on the pod:
+3. Get an overview of how memory consumption is distributed on the pod:

```console
$ cat /sys/fs/cgroup/memory.current
@@ -95,11 +95,11 @@ The following table describes some memory segments:

| Segment | Description |
|---|---|
-| `anon` | Amount of memory used in anonymous mappings. The majority languages use this segment to allocate memory. |
-| `file` | Amount of memory used to cache filesystem data, including tmpfs and shared memory. |
-| `slab` | Amount of memory used for storing data structures in the Linux kernel. |
+| `anon` | The amount of memory used in anonymous mappings. Most languages use this segment to allocate memory. |
+| `file` | The amount of memory used to cache filesystem data, including tmpfs and shared memory. |
+| `slab` | The amount of memory used to store data structures in the Linux kernel. |

-Combined with the [Step 2](#step-2-inspect-pod-memory-statistics), the `anon` represents 5197824 bytes which isn't close to the total amount reported by the working set metric. The `slab` memory segment used by the Linux kernel represents 354682456 bytes, which is almost all the memory reported by working set metric on the pod.
+Combined with the results from [Step 2](#step-2-inspect-pod-memory-statistics), `anon` represents 5,197,824 bytes, which isn't close to the total amount reported by the working set metric. The `slab` memory segment used by the Linux kernel represents 354,682,456 bytes, which is almost all the memory reported by the working set metric on the pod.
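
The `anon` and `slab` figures discussed above come from the pod's `memory.stat` file; one way to pull out just those counters is shown below (a minimal sketch that reuses the values from the example above):

```console
$ grep -E '^(anon|slab) ' /sys/fs/cgroup/memory.stat
anon 5197824
slab 354682456
```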

### Step 4: Drop the kernel cache on a debugger pod
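
For context on what this step involves: the Linux kernel exposes a cache-drop control at `/proc/sys/vm/drop_caches`, where writing `1` drops the page cache, `2` drops reclaimable slab objects such as dentries and inodes, and `3` drops both. A minimal sketch, assuming a root shell on the node (for example, from a privileged debugger pod):

```console
$ sync
$ echo 3 > /proc/sys/vm/drop_caches
```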

@@ -143,21 +143,21 @@ Combined with the [Step 2](#step-2-inspect-pod-memory-statistics), the `anon` re
slab 392768
```

-If you observe a significant decrease in both working set and `slab` memory segment, you are experiencing the issue where a great amount of memory is used by the Linux kernel on the pod.
+If you observe a significant decrease in both the working set and the `slab` memory segment, you're experiencing an issue in which the Linux kernel is using a large amount of memory on the pod.

## Workaround: Configure appropriate memory limits and requests

The only effective workaround for high memory consumption on Kubernetes pods is to set realistic resource [limits and requests](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#requests-and-limits). For example:

-```ymal
+```yaml
resources:
  requests:
    memory: 30Mi
  limits:
    memory: 60Mi
```

-By configuring appropriate memory limits and requests in the Kubernetes or specification, you can ensure that Kubernetes manages memory allocation more efficiently, mitigating the impact of excessive kernel-level caching on pod memory usage.
+By configuring appropriate memory limits and requests in the Kubernetes pod specification, you can ensure that Kubernetes manages memory allocation more efficiently, mitigating the impact of excessive kernel-level caching on pod memory usage.

> [!CAUTION]
> Misconfigured pod memory limits can lead to problems such as OOMKilled errors.
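
After applying such limits, it can be worth confirming what the running pod actually received; one way to check (the pod name is a placeholder) is:

```console
$ kubectl get pod <POD_NAME> -o jsonpath='{.spec.containers[0].resources}'
```

The values returned should match the `requests` and `limits` defined in the manifest.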
