
Commit 13b5035

Enhance PSI example CPU stress pod; fix PSI Prometheus metrics grep regex
Add CPU resource requests and limits to the example CPU stress pod to produce more reliable CPU stress. Fix PSI Prometheus metrics grep regex so that users can correctly query the metrics for the target containers.
1 parent 7179839, commit 13b5035

File tree

1 file changed: +8 -3 lines changed

content/en/docs/reference/instrumentation/understand-psi-metrics.md

Lines changed: 8 additions & 3 deletions
@@ -62,6 +62,11 @@ spec:
     - "stress"
     - "--cpus"
     - "1"
+    resources:
+      limits:
+        cpu: "500m"
+      requests:
+        cpu: "500m"
 ```
 
 Apply it to your cluster: `kubectl apply -f cpu-pressure-pod.yaml`
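
The added requests and limits are what make the CPU stress reproducible: `stress --cpus 1` keeps asking for a full CPU while the 500m limit continuously throttles the container, and that throttling is what drives `container_pressure_cpu_waiting_seconds_total` upward. A minimal sketch of how to confirm the throttling, assuming the node uses cgroup v2 and the stress image ships a shell with `cat` (neither is guaranteed by this commit):

```shell
# Hypothetical verification steps, not part of this commit.
# cgroup v2 reports the quota as "<quota> <period>"; a 500m limit maps to half a period.
kubectl exec cpu-pressure-pod -c cpu-stress -- cat /sys/fs/cgroup/cpu.max

# nr_throttled and throttled_usec in cpu.stat should keep growing while the pod runs.
kubectl exec cpu-pressure-pod -c cpu-stress -- cat /sys/fs/cgroup/cpu.stat
```
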
@@ -85,7 +90,7 @@ Query the `/metrics/cadvisor` endpoint to see the `container_pressure_cpu_waitin
 ```shell
 # Replace <node-name> with the name of the node where the pod is running
 kubectl get --raw "/api/v1/nodes/<node-name>/proxy/metrics/cadvisor" | \
-grep 'container_pressure_cpu_waiting_seconds_total{container="cpu-stress",pod="cpu-pressure-pod"}'
+grep 'container_pressure_cpu_waiting_seconds_total{container="cpu-stress"'
 ```
 The output should show an increasing value, indicating that the container is spending time stalled waiting for CPU resources.
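
The reason for the pattern change is worth spelling out: cAdvisor typically exports these series with additional labels (for example `id`, `image`, `name`, and `namespace`) between `container` and `pod`, so the fixed string `container="cpu-stress",pod="cpu-pressure-pod"` is unlikely to occur verbatim in the output. A small sketch against a made-up exposition line (the label values are hypothetical):

```shell
# Hypothetical cAdvisor exposition line; real label values will differ.
line='container_pressure_cpu_waiting_seconds_total{container="cpu-stress",id="/kubepods/pod1234/abcd",image="stress-image",name="cpu-stress",namespace="default",pod="cpu-pressure-pod"} 12.34'

# Old pattern: no match, because ",pod=" does not immediately follow the container label.
echo "$line" | grep 'container_pressure_cpu_waiting_seconds_total{container="cpu-stress",pod="cpu-pressure-pod"}'

# New pattern: matches on the metric name and container label only.
echo "$line" | grep 'container_pressure_cpu_waiting_seconds_total{container="cpu-stress"'
```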

@@ -139,7 +144,7 @@ Query the `/metrics/cadvisor` endpoint to see the `container_pressure_memory_wai
 ```shell
 # Replace <node-name> with the name of the node where the pod is running
 kubectl get --raw "/api/v1/nodes/<node-name>/proxy/metrics/cadvisor" | \
-grep 'container_pressure_memory_waiting_seconds_total{container="memory-stress",pod="memory-pressure-pod"}'
+grep 'container_pressure_memory_waiting_seconds_total{container="memory-stress"'
 ```
 In the output, you will observe an increasing value for the metric, indicating that the system is under significant memory pressure.

@@ -188,7 +193,7 @@ Query the `/metrics/cadvisor` endpoint to see the `container_pressure_io_waiting
 ```shell
 # Replace <node-name> with the name of the node where the pod is running
 kubectl get --raw "/api/v1/nodes/<node-name>/proxy/metrics/cadvisor" | \
-grep 'container_pressure_io_waiting_seconds_total{container="io-stress",pod="io-pressure-pod"}'
+grep 'container_pressure_io_waiting_seconds_total{container="io-stress"'
 ```
 You will see the metric's value increase as the Pod continuously writes to disk.

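Because the memory and I/O hunks make the same change, all three counters can also be watched with one query; the combined `grep -E` below is an illustration, not something introduced by this commit:

```shell
# Illustrative only. Replace <node-name> with the node running the example pods.
kubectl get --raw "/api/v1/nodes/<node-name>/proxy/metrics/cadvisor" | \
  grep -E 'container_pressure_(cpu|memory|io)_waiting_seconds_total\{container="(cpu|memory|io)-stress"'
```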