
Commit 00ee49a

Merge pull request #5409 from roycaihw/psi-condition-updates

KEP-4205: Split the two phases into two KEPs and update Alpha and Beta requirements

2 parents 2197b2e + ec090f0

3 files changed: +87 lines, -116 lines

keps/prod-readiness/sig-node/4205.yaml
Lines changed: 2 additions & 0 deletions

````diff
@@ -4,3 +4,5 @@
 kep-number: 4205
 alpha:
   approver: "@johnbelamaric"
+beta:
+  approver: "@johnbelamaric"
````

keps/sig-node/4205-psi-metric/README.md
Lines changed: 72 additions & 109 deletions

````diff
@@ -1,4 +1,4 @@
-# KEP-4205: PSI Based Node Conditions
+# KEP-4205: Expose PSI Metrics
 <!-- toc -->
 - [Release Signoff Checklist](#release-signoff-checklist)
 - [Summary](#summary)
@@ -8,22 +8,18 @@
 - [Proposal](#proposal)
   - [User Stories (Optional)](#user-stories-optional)
     - [Story 1](#story-1)
-    - [Story 2](#story-2)
   - [Risks and Mitigations](#risks-and-mitigations)
 - [Design Details](#design-details)
-  - [Phase 1](#phase-1)
     - [CPU](#cpu)
     - [Memory](#memory)
    - [IO](#io)
-  - [Phase 2 to add PSI based actions.](#phase-2-to-add-psi-based-actions)
   - [Test Plan](#test-plan)
     - [Prerequisite testing updates](#prerequisite-testing-updates)
     - [Unit tests](#unit-tests)
     - [Integration tests](#integration-tests)
     - [e2e tests](#e2e-tests)
   - [Graduation Criteria](#graduation-criteria)
-    - [Phase 1: Alpha](#phase-1-alpha)
-    - [Phase 2: Alpha](#phase-2-alpha)
+    - [Alpha](#alpha)
     - [Beta](#beta)
     - [GA](#ga)
     - [Deprecation](#deprecation)
@@ -85,7 +81,7 @@ Items marked with (R) are required *prior to targeting to a milestone / release*
 
 ## Summary
 
-This KEP proposes adding support in kubelet to read Pressure Stall Information (PSI) metric pertaining to CPU, Memory and IO resources exposed from cAdvisor and runc. This will enable kubelet to report node conditions which will be utilized to prevent scheduling of pods on nodes experiencing significant resource constraints.
+This KEP proposes adding support in kubelet to read Pressure Stall Information (PSI) metric pertaining to CPU, Memory and IO resources exposed from cAdvisor and runc.
 
 ## Motivation
 
@@ -98,11 +94,6 @@ In short, PSI metric are like barometers that provide fair warning of impending
 This proposal aims to:
 1. Enable the kubelet to have the PSI metric of cgroupv2 exposed from cAdvisor and Runc.
 2. Enable the pod level PSI metric and expose it in the Summary API.
-3. Utilize the node level PSI metric to set node condition and node taints.
-
-It will have two phases:
-Phase 1: includes goal 1, 2
-Phase 2: includes goal 3
 
 ### Non-Goals
 
@@ -119,23 +110,13 @@ Today, to identify disruptions caused by resource crunches, Kubernetes users need to
 install node exporter to read PSI metric. With the feature proposed in this enhancement,
 PSI metric will be available for users in the Kubernetes metrics API.
 
-#### Story 2
-
-Kubernetes users want to prevent new pods to be scheduled on the nodes that have resource starvation. By using PSI metric, the kubelet will set Node Condition to avoid pods being scheduled on nodes under high resource pressure. The node controller could then set a [taint on the node based on these new Node Conditions](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-nodes-by-condition).
-
 ### Risks and Mitigations
 
-There are no significant risks associated with Phase 1 implementation that involves integrating
+There are no significant risks associated with integrating
 the PSI metric in kubelet from either from cadvisor runc libcontainer library or kubelet's CRI runc libcontainer implementation which doesn't involve any shelled binary operations.
 
-Phase 2 involves utilizing the PSI metric to report node conditions. There is a potential
-risk of early reporting for nodes under pressure. We intend to address this concern
-by conducting careful experimentation with PSI threshold values to identify the optimal
-default threshold to be used for reporting the nodes under heavy resource pressure.
-
 ## Design Details
 
-#### Phase 1
 1. Add new Data structures PSIData and PSIStats corresponding to the PSI metric output format as following:
 
 ```
@@ -144,16 +125,25 @@ full avg10=0.00 avg60=0.00 avg300=0.00 total=0
 ```
 
 ```go
+// PSI data for an individual resource.
 type PSIData struct {
-    Avg10 *float64 `json:"avg10"`
-    Avg60 *float64 `json:"avg60"`
-    Avg300 *float64 `json:"avg300"`
-    Total *float64 `json:"total"`
+    // Total time duration for tasks in the cgroup have waited due to congestion.
+    // Unit: nanoseconds.
+    Total uint64 `json:"total"`
+    // The average (in %) tasks have waited due to congestion over a 10 second window.
+    Avg10 float64 `json:"avg10"`
+    // The average (in %) tasks have waited due to congestion over a 60 second window.
+    Avg60 float64 `json:"avg60"`
+    // The average (in %) tasks have waited due to congestion over a 300 second window.
+    Avg300 float64 `json:"avg300"`
 }
 
+// PSI statistics for an individual resource.
 type PSIStats struct {
-    Some *PSIData `json:"some,omitempty"`
-    Full *PSIData `json:"full,omitempty"`
+    // PSI data for some tasks in the cgroup.
+    Some PSIData `json:"some,omitempty"`
+    // PSI data for all tasks in the cgroup.
+    Full PSIData `json:"full,omitempty"`
 }
 ```
 
````
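To make the new shape concrete, here is a minimal, self-contained sketch of how one line of the pressure-file format quoted above maps onto the proposed `PSIData` fields. The `parsePSILine` helper is purely illustrative — in the actual design the values are surfaced by cAdvisor/runc rather than parsed by the kubelet itself:

```go
package main

import (
    "fmt"
    "strconv"
    "strings"
)

// PSIData mirrors the structure proposed in the diff above.
type PSIData struct {
    Total  uint64  `json:"total"`
    Avg10  float64 `json:"avg10"`
    Avg60  float64 `json:"avg60"`
    Avg300 float64 `json:"avg300"`
}

// parsePSILine decodes one line of a cgroupv2 pressure file, e.g.
// "some avg10=0.12 avg60=0.08 avg300=0.02 total=123456", returning
// the line kind ("some" or "full") and the decoded values.
func parsePSILine(line string) (string, PSIData, error) {
    var data PSIData
    fields := strings.Fields(line)
    if len(fields) != 5 {
        return "", data, fmt.Errorf("unexpected PSI line: %q", line)
    }
    var err error
    for _, kv := range fields[1:] {
        key, value, ok := strings.Cut(kv, "=")
        if !ok {
            return "", data, fmt.Errorf("unexpected PSI field: %q", kv)
        }
        switch key {
        case "total":
            data.Total, err = strconv.ParseUint(value, 10, 64)
        case "avg10":
            data.Avg10, err = strconv.ParseFloat(value, 64)
        case "avg60":
            data.Avg60, err = strconv.ParseFloat(value, 64)
        case "avg300":
            data.Avg300, err = strconv.ParseFloat(value, 64)
        }
        if err != nil {
            return "", data, err
        }
    }
    return fields[0], data, nil
}

func main() {
    kind, data, err := parsePSILine("some avg10=0.12 avg60=0.08 avg300=0.02 total=123456")
    if err != nil {
        panic(err)
    }
    fmt.Printf("%s: %+v\n", kind, data)
}
```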
````diff
@@ -165,15 +155,15 @@ metric data will be available through CRI instead.
 ```go
 type CPUStats struct {
     // PSI stats of the overall node
-    PSI cadvisorapi.PSIStats `json:"psi,omitempty"`
+    PSI *PSIStats `json:"psi,omitempty"`
 }
 ```
 
 ##### Memory
 ```go
 type MemoryStats struct {
     // PSI stats of the overall node
-    PSI cadvisorapi.PSIStats `json:"psi,omitempty"`
+    PSI *PSIStats `json:"psi,omitempty"`
 }
 ```
 
@@ -185,7 +175,7 @@ type IOStats struct {
     Time metav1.Time `json:"time"`
 
     // PSI stats of the overall node
-    PSI cadvisorapi.PSIStats `json:"psi,omitempty"`
+    PSI *PSIStats `json:"psi,omitempty"`
 }
 
 type NodeStats struct {
````
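One practical consequence of moving from `cadvisorapi.PSIStats` values to a `*PSIStats` pointer with `omitempty` is that the field is simply absent when a node does not report PSI (feature gate off, or no cgroupv2), so Summary API consumers should nil-check before reading. A consumer-side sketch, with the types abbreviated from the structures above:

```go
package main

import "fmt"

type PSIData struct {
    Total  uint64  `json:"total"`
    Avg10  float64 `json:"avg10"`
    Avg60  float64 `json:"avg60"`
    Avg300 float64 `json:"avg300"`
}

type PSIStats struct {
    Some PSIData `json:"some,omitempty"`
    Full PSIData `json:"full,omitempty"`
}

type CPUStats struct {
    PSI *PSIStats `json:"psi,omitempty"`
}

// someCPUAvg10 reports the 10s "some" CPU pressure average, and whether
// the node reported PSI at all.
func someCPUAvg10(cpu CPUStats) (float64, bool) {
    if cpu.PSI == nil {
        return 0, false
    }
    return cpu.PSI.Some.Avg10, true
}

func main() {
    var stats CPUStats // decoded from a Summary API response in practice
    if _, ok := someCPUAvg10(stats); !ok {
        fmt.Println("PSI not reported on this node")
    }
}
```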
````diff
@@ -194,49 +184,6 @@ type NodeStats struct {
 }
 ```
 
-#### Phase 2 to add PSI based actions.
-**Note:** These actions are tentative, and will depend on different the outcome from testing and discussions with sig-node members, users, and other folks.
-
-1. Introduce a new kubelet config parameter, pressure threshold, to let users specify the pressure percentage beyond which the kubelet would report the node condition to disallow workloads to be scheduled on it.
-
-2. Add new node conditions corresponding to high PSI (beyond threshold levels) on CPU, Memory and IO.
-
-```go
-// These are valid conditions of the node. Currently, we don't have enough information to decide
-// node condition.
-const (
-
-    // Conditions based on pressure at system level cgroup.
-    NodeSystemCPUContentionPressure NodeConditionType = "SystemCPUContentionPressure"
-    NodeSystemMemoryContentionPressure NodeConditionType = "SystemMemoryContentionPressure"
-    NodeSystemDiskContentionPressure NodeConditionType = "SystemDiskContentionPressure"
-
-    // Conditions based on pressure at kubepods level cgroup.
-    NodeKubepodsCPUContentionPressure NodeConditionType = "KubepodsCPUContentionPressure"
-    NodeKubepodsMemoryContentionPressure NodeConditionType = "KubepodsMemoryContentionPressure"
-    NodeKubepodsDiskContentionPressure NodeConditionType = "KubepodsDiskContentionPressure"
-)
-```
-
-3. Kernel collects PSI data for 10s, 60s and 300s timeframes. To determine the optimal observation timeframe, it is necessary to conduct tests and benchmark performance.
-In theory, 10s interval might be rapid to taint a node with NoSchedule effect. Therefore, as an initial approach, opting for a 60s timeframe for observation logic appears more appropriate.
-
-Add the observation logic to add node condition and taint as per following scenarios:
-* If avg60 >= threshold, then record an event indicating high resource pressure.
-* If avg60 >= threshold and is trending higher i.e. avg10 >= threshold, then set Node Condition for high resource contention pressure. This should ensure no new pods are scheduled on the nodes under heavy resource contention pressure.
-* If avg60 >= threshold for a node tainted with NoSchedule effect, and is trending lower i.e. avg10 <= threshold, record an event mentioning the resource contention pressure is trending lower.
-* If avg60 < threshold for a node tainted with NoSchedule effect, remove the NodeCondition.
-
-4. Collaborate with sig-scheduling to modify TaintNodesByCondition feature to integrate new taints for the new Node Conditions introduced in this enhancement.
-
-* `node.kubernetes.io/memory-contention-pressure=:NoSchedule`
-* `node.kubernetes.io/cpu-contention-pressure=:NoSchedule`
-* `node.kubernetes.io/disk-contention-pressure=:NoSchedule`
-
-5. Perform experiments to finalize the default optimal pressure threshold value.
-
-6. Add a new feature gate PSINodeCondition, and guard the node condition related logic behind the feature gate. Set `--feature-gates=PSINodeCondition=true` to enable the feature.
-
 ### Test Plan
 
 <!--
@@ -282,6 +229,7 @@ This can inform certain test coverage improvements that we want to do before
 extending the production code to implement this enhancement.
 -->
 - `k8s.io/kubernetes/pkg/kubelet/server/stats`: `2023-10-04` - `74.4%`
+- `k8s.io/kubernetes/pkg/kubelet/stats`: `2025-06-10` - `77.4%`
 
 ##### Integration tests
 
@@ -300,6 +248,8 @@ For Beta and GA, add links to added tests together with links to k8s-triage for
 https://storage.googleapis.com/k8s-triage/index.html
 -->
 
+Within Kubernetes, the feature is implemented solely in kubelet. Therefore a Kubernetes integration test doesn't apply here.
+
 Any identified external user of either of these endpoints (prometheus, metrics-server) should be tested to make sure they're not broken by new fields in the API response.
 
 ##### e2e tests
@@ -314,32 +264,25 @@ https://storage.googleapis.com/k8s-triage/index.html
 We expect no non-infra related flakes in the last month as a GA graduation criteria.
 -->
 
-- <test>: <link to test coverage>
+- `test/e2e_node/summary_test.go`: `https://storage.googleapis.com/k8s-triage/index.html?test=test%2Fe2e_node%2Fsummary_test.go`
 
 ### Graduation Criteria
 
-#### Phase 1: Alpha
+#### Alpha
 
 - PSI integrated in kubelet behind a feature flag.
 - Unit tests to check the fields are populated in the
 Summary API response.
 
-#### Phase 2: Alpha
-
-- Implement Phase 2 of the enhancement which enables kubelet to
-report node conditions based off PSI values.
-- Initial e2e tests completed and enabled if CRI implementation supports
-it.
-- Add documentation for the feature.
-
 #### Beta
 
 - Feature gate is enabled by default.
 - Extend e2e test coverage.
 - Allowing time for feedback.
 
 #### GA
-- TBD
+- Gather evidence of real world usage.
+- No major issue reported.
 
 #### Deprecation
 
@@ -406,22 +349,16 @@ well as the [existing list] of feature gates.
 -->
 
 - [X] Feature gate (also fill in values in `kep.yaml`)
-  - Feature gate name: PSINodeCondition
+  - Feature gate name: KubeletPSI
   - Components depending on the feature gate: kubelet
-- [ ] Other
-  - Describe the mechanism:
-  - Will enabling / disabling the feature require downtime of the control
-    plane?
-  - Will enabling / disabling the feature require downtime or reprovisioning
-    of a node?
 
 ###### Does enabling the feature change any default behavior?
 
 <!--
 Any change of default behavior may be surprising to users or break existing
 automations, so be extremely careful here.
 -->
-Not in Phase 1. Phase 2 is TBD in K8s 1.31.
+No.
 
 ###### Can the feature be disabled once it has been enabled (i.e. can we roll back the enablement)?
 
````
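As a usage sketch, the renamed gate can be enabled through the kubelet configuration file using the standard `featureGates` stanza of KubeletConfiguration (how that file is rolled out is cluster-specific, so treat this as illustrative):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  KubeletPSI: true
```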
Yes
439376

440377
###### What happens if we reenable the feature if it was previously rolled back?
441-
When the feature is disabled, the Node Conditions will still exist on the nodes. However,
442-
they won't be any consumers of these node conditions. When the feature is re-enabled,
443-
the kubelet will override out of date Node Conditions as expected.
378+
No PSI metrics will be available in kubelet Summary API nor Prometheus metrics if the
379+
feature was rolled back.
444380

445381
###### Are there any tests for feature enablement/disablement?
446382

@@ -476,13 +412,34 @@ rollout. Similarly, consider large clusters and how enablement/disablement
476412
will rollout across nodes.
477413
-->
478414

415+
The PSI metrics in kubelet Summary API and Prometheus metrics are for monitoring purpose,
416+
and are not used by Kubernetes itself to inform workload lifecycle decisions. Therefore it should
417+
not impact running workloads.
418+
419+
If there is a bug and kubelet fails to serve the metrics during rollout, the kubelet Summary API
420+
and Prometheus metrics could be corrupted, and other components that depend on those metrics could
421+
be impacted. Disabling the feature gate / rolling back the feature should be safe.
422+
479423
###### What specific metrics should inform a rollback?
480424

481425
<!--
482426
What signals should users be paying attention to when the feature is young
483427
that might indicate a serious problem?
484428
-->
485429

430+
PSI metrics exposed at kubelet `/metrics/cadvisor` endpoint:
431+
432+
```
433+
container_pressure_cpu_stalled_seconds_total
434+
container_pressure_cpu_waiting_seconds_total
435+
container_pressure_memory_stalled_seconds_total
436+
container_pressure_memory_waiting_seconds_total
437+
container_pressure_io_stalled_seconds_total
438+
container_pressure_io_waiting_seconds_total
439+
```
440+
441+
kubelet Summary API at the `/stats/summary` endpoint.
442+
486443
###### Were upgrade and rollback tested? Was the upgrade->downgrade->upgrade path tested?
487444

488445
<!--
@@ -491,12 +448,23 @@ Longer term, we may want to require automated upgrade/rollback tests, but we
491448
are missing a bunch of machinery and tooling and can't do that now.
492449
-->
493450

451+
Test plan:
452+
- Create pods when the feature is alpha and disabled
453+
- Upgrade kubelet so the feature is beta and enabled
454+
- Pods should continue to run
455+
- PSI metrics should be reported in kubelet Summary API and Prometheus metrics
456+
- Roll back kubelet to previous version
457+
- Pods should continue to run
458+
- PSI metrics should no longer be reported
459+
494460
###### Is the rollout accompanied by any deprecations and/or removals of features, APIs, fields of API types, flags, etc.?
495461

496462
<!--
497463
Even if applying deprecation policies, they may still surprise some users.
498464
-->
499465

466+
No
467+
500468
### Monitoring Requirements
501469

502470
<!--
@@ -513,13 +481,9 @@ Ideally, this should be a metric. Operations against the Kubernetes API (e.g.,
513481
checking if there are objects with field X set) may be a last resort. Avoid
514482
logs or events for this purpose.
515483
-->
516-
For Phase 1:
517484
Use `kubectl get --raw "/api/v1/nodes/{$nodeName}/proxy/stats/summary"` to call Summary API. If the PSIStats field is seen in the API response,
518485
the feature is available to be used by workloads.
519486

520-
For Phase 2:
521-
TBD
522-
523487
###### How can someone using this feature know that it is working for their instance?
524488

525489
<!--
@@ -531,13 +495,8 @@ and operation of this feature.
531495
Recall that end users cannot usually observe component logs or access metrics.
532496
-->
533497

534-
- [ ] Events
535-
- Event Reason:
536-
- [ ] API .status
537-
- Condition name:
538-
- Other field:
539-
- [ ] Other (treat as last resort)
540-
- Details:
498+
- [x] Other (treat as last resort)
499+
- Details: The feature is only about metrics surfacing. One can know that it is working by reading the metrics.
541500

542501
###### What are the reasonable SLOs (Service Level Objectives) for the enhancement?
543502

@@ -556,6 +515,8 @@ These goals will help you determine what you need to measure (SLIs) in the next
556515
question.
557516
-->
558517

518+
kubelet Summary API and Prometheus metrics should continue serving traffics meeting their originally targeted SLOs
519+
559520
###### What are the SLIs (Service Level Indicators) an operator can use to determine the health of the service?
560521

561522
<!--
@@ -658,10 +619,12 @@ NA
658619
## Implementation History
659620

660621
- 2023/09/13: Initial proposal
622+
- 2025/06/10: Drop Phase 2 from this KEP. Phase 2 will be tracked in its own KEP to allow separate milestone tracking
623+
- 2025/06/10: Update the proposal with Beta requirements
661624

662625
## Drawbacks
663626

664-
No drawbacks in Phase 1 identified. There's no reason the enhancement should not be
627+
No drawbacks identified. There's no reason the enhancement should not be
665628
implemented. This enhancement now makes it possible to read PSI metric without installing
666629
additional dependencies
667630
