@@ -85,7 +81,7 @@ Items marked with (R) are required *prior to targeting to a milestone / release*
## Summary
-This KEP proposes adding support in kubelet to read Pressure Stall Information (PSI) metric pertaining to CPU, Memory and IO resources exposed from cAdvisor and runc. This will enable kubelet to report node conditions which will be utilized to prevent scheduling of pods on nodes experiencing significant resource constraints.
+This KEP proposes adding support in kubelet to read Pressure Stall Information (PSI) metric pertaining to CPU, Memory and IO resources exposed from cAdvisor and runc.
## Motivation
@@ -98,11 +94,6 @@ In short, PSI metric are like barometers that provide fair warning of impending
This proposal aims to:
1. Enable the kubelet to have the PSI metric of cgroupv2 exposed from cAdvisor and Runc.
2. Enable the pod level PSI metric and expose it in the Summary API.
-3. Utilize the node level PSI metric to set node condition and node taints.
-
-It will have two phases:
-Phase 1: includes goal 1, 2
-Phase 2: includes goal 3
### Non-Goals
@@ -119,23 +110,13 @@ Today, to identify disruptions caused by resource crunches, Kubernetes users need to
install node exporter to read PSI metric. With the feature proposed in this enhancement,
PSI metric will be available for users in the Kubernetes metrics API.
-#### Story 2
-
-Kubernetes users want to prevent new pods from being scheduled on nodes that are experiencing resource starvation. By using the PSI metric, the kubelet will set a Node Condition to avoid pods being scheduled on nodes under high resource pressure. The node controller could then set a [taint on the node based on these new Node Conditions](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-nodes-by-condition).
-
### Risks and Mitigations
-There are no significant risks associated with Phase 1 implementation that involves integrating
+There are no significant risks associated with integrating
the PSI metric in kubelet from either the cadvisor runc libcontainer library or kubelet's CRI runc libcontainer implementation, neither of which involves any shelled binary operations.
-Phase 2 involves utilizing the PSI metric to report node conditions. There is a potential
-risk of early reporting for nodes under pressure. We intend to address this concern
-by conducting careful experimentation with PSI threshold values to identify the optimal
-default threshold to be used for reporting the nodes under heavy resource pressure.
-
## Design Details
-#### Phase 1
1. Add new data structures PSIData and PSIStats corresponding to the PSI metric output format, as follows:
```
@@ -194,49 +175,6 @@ type NodeStats struct {
}
```
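For orientation, here is a minimal sketch of what the PSIData and PSIStats structures could look like, assuming they mirror the kernel's cgroupv2 pressure file format (a `some` line and a `full` line, each carrying `avg10`, `avg60`, `avg300` and `total` fields). The field names and types below are illustrative only; the authoritative definitions are the ones added to the Summary API by this KEP, which are elided in this diff view.

```go
// Illustrative only: the shape follows the kernel's pressure file format,
// e.g. "some avg10=1.23 avg60=0.87 avg300=0.42 total=12345678".
type PSIData struct {
	// Running averages over the last 10s, 60s and 300s, as percentages.
	Avg10  float64 `json:"avg10"`
	Avg60  float64 `json:"avg60"`
	Avg300 float64 `json:"avg300"`
	// Total stall time since boot, in microseconds.
	Total uint64 `json:"total"`
}

// PSIStats groups the "some" and "full" pressure lines for one resource
// (CPU, memory or IO).
type PSIStats struct {
	Some PSIData `json:"some"`
	Full PSIData `json:"full"`
}
```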
-#### Phase 2 to add PSI based actions.
-**Note:** These actions are tentative, and will depend on the outcome of testing and discussions with sig-node members, users, and other folks.
-
-1. Introduce a new kubelet config parameter, pressure threshold, to let users specify the pressure percentage beyond which the kubelet would report the node condition to disallow workloads from being scheduled on it.
-
-2. Add new node conditions corresponding to high PSI (beyond threshold levels) on CPU, Memory and IO.
-
-```go
-// These are valid conditions of the node. Currently, we don't have enough information to decide
-// node condition.
-const (
-…
-// Conditions based on pressure at system level cgroup.
-3. The kernel collects PSI data over 10s, 60s and 300s timeframes. To determine the optimal observation timeframe, it is necessary to conduct tests and benchmark performance.
-In theory, a 10s interval might be too rapid for tainting a node with the NoSchedule effect. Therefore, as an initial approach, opting for a 60s timeframe for the observation logic appears more appropriate.
-
-Add the observation logic to add the node condition and taint as per the following scenarios (a rough sketch follows this list):
-* If avg60 >= threshold, then record an event indicating high resource pressure.
-* If avg60 >= threshold and is trending higher, i.e. avg10 >= threshold, then set the Node Condition for high resource contention pressure. This should ensure no new pods are scheduled on nodes under heavy resource contention pressure.
-* If avg60 >= threshold for a node tainted with the NoSchedule effect and is trending lower, i.e. avg10 <= threshold, record an event mentioning that the resource contention pressure is trending lower.
-* If avg60 < threshold for a node tainted with the NoSchedule effect, remove the NodeCondition.
-
-4. Collaborate with sig-scheduling to modify the TaintNodesByCondition feature to integrate new taints for the new Node Conditions introduced in this enhancement.
-5. Perform experiments to finalize the default optimal pressure threshold value.
-
-6. Add a new feature gate PSINodeCondition, and guard the node condition related logic behind the feature gate. Set `--feature-gates=PSINodeCondition=true` to enable the feature.
-
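For illustration, a rough sketch of how the avg60/avg10 observation logic described in item 3 of the (now dropped) Phase 2 plan might be expressed. The threshold parameter, type names, and action names below are hypothetical placeholders, not the KEP's API.

```go
package main

import "fmt"

// psiAverages holds the rolling pressure averages read from a cgroupv2
// pressure file (values are percentages, e.g. 12.5 means 12.5%).
type psiAverages struct {
	Avg10 float64
	Avg60 float64
}

// pressureAction is the hypothetical outcome of one observation pass.
type pressureAction int

const (
	actionNone pressureAction = iota
	actionRecordHighPressureEvent   // avg60 over threshold, but not yet trending higher
	actionSetNodeCondition          // avg60 and avg10 over threshold: keep new pods away
	actionRecordPressureEasingEvent // condition already set, avg10 back at or under threshold
	actionClearNodeCondition        // avg60 back under threshold: remove the condition
)

// observe applies the scenario list from item 3 above for a single resource.
// conditionSet reports whether the node already carries the high-pressure
// Node Condition (and the NoSchedule taint derived from it).
func observe(p psiAverages, threshold float64, conditionSet bool) pressureAction {
	switch {
	case p.Avg60 >= threshold && p.Avg10 >= threshold:
		return actionSetNodeCondition
	case p.Avg60 >= threshold && conditionSet && p.Avg10 <= threshold:
		return actionRecordPressureEasingEvent
	case p.Avg60 >= threshold:
		return actionRecordHighPressureEvent
	case conditionSet:
		return actionClearNodeCondition
	default:
		return actionNone
	}
}

func main() {
	// Example: memory pressure trending higher on a node with a 40% threshold.
	fmt.Println(observe(psiAverages{Avg10: 55, Avg60: 48}, 40, false)) // prints 2 (set the condition)
}
```

Keying the scheduling decision on avg60 while treating avg10 only as a trend signal is what keeps a short burst from immediately tainting the node, which matches the reasoning above about 10s being too rapid an interval.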
### Test Plan
<!--
@@ -318,20 +256,12 @@ We expect no non-infra related flakes in the last month as a GA graduation criteria
### Graduation Criteria
-#### Phase 1: Alpha
+#### Alpha
- PSI integrated in kubelet behind a feature flag.
- Unit tests to check the fields are populated in the
Summary API response.
-#### Phase 2: Alpha
-
-- Implement Phase 2 of the enhancement which enables kubelet to
-report node conditions based off PSI values.
-- Initial e2e tests completed and enabled if CRI implementation supports
-it.
-- Add documentation for the feature.
-
#### Beta
- Feature gate is enabled by default.
@@ -406,7 +336,7 @@ well as the [existing list] of feature gates.
-->
- [X] Feature gate (also fill in values in `kep.yaml`)
-  - Feature gate name: PSINodeCondition
+  - Feature gate name: KubeletPSI
- Components depending on the feature gate: kubelet
- [ ] Other
- Describe the mechanism:
@@ -421,7 +351,7 @@ well as the [existing list] of feature gates.
Any change of default behavior may be surprising to users or break existing
automations, so be extremely careful here.
-->
-Not in Phase 1. Phase 2 is TBD in K8s 1.31.
+No.
###### Can the feature be disabled once it has been enabled (i.e. can we roll back the enablement)?
@@ -438,9 +368,8 @@ NOTE: Also set `disable-supported` to `true` or `false` in `kep.yaml`.
Yes
###### What happens if we reenable the feature if it was previously rolled back?
-When the feature is disabled, the Node Conditions will still exist on the nodes. However,
-there won't be any consumers of these node conditions. When the feature is re-enabled,
-the kubelet will override out of date Node Conditions as expected.
+No PSI metrics will be available in the kubelet Summary API or Prometheus metrics if the
+feature was rolled back.
###### Are there any tests for feature enablement/disablement?
@@ -513,13 +442,9 @@ Ideally, this should be a metric. Operations against the Kubernetes API (e.g.,
checking if there are objects with field X set) may be a last resort. Avoid
logs or events for this purpose.
-->
-For Phase 1:
Use `kubectl get --raw "/api/v1/nodes/{$nodeName}/proxy/stats/summary"` to call the Summary API. If the PSIStats field is seen in the API response,
the feature is available to be used by workloads.
-For Phase 2:
-TBD
-
###### How can someone using this feature know that it is working for their instance?
<!--
@@ -658,10 +583,11 @@ NA
## Implementation History
- 2023/09/13: Initial proposal
+- 2025/06/10: Drop Phase 2 from this KEP. Phase 2 will be tracked in its own KEP to allow separate milestone tracking
## Drawbacks
-No drawbacks in Phase 1 identified. There's no reason the enhancement should not be
+No drawbacks identified. There's no reason the enhancement should not be
implemented. This enhancement now makes it possible to read PSI metric without installing