@@ -85,7 +81,7 @@ Items marked with (R) are required *prior to targeting to a milestone / release*

## Summary

-This KEP proposes adding support in kubelet to read Pressure Stall Information (PSI) metric pertaining to CPU, Memory and IO resources exposed from cAdvisor and runc. This will enable kubelet to report node conditions which will be utilized to prevent scheduling of pods on nodes experiencing significant resource constraints.
+This KEP proposes adding support in kubelet to read Pressure Stall Information (PSI) metric pertaining to CPU, Memory and IO resources exposed from cAdvisor and runc.

## Motivation
@@ -98,11 +94,6 @@ In short, PSI metrics are like barometers that provide fair warning of impending
This proposal aims to:
1. Enable the kubelet to have the PSI metric of cgroupv2 exposed from cAdvisor and Runc.
2. Enable the pod level PSI metric and expose it in the Summary API.
-3. Utilize the node level PSI metric to set node condition and node taints.
-
-It will have two phases:
-Phase 1: includes goal 1, 2
-Phase 2: includes goal 3

### Non-Goals
@@ -119,23 +110,13 @@ Today, to identify disruptions caused by resource crunches, Kubernetes users need to
install node exporter to read PSI metric. With the feature proposed in this enhancement,
PSI metric will be available for users in the Kubernetes metrics API.

-#### Story 2
-
-Kubernetes users want to prevent new pods from being scheduled on nodes that have resource starvation. By using the PSI metric, the kubelet will set a Node Condition to avoid pods being scheduled on nodes under high resource pressure. The node controller could then set a [taint on the node based on these new Node Conditions](https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/#taint-nodes-by-condition).
-

### Risks and Mitigations
-There are no significant risks associated with Phase 1 implementation that involves integrating
+There are no significant risks associated with integrating
the PSI metric in kubelet from either the cadvisor runc libcontainer library or kubelet's CRI runc libcontainer implementation, neither of which involves any shelled binary operations.

-Phase 2 involves utilizing the PSI metric to report node conditions. There is a potential
-risk of early reporting for nodes under pressure. We intend to address this concern
-by conducting careful experimentation with PSI threshold values to identify the optimal
-default threshold to be used for reporting the nodes under heavy resource pressure.
-
## Design Details

-#### Phase 1
1. Add new data structures PSIData and PSIStats corresponding to the PSI metric output format, as follows:

```
@@ -144,16 +125,25 @@ full avg10=0.00 avg60=0.00 avg300=0.00 total=0
```

```go
+// PSI data for an individual resource.
type PSIData struct {
-	Avg10 *float64 `json:"avg10"`
-	Avg60 *float64 `json:"avg60"`
-	Avg300 *float64 `json:"avg300"`
-	Total *float64 `json:"total"`
+	// Total time duration that tasks in the cgroup have waited due to congestion.
+	// Unit: nanoseconds.
+	Total uint64 `json:"total"`
+	// The average (in %) of time tasks have waited due to congestion over a 10 second window.
+	Avg10 float64 `json:"avg10"`
+	// The average (in %) of time tasks have waited due to congestion over a 60 second window.
+	Avg60 float64 `json:"avg60"`
+	// The average (in %) of time tasks have waited due to congestion over a 300 second window.
+	Avg300 float64 `json:"avg300"`
}

+// PSI statistics for an individual resource.
type PSIStats struct {
-	Some *PSIData `json:"some,omitempty"`
-	Full *PSIData `json:"full,omitempty"`
+	// PSI data for some tasks in the cgroup.
+	Some PSIData `json:"some,omitempty"`
+	// PSI data for all tasks in the cgroup.
+	Full PSIData `json:"full,omitempty"`
}
```
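For illustration, the sketch below shows how a single line of the pressure file format above could be parsed into the proposed PSIData structure. It is a minimal standalone example; the helper name `parsePSILine` is hypothetical and not part of this KEP, which expects the data to come from the cadvisor/runc libcontainer libraries rather than from hand-rolled parsing.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// PSIData mirrors the structure proposed above (trimmed for this example).
type PSIData struct {
	Total  uint64  `json:"total"`
	Avg10  float64 `json:"avg10"`
	Avg60  float64 `json:"avg60"`
	Avg300 float64 `json:"avg300"`
}

// parsePSILine (hypothetical helper) parses one line of a cgroupv2 pressure file,
// e.g. "some avg10=0.00 avg60=0.00 avg300=0.00 total=0", and returns the line
// kind ("some" or "full") together with the parsed values.
func parsePSILine(line string) (string, PSIData, error) {
	fields := strings.Fields(line)
	if len(fields) != 5 {
		return "", PSIData{}, fmt.Errorf("unexpected PSI line: %q", line)
	}
	data := PSIData{}
	for _, kv := range fields[1:] {
		parts := strings.SplitN(kv, "=", 2)
		if len(parts) != 2 {
			return "", PSIData{}, fmt.Errorf("unexpected PSI field: %q", kv)
		}
		if parts[0] == "total" {
			v, err := strconv.ParseUint(parts[1], 10, 64)
			if err != nil {
				return "", PSIData{}, err
			}
			data.Total = v
			continue
		}
		v, err := strconv.ParseFloat(parts[1], 64)
		if err != nil {
			return "", PSIData{}, err
		}
		switch parts[0] {
		case "avg10":
			data.Avg10 = v
		case "avg60":
			data.Avg60 = v
		case "avg300":
			data.Avg300 = v
		}
	}
	return fields[0], data, nil
}

func main() {
	kind, data, err := parsePSILine("some avg10=0.00 avg60=0.00 avg300=0.00 total=0")
	if err != nil {
		panic(err)
	}
	fmt.Println(kind, data)
}
```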
@@ -165,15 +155,15 @@ metric data will be available through CRI instead.
```go
type CPUStats struct {
	// PSI stats of the overall node
-	PSI cadvisorapi.PSIStats `json:"psi,omitempty"`
+	PSI *PSIStats `json:"psi,omitempty"`
}
```

##### Memory
```go
type MemoryStats struct {
	// PSI stats of the overall node
-	PSI cadvisorapi.PSIStats `json:"psi,omitempty"`
+	PSI *PSIStats `json:"psi,omitempty"`
}
```
@@ -185,7 +175,7 @@ type IOStats struct {
	Time metav1.Time `json:"time"`

	// PSI stats of the overall node
-	PSI cadvisorapi.PSIStats `json:"psi,omitempty"`
+	PSI *PSIStats `json:"psi,omitempty"`
}

type NodeStats struct {
@@ -194,49 +184,6 @@ type NodeStats struct {
}
```

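To make the new fields concrete, here is a small consumer-side sketch showing how node-level CPU PSI could surface through these Summary API types. The mirror types are trimmed copies of the structures proposed above, and the JSON fragment and its values are made up for the example; this is not a confirmed response shape.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed mirrors of the proposed Summary API types, keeping only the PSI fields.
type PSIData struct {
	Total  uint64  `json:"total"`
	Avg10  float64 `json:"avg10"`
	Avg60  float64 `json:"avg60"`
	Avg300 float64 `json:"avg300"`
}

type PSIStats struct {
	Some PSIData `json:"some,omitempty"`
	Full PSIData `json:"full,omitempty"`
}

type CPUStats struct {
	// PSI stats of the overall node
	PSI *PSIStats `json:"psi,omitempty"`
}

type NodeStats struct {
	CPU *CPUStats `json:"cpu,omitempty"`
}

func main() {
	// Hypothetical fragment of a node stats response; the numbers are invented.
	payload := []byte(`{"cpu":{"psi":{"some":{"total":123456,"avg10":1.5,"avg60":0.8,"avg300":0.2}}}}`)

	var node NodeStats
	if err := json.Unmarshal(payload, &node); err != nil {
		panic(err)
	}
	if node.CPU != nil && node.CPU.PSI != nil {
		fmt.Printf("cpu psi: some avg10=%.2f%%\n", node.CPU.PSI.Some.Avg10)
	}
}
```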
-#### Phase 2 to add PSI based actions
-**Note:** These actions are tentative, and will depend on the outcome of testing and discussions with sig-node members, users, and other folks.
-
-1. Introduce a new kubelet config parameter, pressure threshold, to let users specify the pressure percentage beyond which the kubelet would report the node condition to disallow workloads from being scheduled on it.
-
-2. Add new node conditions corresponding to high PSI (beyond threshold levels) on CPU, Memory and IO.
-
-```go
-// These are valid conditions of the node. Currently, we don't have enough information to decide
-// node condition.
-const (
-	…
-	// Conditions based on pressure at system level cgroup.
-	…
-)
-```
-
-3. Kernel collects PSI data for 10s, 60s and 300s timeframes. To determine the optimal observation timeframe, it is necessary to conduct tests and benchmark performance.
-In theory, a 10s interval might be too rapid for tainting a node with the NoSchedule effect. Therefore, as an initial approach, opting for a 60s timeframe for the observation logic appears more appropriate.
-
-Add the observation logic to add the node condition and taint per the following scenarios (a rough sketch of this decision logic appears after this block):
-* If avg60 >= threshold, then record an event indicating high resource pressure.
-* If avg60 >= threshold and is trending higher, i.e. avg10 >= threshold, then set the Node Condition for high resource contention pressure. This should ensure no new pods are scheduled on nodes under heavy resource contention pressure.
-* If avg60 >= threshold for a node tainted with the NoSchedule effect, and is trending lower, i.e. avg10 <= threshold, record an event mentioning the resource contention pressure is trending lower.
-* If avg60 < threshold for a node tainted with the NoSchedule effect, remove the NodeCondition.
-
-4. Collaborate with sig-scheduling to modify the TaintNodesByCondition feature to integrate new taints for the new Node Conditions introduced in this enhancement.
-5. Perform experiments to finalize the default optimal pressure threshold value.
-
-6. Add a new feature gate PSINodeCondition, and guard the node condition related logic behind the feature gate. Set `--feature-gates=PSINodeCondition=true` to enable the feature.
-
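Although the Phase 2 text above is removed by this change, the threshold scenarios it lists can be summarized in a rough, illustrative sketch. Every name here (`evaluatePressure`, `PressureAction`, the `threshold` parameter) is hypothetical; the KEP deliberately leaves the concrete behavior to future experimentation.

```go
package main

import "fmt"

// PressureAction is a hypothetical summary of what the kubelet would do for one
// resource, following the Phase 2 scenarios described above.
type PressureAction int

const (
	ActionNone PressureAction = iota
	ActionRecordHighPressureEvent     // avg60 >= threshold
	ActionSetNodeCondition            // avg60 >= threshold and trending higher (avg10 >= threshold)
	ActionRecordPressureLoweringEvent // tainted node, avg60 >= threshold but trending lower (avg10 <= threshold)
	ActionClearNodeCondition          // tainted node, avg60 < threshold
)

// evaluatePressure maps the avg10/avg60 PSI values and the current taint state
// onto one of the scenarios above. threshold is the pressure percentage a user
// would configure via the proposed kubelet config parameter.
func evaluatePressure(avg10, avg60, threshold float64, taintedNoSchedule bool) PressureAction {
	switch {
	case taintedNoSchedule && avg60 < threshold:
		return ActionClearNodeCondition
	case taintedNoSchedule && avg60 >= threshold && avg10 <= threshold:
		return ActionRecordPressureLoweringEvent
	case avg60 >= threshold && avg10 >= threshold:
		return ActionSetNodeCondition
	case avg60 >= threshold:
		return ActionRecordHighPressureEvent
	default:
		return ActionNone
	}
}

func main() {
	// Example: pressure is high over the 60s window and still climbing over 10s.
	fmt.Println(evaluatePressure(85.0, 70.0, 60.0, false) == ActionSetNodeCondition) // true
}
```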
### Test Plan
<!--
@@ -282,6 +229,7 @@ This can inform certain test coverage improvements that we want to do before
extending the production code to implement this enhancement.
Within Kubernetes, the feature is implemented solely in kubelet. Therefore a Kubernetes integration test doesn't apply here.
+
Any identified external user of either of these endpoints (prometheus, metrics-server) should be tested to make sure they're not broken by new fields in the API response.