keps/sig-node/4540-strict-cpu-reservation/README.md
Lines changed: 79 additions & 38 deletions
@@ -42,20 +42,20 @@
Items marked with (R) are required *prior to targeting to a milestone / release*.
-- [ ] (R) Enhancement issue in release milestone, which links to KEP dir in [kubernetes/enhancements] (not the initial KEP PR)
+- [x] (R) Enhancement issue in release milestone, which links to KEP dir in [kubernetes/enhancements] (not the initial KEP PR)
- [x] (R) KEP approvers have approved the KEP status as `implementable`
- [x] (R) Design details are appropriately documented
- [ ] (R) Test plan is in place, giving consideration to SIG Architecture and SIG Testing input (including test refactors)
-- [ ] e2e Tests for all Beta API Operations (endpoints)
-- [ ] (R) Ensure GA e2e tests meet requirements for [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md)
-- [ ] (R) Minimum Two Week Window for GA e2e tests to prove flake free
+- [x] e2e Tests for all Beta API Operations (endpoints)
+- [x] (R) Ensure GA e2e tests meet requirements for [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md)
+- [x] (R) Minimum Two Week Window for GA e2e tests to prove flake free
- [ ] (R) Graduation criteria is in place
- [ ] (R) [all GA Endpoints](https://github.com/kubernetes/community/pull/1806) must be hit by [Conformance Tests](https://github.com/kubernetes/community/blob/master/contributors/devel/sig-architecture/conformance-tests.md)
-- [ ] (R) Production readiness review completed
+- [x] (R) Production readiness review completed
- [ ] (R) Production readiness review approved
- [x] "Implementation History" section is up-to-date for milestone
-- [ ] User-facing documentation has been created in [kubernetes/website], for publication to [kubernetes.io]
-- [ ] Supporting documentation—e.g., additional design documents, links to mailing list discussions/SIG meetings, relevant PRs/issues, release notes
+- [x] User-facing documentation has been created in [kubernetes/website], for publication to [kubernetes.io]
+- [x] Supporting documentation—e.g., additional design documents, links to mailing list discussions/SIG meetings, relevant PRs/issues, release notes
@@ -73,15 +73,14 @@ With this KEP, a new `CPUManager` policy option `strict-cpu-reservation` is intr
The static policy is used to reduce latency or improve performance. If you want to move system daemons or interrupt processing to dedicated cores, the obvious way is to use the `reservedSystemCPUs` option. But in the current implementation this isolation is applied only to guaranteed pods with integer CPU requests, not to burstable and best-effort pods (or guaranteed pods with fractional CPU requests).

Admission only compares the CPU requests against the allocatable CPUs. Since CPU limits can be higher than requests, burstable and best-effort pods can use up the capacity of `reservedSystemCPUs` and starve host OS services in real-life deployments.
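As a hypothetical illustration (this pod spec is not part of the KEP), admission would check only the 500m request below against Node Allocatable, while the much higher limit lets the container burst onto any CPU in the shared pool, including the reserved cores:

```yaml
# Hypothetical burstable pod: scheduled on its request,
# but allowed to consume CPU up to its limit at run time.
apiVersion: v1
kind: Pod
metadata:
  name: bursty   # illustrative name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    resources:
      requests:
        cpu: 500m   # admission compares this against allocatable CPUs
      limits:
        cpu: "4"    # bursting above the request can spill onto reservedSystemCPUs
```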
-Custom CPU allocation policies deployed as NRI plugins (e.g. Balloons) can separate infrastructure and workloads into different CPU pools, but they require extra software and additional tuning, and the reduced CPU pool size could affect the performance of multi-threaded processes.
### Goals
* Align scheduler and node view for Node Allocatable (total - reserved).
* Ensure `reservedSystemCPUs` is only used by system daemons or interrupt processing, not by workloads.
* Ensure no breaking changes for the `static` policy of `CPUManager`.
### Non-Goals
-* Change scheduler interface to sub-partition `cpu` resource (as described in the archived Risk Mitigation Option 1).
+* Change interface between node and scheduler.
## Proposal
@@ -109,10 +108,6 @@ With the following Kubelet configuration:
@@ -134,7 +129,7 @@ When `strict-cpu-reservation` is enabled:
### Risks and Mitigations
-The feature is isolated to a specific policy option `strict-cpu-reservation` under `cpuManagerPolicyOptions` and is protected by the feature gate `CPUManagerPolicyAlphaOptions` or `CPUManagerPolicyBetaOptions` before the feature graduates to `Stable`, i.e. enabled by default.
+The feature is isolated to a specific policy option `strict-cpu-reservation` under `cpuManagerPolicyOptions` and is protected by the feature gate `CPUManagerPolicyBetaOptions` before the feature graduates to `Stable`, i.e. always enabled.
A concern has been raised about the feature's impact on best-effort workloads, i.e. workloads that do not have resource requests.
@@ -144,11 +139,11 @@ The concern is, when the feature graduates to `Stable`, it will be enabled by de
However, this is exactly the feature's intent: best-effort workloads have no KPI requirements; they are meant to consume whatever CPU resources are left on the node, including starving from time to time. Best-effort workloads are not scheduled onto the `reservedSystemCPUs`, so they shall not run on the `reservedSystemCPUs` and destabilize the whole node.
-Nevertheless, risk mitigation has been discussed in detail (see archived options below) and we agree to start with the following node metrics of CPU pool sizes in the Alpha stage to assess the actual impact in real deployments before revisiting whether we need risk mitigation.
+Nevertheless, risk mitigation has been discussed in detail (see archived options below) and we agree to start with the following node metrics of CPU pool sizes in the Alpha and Beta stages to assess the actual impact in real deployments. The plan is to move the current implementation to the Stable stage if no field issue is observed for one year.
-- `cpu\_manager\_shared\_pool\_size\_millicores`: report shared pool size, in millicores (e.g. 13500m), expected to be non-zero, otherwise best-effort pods will starve
-- `cpu\_manager\_exclusive\_cpu\_allocation\_count`: report exclusively allocated cores, counting full cores (e.g. 16)
+- `cpu_manager_shared_pool_size_millicores`: report shared pool size, in millicores (e.g. 13500m), expected to be non-zero, otherwise best-effort pods will starve
+- `cpu_manager_exclusive_cpu_allocation_count`: report exclusively allocated cores, counting full cores (e.g. 16)
#### Archived Risk Mitigation (Option 1)
@@ -184,7 +179,6 @@ kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
featureGates:
  ...
-  CPUManagerPolicyOptions: true
  CPUManagerPolicyAlphaOptions: true
cpuManagerPolicy: static
cpuManagerPolicyOptions:
@@ -298,7 +292,7 @@ No new integration tests for kubelet are planned.
- CPU Manager works with `strict-cpu-reservation` policy option
- Basic functionality
-  1. Enable `CPUManagerPolicyAlphaOptions` feature gate and `strict-cpu-reservation` policy option.
+  1. Enable `strict-cpu-reservation` policy option.
  2. Create a simple pod of Burstable QoS type (see the sketch below).
  3. Verify the pod is not using the reserved CPU cores.
  4. Delete the pod.
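A minimal sketch of such a Burstable test pod (hypothetical, not taken from the e2e suite); when `strict-cpu-reservation` is active, the printed `Cpus_allowed_list` should exclude the `reservedSystemCPUs`:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: strict-reservation-check   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: check
    image: busybox
    # Print the list of CPUs this container may run on.
    command: ["sh", "-c", "grep Cpus_allowed_list /proc/self/status"]
    resources:
      requests:
        cpu: 100m   # request < limit => Burstable QoS
      limits:
        cpu: 500m
```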
@@ -313,8 +307,9 @@ No new integration tests for kubelet are planned.
#### Beta
-- [ ] Gather feedback from consumers of the new policy option.
-- [ ] Verify no major bugs reported in the previous cycle.
+- [X] Gather feedback from consumers of the new policy option.
+- [X] Verify no major bugs reported in the previous cycle.
+- [X] Ensure proper e2e tests are in place.
#### GA
@@ -333,33 +328,32 @@ No changes needed.
### Feature Enablement and Rollback
-The `/var/lib/kubelet/cpu\_manager\_state` needs to be removed when enabling or disabling the feature.
+The `/var/lib/kubelet/cpu_manager_state` needs to be removed when enabling or disabling the feature.
###### How can this feature be enabled / disabled in a live cluster?
- [X] Feature gate (also fill in values in `kep.yaml`)
  - Components depending on the feature gate: `kubelet`
- [X] Change the kubelet configuration to set a `CPUManager` policy of `static` and a `CPUManager` policy option of `strict-cpu-reservation`
  - Will enabling / disabling the feature require downtime of the control plane? No
-  - Will enabling / disabling the feature require downtime or reprovisioning of a node? No -- removing `/var/lib/kubelet/cpu\_manager\_state` and restarting kubelet are enough.
+  - Will enabling / disabling the feature require downtime or reprovisioning of a node? No -- removing `/var/lib/kubelet/cpu_manager_state` and restarting kubelet are enough.
###### Does enabling the feature change any default behavior?
Yes. Reserved CPU cores will be strictly used for system daemons and interrupt processing, and will no longer be available for workloads.
The feature is only enabled when all of the following conditions are met (a configuration sketch follows the list):
-1. The `CPUManagerPolicyAlphaOptions` feature gate must be enabled
-2. The `static` `CPUManager` policy must be selected
-3. The new `strict-cpu-reservation` policy option must be selected
-4. The `reservedSystemCPUs` is not empty
+1. The `static` `CPUManager` policy is selected
+2. The `CPUManagerPolicyBetaOptions` feature gate is enabled and the `strict-cpu-reservation` policy option is selected
+3. The `reservedSystemCPUs` is not empty
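A minimal `KubeletConfiguration` sketch that satisfies all three conditions (the CPU IDs are illustrative):

```yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
featureGates:
  CPUManagerPolicyBetaOptions: true   # condition 2 (defaults to true in Beta)
cpuManagerPolicy: static              # condition 1
cpuManagerPolicyOptions:
  strict-cpu-reservation: "true"      # condition 2
reservedSystemCPUs: "0,1"             # condition 3
```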
###### Can the feature be disabled once it has been enabled (i.e. can we roll back the enablement)?
-Yes, the feature can be disabled by either:
-1. Disable feature gate `CPUManagerPolicyAlphaOptions` or remove `strict-cpu-reservation` from the list of `CPUManager` policy options
-2. Remove `/var/lib/kubelet/cpu\_manager\_state` and restart kubelet
+Yes, the feature can be disabled by:
+1. Disable feature gate `CPUManagerPolicyBetaOptions` or remove `strict-cpu-reservation` from the list of `CPUManager` policy options
+2. Remove `/var/lib/kubelet/cpu_manager_state` and restart kubelet
###### What happens if we reenable the feature if it was previously rolled back?
@@ -381,7 +375,7 @@ If the feature rollout fails, burstable and best-efforts continue to run on the
If the feature rollback fails, burstable and best-effort pods continue not to run on the reserved CPU cores.
In either case, existing workloads will not be affected.
-When enabling or disabling the feature, make sure `/var/lib/kubelet/cpu\_manager\_state` is removed before restarting kubelet, otherwise the kubelet restart could fail.
+When enabling or disabling the feature, make sure `/var/lib/kubelet/cpu_manager_state` is removed before restarting kubelet, otherwise the kubelet restart could fail.
<!--
Try to be as paranoid as possible - e.g., what if some components will restart
@@ -410,8 +404,54 @@ Describe manual testing that was done and the outcomes.
Longer term, we may want to require automated upgrade/rollback tests, but we
are missing a bunch of machinery and tooling and can't do that now.
-->
+If you have this feature enabled in v1.32 under `CPUManagerPolicyAlphaOptions` (default false), you will continue to have the feature enabled in v1.33 under `CPUManagerPolicyBetaOptions` (default true) automatically, i.e. no extra action is needed.
+To enable or disable this feature in v1.33, follow the feature activation and de-activation procedures described above.
+
+Manual upgrade->downgrade->upgrade testing from v1.32 to v1.33 is as follows:
+
+With the following Kubelet configuration and the v1.32 `cpu_manager_state`:
+
+```yaml
+kind: KubeletConfiguration
+apiVersion: kubelet.config.k8s.io/v1beta1
+featureGates:
+  CPUManagerPolicyAlphaOptions: true
+  ...
+cpuManagerPolicy: static
+cpuManagerPolicyOptions:
+  strict-cpu-reservation: "true"
+reservedSystemCPUs: "0,32,1,33,16,48"
+...
+```
-We manually test it in our internal environment and it works.
+It is recommended to remove the `CPUManagerPolicyAlphaOptions` feature gate after upgrading to v1.33 for operational integrity, but it is not mandatory.
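For illustration, a tidied v1.33 configuration with the Alpha gate dropped might look as follows (a sketch; since `CPUManagerPolicyBetaOptions` defaults to true, listing it explicitly is optional):

```yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
featureGates:
  CPUManagerPolicyBetaOptions: true   # default in v1.33, shown for clarity
cpuManagerPolicy: static
cpuManagerPolicyOptions:
  strict-cpu-reservation: "true"
reservedSystemCPUs: "0,32,1,33,16,48"
```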
+
+If you want to disable the feature in v1.33, you can either disable the `CPUManagerPolicyBetaOptions` feature gate or remove the `strict-cpu-reservation` policy option. Remember to remove the `/var/lib/kubelet/cpu_manager_state` file before restarting kubelet.
+
+The following `cpu_manager_state` will be seen after the feature is disabled:
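The file contents were not captured in this diff; as an illustrative sketch (assuming a 64-CPU node with `reservedSystemCPUs: "0,32,1,33,16,48"`; the checksum is a placeholder), the reserved cores stay in the shared pool when the feature is disabled:

```json
{
  "policyName": "static",
  "defaultCpuSet": "0-63",
  "checksum": 1234567890
}
```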
+If you want to enable the feature in v1.33, you need to make sure the `CPUManagerPolicyBetaOptions` feature gate is not disabled and add the `strict-cpu-reservation` policy option. Remember to remove the `/var/lib/kubelet/cpu_manager_state` file before restarting kubelet.
+
+The following `cpu_manager_state` will be seen after the feature is enabled:
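Again as an illustrative sketch under the same assumptions, with the feature enabled the reserved cores (0, 1, 16, 32, 33, 48) are carved out of the shared pool:

```json
{
  "policyName": "static",
  "defaultCpuSet": "2-15,17-31,34-47,49-63",
  "checksum": 1234567890
}
```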
-- `cpu\_manager\_shared\_pool\_size\_millicores`: report shared pool size, in millicores (e.g. 13500m), expected to be non-zero, otherwise best-effort pods will starve
-- `cpu\_manager\_exclusive\_cpu\_allocation\_count`: report exclusively allocated cores, counting full cores (e.g. 16)
+Monitor the following kubelet counters:
+- `cpu_manager_shared_pool_size_millicores`: report shared pool size, in millicores (e.g. 13500m), expected to be non-zero, otherwise best-effort pods will starve
+- `cpu_manager_exclusive_cpu_allocation_count`: report exclusively allocated cores, counting full cores (e.g. 16)
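For illustration, these counters might surface as follows when scraping the kubelet metrics endpoint (values reuse the examples above; exact metric metadata and labels may differ):

```
cpu_manager_shared_pool_size_millicores 13500
cpu_manager_exclusive_cpu_allocation_count 16
```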
###### Are there any missing metrics that would be useful to have to improve observability of this feature?
@@ -520,6 +560,7 @@ You can safely disable the feature.