The `prefer-align-cpus-by-uncorecache` feature will be enabled and tested individually and in combination with:

- `full-pcpus-only`
- Topology Manager NUMA Affinity

The following CPU topologies are representative of various uncore cache architectures; they will be added to [policy_test.go](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/cm/cpumanager/policy_test.go) and exercised in the unit tests.
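These fixtures mirror the uncore (last-level) cache groupings that the kubelet discovers on real hardware. For reference, a minimal shell sketch for inspecting those groupings on a Linux node, assuming `index3` is the shared L3/uncore cache (verify with the `level` file on your system):

```sh
# Print, for each CPU, the uncore (L3) cache instance it belongs to and the
# CPUs that share that cache. Paths follow the standard Linux sysfs layout.
for d in /sys/devices/system/cpu/cpu*/cache/index3; do
  cpu=${d#/sys/devices/system/cpu/}; cpu=${cpu%%/*}
  echo "${cpu}: cache_id=$(cat "${d}/id") shared_cpus=$(cat "${d}/shared_cpu_list")"
done
```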
##### Integration tests

N/A. This feature requires an e2e test for testing.

##### e2e tests

- [should update alignment counters when pod successfully run taking less than uncore cache group](https://github.com/kubernetes/kubernetes/blob/master/test/e2e_node/cpu_manager_metrics_test.go): [SIG-node](https://testgrid.k8s.io/sig-node): [SIG-node-kubelet](https://testgrid.k8s.io/sig-node-kubelet)
- [should update alignment counters when pod successfully run taking a full uncore cache group](https://github.com/kubernetes/kubernetes/blob/master/test/e2e_node/cpu_manager_metrics_test.go): [SIG-node](https://testgrid.k8s.io/sig-node): [SIG-node-kubelet](https://testgrid.k8s.io/sig-node-kubelet)
- [should not update alignment counters when pod successfully run taking more than a uncore cache group](https://github.com/kubernetes/kubernetes/blob/master/test/e2e_node/cpu_manager_metrics_test.go): [SIG-node](https://testgrid.k8s.io/sig-node): [SIG-node-kubelet](https://testgrid.k8s.io/sig-node-kubelet)

### Graduation Criteria
#### Alpha
- Feature implemented behind a feature gate and policy option
- Add unit test coverage
- Add metrics to cover observability needs
- Add e2e tests for metrics

#### Beta

- Address bug fixes: ability to schedule odd-integer CPU requests for uncore cache alignment
- Add test cases to ensure functional compatibility with existing CPUManager options
- Add test cases to ensure and report incompatibility with existing CPUManager options that are not supported with `prefer-align-cpus-by-uncorecache`
- Add E2E test coverage for the feature

### Upgrade / Downgrade Strategy

Enabling this feature requires setting the `static` `CPUManager` policy in the kubelet configuration file, together with the corresponding feature gate, and adding the policy option for uncore cache alignment (`prefer-align-cpus-by-uncorecache`).
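For illustration, a minimal kubelet configuration sketch. The feature-gate name depends on the maturity level of the option (`CPUManagerPolicyAlphaOptions` while alpha, `CPUManagerPolicyBetaOptions` once beta, the latter enabled by default), and the CPU reservation value is a placeholder:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  # Needed while the option is at alpha maturity.
  CPUManagerPolicyAlphaOptions: true
cpuManagerPolicy: static
cpuManagerPolicyOptions:
  prefer-align-cpus-by-uncorecache: "true"
# The static policy requires an explicit CPU reservation; value is a placeholder.
reservedSystemCPUs: "0,1"
```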
###### How can this feature be enabled / disabled in a live cluster?
For `CPUManager`, switching from the `none` to the `static` policy cannot be done dynamically because of the `cpu_manager_state` file. The node needs to be drained and the policy checkpoint file (`cpu_manager_state`) needs to be removed before restarting the kubelet. This feature specifically relies on the `static` policy being enabled.
- [x] Feature gate (also fill in values in `kep.yaml`)
  - Components depending on the feature gate: `kubelet`
- [x] Other
  - Describe the mechanism: Change the `kubelet` configuration to set a `CPUManager` policy of `static`, then set the policy option `prefer-align-cpus-by-uncorecache`.
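The drain and state-file reset described above can be performed roughly as follows (a sketch; `<node-name>` is a placeholder and `/var/lib/kubelet/cpu_manager_state` is the default checkpoint location):

```sh
# Evict workloads from the node before changing the CPU manager policy.
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data

# On the node: remove the stale CPU manager checkpoint and restart the kubelet
# with the updated configuration.
sudo rm /var/lib/kubelet/cpu_manager_state
sudo systemctl restart kubelet

# Allow workloads to be scheduled on the node again.
kubectl uncordon <node-name>
```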
###### Are there any tests for feature enablement/disablement?

The option is not enabled dynamically. To enable or disable it, `cpu_manager_state` must be removed and the kubelet must be restarted.

Unit tests will be implemented to test whether the feature is enabled or disabled.

The e2e node serial suite can be used to test enablement/disablement of the feature, since it allows the kubelet to be restarted.

An E2E test will demonstrate that default behavior is preserved when the `CPUManagerPolicyOptions` feature gate is disabled.

A metric was created to check uncore cache alignment after the cpuset is determined; it is utilized in E2E tests with the feature enabled. See [cpu_manager_metrics_test.go](https://github.com/kubernetes/kubernetes/blob/master/test/e2e_node/cpu_manager_metrics_test.go).
### Rollout, Upgrade and Rollback Planning
###### How can a rollout or rollback fail? Can it impact already running workloads?

Kubelet restarts are not expected to impact existing CPU assignments to already running workloads.

This feature is a best-effort alignment of CPUs to uncore caches; enabling it requires a kubelet restart, which must not affect running workloads. No changes are needed to the `cpu_manager_state` file.

A rollout may fail for existing workloads that leave the node's uncore caches fragmented: depending on the CPU quantity requested and the best-effort policy, a container's cpuset may end up distributed across multiple caches. For example, if each uncore cache serves eight CPUs and only scattered CPUs remain free, even a small request may span two caches.

The metrics below can help the user track alignment, but a rollback will not help, because the feature is not a strict alignment to uncore caches but a best-effort attempt to reduce sharing of uncore caches.
###### What specific metrics should inform a rollback?

The `kubelet_container_aligned_compute_resources_count` and `container_aligned_compute_resources_failure_count` metrics can be tracked to measure whether there are issues in cpuset allocation and to determine if a rollback is necessary.
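As a sketch, these counters can be watched by scraping the kubelet metrics endpoint through the API server proxy (metric names as exposed may carry the `kubelet_` prefix; `<node-name>` is a placeholder):

```sh
NODE=<node-name>
# A sustained rise in the *_failure_count series relative to the aligned count
# suggests containers are not fitting within a single uncore cache group.
kubectl get --raw "/api/v1/nodes/${NODE}/proxy/metrics" \
  | grep -E 'container_aligned_compute_resources(_failure)?_count'
```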
###### Were upgrade and rollback tested? Was the upgrade->downgrade->upgrade path tested?

Reference CPUID info in the podresources API to be able to verify assignment.
###### How can an operator determine if the feature is in use by workloads?
Reference the podresources API to determine the CPU assignment and cache ID assignment per container.

Use the `container_aligned_compute_resources_count` metric, which reports the count of containers getting aligned compute resources. See [kubelet/metrics/metrics.go](https://github.com/kubernetes/kubernetes/blob/8f1f17a04f62ab64ebe4f0b9d7f5f799bf56a0d9/pkg/kubelet/metrics/metrics.go#L135).
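As an illustrative sketch (not part of the KEP), the podresources lookup can be scripted against the kubelet's local gRPC endpoint; the socket path below is the common default and may differ per distribution:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	podresourcesv1 "k8s.io/kubelet/pkg/apis/podresources/v1"
)

func main() {
	// Connect to the kubelet pod-resources socket (default path; adjust if needed).
	conn, err := grpc.Dial("unix:///var/lib/kubelet/pod-resources/kubelet.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// List the CPU IDs assigned to each running container; these IDs can then be
	// cross-referenced with the sysfs cache topology to verify uncore cache alignment.
	resp, err := podresourcesv1.NewPodResourcesListerClient(conn).List(ctx,
		&podresourcesv1.ListPodResourcesRequest{})
	if err != nil {
		panic(err)
	}
	for _, pod := range resp.GetPodResources() {
		for _, c := range pod.GetContainers() {
			fmt.Printf("%s/%s/%s cpus=%v\n", pod.GetNamespace(), pod.GetName(), c.GetName(), c.GetCpuIds())
		}
	}
}
```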
###### How can someone using this feature know that it is working for their instance?

Reference the podresources API to determine CPU assignment.
###### What are the reasonable SLOs (Service Level Objectives) for the enhancement?

In a default Kubernetes installation, 99th percentile of pod startup latency per cluster-day <= X.

This feature is best-effort and will not cause failed admission, but it can introduce admission delay.
###### What are the SLIs (Service Level Indicators) an operator can use to determine the health of the service?
- Metrics
  - `topology_manager_admission_duration_ms`, which measures the duration of the admission process performed by the Topology Manager, can be used to determine pod admission time.
###### Are there any missing metrics that would be useful to have to improve observability of this feature?

No.

## Implementation History

- 2024-08-27: KEP created; the outlined sections were filled out.
- 2025-06-09: Submitted PR to promote feature to beta