@@ -189,11 +189,15 @@ The following policy options exist for the static CPU management policy:
`align-by-socket` (alpha, hidden by default)
: Align CPUs by physical package / socket boundary, rather than logical NUMA boundaries (available since Kubernetes v1.25)
`distribute-cpus-across-cores` (alpha, hidden by default)
- : allocate virtual cores, sometimes called hardware threads, across different physical cores (available since Kubernetes v1.31)
+ : Allocate virtual cores, sometimes called hardware threads, across different physical cores (available since Kubernetes v1.31)
`distribute-cpus-across-numa` (alpha, hidden by default)
- : spread CPUs across different NUMA domains, aiming for an even balance between the selected domains (available since Kubernetes v1.23)
+ : Spread CPUs across different NUMA domains, aiming for an even balance between the selected domains (available since Kubernetes v1.23)
`full-pcpus-only` (beta, visible by default)
: Always allocate full physical cores (available since Kubernetes v1.22)
+ `strict-cpu-reservation` (alpha, hidden by default)
+ : Prevent all pods, regardless of their Quality of Service class, from running on reserved CPUs (available since Kubernetes v1.32)
+ `prefer-align-cpus-by-uncorecache` (alpha, hidden by default)
+ : Align CPUs by uncore (Last-Level) cache boundary on a best-effort basis (available since Kubernetes v1.32)
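+
+ As a minimal sketch (the field names come from the `KubeletConfiguration`
+ v1beta1 API), enabling the static policy together with two of these options
+ could look like this; options that are hidden by default additionally require
+ the matching feature gate, described below:
+
+ ```yaml
+ apiVersion: kubelet.config.k8s.io/v1beta1
+ kind: KubeletConfiguration
+ cpuManagerPolicy: static
+ cpuManagerPolicyOptions:
+   full-pcpus-only: "true"
+   distribute-cpus-across-numa: "true"
+ ```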
You can toggle groups of options on and off based upon their maturity level
using the following feature gates:
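+
+ For example, assuming the gate names listed below, a sketch of enabling all
+ three maturity levels in the kubelet configuration:
+
+ ```yaml
+ featureGates:
+   CPUManagerPolicyOptions: true
+   CPUManagerPolicyBetaOptions: true
+   CPUManagerPolicyAlphaOptions: true
+ ```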
@@ -273,6 +277,24 @@ of `reservedSystemCPUs` and cause host OS services to starve in real life deploy
If the `strict-cpu-reservation` policy option is enabled, the static policy will not allow
any workload to use the CPU cores specified in `reservedSystemCPUs`.
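+
+ A sketch of a kubelet configuration that reserves two CPUs and enforces the
+ reservation strictly (this assumes the gate guarding alpha options,
+ `CPUManagerPolicyAlphaOptions`, is enabled):
+
+ ```yaml
+ apiVersion: kubelet.config.k8s.io/v1beta1
+ kind: KubeletConfiguration
+ cpuManagerPolicy: static
+ reservedSystemCPUs: "0,1"
+ cpuManagerPolicyOptions:
+   strict-cpu-reservation: "true"
+ ```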
+ #### `prefer-align-cpus-by-uncorecache`
+
+ If the `prefer-align-cpus-by-uncorecache` policy option is specified, the static policy
+ will allocate CPU resources for individual containers such that all CPUs assigned
+ to a container share the same uncore cache block (also known as the Last-Level Cache
+ or LLC). By default, the `CPUManager` will tightly pack CPU assignments which can
+ result in containers being assigned CPUs from multiple uncore caches. This option
+ enables the `CPUManager` to allocate CPUs in a way that maximizes the efficient use
+ of the uncore cache. Allocation is performed on a best-effort basis, aiming to
+ place as many CPUs as possible within the same uncore cache. If the container's
+ CPU requirement exceeds the CPU capacity of a single uncore cache, the `CPUManager`
+ minimizes the number of uncore caches used in order to maintain optimal uncore
+ cache alignment. Specific workloads can benefit in performance from the reduction
+ of inter-cache latency and noisy neighbors at the cache level. If the `CPUManager`
+ cannot align optimally while the node has sufficient resources, the container will
+ still be admitted using the default packed behavior.
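+
+ A sketch of enabling this option, on top of the static policy and the
+ `CPUManagerPolicyAlphaOptions` feature gate shown earlier:
+
+ ```yaml
+ cpuManagerPolicyOptions:
+   prefer-align-cpus-by-uncorecache: "true"
+ ```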
+
+
## Memory Management Policies
{{< feature-state feature_gate_name="MemoryManager" >}}