Conversation

@klihub klihub commented Dec 16, 2025

This patch series should fix the failure to allocate exclusive CPUs reported in #602 (comment).

In particular it updates the topology-aware policy to

  • correctly disregard containers assigned to the reserved pool when looking for containers with a 0 CPU request in normal shared pools
  • print grants with reserved CPU pool assignments correctly

Note that because runc is still broken with respect to allocating CPUs with a logical ID > 1024, the test case from #599 linked above still fails. However, it now fails only due to runc: the actual resource allocation and the intended policy CPU assignment are as expected, which can be verified by running something like this on the test VM host once the test case has failed:

$ crictl inspect $(crictl ps | grep pod0c0 | tr -s '\t' ' ' | cut -d ' ' -f1) | jq .info.runtimeSpec.linux.resources.cpu.cpus
"0-2,511,4095"

@askervin askervin left a comment


Thanks @klihub, I think this is a good fix for that issue! The only suggestion I have is renaming the foreach function to match the new parameter.

Disregard containers assigned to the reserved pool when looking
for containers with 0 CPU request.

Signed-off-by: Krisztian Litkey <[email protected]>
@klihub klihub force-pushed the fixes/has-0-cpureq-containers branch from 8d579fc to 3646e50 Compare December 16, 2025 16:25
@klihub klihub requested a review from askervin December 16, 2025 16:28
@askervin askervin merged commit a5500a7 into containers:main Dec 17, 2025
9 checks passed
@klihub klihub deleted the fixes/has-0-cpureq-containers branch December 17, 2025 13:41