enforce cpu/memory ceiling for prow jobs #36121
base: master
Conversation
Force-pushed from a8113f9 to 7955c13
/retest
Force-pushed from 7955c13 to ee71e35
Force-pushed from ee71e35 to dd5f3bb
Force-pushed from dd5f3bb to eb5cb0d
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: upodroid

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
@upodroid: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
This has been OOMKilled endlessly while patching Go on 1.34; the cpu:memory ratio is roughly based on kubernetes#36121.
Part of #34139
/hold for discussion
We don't use the same instance sizes between clouds. Right now, we use an 8-core 64 GB RAM instance on GCP (c4|c4a|c4d-highmem-8) and a 16-core 128 GB node on AWS (r5ad.4xlarge or r5ad.2xlarge).
The preferred node size we should adopt consistently across clouds is the latest 8-core 32 GB instance with local SSDs.
Our nodes are underutilised in memory, as you can see in Datadog.
While we determine how to inject small, medium, and large pod sizes via Kyverno (mutating webhooks), I'll enforce a new change that caps the pod size at 7 cores and 27 GB of RAM. The remaining half core and 1 GB of RAM go to any agents we run on the cluster.
c4 (30 GB) -> c4d (31 GB) -> c4a (32 GB)
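
For illustration only, here is a minimal Go sketch of the kind of ceiling check this describes, not the actual validation added in this PR. It treats the 7-core / 27 GB cap as `7` CPU and `27Gi` memory quantities and assumes a hypothetical `checkCeiling` helper and job name; it only relies on the standard `k8s.io/api` and `k8s.io/apimachinery` types.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// Assumed ceiling from the description above: 7 cores and 27Gi of RAM,
// leaving the rest of an 8-core / ~30 GB-allocatable node for agents.
var (
	maxCPU    = resource.MustParse("7")
	maxMemory = resource.MustParse("27Gi")
)

// checkCeiling (hypothetical helper) returns one error per container whose
// resource requests exceed the CPU or memory ceiling.
func checkCeiling(jobName string, containers []corev1.Container) []error {
	var errs []error
	for _, c := range containers {
		if cpu, ok := c.Resources.Requests[corev1.ResourceCPU]; ok && cpu.Cmp(maxCPU) > 0 {
			errs = append(errs, fmt.Errorf("%s: container %q requests %s CPU, above the %s ceiling",
				jobName, c.Name, cpu.String(), maxCPU.String()))
		}
		if mem, ok := c.Resources.Requests[corev1.ResourceMemory]; ok && mem.Cmp(maxMemory) > 0 {
			errs = append(errs, fmt.Errorf("%s: container %q requests %s memory, above the %s ceiling",
				jobName, c.Name, mem.String(), maxMemory.String()))
		}
	}
	return errs
}

func main() {
	// Example: a container requesting 32Gi would be flagged under the 27Gi ceiling.
	containers := []corev1.Container{{
		Name: "test",
		Resources: corev1.ResourceRequirements{
			Requests: corev1.ResourceList{
				corev1.ResourceCPU:    resource.MustParse("4"),
				corev1.ResourceMemory: resource.MustParse("32Gi"),
			},
		},
	}}
	for _, err := range checkCeiling("pull-kubernetes-example", containers) {
		fmt.Println(err)
	}
}
```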

Less than 1% of prow jobs (54 out of 5874) request more than 28 GB of RAM, and most of them don't need it.
Examples of misconfigured sizing: