feat(recommender): add OOMMinBumpUp&OOMBumpUpRatio to CRD #8012
base: master
Conversation
We might want to create a proper AEP for this, but this is the general direction I'm thinking. I can open additional issues to track the specific flags we'd like to support for this type of configuration.
Hey @omerap12, thanks for the PR!
I agree it makes sense to be able to configure the OOM bump behavior on VPA level. There are a few questions on how to implement this, though:
- I'm not sure if we want this to be a configuration on Container level or on Pod level, i.e. should this apply to all Containers controlled by a certain VPA, or should this rather be something that's controlled per individual Container? So far we've mostly been offering configurations on Container level, so that would probably also apply here. Or do we have some indication that people who want to configure custom OOM bumps want to do this for all Containers of a Pod in the same way?
- I don't think we should introduce a new configuration type `recommenderConfig`. Technically, all of these properties are configuration options of the recommender (histogram decay options, maxAllowed, minAllowed, which resources to include in the recommendations, etc.), so this doesn't seem like a reasonable way to group things. If we agree to make this configuration Container specific, I'd rather add it to the `ContainerResourcePolicy`.
- Currently, the OOM bump configuration is part of the `AggregationsConfig`, as it is assumed to be globally configured, like all the other options in there. This config is only initialized once, in `main.go`:
  model.InitializeAggregationsConfig(model.NewAggregationsConfig(*memoryAggregationInterval, *memoryAggregationIntervalCount, *memoryHistogramDecayHalfLife, *cpuHistogramDecayHalfLife, *oomBumpUpRatio, *oomMinBumpUp))
- If we do want to make this configurable per VPA, however, I'd rather opt for pushing this configuration down, rather than adding some if-else to the cluster_feeder that would require finding the correct VPA for a Pod every time we add an OOM sample.
- IMHO, a possible place to put these configuration options would be the `aggregate_container_state`, where we already have the necessary methods to re-load the ContainerResourcePolicy options on VPA updates, and then read them in `cluster.go`, right before we add the OOM sample to the ContainerAggregation (a rough sketch follows below):
  err := containerState.RecordOOM(timestamp, requestedMemory)

WDYT?
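To make that concrete, here is a rough sketch of how per-container OOM options could be plumbed through the model layer; all names below are illustrative, not actual VPA code:

```go
package model

// oomOverrides captures per-container OOM bump settings read from the
// ContainerResourcePolicy on VPA updates. Nil pointers mean "fall back
// to the global recommender flags". (Hypothetical sketch.)
type oomOverrides struct {
	bumpUpRatio *float64
	minBumpUp   *int64 // bytes
}

// effectiveOOMBump returns the bump parameters to use for one container,
// preferring the per-VPA overrides over the global configuration.
func effectiveOOMBump(o oomOverrides, globalRatio float64, globalMinBytes int64) (ratio float64, minBytes int64) {
	ratio, minBytes = globalRatio, globalMinBytes
	if o.bumpUpRatio != nil {
		ratio = *o.bumpUpRatio
	}
	if o.minBumpUp != nil {
		minBytes = *o.minBumpUp
	}
	return ratio, minBytes
}
```

cluster.go could then resolve these overrides from the aggregation right before calling containerState.RecordOOM, avoiding any VPA lookup in the cluster_feeder.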
Thanks for the input!
So yep, I agree with all of your suggestions :)
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: omerap12. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing `/approve` in a comment.
/remove area provider/cluster-api
/remove-area provider/cluster-api
/label api-review
// check that perVPA is on if being used
if err := validatePerVPAFeatureFlag(&policy); err != nil {
	return err
}
This piece of code made me wonder if we should try to move closer to how k/k validates resources. That's something for another day, though.
Yeah, we should create a separate issue to discuss it.
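For reference, a minimal sketch of what that feature-gate check might look like. Only the `validatePerVPAFeatureFlag` name comes from the diff above; the signature, gate wiring, and error text are assumptions (including that both new policy fields are pointers):

```go
package api // hypothetical location for this sketch

import (
	"fmt"

	vpa_types "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1"
)

// perVPAConfigEnabled stands in for however the real feature gate is queried.
var perVPAConfigEnabled = false

// validatePerVPAFeatureFlag rejects policies that use the per-VPA OOM fields
// while the corresponding feature gate is disabled. (Hypothetical sketch.)
func validatePerVPAFeatureFlag(policy *vpa_types.PodResourcePolicy) error {
	if policy == nil || perVPAConfigEnabled {
		return nil
	}
	for _, cp := range policy.ContainerPolicies {
		if cp.OOMBumpUpRatio != nil || cp.OOMMinBumpUp != nil {
			return fmt.Errorf("container policy %q sets OOM bump options, but the per-VPA configuration feature gate is disabled", cp.ContainerName)
		}
	}
	return nil
}
```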
trying to kick the api-review label to exercise project automation
/label api-review
/hold
just to make sure this doesn't merge before going through proper API review
// oomMinBumpUp is the minimum increase in memory when OOM is detected.
// +optional
OOMMinBumpUp *resource.Quantity `json:"oomMinBumpUp,omitempty" protobuf:"bytes,8,opt,name=oomMinBumpUp"`
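Only one of the two new fields is visible in this excerpt; its companion presumably looks similar. A sketch, with the type and protobuf tag guessed rather than taken from the PR:

```go
// oomBumpUpRatio is the multiplier applied to memory usage when OOM is
// detected. (Sketch: the actual type and tag may differ.)
// +optional
OOMBumpUpRatio *resource.Quantity `json:"oomBumpUpRatio,omitempty" protobuf:"bytes,9,opt,name=oomBumpUpRatio"`
```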
Here are several questions from the API shadow review point of view, after reading through AEP-8026:
1. How does setting these two work with the global flags? The document does not have a clear answer for that, but skimming this PR it seems you're leaving the flags as is and adding this capability at the VPA level.
2. What alternatives were considered for placing both of these at container level? I don't see any discussion of why it's worth setting this at VPA level, pod level, or container level, which one is desired, and which one makes more sense. I'd suggest thinking through whether you really want to set it per container, and not per pod, or even per entire workload/app. What benefits do you gain or lose by placing that kind of granular control at the various levels?
3. Why do you need two separate fields for something that seems to express (at least partially) the same thing? Why isn't a single `OOMBumpUp` defined as `resource.Quantity` sufficient? In that case, you could express either a ratio such as 1.2, a percentage, or an explicit bytes value. When you're considering exposing some of the internal configs, you don't always have to expose them literally; something simpler might be sufficient here.
4. What happens when both of these are set? I haven't found a clear answer to that question in the doc either, only "When both oomBumpUpRatio = 1 and oomMinBumpUp = 0, OOM-based memory increases are completely disabled". Are they mutually exclusive, additive, averaged?
1. I thought I had written this in the AEP; I need to check. The idea is that this setting will override the one in the global config. If the user adds it to their VPA object, those values will be applied; otherwise, the global flags will be used.
2. Good point. We will discuss it.
3. The core idea is not to introduce a new configuration for the VPA, but simply to expose these settings in the VPA object. I agree that this makes sense (and I'm a little embarrassed I didn't notice it myself).
4. This hasn't changed; we take the max of the two (autoscaler/vertical-pod-autoscaler/pkg/recommender/model/container.go, lines 192 to 193 at f1251a1):
   memoryNeeded := ResourceAmountMax(memoryUsed+MemoryAmountFromBytes(GetAggregationsConfig().OOMMinBumpUp), ScaleResource(memoryUsed, GetAggregationsConfig().OOMBumpUpRatio))
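To make those semantics concrete: the two options are neither mutually exclusive nor additive in effect; the recommender takes whichever path yields more memory. A minimal stand-alone sketch of the same logic, with illustrative numbers:

```go
package main

import "fmt"

// bumpedMemory mirrors the max() semantics of the quoted RecordOOM code:
// max(memoryUsed + oomMinBumpUp, memoryUsed * oomBumpUpRatio), in bytes.
func bumpedMemory(memoryUsed, oomMinBumpUp int64, oomBumpUpRatio float64) int64 {
	withMin := memoryUsed + oomMinBumpUp
	withRatio := int64(float64(memoryUsed) * oomBumpUpRatio)
	if withMin > withRatio {
		return withMin
	}
	return withRatio
}

func main() {
	const mi = int64(1024 * 1024)
	fmt.Println(bumpedMemory(100*mi, 100*mi, 1.2) / mi)  // 200: the min bump wins for small containers
	fmt.Println(bumpedMemory(5120*mi, 100*mi, 1.2) / mi) // 6144: the ratio wins for large ones
}
```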
> 1. I thought I had written this in the AEP; I need to check. The idea is that this setting will override the one in the global config. If the user adds it to their VPA object, those values will be applied; otherwise, the global flags will be used.

Please update the AEP, and link the update here.

> 2. Good point. We will discuss it.

Again, please make sure to update the AEP accordingly.

> 3. The core idea is not to introduce a new configuration for the VPA, but simply to expose these settings in the VPA object. I agree that this makes sense (and I'm a little embarrassed I didn't notice it myself).

Again, document 😅

> 4. This hasn't changed; we take the max of the two (autoscaler/vertical-pod-autoscaler/pkg/recommender/model/container.go, lines 192 to 193 at f1251a1):
>    memoryNeeded := ResourceAmountMax(memoryUsed+MemoryAmountFromBytes(GetAggregationsConfig().OOMMinBumpUp), ScaleResource(memoryUsed, GetAggregationsConfig().OOMBumpUpRatio))

I believe, once you reconsider question 3, you'll have a better answer; just make sure to write it down.
Yeah, we submitted a PR to update the AEP here: #8505
I'll ping you once @adrianmoisey has checked it out first.
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
@omerap12: The following test failed, say `/retest` to rerun all failed tests:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
What type of PR is this?
/kind feature
What this PR does / why we need it:
This PR adds new options to the Vertical Pod Autoscaler (VPA) to better handle Out of Memory (OOM) events.
It adds two new settings to the VPA configuration:
- oomBumpUpRatio: the multiplier applied to a container's memory usage when an OOM event is detected.
- oomMinBumpUp: the minimum absolute memory increase when an OOM event is detected.
These settings can be set for each container within a VPA's resource policy. If they are not set for a specific container, the default values from the VPA recommender's global flags are used.
Example:
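A minimal sketch with illustrative values; the field names follow the API discussion above, and the exact serialization of the ratio depends on the final field type:

```yaml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  resourcePolicy:
    containerPolicies:
      - containerName: my-container
        # Per-container OOM overrides added by this PR; when omitted,
        # the recommender's global flags apply.
        oomBumpUpRatio: "1.5"
        oomMinBumpUp: 100Mi
```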
Which issue(s) this PR fixes:
part of #7650
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: