feat: Adapt flow control to per-request saturation #1622
Conversation
This commit refactors the flow control `ShardProcessor` to align with the new `SaturationDetector` contract (introduced in 7d84fb9), which evaluates saturation for a specific set of candidate pods rather than for the entire pool. This change fundamentally alters the dispatching logic to prioritize strict fairness and priority over work conservation.

The `BandFilter` abstraction has been removed, and the `ShardProcessor` now performs a post-selection viability check. After policies select the fairest request, the `SaturationDetector` is called with the candidate pods for only that specific request. If the check fails, the processor stops the entire dispatch cycle for the current tick, enforcing Head-of-Line blocking to prevent priority inversion.

This new model correctly upholds a strict fairness and priority contract. However, it introduces a known trade-off: the system may leave resources idle if the fairest request is blocked, rather than finding other viable work (the "noisy neighbor" problem).
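For concreteness, here is a minimal Go sketch of the post-selection viability check described above. The type and method names (`SaturationDetector.IsSaturated`, `PolicyEngine.SelectNext`, `dispatchCycle`) are illustrative assumptions, not the actual interfaces in this repository.

```go
// Minimal sketch of a post-selection viability check; type and method names
// are hypothetical and may not match the real ShardProcessor code.
package flowcontrol

import "context"

// Pod stands in for a candidate backend endpoint.
type Pod struct{ Name string }

// Request is a queued request together with the pods it may be routed to.
type Request struct {
	ID            string
	CandidatePods []Pod
}

// SaturationDetector reflects the per-request contract: saturation is
// evaluated against the candidate pods of a single request, not the pool.
type SaturationDetector interface {
	IsSaturated(ctx context.Context, candidatePods []Pod) bool
}

// PolicyEngine selects the fairest head-of-queue request across flows.
type PolicyEngine interface {
	SelectNext(ctx context.Context) (*Request, bool)
}

// dispatchCycle runs one tick. Policies pick first; only then is the detector
// consulted, and only for the selected request. A failed check ends the whole
// cycle (head-of-line blocking) instead of searching for other viable work.
func dispatchCycle(ctx context.Context, policies PolicyEngine, sd SaturationDetector, dispatch func(*Request)) {
	for {
		req, ok := policies.SelectNext(ctx)
		if !ok {
			return // no more queued work this tick
		}
		if sd.IsSaturated(ctx, req.CandidatePods) {
			return // fairest request is blocked; stop dispatching entirely
		}
		dispatch(req)
	}
}
```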
/ok-to-test
/lgtm
@kaushikmitr and @BenjaminBraunDev FYI: this prepares us for per-request saturation. The solution isn't elegant right now, but I am working on a proposal to give operators control over the work-conservation / strict-fairness trade-off during HoL blocking.
What type of PR is this?
/kind cleanup
What this PR does / why we need it:
This PR refactors the flow control dispatch cycle to align with a recent change to the `SaturationDetector` contract (in #1293), which moved from a pool-wide saturation signal to a per-request signal.

The core change removes the previous `BandFilter` abstraction and implements a post-selection viability check. The dispatcher now first allows the inter-flow (fairness) and intra-flow (ordering) policies to select the single best request. Only then is the `SaturationDetector` consulted to see if that specific request is viable.

This new logic strictly enforces the policy decisions. For example, given two flows:
- Flow A: `[A_1, A_2, ...]`
- Flow B: `[B_1, B_2, ...]`
If policies select `A_1` as the next item to dispatch, but it targets saturated backends, the dispatcher will now block the entire priority band for the current cycle. It will not attempt to dispatch `B_1` or `A_2`. This introduces a known trade-off between strict fairness and work conservation (the "noisy neighbor" problem). This is a reasonable default, and future work can explore giving operators more control over this behavior.
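As a toy illustration of that trade-off (hypothetical names, not the real processor types), a single dispatch cycle in the scenario above ends as soon as `A_1` fails its saturation check:

```go
package main

import "fmt"

// Toy model of one dispatch cycle; illustrative only.
type request struct{ flow, id string }

func main() {
	queues := map[string][]request{
		"A": {{"A", "A_1"}, {"A", "A_2"}},
		"B": {{"B", "B_1"}, {"B", "B_2"}},
	}

	// Assume the fairness policy selects flow A's head request, A_1.
	selected := queues["A"][0]

	// Assume the per-request check reports A_1's candidate pods as saturated.
	isSaturated := func(r request) bool { return r.id == "A_1" }

	if isSaturated(selected) {
		// Strict fairness: the cycle ends here; B_1 and A_2 stay queued.
		fmt.Printf("%s blocked; dispatch cycle ends for this tick\n", selected.id)
		return
	}
	fmt.Printf("dispatching %s\n", selected.id)
}
```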
Which issue(s) this PR fixes:
Tracks #674.
Does this PR introduce a user-facing change?: