[GREP] Enhance Gang Termination with Stuck Terminating Policy #466
Open
kangclzjc wants to merge 3 commits into ai-dynamo:main from
Conversation
Signed-off-by: kangclzjc <kangz@nvidia.com>
Signed-off-by: kangclzjc <kangz@nvidia.com>
Signed-off-by: kangclzjc <kangz@nvidia.com>
What type of PR is this?
/kind feature
What this PR does / why we need it:
Grove's gang termination today deletes and recreates PodCliques (and their pods) when MinAvailable is breached for longer than TerminationDelay. In environments where pods are constrained to a topology (e.g. the same rack), node or kubelet failures can leave pods stuck in a terminating state: the API server has set deletionTimestamp, but the kubelet never completes termination. Such pods are excluded from ready/scheduled counts, so MinAvailable is breached and gang termination runs; yet the stuck pods are still present and can block or complicate cleanup and rescheduling.

This GREP proposes a configurable enhancement: pods stuck in termination for longer than a user-configurable duration are either force-deleted (grace period zero) or orphaned, that is, left in the cluster for the admin to handle. In both cases Grove treats them as gone for availability and reconciliation, allowing the gang to recover. A sketch of the proposed mechanics follows below.
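The mechanics might look roughly like the following Go sketch. Every name in it (the policy type, its fields, and the helper functions) is an illustrative placeholder rather than the final Grove API; the only load-bearing detail is the zero grace period on the force-delete path.

```go
// Illustrative sketch only: types, fields, and helpers here are hypothetical
// placeholders for the proposed stuck-terminating policy, not the Grove API.
package stuckpolicy

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// StuckTerminatingAction selects what happens to a pod whose deletionTimestamp
// is set but whose termination never completes (e.g. the kubelet is gone).
type StuckTerminatingAction string

const (
	ActionForceDelete StuckTerminatingAction = "ForceDelete" // delete with grace period zero
	ActionOrphan      StuckTerminatingAction = "Orphan"      // leave the pod for the admin
)

// StuckTerminatingPolicy is the user-configurable knob this GREP proposes.
type StuckTerminatingPolicy struct {
	Timeout metav1.Duration        // how long a pod may stay terminating
	Action  StuckTerminatingAction // what to do once Timeout is exceeded
}

// isStuckTerminating reports whether the pod has been terminating for longer
// than the policy timeout.
func isStuckTerminating(pod *corev1.Pod, p StuckTerminatingPolicy, now time.Time) bool {
	return pod.DeletionTimestamp != nil &&
		now.Sub(pod.DeletionTimestamp.Time) > p.Timeout.Duration
}

// apply enforces the policy on a pod already known to be stuck. ForceDelete
// issues a delete with a zero grace period so the API server removes the
// object without waiting for the kubelet; Orphan does nothing here, and the
// caller simply stops counting the pod toward availability.
func apply(ctx context.Context, c kubernetes.Interface, pod *corev1.Pod, p StuckTerminatingPolicy) error {
	if p.Action != ActionForceDelete {
		return nil
	}
	zero := int64(0)
	return c.CoreV1().Pods(pod.Namespace).Delete(ctx, pod.Name, metav1.DeleteOptions{
		GracePeriodSeconds: &zero,
	})
}
```

Force-deleting with a zero grace period is the same operation as `kubectl delete pod --force --grace-period=0`; the orphan path deliberately keeps the object around so an admin can inspect the wedged node, while the controller stops counting the pod toward MinAvailable.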
Which issue(s) this PR fixes:
Fixes #401
Special notes for your reviewer:
Does this PR introduce an API change?
Additional documentation e.g., enhancement proposals, usage docs, etc.: