fix(e2e): Wait for nodes to reboot #1419
Conversation
Summary of Changes

Hello @qinqon, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request improves the reliability of end-to-end tests by addressing a timing issue that occurred after node reboots. By adding an explicit wait for nodes to become fully ready before proceeding with further checks, the tests will more accurately assess the system's state, reducing flaky test failures caused by race conditions and premature assertions.
Code Review
This pull request aims to fix a race condition in e2e tests by adding an explicit wait for nodes to reboot. While the intention is correct, the implementation of the waiting function waitForNodeToStart is flawed and does not actually wait for the node to become ready. This makes the added waits ineffective. I've left a detailed comment explaining the issue and a suggestion for a fix to the waitForNodeToStart function, which is necessary for this PR to achieve its goal.
```go
nodeToReboot := nodes[0]

Byf("Reboot node %s and verify that bond still has ip of primary nic", nodeToReboot)
restartNodeWithoutWaiting(nodeToReboot)
waitForNodeToStart(nodeToReboot)
```
The waitForNodeToStart function has a logic issue that prevents it from waiting for the node to reboot. When the node is down and unreachable, runner.RunAtNode inside waitForNodeToStart will fail. The function will then return the string "not yet". The Eventually block checks ShouldNot(Equal("up")). Since "not yet" is not equal to "up", the check passes, and waitForNodeToStart returns immediately without actually waiting for the node to be ready. This defeats the purpose of adding this wait.
The current implementation of waitForNodeToStart in test/e2e/handler/utils.go is:

```go
func waitForNodeToStart(node string) {
	Byf("Waiting till node %s is rebooted", node)
	// Wait till `uptime -p` returns "up", which means the node was just rebooted and is 0 min up
	Eventually(func() string {
		output, err := runner.RunAtNode(node, "uptime", "-p")
		if err != nil {
			return "not yet"
		}
		return output
	}, 300*time.Second, 5*time.Second).ShouldNot(Equal("up"), fmt.Sprintf("Node %s failed to start after reboot", node))
}
```

To correctly wait for the node to be ready, the function should instead wait for the runner.RunAtNode command to succeed. A corrected implementation would look like this:
```go
func waitForNodeToStart(node string) {
	Byf("Waiting till node %s is rebooted", node)
	Eventually(func() error {
		_, err := runner.RunAtNode(node, "uptime")
		return err
	}, 5*time.Minute, 5*time.Second).Should(Succeed(), fmt.Sprintf("Node %s failed to start after reboot", node))
}
```

Since utils.go is not part of this PR's changes, I recommend also including the fix for waitForNodeToStart in this pull request for the intended fix to be effective.
@qinqon is this a valid one or yet another hallucination?
The tests were rebooting nodes and then immediately checking NNCPs with Eventually; this can introduce race conditions, since the time it takes for nodes to reboot is quite variable. This fix adds an explicit wait for node readiness: if it fails, we will know for sure that the node didn't fully reboot, instead of getting a confusing "Missing enacment" error. Signed-off-by: Enrique Llorente <ellorent@redhat.com>
@qinqon: The following test failed.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
What this PR does / why we need it:
The tests were rebooting nodes and then immediately checking NNCPs with Eventually; this can introduce race conditions, since the time it takes for nodes to reboot is quite variable. This fix adds an explicit wait for node readiness: if it fails, we will know for sure that the node didn't fully reboot, instead of getting a confusing "Missing enacment" error.
Release note: