
Conversation

@qinqon (Member) commented Dec 26, 2025

What this PR does / why we need it:
The tests were rebooting nodes and then going straight to the Eventually checks on NNCPs. This can introduce race conditions, since how long a node takes to reboot is quite unpredictable. This fix adds an explicit wait for node readiness; if it fails, we will know for sure that the node didn't fully reboot, instead of getting a cryptic "Missing enactment" error.

Release note:

NONE
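
For illustration only, here is a minimal sketch of what such an explicit readiness wait could look like in the e2e suite, assuming a client-go clientset and the Gomega matchers already used by these tests. The helper name waitForNodeReady, the handler package, and the client parameter are hypothetical; the actual change relies on the existing waitForNodeToStart utility discussed in the review below.

package handler

import (
	"context"
	"time"

	. "github.com/onsi/gomega"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForNodeReady is a hypothetical helper that polls the Kubernetes API
// until the rebooted node reports Ready=True or the timeout expires.
func waitForNodeReady(client kubernetes.Interface, nodeName string) {
	Eventually(func() bool {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
		if err != nil {
			// Transient API errors can happen while the cluster recovers; keep polling.
			return false
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
				return true
			}
		}
		return false
	}, 5*time.Minute, 5*time.Second).Should(BeTrue(), "Node %s did not become Ready after reboot", nodeName)
}

Waiting on the node's Ready condition (rather than only on SSH reachability) also covers the window where the node is back up but has not yet rejoined the cluster, which is the kind of race window the description above mentions as producing the "Missing enactment" error.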

@kubevirt-bot kubevirt-bot added labels on Dec 26, 2025: release-note-none (Denotes a PR that doesn't merit a release note), dco-signoff: yes (Indicates the PR's author has DCO signed all their commits).
@kubevirt-bot
Collaborator

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign phoracek for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@gemini-code-assist

Summary of Changes

Hello @qinqon, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly improves the reliability of end-to-end tests by addressing a critical timing issue that occurred after node reboots. By incorporating a specific waiting mechanism for nodes to become fully ready before proceeding with further checks, the tests will now more accurately assess the system's state, thereby reducing flaky test failures caused by race conditions and premature assertions.

Highlights

  • E2E Test Reliability: Introduced explicit waits for node readiness after rebooting nodes in e2e tests to prevent race conditions and "Missing enactment" errors, ensuring tests accurately reflect node state.

Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request aims to fix a race condition in e2e tests by adding an explicit wait for nodes to reboot. While the intention is correct, the implementation of the waiting function waitForNodeToStart is flawed and does not actually wait for the node to become ready. This makes the added waits ineffective. I've left a detailed comment explaining the issue and a suggestion for a fix to the waitForNodeToStart function, which is necessary for this PR to achieve its goal.

nodeToReboot := nodes[0]
Byf("Reboot node %s and verify that bond still has ip of primary nic", nodeToReboot)
restartNodeWithoutWaiting(nodeToReboot)
waitForNodeToStart(nodeToReboot)


critical

The waitForNodeToStart function has a logic issue that prevents it from waiting for the node to reboot. When the node is down and unreachable, runner.RunAtNode inside waitForNodeToStart will fail. The function will then return the string "not yet". The Eventually block checks ShouldNot(Equal("up")). Since "not yet" is not equal to "up", the check passes, and waitForNodeToStart returns immediately without actually waiting for the node to be ready. This defeats the purpose of adding this wait.

The current implementation of waitForNodeToStart in test/e2e/handler/utils.go is:

func waitForNodeToStart(node string) {
	Byf("Waiting till node %s is rebooted", node)
	// It will wait till uptime -p will return up that means that node was currently rebooted and is 0 min up
	Eventually(func() string {
		output, err := runner.RunAtNode(node, "uptime", "-p")
		if err != nil {
			return "not yet"
		}
		return output
	}, 300*time.Second, 5*time.Second).ShouldNot(Equal("up"), fmt.Sprintf("Node %s failed to start after reboot", node))
}

To correctly wait for the node to be ready, the function should wait for the runner.RunAtNode command to succeed. A corrected implementation would look like this:

func waitForNodeToStart(node string) {
	Byf("Waiting till node %s is rebooted", node)
	Eventually(func() error {
		_, err := runner.RunAtNode(node, "uptime")
		return err
	}, 5*time.Minute, 5*time.Second).Should(Succeed(), fmt.Sprintf("Node %s failed to start after reboot", node))
}

Since utils.go is not part of this PR's changes, I recommend also including the fix for waitForNodeToStart in this pull request for the intended fix to be effective.

Member


@qinqon is this a valid one or yet another hallucination?

The tests were rebooting nodes and then going straight to the Eventually
checks on NNCPs. This can introduce race conditions, since how long a node
takes to reboot is quite unpredictable. This fix adds an explicit wait for
node readiness; if it fails, we will know for sure that the node didn't
fully reboot, instead of getting a cryptic "Missing enactment" error.

Signed-off-by: Enrique Llorente <ellorent@redhat.com>
@qinqon qinqon force-pushed the e2e-wait-for-node-reboot branch from a1e5fb4 to de8212a on December 26, 2025 at 07:21
@kubevirt-bot
Collaborator

@qinqon: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

| Test name | Commit | Details | Required | Rerun command |
| --- | --- | --- | --- | --- |
| pull-kubernetes-nmstate-e2e-handler-k8s | de8212a | link | true | /test pull-kubernetes-nmstate-e2e-handler-k8s |

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

