OCPBUGS-60771: Fix job success logic #3202
Conversation
/approve cancel
@sebsoto thanks for fixing this. Mostly LGTM, see comments for remarks.
if err != nil {
    return "", err
}
labelSelector = "job-name=" + job.Name
I recommend leaving this assignment here; it is better practice to build the selector from a job that is known to exist (i.e. L119), which lowers the probability of errors.
That's fair, reverted.
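For context, a minimal sketch of the pattern the comment recommends: derive the pod label selector from the Job object returned by the create call, so it always refers to a job that exists on the cluster. This assumes a client-go clientset; the package and helper names (createJobAndSelector, listJobPods) are illustrative and not code from this repository.

package e2e // illustrative package name

import (
    "context"

    batchv1 "k8s.io/api/batch/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// createJobAndSelector creates the job and derives the pod label selector
// from the object returned by the API server, rather than from the name we
// intended to use, which lowers the chance of a mismatch.
func createJobAndSelector(ctx context.Context, c kubernetes.Interface, namespace string,
    job *batchv1.Job) (*batchv1.Job, string, error) {
    created, err := c.BatchV1().Jobs(namespace).Create(ctx, job, metav1.CreateOptions{})
    if err != nil {
        return nil, "", err
    }
    // The job controller labels the pods it creates with job-name=<name>.
    return created, "job-name=" + created.Name, nil
}

// listJobPods lists the pods that belong to the created job.
func listJobPods(ctx context.Context, c kubernetes.Interface, namespace, selector string) (*corev1.PodList, error) {
    return c.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{LabelSelector: selector})
}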
if job.Status.Failed > 0 {
    _, err = tc.gatherPodLogs(labelSelector)
    if err != nil {
        log.Printf("Unable to get logs associated with pod %s: %v", labelSelector, err)
    }
    events, _ := tc.getPodEvents(name)
    return "", fmt.Errorf("job %v failed: %v", job, events)
}
It may be a case-by-case scenario. Currently the logic requires no failures and one success; is there a case where we need to track a failure? If so, consider passing an argument to control it, e.g. MaxFailureCount.
I just reworked the logic to account for this.
The caller should define the success criteria when creating the job.
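A rough sketch of what caller-defined success criteria could look like, assuming the helper polls the Job status with client-go. The requiredSuccesses and maxFailures parameters are hypothetical, in the spirit of the MaxFailureCount suggestion, and are not the exact API this PR adds.

package e2e // illustrative package name

import (
    "context"
    "fmt"
    "time"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/util/wait"
    "k8s.io/client-go/kubernetes"
)

// waitForJob polls the Job until it satisfies the caller's criteria: at least
// requiredSuccesses succeeded pods and no more than maxFailures failed pods.
// Individual pod failures below the limit are tolerated, since the job
// controller retries by creating new pods.
func waitForJob(ctx context.Context, c kubernetes.Interface, namespace, name string,
    requiredSuccesses, maxFailures int32) error {
    return wait.PollUntilContextTimeout(ctx, 5*time.Second, 10*time.Minute, true,
        func(ctx context.Context) (bool, error) {
            job, err := c.BatchV1().Jobs(namespace).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            if job.Status.Failed > maxFailures {
                // Returning an error stops the poll early.
                return false, fmt.Errorf("job %s exceeded %d failed pods", name, maxFailures)
            }
            return job.Status.Succeeded >= requiredSuccesses, nil
        })
}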
Thanks, LGTM
5db43d1 to 63c015e
LGTM
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: jrvaldes

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
1e0953a to 20c4841
@sebsoto: This pull request references Jira Issue OCPBUGS-60771, which is invalid:

The bug has been updated to refer to the pull request using the external bug tracker.

In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
/jira refresh
@sebsoto: This pull request references Jira Issue OCPBUGS-60771, which is valid. The bug has been moved to the POST state. 3 validation(s) were run on this bug.

Requesting review from QA contact:

In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
/cherry-pick release-4.19
@sebsoto: once the present PR merges, I will cherry-pick it on top of release-4.19.

In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
We are reporting jobs as failed if a pod within the job failed. This is incorrect logic, as a single pod failing is not a failure state for the entire job. The job will keep creating pods until a success is seen.
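As an illustration of that distinction, one way to decide a Job's final state is to inspect the job-level conditions instead of individual pod failures. This is a sketch of the general approach using the upstream batch/v1 types, not the exact code in this PR; the jobFinished helper name is illustrative.

package e2e // illustrative package name

import (
    batchv1 "k8s.io/api/batch/v1"
    corev1 "k8s.io/api/core/v1"
)

// jobFinished reports whether the Job as a whole has finished and whether it
// succeeded. A non-zero job.Status.Failed on its own is not treated as
// failure: the JobFailed condition is only set once the controller stops
// retrying (e.g. the backoff limit or active deadline is reached), while
// JobComplete is set when the required number of pods have succeeded.
func jobFinished(job *batchv1.Job) (finished bool, succeeded bool) {
    for _, cond := range job.Status.Conditions {
        if cond.Status != corev1.ConditionTrue {
            continue
        }
        switch cond.Type {
        case batchv1.JobComplete:
            return true, true
        case batchv1.JobFailed:
            return true, false
        }
    }
    return false, false
}

With a check like this, a single failed pod only surfaces as a test failure once the job controller itself has given up retrying.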
20c4841 to b1082ea
Given the changes only target the tests, they can be backported to all active branches.
/lgtm
/override ci/prow/nutanix-e2e-operator
@sebsoto: Overrode contexts on behalf of sebsoto: ci/prow/nutanix-e2e-operator

In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
@sebsoto: Jira Issue OCPBUGS-60771: All pull requests linked via external trackers have merged.

Jira Issue OCPBUGS-60771 has been moved to the MODIFIED state.

In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
@sebsoto: all tests passed!

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
@sebsoto: new pull request created: #3234

In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.