[Build|GH] Unify Maven arguments and replace use of coactions/setup-xvfb #2998
Conversation
.github/workflows/mavenBuild.yml
Outdated

```diff
 -Dcompare-version-with-baselines.skip=false
 -Pbree-libs
 -Papi-check
---fail-at-end
+--fail-at-end -Dmaven.test.failure.ignore=true
```
I don't think such generic properties should be set in a shared action. If this is wanted (and projects might decide differently), it's better to either pass it as an extra option in the calling workflow or put it into maven.config.
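As a hedged illustration of the maven.config alternative: project-wide defaults can live in `.mvn/maven.config` at the repository root, one option per line (the options below are examples, not a recommendation):

```
--fail-at-end
-Dmaven.test.failure.ignore=true
```

Maven picks this file up automatically, and unlike options baked into a shared action, each consuming project controls its own copy.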
It's a generic option, yes, but I assume that each project would actually want that, or at least would not object.
What do you think speaks against this?
> What do you think speaks against this?
1. It is bad practice, as it will, for example, build downstream projects that then fail with more tests because they rely on correct behavior. And still the "bad" module will be marked as SUCCESS.
2. "What projects actually want" changes and differs between projects, and you can't override the option if it is given on the command line.
3. I even remember that we used this option in the past, and people complained that the check showed green even though tests were failing.
4. If the additional check fails to run (for whatever reason), you won't get any feedback at all.
Actually, we should even think about removing --fail-at-end... If something is wrong, we should report a failure as soon as possible and not waste resources on things that are already proven to fail.
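To make the trade-off concrete, here is a minimal stand-in (plain shell, not real Maven; the module names and the hard-coded failure are invented) for what `--fail-at-end` does: every module is still built, failures are collected, and the overall result only turns red at the very end, which is exactly why a "bad" module can burn CI time before the failure is reported:

```shell
# Emulate Maven's --fail-at-end: keep building all modules,
# collect failures, report the overall result only at the end.
# (module list and the hard-coded failure are illustrative)
failed=""
for module in core ui tests; do
  # stand-in for the per-module build step: "ui" fails, the others succeed
  if [ "$module" = "ui" ]; then status=1; else status=0; fi
  if [ "$status" -ne 0 ]; then
    echo "FAILURE: $module"
    failed="$failed$module "
  else
    echo "SUCCESS: $module"
  fi
done
# with fail-fast (Maven's default, --fail-fast) the loop would
# stop at the first failing module instead
if [ -n "$failed" ]; then
  echo "Reactor summary: failed modules: $failed"
fi
```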
I have never thought about this option, but doing so now, I also see some benefits:
- People would be better notified about any kind of failure not related to tests. I have seen merges despite different kinds of failures, because they were "hidden" by the fact that there was a test failure, which was just assumed to be the only cause for the workflow to fail.
- The results would obviously be "cleaner", as a test failure only makes one check fail, which removes the manual effort of looking at multiple check results in case of a failure.

But I also see the drawback that "success" for an execution with test failures is a bit strange, and there should probably be an option for consuming projects to opt out.
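Such an opt-out could look roughly like the following sketch of the reusable workflow (the input name `ignore-test-failures` and the surrounding structure are hypothetical, not the actual mavenBuild.yml):

```yaml
on:
  workflow_call:
    inputs:
      ignore-test-failures:   # hypothetical opt-out for consuming projects
        type: boolean
        required: false
        default: true

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Build with Maven
        run: >
          mvn clean verify --batch-mode --fail-at-end
          ${{ inputs.ignore-test-failures && '-Dmaven.test.failure.ignore=true' || '' }}
```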
> 4. If the additional check fails to run (for whatever reason), you won't get any feedback at all.
This is quite an important point. Even if the additional check does not fail, it takes some time (seconds) until it is done and shows up. That check is not shown as "pending" but just appears several seconds after the build has finished. I have seen people actively waiting for the builds to finish in order to merge the change directly, so this could result in PRs being merged right after all checks become green, even though a failure pops up a few seconds later.
Thus, I think we should somehow ensure that either the test-results check shows up as "pending", or that the builds wait for the test results to show up before finishing (but then a build would only be marked as completed when all of them are).
> Actually we should even think about removing fail at end... If something is wrong we should report a failure as soon as possible and not waste resources for things that already are proven to fail.
This is something we cannot do, as we have sporadically failing tests, and people are (unfortunately) used to merging PRs with such a failure, referring to the issue that reports it. So this could easily lead to test failures that occur after such a known one staying undiscovered.
> I even remember that we used this option in the past, and people complained that the check showed green even though tests were failing.
I would say that each failing check should be inspected/understood in detail.
> - The results would obviously be "cleaner", as a test failure only makes one check fail, which removes the manual effort of looking at multiple check results in case of a failure.
That's my main motivation as I find it very annoying to have to check two logs for one failure, just to be sure nothing is hidden. Of course we could also 'just' fix all flaky tests.
> - People would be better notified about any kind of failure not related to tests. I have seen merges despite different kinds of failures, because they were "hidden" by the fact that there was a test failure, which was just assumed to be the only cause for the workflow to fail.
There can actually be two kinds of "hidden": hidden in the sense that the committer does not inspect the build log because the test failure is assumed to be unrelated, and hidden in the sense that a failing test project prevents the execution of another one that also has test failures, which then go unnoticed because those tests are never executed. Although this is currently unlikely, as we run with -fae
and test inter-dependencies are at least very uncommon. But that's basically Heiko's last point.
To be honest, I expect that under normal conditions no one checks in code that does not compile, and we should not make it "comfortable" or "easier"... if one wants, it's easy enough to even run a Maven build locally.
I really like the usage of long names to ease understanding of the commands.
I just agree that the option to ignore test failures is a bit controversial and may be something to extract into a separate PR with an according discussion.
Ok, then let's move that to
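For reference, a quick sketch of how short Maven flags map to their long equivalents (the flag list is illustrative, taken from `mvn --help`, not necessarily the exact set used in mavenBuild.yml):

```shell
# Short Maven CLI flags and their long-name equivalents (per `mvn --help`);
# the long forms are self-documenting in workflow files and CI logs.
SHORT="-B -e -fae -ntp -V"
LONG="--batch-mode --errors --fail-at-end --no-transfer-progress --show-version"

echo "short: mvn $SHORT clean verify"
echo "long:  mvn $LONG clean verify"
```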
Use long option names to be more readable.
Force-pushed from d7a883d to ebd102b.
Force-pushed from ebd102b to c3c1033.
Besides moving the ignore-test-failures option out, I have now fixed a misconfiguration in the shell selection, but this could also be made the default (see below).
Furthermore, I have simplified the download of the API Tools matcher to just use local tools (i.e. curl).
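A minimal sketch of such a curl-based download step (the URL and file names are placeholders, not the actual API Tools matcher location; a local `file://` URL stands in for the remote one so the snippet is self-contained):

```shell
# Download with curl, relying only on tools preinstalled on the runner.
# --fail      -> exit non-zero on HTTP errors instead of saving the error page
# --location  -> follow redirects
# --retry 3   -> retry transient failures (a no-op for the file:// stand-in)
printf 'fake-api-tools-content' > /tmp/api-tools-source.jar  # stand-in for the remote file
curl --fail --silent --show-error --location --retry 3 \
     --output /tmp/api-tools.jar "file:///tmp/api-tools-source.jar"
```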
Changes still look good to me. Using curl instead of the download action seems fine, as we don't use much of that action's functionality anyway.
Force-pushed from 0b6b5a0 to c93c00c.
Force-pushed from dc92085 to 1f35311.
Requested change has been addressed.
Makes it more compact and only relies on tools provided by the runner.
Force-pushed from 1f35311 to a69aea5.
In the shared GH workflows, use long option names to be more readable.
Also don't fail the build on test failures, because the additional test-result check already fails in case of test failures. This simplifies the distinction between 'hard' failures, like compile errors, and test failures.
Similar to