ci: Add automatic flaky test detector #18684
base: develop
Conversation
node-overhead report
Note: This is a synthetic benchmark with a minimal express app and does not necessarily reflect the real-world performance impact in an application.
for (const [key, value] of Object.entries(vars)) {
  const pattern = new RegExp(`\\{\\{\\s*env\\.${key}\\s*\\}\\}`, 'g');
  title = title.replace(pattern, value);
  issueBody = issueBody.replace(pattern, value);
}
String replace special patterns corrupt job name substitution
The replace() calls interpret special dollar-sign sequences in the replacement string ($&, $', $$, $1, etc.) rather than treating them literally. If a job name happens to contain patterns like $& or $1, the issue title and body would contain the matched template placeholder or captured groups instead of the literal job name. While job names rarely contain such patterns, this could cause confusing issue titles when they do.
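One way to sidestep this (a sketch, not necessarily how this PR ends up resolving it) is to pass a replacer function, since a function's return value is inserted verbatim and `$`-sequences are not interpreted:

```js
for (const [key, value] of Object.entries(vars)) {
  const pattern = new RegExp(`\\{\\{\\s*env\\.${key}\\s*\\}\\}`, 'g');
  // A replacer function's return value is used literally, so job names
  // containing `$&`, `$1`, `$$`, etc. come through unchanged.
  title = title.replace(pattern, () => value);
  issueBody = issueBody.replace(pattern, () => value);
}
```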
I'm not 100% sure on this. The new ticket is unassigned by default and, I think, it doesn't get an SLA set in Linear.
To understand this use case: don't we already get flaky tests reported within Sentry? We could simply adapt the alert: https://sentry.sentry.io/issues/alerts/rules/details/267675/ (which we can also assign to people).
@JPeer264 We only get alerted if there are more than 10 flaky tests detected within an hour, which is a good start but not ideal, since that threshold is quite arbitrary (it depends on how much people are pushing to PRs, for instance). My main goal was to increase the visibility of flaky tests, and having them show up in the issue feed gets us that, instead of someone needing to manually check for failing tests and then create issues so somebody can fix them. I am not exactly sure how/when SLAs are assigned, but we can probably get an SLA by assigning the correct label. Automatically assigning someone sounds difficult to me though; maybe flaky CI issues can just be another stream we look out for during triage. Happy to explore other options as well (e.g. using the alerts we have), I just wanted to put this out to see if we can improve the process around this a bit.
Manually checking for flakes and opening issues is a bit annoying. I was thinking we could add a CI workflow to automate this. The action only runs when merging to develop. It could also be done on PRs, but that seems unnecessarily complicated. My thinking is that for a push to develop to happen, all the tests must first have passed in the original PR. Therefore, if a test then fails on develop, we know it's a flake (a rough sketch of how this could look is below). Open to ideas/improvements/cleanups, or let me know if there are any cases I am missing that could lead to false positives.
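For illustration only, here is a minimal sketch of how the detection step could look with `actions/github-script`. This is an assumption about the shape of the workflow, not the PR's actual implementation; the `workflow_run` trigger and the `flakyJobs` variable are hypothetical.

```js
// Hypothetical github-script step: inspect a completed CI run on develop and
// collect the jobs that failed. Since the same tests must have passed on the
// PR before merge, any failure here is treated as a flake candidate.
const runId = context.payload.workflow_run?.id ?? context.runId;

const { data } = await github.rest.actions.listJobsForWorkflowRun({
  owner: context.repo.owner,
  repo: context.repo.repo,
  run_id: runId,
  per_page: 100,
});

const flakyJobs = data.jobs.filter(job => job.conclusion === 'failure');
```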
Example issue created with this: #18693
It doesn't include all the details, but I think the most important part is a link to the run so we can investigate further. Also, the logic for creating the issues is a bit ugly, but I'm not sure we can make it much cleaner, given that I want to create one issue per failed test instead of dumping everything into a single issue.
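As a rough illustration of the one-issue-per-failure idea (again a sketch under assumptions: `flakyJobs` comes from the detection sketch above, and the title format and label are hypothetical, not the PR's actual template):

```js
for (const job of flakyJobs) {
  // One issue per failed job, titled after the job and linking to the run.
  await github.rest.issues.create({
    owner: context.repo.owner,
    repo: context.repo.repo,
    title: `[Flaky CI]: ${job.name}`,
    body: [
      `\`${job.name}\` failed on \`develop\` even though it passed on the original PR, so it is likely flaky.`,
      '',
      `Run: ${job.html_url}`,
    ].join('\n'),
    labels: ['Flaky CI'],
  });
}
```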