Conversation
Codecov Report
✅ All modified and coverable lines are covered by tests.

@@            Coverage Diff             @@
##             main     #393      +/-   ##
==========================================
- Coverage   95.54%   95.11%   -0.43%
==========================================
  Files          67       67
  Lines        3278     3278
  Branches      706      702       -4
==========================================
- Hits         3132     3118      -14
- Misses        146      160      +14
await page.keys(["Enter"]);
await page.keys(["ArrowLeft"]);
expect(await page.fullPageScreenshot()).toMatchImageSnapshot();
for (let i = 1; i <= 3; i++) {
Before my changes, we could call toMatchImageSnapshot() multiple times within the same test. The snapshots would then be stored with a suffix like -2 and -3 for the subsequent calls.
However, the counter is also incremented when tests are retried via the vitest retry configuration, which defeats the purpose of the retries. To mitigate that, I updated the generated snapshot names and removed the counter from them. Now, a single snapshot must be taken per test.
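For context, vitest retries re-run the whole test body, which is why the snapshot name must stay stable across attempts. A config in this spirit (the retry value here is an illustrative assumption, not necessarily the repo's actual setting) looks like:

```typescript
// vitest.config.ts (illustrative sketch; the retry value is an assumption)
import { defineConfig } from "vitest/config";

export default defineConfig({
  test: {
    // Each retry re-executes the full test body. With a per-call counter in
    // the snapshot name, attempt 2 would look for a "-2" snapshot that was
    // never committed, so the generated name must not depend on call count.
    retry: 2,
  },
});
```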
return `/#${page}?${urlParams.toString()}`;
}

const ignoredPages = new Set([
Before my change, we ran tests only on pages with .screenshot-area, which missed some useful pages, since the screenshot area is not actually a requirement for full-page screenshots. The updated config captures all pages except those where it does not make sense.
.map((page) => "/" + path.relative("../../pages/", page))
.filter((page) => !ignoredPages.has(page));
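Extracted as a standalone sketch (the ignored entries below are hypothetical placeholders, not the repo's real list), the selection logic maps page source files to routes and then drops only the explicitly ignored ones:

```typescript
import * as path from "path";

// Hypothetical ignored routes; the real set lives in the test setup.
const ignoredPages = new Set(["/index", "/theming/playground"]);

// Map page source paths to route names, then drop the ignored ones.
// Unlike the previous approach, no ".screenshot-area" marker is required.
export function selectPages(pageFiles: string[]): string[] {
  return pageFiles
    .map((page) => "/" + path.relative("../../pages/", page))
    .filter((page) => !ignoredPages.has(page));
}
```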
const rtlStaticPages = ["/dnd/engine-a2h-test", "/with-app-layout/integ"];
This is new. Previously only the ltr direction was covered.
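A minimal sketch of how this could be wired (the helper name and the direction query parameter are assumptions, not the actual implementation):

```typescript
// Pages that should also be captured in right-to-left mode.
const rtlStaticPages = ["/dnd/engine-a2h-test", "/with-app-layout/integ"];

// Most pages get only the ltr URL; rtl pages get a second URL so both
// directions produce their own snapshot.
export function testUrls(page: string): string[] {
  const urls = [`/#${page}?direction=ltr`];
  if (rtlStaticPages.includes(page)) {
    urls.push(`/#${page}?direction=rtl`);
  }
  return urls;
}
```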
const toMatchImageSnapshot = configureToMatchImageSnapshot({
  customSnapshotsDir: snapshotDir,
  customSnapshotIdentifier: ({ currentTestName }) => currentTestName.replace("/#/", "").replace(/[\s]/g, "-"),
This makes the generated snapshots better organised (into folders) and removes the counter from the name, which was affected by the retries config.
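To make the effect concrete, here is the identifier function pulled out on its own (the sample test name is made up):

```typescript
// Same transformation as customSnapshotIdentifier above: drop the first
// "/#/" and turn whitespace into dashes. The remaining "/" characters make
// the snapshots land in per-page sub-folders on disk.
export function snapshotIdentifier(currentTestName: string): string {
  return currentTestName.replace("/#/", "").replace(/[\s]/g, "-");
}

// e.g. snapshotIdentifier("/#/dnd/engine-a2h-test rtl")
//      -> "dnd/engine-a2h-test-rtl"
```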
Who-is-PS left a comment
Minor suggestion: Consider extracting the timing values into named constants for better readability and easier tuning. What do you think about this?
await this.browser.execute((target) => {
  (document.querySelector(target) as HTMLButtonElement)!.focus();
}, selector);
await this.browser.keys(["ArrowDown"]);
Why was ArrowDown added after focus()? Is this intentional or a mistake?
I think there was a bug with that focus outline that got fixed without updating the tests. The focus outline is expected to show only when the last user interaction was keyboard-based. Calling ".focus()" is artificial and does not satisfy that criterion (the outline would be shown if we focused the handles with Tab navigation, for instance).
Got it, that makes sense: it triggers the :focus-visible styling. Thanks for the clarification!
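As a toy model (this is not the browser's actual heuristic, just an illustration of the rule discussed above), the expectation could be stated as:

```typescript
// :focus-visible roughly means "focused, and the focus was reached via the
// keyboard". A programmatic .focus() counts as a script interaction, so no
// outline is expected; Tab navigation or an arrow-key press would show it.
type Interaction = "keyboard" | "pointer" | "script";

export function isFocusOutlineExpected(lastInteraction: Interaction): boolean {
  return lastInteraction === "keyboard";
}
```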
"dev": true,
"license": "MIT"
},
"node_modules/react-keyed-flatten-children": {
react-keyed-flatten-children was added but doesn't appear to be used in these changes. Is it unrelated, or needed elsewhere? Could you please explain?
This came from installing dependencies (npm install) - and is likely due to changes in the components dependency.
Ah, transitive dependency. Thanks!
await browser.url(url);
const page = new DndPageObject(browser);
await page.waitForVisible("main");
await page.waitForJsTimers(500);
Did you test it with a lower number? This could add noticeable delay per test.
I selected a rather large number in an attempt to minimize flakiness. It does affect the completion time of the visual regression CI action, but that time is still lower than the other actions, so it is fine to keep it like that. It is also easily reversible if the visual regression tests' running time ever becomes a problem.
Fair enough, makes sense prioritizing stability over speed. Thanks for the explanation!
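For reference, a minimal settle-time helper in this spirit (a simplified sketch; the repo's waitForJsTimers may additionally inspect the page's pending timers) would be:

```typescript
// Resolve after a fixed quiet period, giving in-flight JS timers and
// animations a chance to finish before the screenshot is taken.
export function settle(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}
```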
async fullPageScreenshot() {
  // Necessary for animations to complete.
  await this.pause(200);
The original value was 100ms; is the increase to 200ms necessary? With many screenshots, this doubles the wait time per test.
It doubles the waiting time before interactions, but it is still a small increase to the total execution time of each test, where the largest portion is setting up the test session. I am not sure if this particular increase makes a real difference, but it might have some effect, since most of the flakiness is now observed in the dynamic tests (those where we trigger some actions before taking a screenshot).
TBH, I think this is not very practical here: there is no need to reuse them, and there is no added benefit in giving these values names, as the values are based on practical experiments rather than explicit constraints (such as matching certain timeouts present in the feature implementation).
Description
The visual tests are refactored for more code reuse between different scenarios so that optimisations for less flakiness can be applied to all tests.
Rel: D383236509
Similar: cloudscape-design/chart-components#163
How has this been tested?
I tested that by running the visual refresh tests locally (npm run test:visual). The first run generates screenshots; the second run generates new screenshots and compares them to the ones generated previously.
Note: the Visual Regressions action is expected to fail in this PR. I expect it to work and produce less flakiness in future PRs to charts. I will experiment with that more once this PR is merged.
Review checklist
The following items are to be evaluated by the author(s) and the reviewer(s).
Correctness
… CONTRIBUTING.md.
… CONTRIBUTING.md.
Security
… checkSafeUrl function.
Testing
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.