Conversation

rubiesonthesky
Collaborator

PR Checklist

Overview

Collect CLI output in integration tests.

Added mutation stats to the "log" output so we can see what kinds of changes occur in the integration tests.

I also added a checkTestResult function so I wouldn't have to add output snapshotting to every test by hand. Which meant, of course, that I had to change every test anyway. 😅 If no snapshots are reported as unneeded, then I should have updated every test successfully.
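For context, a minimal sketch of what such a shared helper could look like, assuming a Vitest setup like the snippets quoted below; the result shape and names are illustrative, not the PR's actual code:

```ts
// Hypothetical sketch of a shared checkTestResult helper, so the snapshotting
// logic lives in one place instead of being repeated in every integration test.
import { expect } from "vitest";

interface TestCaseResult {
	actualContent: string;
	expectedFilePath: string;
	options: unknown;
	output: string[];
}

export async function checkTestResult(result: TestCaseResult): Promise<void> {
	// File snapshot for the mutated content, named inline snapshots for the rest.
	await expect(result.actualContent).toMatchFileSnapshot(result.expectedFilePath);
	expect(result.options).toMatchSnapshot("options");
	expect(result.output).toMatchSnapshot("output");
}
```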


codecov bot commented Feb 7, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 76.59%. Comparing base (8dd19f9) to head (a1da636).
⚠️ Report is 161 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2198      +/-   ##
==========================================
+ Coverage   76.52%   76.59%   +0.07%     
==========================================
  Files         168      168              
  Lines        7109     7131      +22     
  Branches     1092     1101       +9     
==========================================
+ Hits         5440     5462      +22     
  Misses       1663     1663              
  Partials        6        6              
Flag        Coverage Δ
mutation    70.28% <100.00%> (+0.09%) ⬆️
unit        14.47% <3.33%> (-0.04%) ⬇️

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

await runMutationTest(caseDir);
await expect(actualContent).toMatchFileSnapshot(expectedFilePath);
expect(options).toMatchSnapshot("options");
expect.assertions(3);
Owner


[Refactor] 😬 I really, really dislike expect.assertions. It's super brittle, relying on implementation details across tests: if you ever rework a test, add a utility that runs multiple assertions, etc., you end up having to change a bunch of otherwise seemingly arbitrary numbers. Plus it's not comprehensive: if some async logic gets added to a test and the author forgets to update the count, the mistake can slip by silently or only surface in some future test. Spooky.

Request: remove the expect.assertions(...) calls. If there are places that genuinely need that logic, we can always look at them individually. (Are there any?)
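For illustration only (a hypothetical test, not code from this PR), the brittleness looks like this: the hard-coded count has to track assertions it does not own, so refactoring a shared helper forces unrelated number changes.

```ts
import { describe, expect, it } from "vitest";

// Hypothetical helper that bundles a couple of assertions.
function expectNonEmptyOutput(output: string[]) {
	expect(output).not.toHaveLength(0);
	expect(output[0]).toEqual(expect.any(String));
}

describe("example", () => {
	it("matches output", () => {
		// 1 assertion here + 2 inside the helper = 3. If the helper ever gains
		// or loses an assertion, this number silently becomes wrong even though
		// the test body itself never changed.
		expect.assertions(3);

		const output = ["(example output line)"];
		expectNonEmptyOutput(output);
		expect(output).toMatchSnapshot("output");
	});
});
```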

Comment on lines +96 to +98
await expect(actualContent).toMatchFileSnapshot(expectedFilePath);
expect(options).toMatchSnapshot("options");
expect(output).toMatchSnapshot("output");
Owner


[Refactor] Having multiple assertions in a test means we only get insight into one failure at a time: if assertion 1 of 3 fails, we can't tell at a glance whether 2 or 3 would have failed too.

My normal trick for this kind of thing is:

Suggested change
- await expect(actualContent).toMatchFileSnapshot(expectedFilePath);
- expect(options).toMatchSnapshot("options");
- expect(output).toMatchSnapshot("output");
+ await expect({ actualContent, options, output }).toMatchFileSnapshot(expectedFilePath);

@JoshuaKGoldberg added the "status: waiting for author" label Feb 17, 2025
@rubiesonthesky marked this pull request as draft March 5, 2025
Development

Successfully merging this pull request may close these issues.

🛠 Tooling: collect cli output in integration tests