test: intentionally break CI to observe test result comments #7732
Conversation
This commit improves the Playwright test results comments on PRs to be more concise and actionable:
1. Enhanced extract-playwright-counts.ts:
- Added interfaces for test results, locations, and attachments
- Implemented extractFailingTests() to recursively extract failing test details (see the sketch after this list)
- Now extracts test names, file paths, line numbers, errors, and trace paths
- Returns failingTests array in the JSON output
2. Updated pr-playwright-deploy-and-comment.sh:
- Made summary more concise (single line with counts)
- Added "Failed Tests" section showing each failing test with:
* Direct link to test source code on GitHub
* Browser configuration where it failed
* Direct link to Playwright trace viewer
- Moved browser-specific reports into a collapsible <details> section
- Reduced overall verbosity while keeping important info upfront
The new format makes it much easier for developers to:
- Quickly see which tests failed
- Jump directly to the failing test code
- Access the Playwright trace viewer (which few people knew existed)
Implements: https://www.notion.so/Implement-Improve-Playwright-PR-comment-format-2d16d73d36508129979ad74391bee39d
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
Add categorization of test failures by type (screenshot assertions, expectation failures, timeouts, and other) to help developers quickly understand what kinds of issues are occurring.
Changes:
- Add categorizeFailureType() function to detect failure types from error messages (sketched below)
- Track failure type counts in the TestCounts interface
- Display a "Failure Breakdown" section in PR comments when tests fail
- Show counts for: screenshot, expectation, timeout, and other failures
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
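A short sketch of that categorization, assuming failure types are detected with simple substring checks on the error message; the exact patterns are placeholders, not the ones used in the script:

```ts
type FailureType = 'screenshot' | 'expectation' | 'timeout' | 'other';

function categorizeFailureType(errorMessage: string): FailureType {
  const msg = errorMessage.toLowerCase();
  // Check screenshot failures first: their messages usually also contain
  // an expect(...) call, so the order of checks matters.
  if (msg.includes('tohavescreenshot') || msg.includes('screenshot comparison')) {
    return 'screenshot';
  }
  if (msg.includes('timeout') || msg.includes('timed out')) {
    return 'timeout';
  }
  if (msg.includes('expect(')) {
    return 'expectation';
  }
  return 'other';
}
```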
- Replace sed with bash parameter expansion for /index.html removal (SC2001)
- Remove unused trace_link variable (SC2034)
This is an experiment to observe how GitHub CI/CD test result comments are formatted when a test fails. The test now expects a non-existent screenshot file, which will cause the test to fail in CI.
🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
🎨 Storybook Build Status
✅ Build completed successfully!
⏰ Completed at: 12/22/2025, 08:40:28 PM UTC
🔗 Links
🎉 Your Storybook is ready for review!
🎭 Playwright Test Results
❌ Some tests failed • ⏰ 12/22/2025, 08:49:24 PM UTC
499 ✅ • 1 ❌ • 1
Failure Breakdown: 📸 0 screenshot • ✓ 0 expectation • ⏱️ 0 timeout • ❓ 0 other
📊 Test Reports by Browser
Bundle Size Report
Summary
Category Glance
Per-category breakdown

| Category | Size | Baseline | Change | Description |
| --- | --- | --- | --- | --- |
| App Entry Points | 3.19 MB | 3.19 MB | ⚪ 0 B | Main entry bundles and manifests |
| Graph Workspace | 996 kB | 996 kB | ⚪ 0 B | Graph editor runtime, canvas, workflow orchestration |
| Views & Navigation | 6.54 kB | 6.54 kB | ⚪ 0 B | Top-level views, pages, and routed surfaces |
| Panels & Settings | 295 kB | 295 kB | ⚪ 0 B | Configuration panels, inspectors, and settings screens |
| UI Components | 196 kB | 196 kB | ⚪ 0 B | Reusable component library chunks |
| Data & Services | 12.5 kB | 12.5 kB | ⚪ 0 B | Stores, services, APIs, and repositories |
| Utilities & Hooks | 1.41 kB | 1.41 kB | ⚪ 0 B | Helpers, composables, and utility bundles |
| Vendor & Third-Party | 9.1 MB | 9.1 MB | ⚪ 0 B | External libraries and shared vendor chunks |
| Other | 3.44 MB | 3.44 MB | ⚪ 0 B | Bundles that do not match a named category |
Purpose
This is an experimental PR to observe how GitHub CI/CD test result comments are formatted when tests fail.
What was changed
Modified browser_tests/tests/colorPalette.spec.ts to expect a non-existent screenshot file (see the sketch below)
Expected outcome
The test should fail and GitHub should post a comment with the test failure details. We want to see how this comment is formatted and what information it includes.
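For illustration, a hypothetical version of such a test; the test title, URL, and snapshot file name below are placeholders rather than the actual contents of browser_tests/tests/colorPalette.spec.ts:

```ts
import { expect, test } from '@playwright/test';

// The assertion references a snapshot that does not exist, so the
// screenshot comparison fails in CI and triggers the failure comment.
test('color palette renders', async ({ page }) => {
  await page.goto('/'); // placeholder URL
  await expect(page).toHaveScreenshot('this-screenshot-does-not-exist.png');
});
```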
Note
🤖 Generated with Claude Code
┆Issue is synchronized with this Notion page by Unito