Troubleshooting Guide
This guide helps resolve common issues encountered with the GUI-Based Testing Code Review GitHub Action. It provides systematic approaches to diagnosing and fixing problems so the action integrates smoothly into your CI/CD pipeline.
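Before digging into specific failures, it often helps to reproduce against a known-good baseline. The following minimal workflow is a sketch assembled from inputs discussed in this guide; your trigger, branch names, and inputs may differ:

```yaml
name: GUI Test Review
on: pull_request

permissions:
  pull-requests: write  # Needed for the summary comment
  contents: read

jobs:
  review:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # Full history for main-branch comparison
      - uses: DigitalProductInnovationAndDevelopment/Code-Reviews-of-GUI-Tests@v1
        with:
          github-token: ${{ secrets.GITHUB_TOKEN }}
```

If a problem disappears under this baseline, compare it against your workflow to find the divergence.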
A common issue manifests when the action executes successfully but no summary comment appears on the pull request. This typically indicates permission or context problems rather than an action failure.
Symptoms: The action completes without errors, the dashboard generates successfully, but no comment appears on the pull request interface.
Root Causes and Solutions:
The most common cause involves missing permissions in the workflow configuration. Ensure your workflow includes the necessary permission grants:
```yaml
permissions:
  pull-requests: write  # Required for commenting on PRs
  contents: read        # Required for repository access
```

Verify the workflow is triggered in the correct context by adding diagnostic output:
```yaml
- name: Debug PR context
  run: |
    echo "Event: ${{ github.event_name }}"
    echo "PR Number: ${{ github.event.pull_request.number }}"
    echo "Has PR context: ${{ github.event.pull_request != null }}"
```

For workflows triggered by pull requests from forks, GitHub applies restricted permissions by default. Consider using pull_request_target carefully with appropriate security measures, or implement a two-phase workflow using workflow_run for commenting.
Ensure the GitHub token has appropriate access:
```yaml
with:
  github-token: ${{ secrets.GITHUB_TOKEN }}  # Default token
  # Or use a Personal Access Token for enhanced permissions:
  # github-token: ${{ secrets.PAT_WITH_PR_WRITE }}
```

Visual comparison failures prevent the action from showing differences between the pull request and the main branch, limiting regression detection capabilities.
Symptoms: Only pull request results appear in the dashboard, no main branch comparison section is visible, or error messages indicate "Main branch checkout failed."
Primary Solutions:
The most critical requirement for visual comparison is ensuring full repository history is available:
```yaml
- uses: actions/checkout@v4
  with:
    fetch-depth: 0  # Essential for accessing main branch history
```

Verify the main branch name matches your repository configuration:
```yaml
with:
  main-branch: 'master'  # Adjust if not using 'main'
  # Common alternatives: 'develop', 'trunk', 'release'
```

Ensure the key test file exists on both branches:
```yaml
with:
  key-test-file: 'tests/smoke.spec.ts'  # Must exist on main branch
```

Add diagnostic steps to understand branch availability:
```yaml
- name: Debug branch availability
  run: |
    echo "Available branches:"
    git branch -a
    echo "Recent commits:"
    git log --oneline -10
    echo "Test directory contents:"
    ls -la tests/
```

GitHub Pages deployment failures prevent external access to the interactive dashboard, limiting stakeholder visibility.
Symptoms: Error messages stating "No artifacts named github-pages found", the Pages URL returns 404 errors, or deployment appears successful but no content is accessible.
Configuration Requirements:
First, enable GitHub Pages in your repository settings. Navigate to Settings → Pages → Source and select "GitHub Actions" as the deployment source.
Add the required permissions to your workflow:
```yaml
permissions:
  contents: read
  pages: write     # Required for Pages deployment
  id-token: write  # Required for OIDC authentication
```

Configure the deployment environment:
```yaml
environment:
  name: github-pages
  url: ${{ steps.review.outputs.dashboard-url }}
```

Ensure Pages deployment is explicitly enabled:
```yaml
with:
  enable-github-pages: 'true'  # Default, but verify
```

Test discovery failures prevent the action from executing any tests, resulting in empty dashboards and missing metrics.
Symptoms: Console output shows "No tests found", test execution reports 0 tests, or the dashboard displays empty results.
Resolution Steps:
Verify your test file pattern matches your project structure:
```yaml
with:
  test-files: 'e2e/**/*.spec.ts'  # Adjust to match your structure
  # Common patterns:
  # test-files: 'tests'             # Default directory
  # test-files: 'src/**/*.test.ts'  # Co-located tests
```

Ensure your Playwright configuration aligns with the specified pattern:
```javascript
// playwright.config.js
export default {
  testDir: './tests',       // Must contain actual test files
  testMatch: '**/*.spec.ts' // Pattern for test discovery
}
```

Add debugging to understand test discovery:
```yaml
- name: Debug test discovery
  run: |
    echo "Finding test files:"
    find . -name "*.spec.ts" -o -name "*.spec.js" -o -name "*.test.ts"
    echo "Playwright test list:"
    npx playwright test --list
```

Linting failures prevent code quality feedback, reducing the value of the review process.
Symptoms: No lint results appear in the dashboard, reviewdog comments are missing from the pull request, or "ESLint not found" errors appear in logs.
Dependency Requirements:
Ensure all required dependencies are specified in your package.json:
```json
{
  "devDependencies": {
    "eslint": "^8.0.0",
    "prettier": "^3.3.2",
    "@typescript-eslint/parser": "^8.35.0",
    "@typescript-eslint/eslint-plugin": "^8.35.0"
  }
}
```

Verify configuration files exist in your repository root:
```
# ESLint configuration (one of):
.eslintrc.json
.eslintrc.js
eslint.config.js
eslint.config.mjs

# Prettier configuration (one of):
.prettierrc
.prettierrc.json
prettier.config.js
```

Force lint execution when debugging:
```yaml
with:
  mode: 'full'
  enable-lint: 'true'  # Explicitly enable
```

A "Cannot find module" error during dashboard generation indicates a missing dependency.
Cause: The marked library is required for Markdown processing but is not installed.
Solution:
```yaml
with:
  extra-npm-dependencies: 'marked@15.0.12'
```

A message that reviewdog is unavailable appears when running the action locally or in non-GitHub environments.
Cause: Reviewdog is only available in GitHub Actions environments.
Resolution: This is expected behavior outside GitHub Actions. The action will continue without reviewdog integration. In GitHub Actions, reviewdog is automatically installed.
This error occurs when the dashboard generator cannot find expected test results.
Cause: The test phase failed, was skipped, or generated artifacts in an unexpected location.
Solution:
```yaml
# Allow dashboard generation even if tests fail
- uses: DigitalProductInnovationAndDevelopment/Code-Reviews-of-GUI-Tests@v1
  continue-on-error: true  # Dashboard generates regardless
```

A Linux-specific ENOSPC error occurs when the system file watcher (inotify) limit is exceeded.
Cause: Default Linux file watcher limits are insufficient for large projects.
Solution:
```yaml
- name: Increase file watcher limit
  run: |
    echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
    sudo sysctl -p
```

Test execution performance impacts overall CI/CD pipeline efficiency. Several optimization strategies can significantly improve execution time.
Caching Strategies:
Implement comprehensive caching to avoid redundant downloads:
```yaml
- uses: actions/setup-node@v4
  with:
    node-version: '18'
    cache: 'npm'  # Caches the npm download cache

- name: Get Playwright Version
  id: playwright-version
  run: |
    echo "version=$(npm ls @playwright/test --json | jq -r '.dependencies["@playwright/test"].version')" >> $GITHUB_OUTPUT

- name: Cache Playwright Browsers
  uses: actions/cache@v3
  with:
    path: ~/.cache/ms-playwright
    key: ${{ runner.os }}-playwright-${{ steps.playwright-version.outputs.version }}
    restore-keys: |
      ${{ runner.os }}-playwright-
```

Parallel Execution:
Configure Playwright for optimal parallel execution:
```javascript
// playwright.config.js
export default {
  workers: process.env.CI ? 2 : undefined, // Adjust based on runner capacity
  fullyParallel: true, // Enable parallel test execution
}
```

Runner Optimization:
Utilize larger GitHub-hosted runners for resource-intensive suites:
```yaml
runs-on: ubuntu-latest-4-cores  # 4-core runner
# Available options:
# ubuntu-latest-2-cores (default)
# ubuntu-latest-4-cores
# ubuntu-latest-8-cores
# ubuntu-latest-16-cores
```

Large test suites can cause dashboard generation to exceed time limits.
Causes: Excessive flowchart complexity or large numbers of test results.
Solutions:
Set an explicit, generous timeout for the action step:

```yaml
- uses: DigitalProductInnovationAndDevelopment/Code-Reviews-of-GUI-Tests@v1
  timeout-minutes: 30  # Explicit step timeout; jobs default to 360 minutes
```

For extremely large test suites, consider splitting into multiple jobs or limiting flowchart generation scope by modifying the generate-flowchart.js script.
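One way to split a large suite across jobs is Playwright's built-in `--shard` option; a sketch (the shard count of 4 is arbitrary):

```yaml
strategy:
  fail-fast: false
  matrix:
    shard: [1, 2, 3, 4]
steps:
  - uses: actions/checkout@v4
    with:
      fetch-depth: 0
  - run: npm ci
  - run: npx playwright test --shard=${{ matrix.shard }}/4 --reporter=json
```

Each job produces a partial result set, so you would still need to merge or upload the per-shard artifacts before dashboard generation.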
Memory exhaustion occurs when processing large test results or generating complex visualizations.
Symptoms: "JavaScript heap out of memory" errors in action logs.
Solution:
Increase Node.js memory allocation:
```yaml
- name: Configure Node.js memory
  run: echo "NODE_OPTIONS=--max-old-space-size=4096" >> $GITHUB_ENV

- uses: DigitalProductInnovationAndDevelopment/Code-Reviews-of-GUI-Tests@v1
  # Action now runs with a 4GB heap allocation
```

Pull requests from forks operate with restricted permissions for security reasons, limiting certain features.
Issue: Fork PRs cannot post comments or access secrets by default.
Secure Workaround Using workflow_run:
```yaml
# .github/workflows/test.yml
name: Tests
on:
  pull_request:
    types: [opened, synchronize]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: DigitalProductInnovationAndDevelopment/Code-Reviews-of-GUI-Tests@v1
        with:
          mode: 'test-only'
          enable-pr-comments: 'false'  # Disable in restricted context
```

```yaml
# .github/workflows/comment.yml
name: Post Results
on:
  workflow_run:
    workflows: ["Tests"]
    types: [completed]
jobs:
  comment:
    if: github.event.workflow_run.conclusion == 'success'
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write
    steps:
      # Download and post results with full permissions
```

Monorepos present unique challenges with multiple package.json files and test suites.
Problem: The action doesn't know which package to test or install dependencies from.
Solution:
Configure separate jobs for each package:
```yaml
strategy:
  matrix:
    package: [web, api, mobile]
steps:
  - uses: actions/checkout@v4
  - name: Setup package
    working-directory: packages/${{ matrix.package }}
    run: npm ci
  - uses: DigitalProductInnovationAndDevelopment/Code-Reviews-of-GUI-Tests@v1
    with:
      test-files: 'packages/${{ matrix.package }}/tests'
      playwright-config: 'packages/${{ matrix.package }}/playwright.config.js'
```

Custom Playwright reporters may interfere with the action's JSON reporter requirement.
Issue: The action requires JSON output but custom reporters override this.
Solution:
Configure conditional reporters based on environment:
```javascript
// playwright.config.js
export default {
  reporter: process.env.CI ? [
    ['json', { outputFile: 'playwright-metrics.json' }],  // Required
    ['html', { outputFolder: 'playwright-report' }],      // Optional
    ['./custom-reporter.js']                              // Custom
  ] : 'list'  // Local development reporter
}
```

Comprehensive logging aids in identifying root causes of failures:
```yaml
env:
  ACTIONS_STEP_DEBUG: true  # Debug logging (usually set as a repository secret/variable)
  DEBUG: 'pw:*'             # Playwright debug
  NODE_ENV: 'development'   # Node.js verbose mode

- name: Debug environment
  run: |
    echo "=== System Information ==="
    uname -a
    echo "=== Node/NPM Versions ==="
    node --version
    npm --version
    echo "=== Directory Structure ==="
    ls -la
    echo "=== GitHub Context ==="
    echo '${{ toJSON(github) }}'
```

Inspect artifacts at each stage to identify where issues occur:
```yaml
- name: Debug artifacts
  if: always()  # Run even if previous steps fail
  run: |
    echo "=== Artifact Directory Structure ==="
    find artifacts -type f -name "*.json" | while read file; do
      echo "File: $file"
      echo "Size: $(stat -f%z "$file" 2>/dev/null || stat -c%s "$file" 2>/dev/null) bytes"
      echo "First 20 lines:"
      head -20 "$file"
      echo "---"
    done
    echo "=== Playwright Report ==="
    if [ -d "playwright-report" ]; then
      ls -la playwright-report/
    else
      echo "No Playwright report found"
    fi
```

Test dashboard generation locally to isolate issues:
```bash
# Create mock artifacts for testing
mkdir -p artifacts
echo '{"total":5,"passed":4,"failed":1,"skipped":0,"duration":12000,"pass_rate":80}' > artifacts/playwright-summary-pr.json
echo '{"total":5,"passed":5,"failed":0,"skipped":0,"duration":11000,"pass_rate":100}' > artifacts/playwright-summary-main.json

# Generate dashboard
node scripts/generate-webpage.js

# View result
open artifacts/web-report/index.html      # macOS
xdg-open artifacts/web-report/index.html  # Linux
```

While the action is optimized for Playwright, integration with other frameworks is possible through artifact transformation. Generate JSON output from your framework matching the expected schema, then use dashboard-only mode:
```yaml
# Run your framework and transform output
- run: |
    npm run cypress:run
    node scripts/transform-cypress-to-playwright.js

- uses: DigitalProductInnovationAndDevelopment/Code-Reviews-of-GUI-Tests@v1
  with:
    mode: 'dashboard-only'
    custom-artifacts-path: 'transformed-results'
```

The action provides granular control through feature flags:
```yaml
with:
  enable-lint: 'false'          # Skip ESLint/Prettier
  enable-dashboard: 'false'     # Skip dashboard generation
  enable-pr-comments: 'false'   # Skip PR comments
  enable-github-pages: 'false'  # Skip Pages deployment
```

Dashboard customization requires forking the repository and modifying the generation scripts. The primary customization point is scripts/generate-webpage.js, where you can adjust HTML structure, CSS styles, and JavaScript functionality.
Screenshot capture must be enabled in your Playwright configuration:
```javascript
// playwright.config.js
export default {
  use: {
    screenshot: 'only-on-failure',  // or 'on' for all tests
    trace: 'on-first-retry',        // Traces for debugging
    video: 'retain-on-failure'      // Optional video capture
  }
}
```

While GitHub Actions cannot run locally, you can test individual components:
```bash
# Test linting
node scripts/lint.js

# Run Playwright tests with JSON output
npx playwright test --reporter=json

# Generate flowchart from results
node scripts/generate-flowchart.js

# Build dashboard
node scripts/generate-webpage.js

# View results
open artifacts/web-report/index.html
```

GitHub enforces the following limits:
- Individual file size: 25MB maximum
- Total artifacts per workflow run: 500MB
- Maximum retention period: 90 days
- Default retention: 30 days (configurable)
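To stay within these limits, retention can be shortened on individual uploads via the standard actions/upload-artifact input (the artifact name and path here are illustrative):

```yaml
- uses: actions/upload-artifact@v4
  with:
    name: test-results
    path: artifacts/
    retention-days: 7  # Shorter than the 30-day default to conserve storage
```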
While the action itself uses composite steps rather than Docker, you can run Playwright tests in Docker before invoking the action:
```yaml
- name: Run tests in Docker
  run: |
    docker run --rm -v $(pwd):/work -w /work mcr.microsoft.com/playwright:latest \
      npx playwright test --reporter=json

- uses: DigitalProductInnovationAndDevelopment/Code-Reviews-of-GUI-Tests@v1
  with:
    mode: 'dashboard-only'
```

Before seeking help, work through this diagnostic checklist:
- Enable debug logging and review the complete output
- Check permissions match the features you're using
- Verify file paths and patterns are correct
- Test components individually to isolate the issue
- Review recent changes that might have introduced the problem
- Search existing issues in the repository for similar problems
When reporting issues, include the following information:
- Action version being used (e.g., `@v1`, `@main`, or a specific SHA)
- Complete workflow YAML (sanitize sensitive information)
- Error messages and logs with debug mode enabled
- Expected versus actual behavior with specific examples
- Minimal reproduction case if possible
- Environment details (runner OS, Node.js version, etc.)
- GitHub Issues - For bug reports and feature requests
- GitHub Discussions - For questions and community support
- Documentation - Review guides and references
- Examples - Check the examples directory for working configurations
- [Action Repository](https://github.com/DigitalProductInnovationAndDevelopment/Code-Reviews-of-GUI-Tests)
- [Playwright Documentation](https://playwright.dev)
- [GitHub Actions Documentation](https://docs.github.com/actions)
- [ESLint Documentation](https://eslint.org)
- [Prettier Documentation](https://prettier.io)