This repository was archived by the owner on Jan 28, 2026. It is now read-only.

Troubleshooting Guide

amandaxmqiu edited this page Aug 7, 2025 · 5 revisions

This guide helps you diagnose and resolve common issues with the GUI-Based Testing Code Review GitHub Action. Each section pairs symptoms with root causes and concrete fixes, so the action integrates and runs smoothly in your CI/CD pipeline.

Common Issues

1. No PR Comment Appears

This issue manifests when the action executes successfully but no summary comment appears on the pull request. This typically indicates permission or context problems rather than action failures.

Symptoms: The action completes without errors, the dashboard generates successfully, but no comment appears on the pull request interface.

Root Causes and Solutions:

The most common cause involves missing permissions in the workflow configuration. Ensure your workflow includes the necessary permission grants:

permissions:
  pull-requests: write  # Required for commenting on PRs
  contents: read        # Required for repository access

Verify the workflow is triggered in the correct context by adding diagnostic output:

- name: Debug PR context
  run: |
    echo "Event: ${{ github.event_name }}"
    echo "PR Number: ${{ github.event.pull_request.number }}"
    echo "Has PR context: ${{ github.event.pull_request != null }}"

For workflows triggered by pull requests from forks, GitHub applies restricted permissions by default. Consider using pull_request_target carefully with appropriate security measures, or implement a two-phase workflow using workflow_run for commenting.

Ensure the GitHub token has appropriate access:

with:
  github-token: ${{ secrets.GITHUB_TOKEN }}  # Default token
  # Or use a Personal Access Token for broader permissions:
  # github-token: ${{ secrets.PAT_WITH_PR_WRITE }}

2. Visual Comparison Not Working

Visual comparison failures prevent the action from showing differences between the pull request and main branch, limiting regression detection capabilities.

Symptoms: Only pull request results appear in the dashboard, no main branch comparison section is visible, or error messages indicate "Main branch checkout failed."

Primary Solutions:

The most critical requirement for visual comparison is ensuring full repository history is available:

- uses: actions/checkout@v4
  with:
    fetch-depth: 0  # Essential for accessing main branch history

Verify the main branch name matches your repository configuration:

with:
  main-branch: 'master'  # Adjust if not using 'main'
  # Common alternatives: 'develop', 'trunk', 'release'

Ensure the key test file exists on both branches:

with:
  key-test-file: 'tests/smoke.spec.ts'  # Must exist on main branch

Add diagnostic steps to understand branch availability:

- name: Debug branch availability
  run: |
    echo "Available branches:"
    git branch -a
    echo "Recent commits:"
    git log --oneline -10
    echo "Test directory contents:"
    ls -la tests/

3. Dashboard Not Deploying to Pages

GitHub Pages deployment failures prevent external access to the interactive dashboard, limiting stakeholder visibility.

Symptoms: Error messages stating "No artifacts named github-pages found", the Pages URL returns 404 errors, or deployment appears successful but no content is accessible.

Configuration Requirements:

First, enable GitHub Pages in your repository settings. Navigate to Settings → Pages → Source and select "GitHub Actions" as the deployment source.

Add the required permissions to your workflow:

permissions:
  contents: read
  pages: write      # Required for Pages deployment
  id-token: write   # Required for OIDC authentication

Configure the deployment environment:

environment:
  name: github-pages
  url: ${{ steps.review.outputs.dashboard-url }}

Ensure Pages deployment is explicitly enabled:

with:
  enable-github-pages: 'true'  # Default, but verify

4. Playwright Tests Not Found

Test discovery failures prevent the action from executing any tests, resulting in empty dashboards and missing metrics.

Symptoms: Console output shows "No tests found", test execution reports 0 tests, or the dashboard displays empty results.

Resolution Steps:

Verify your test file pattern matches your project structure:

with:
  test-files: 'e2e/**/*.spec.ts'  # Adjust to match your structure
  # Common patterns:
  # test-files: 'tests'           # Default directory
  # test-files: 'src/**/*.test.ts' # Co-located tests

Ensure your Playwright configuration aligns with the specified pattern:

// playwright.config.js
export default {
  testDir: './tests',  // Must contain actual test files
  testMatch: '**/*.spec.ts'  // Pattern for test discovery
}

Add debugging to understand test discovery:

- name: Debug test discovery
  run: |
    echo "Finding test files:"
    find . -name "*.spec.ts" -o -name "*.spec.js" -o -name "*.test.ts"
    echo "Playwright test list:"
    npx playwright test --list

5. ESLint/Prettier Not Running

Linting failures prevent code quality feedback, reducing the value of the review process.

Symptoms: No lint results appear in the dashboard, reviewdog comments are missing from the pull request, or "ESLint not found" errors appear in logs.

Dependency Requirements:

Ensure all required dependencies are specified in your package.json:

{
  "devDependencies": {
    "eslint": "^8.0.0",
    "prettier": "^3.3.2",
    "@typescript-eslint/parser": "^8.35.0",
    "@typescript-eslint/eslint-plugin": "^8.35.0"
  }
}

Verify configuration files exist in your repository root:

# ESLint configuration (one of):
.eslintrc.json
.eslintrc.js
eslint.config.js
eslint.config.mjs

# Prettier configuration (one of):
.prettierrc
.prettierrc.json
prettier.config.js
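If no ESLint configuration exists yet, a minimal `.eslintrc.json` matching the devDependencies above might look like the following (the rule set is illustrative; adjust to your project):

```json
{
  "root": true,
  "parser": "@typescript-eslint/parser",
  "plugins": ["@typescript-eslint"],
  "extends": [
    "eslint:recommended",
    "plugin:@typescript-eslint/recommended"
  ]
}
```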

Force lint execution when debugging:

with:
  mode: 'full'
  enable-lint: 'true'  # Explicitly enable

Error Messages

"Cannot find module 'marked'"

This error indicates a missing dependency required for dashboard generation.

Cause: The marked library is required for Markdown processing but is not installed.

Solution:

with:
  extra-npm-dependencies: 'marked@15.0.12'

"reviewdog: command not found"

This message appears when running the action locally or in non-GitHub environments.

Cause: Reviewdog is only available in GitHub Actions environments.

Resolution: This is expected behavior outside GitHub Actions. The action will continue without reviewdog integration. In GitHub Actions, reviewdog is automatically installed.
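If you wrap the action's scripts in your own tooling, a portable guard (a sketch, not part of the action itself) avoids this noise outside GitHub Actions:

```shell
# Only invoke reviewdog when it exists on PATH, so local runs
# skip lint annotations instead of failing with "command not found".
if command -v reviewdog >/dev/null 2>&1; then
  echo "reviewdog available"
else
  echo "reviewdog not installed; skipping lint annotations"
fi
```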

"Artifact not found: playwright-summary-pr.json"

This error occurs when the dashboard generator cannot find expected test results.

Cause: The test phase failed, was skipped, or generated artifacts in an unexpected location.

Solution:

# Allow dashboard generation even if tests fail
- uses: DigitalProductInnovationAndDevelopment/Code-Reviews-of-GUI-Tests@v1
  continue-on-error: true  # Dashboard generates regardless

"ENOSPC: System limit for number of file watchers reached"

This Linux-specific error occurs when the system file watcher limit is exceeded.

Cause: Default Linux file watcher limits are insufficient for large projects.

Solution:

- name: Increase file watcher limit
  run: |
    echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf
    sudo sysctl -p

Performance Issues

Slow Test Execution

Test execution performance impacts overall CI/CD pipeline efficiency. Several optimization strategies can significantly improve execution time.

Caching Strategies:

Implement comprehensive caching to avoid redundant downloads:

- uses: actions/setup-node@v4
  with:
    node-version: '18'
    cache: 'npm'  # Caches node_modules

- name: Get Playwright Version
  id: playwright-version
  run: |
    echo "version=$(npm ls @playwright/test --json | jq -r '.dependencies["@playwright/test"].version')" >> $GITHUB_OUTPUT

- name: Cache Playwright Browsers
  uses: actions/cache@v4
  with:
    path: ~/.cache/ms-playwright
    key: ${{ runner.os }}-playwright-${{ steps.playwright-version.outputs.version }}
    restore-keys: |
      ${{ runner.os }}-playwright-

Parallel Execution:

Configure Playwright for optimal parallel execution:

// playwright.config.js
export default {
  workers: process.env.CI ? 2 : undefined,  // Adjust based on runner capacity
  fullyParallel: true,  // Enable parallel test execution
}

Runner Optimization:

Utilize larger GitHub-hosted runners for resource-intensive suites:

runs-on: ubuntu-latest-4-cores  # Example label for a 4-core larger runner
# Larger-runner labels are defined when your organization creates the
# runner group, so the exact names vary; check Settings → Actions → Runners.
# ubuntu-latest remains the standard hosted default.

Dashboard Generation Timeout

Large test suites can cause dashboard generation to exceed time limits.

Causes: Excessive flowchart complexity or large numbers of test results.

Solutions:

Increase the action timeout:

- uses: DigitalProductInnovationAndDevelopment/Code-Reviews-of-GUI-Tests@v1
  timeout-minutes: 30  # Step-level limit; without one, the job default (360 minutes) applies

For extremely large test suites, consider splitting into multiple jobs or limiting flowchart generation scope by modifying the generate-flowchart.js script.
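One way to split a large suite across jobs is Playwright's built-in sharding (`--shard` is a standard Playwright CLI flag); a sketch using a job matrix:

```yaml
strategy:
  matrix:
    shard: [1, 2, 3, 4]

steps:
  - uses: actions/checkout@v4
  - run: npm ci
  # Each matrix job runs one quarter of the suite
  - run: npx playwright test --shard=${{ matrix.shard }}/4
```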

Out of Memory Errors

Memory exhaustion occurs when processing large test results or generating complex visualizations.

Symptoms: "JavaScript heap out of memory" errors in action logs.

Solution:

Increase Node.js memory allocation:

- name: Configure Node.js memory
  run: echo "NODE_OPTIONS=--max-old-space-size=4096" >> $GITHUB_ENV

- uses: DigitalProductInnovationAndDevelopment/Code-Reviews-of-GUI-Tests@v1
  # Action now runs with 4GB heap allocation

Integration Problems

Fork PR Limitations

Pull requests from forks operate with restricted permissions for security reasons, limiting certain features.

Issue: Fork PRs cannot post comments or access secrets by default.

Secure Workaround Using workflow_run:

# .github/workflows/test.yml
name: Tests
on:
  pull_request:
    types: [opened, synchronize]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: DigitalProductInnovationAndDevelopment/Code-Reviews-of-GUI-Tests@v1
        with:
          mode: 'test-only'
          enable-pr-comments: 'false'  # Disable in restricted context

# .github/workflows/comment.yml
name: Post Results
on:
  workflow_run:
    workflows: ["Tests"]
    types: [completed]

jobs:
  comment:
    if: github.event.workflow_run.conclusion == 'success'
    runs-on: ubuntu-latest
    permissions:
      pull-requests: write
    steps:
      # Download and post results with full permissions

Monorepo Issues

Monorepos present unique challenges with multiple package.json files and test suites.

Problem: The action doesn't know which package to test or install dependencies from.

Solution:

Configure separate jobs for each package:

strategy:
  matrix:
    package: [web, api, mobile]

steps:
  - uses: actions/checkout@v4
  
  - name: Setup package
    working-directory: packages/${{ matrix.package }}
    run: npm ci
  
  - uses: DigitalProductInnovationAndDevelopment/Code-Reviews-of-GUI-Tests@v1
    with:
      test-files: 'packages/${{ matrix.package }}/tests'
      playwright-config: 'packages/${{ matrix.package }}/playwright.config.js'

Custom Reporters Conflict

Custom Playwright reporters may interfere with the action's JSON reporter requirement.

Issue: The action requires JSON output but custom reporters override this.

Solution:

Configure conditional reporters based on environment:

// playwright.config.js
export default {
  reporter: process.env.CI ? [
    ['json', { outputFile: 'playwright-metrics.json' }],  // Required
    ['html', { outputFolder: 'playwright-report' }],      // Optional
    ['./custom-reporter.js']                              // Custom
  ] : 'list'  // Local development reporter
}

Debug Techniques

Enable Verbose Logging

Comprehensive logging aids in identifying root causes of failures:

# Step-debug logging is enabled by setting the repository secret or
# variable ACTIONS_STEP_DEBUG=true; it cannot be turned on via env vars.
env:
  DEBUG: 'pw:*'            # Playwright debug output
  NODE_ENV: 'development'  # Some tools log more outside production mode
  
- name: Debug environment
  run: |
    echo "=== System Information ==="
    uname -a
    echo "=== Node/NPM Versions ==="
    node --version
    npm --version
    echo "=== Directory Structure ==="
    ls -la
    echo "=== GitHub Context ==="
    echo '${{ toJSON(github) }}'

Check Intermediate Outputs

Inspect artifacts at each stage to identify where issues occur:

- name: Debug artifacts
  if: always()  # Run even if previous steps fail
  run: |
    echo "=== Artifact Directory Structure ==="
    find artifacts -type f -name "*.json" | while read file; do
      echo "File: $file"
      echo "Size: $(stat -f%z "$file" 2>/dev/null || stat -c%s "$file" 2>/dev/null) bytes"
      echo "First 20 lines:"
      head -20 "$file"
      echo "---"
    done
    
    echo "=== Playwright Report ==="
    if [ -d "playwright-report" ]; then
      ls -la playwright-report/
    else
      echo "No Playwright report found"
    fi

Manual Dashboard Generation

Test dashboard generation locally to isolate issues:

# Create mock artifacts for testing
mkdir -p artifacts
echo '{"total":5,"passed":4,"failed":1,"skipped":0,"duration":12000,"pass_rate":80}' > artifacts/playwright-summary-pr.json
echo '{"total":5,"passed":5,"failed":0,"skipped":0,"duration":11000,"pass_rate":100}' > artifacts/playwright-summary-main.json

# Generate dashboard
node scripts/generate-webpage.js

# View result
open artifacts/web-report/index.html  # macOS
xdg-open artifacts/web-report/index.html  # Linux

Frequently Asked Questions

Can I use this with Jest, Cypress, or other testing frameworks?

While the action is optimized for Playwright, integration with other frameworks is possible through artifact transformation. Generate JSON output from your framework matching the expected schema, then use dashboard-only mode:

# Run your framework and transform output
- run: |
    npm run cypress:run
    node scripts/transform-cypress-to-playwright.js
    
- uses: DigitalProductInnovationAndDevelopment/Code-Reviews-of-GUI-Tests@v1
  with:
    mode: 'dashboard-only'
    custom-artifacts-path: 'transformed-results'

How do I disable specific features?

The action provides granular control through feature flags:

with:
  enable-lint: 'false'        # Skip ESLint/Prettier
  enable-dashboard: 'false'   # Skip dashboard generation
  enable-pr-comments: 'false' # Skip PR comments
  enable-github-pages: 'false' # Skip Pages deployment

Can I customize the dashboard styling?

Dashboard customization requires forking the repository and modifying the generation scripts. The primary customization point is scripts/generate-webpage.js, where you can adjust HTML structure, CSS styles, and JavaScript functionality.

Why are my screenshots not appearing in the dashboard?

Screenshot capture must be enabled in your Playwright configuration:

// playwright.config.js
export default {
  use: {
    screenshot: 'only-on-failure',  // or 'on' for all tests
    trace: 'on-first-retry',         // Traces for debugging
    video: 'retain-on-failure'       // Optional video capture
  }
}

How do I run this action locally for testing?

While the composite action itself only runs inside GitHub Actions, you can test its individual components locally:

# Test linting
node scripts/lint.js

# Run Playwright tests with JSON output
npx playwright test --reporter=json

# Generate flowchart from results
node scripts/generate-flowchart.js

# Build dashboard
node scripts/generate-webpage.js

# View results
open artifacts/web-report/index.html

What are the artifact size limits?

GitHub enforces the following limits:

  • Individual file size: 25MB maximum
  • Total artifacts per workflow run: 500MB
  • Maximum retention period: 90 days
  • Default retention: 30 days (configurable)
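If retention is a concern for artifacts you upload yourself alongside the action, `actions/upload-artifact` accepts a `retention-days` input:

```yaml
- uses: actions/upload-artifact@v4
  with:
    name: web-report
    path: artifacts/web-report/
    retention-days: 7  # Shorter retention reduces storage usage
```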

Is there a way to run tests in Docker?

While the action itself uses composite steps rather than Docker, you can run Playwright tests in Docker before invoking the action:

- name: Run tests in Docker
  run: |
    docker run --rm -v $(pwd):/work -w /work mcr.microsoft.com/playwright:latest \
      npx playwright test --reporter=json
      
- uses: DigitalProductInnovationAndDevelopment/Code-Reviews-of-GUI-Tests@v1
  with:
    mode: 'dashboard-only'

Getting Help

Diagnostic Checklist

Before seeking help, work through this diagnostic checklist:

  1. Enable debug logging and review the complete output
  2. Check permissions match the features you're using
  3. Verify file paths and patterns are correct
  4. Test components individually to isolate the issue
  5. Review recent changes that might have introduced the problem
  6. Search existing issues in the repository for similar problems

Creating Effective Bug Reports

When reporting issues, include the following information:

  • Action version being used (e.g., @v1, @main, specific SHA)
  • Complete workflow YAML (sanitize sensitive information)
  • Error messages and logs with debug mode enabled
  • Expected versus actual behavior with specific examples
  • Minimal reproduction case if possible
  • Environment details (runner OS, Node.js version, etc.)

Support Channels

  1. GitHub Issues - For bug reports and feature requests
  2. GitHub Discussions - For questions and community support
  3. Documentation - Review guides and references
  4. Examples - Check the examples directory for working configurations
