
Running Tests - Complete Guide

This guide covers all the ways to run CWA tests, from quick verification to comprehensive test suites.


Interactive Test Runner

The interactive runner is the easiest way to run tests, with a friendly menu interface.

Basic Usage

./run_tests.sh

Menu Options Explained

1) Integration Tests (Bind Mount Mode)

  • Best for: Local development on host machine, CI/CD
  • Technology: Temporary directories with bind mounts
  • Result: 20/20 tests pass
  • Duration: ~3-4 minutes
  • Use when: Running on macOS, Linux, or Windows WSL2 host

What it tests:

  • Complete book import workflow
  • File format conversion
  • Metadata database tracking
  • EPUB fixing for Kindle compatibility
  • File backup and cleanup
  • Lock mechanism
  • Error handling

2) Integration Tests (Docker Volume Mode)

  • Best for: Development containers, Docker-in-Docker scenarios
  • Technology: Docker volumes with docker cp operations
  • Result: 19/20 tests pass (1 skipped - cwa_db_tracks_import)
  • Duration: ~3-4 minutes
  • Use when: Running inside a dev container or remote container

Differences from bind mount mode:

  • Uses Docker volumes instead of temporary directories
  • Automatically detected when running in a container
  • One test skipped (requires config volume mount)
  • Slightly different file access patterns

3) Docker Startup Tests

  • Purpose: Verify container health and initialization
  • Result: 9/9 tests pass
  • Duration: ~1 minute
  • Use when: Validating Docker configuration changes

What it tests:

  • Container starts successfully
  • Web server responds to health checks
  • Environment variables are read correctly
  • Port binding works
  • Basic API endpoints are accessible

4) All Tests (Full Suite)

  • Purpose: Complete validation
  • Result: Varies by environment (105+ tests total)
  • Duration: ~5-7 minutes
  • Use when: Pre-release validation, major changes

Includes:

  • 13 smoke tests (basic functionality)
  • 83 unit tests (individual functions)
  • 9 Docker tests (container health)
  • 20 integration tests (workflows)

5) Quick Test

  • Purpose: Fast verification that tests work
  • Result: 1 test pass
  • Duration: ~30 seconds
  • Use when: First-time setup, quick validation

Runs a single representative integration test to verify:

  • Docker is working
  • Pytest is installed
  • Container can start
  • Basic file operations work

6) Custom Test Selection

  • Purpose: Advanced users running specific tests
  • Duration: Varies
  • Use when: Debugging specific functionality

Examples of what you can enter:

tests/unit/test_cwa_db.py              # Single file
tests/integration/ -k metadata         # Pattern match
tests/ -m "not slow"                   # Marker-based
tests/smoke/ tests/unit/               # Multiple dirs

7) Show Info & Status

  • Purpose: Display environment details
  • Shows:
    • Current environment (host vs Docker)
    • Python version
    • Docker availability
    • Test modes available
    • Documentation links
    • Test statistics

Manual Test Execution

By Category

# Smoke tests - Quick sanity checks (30 seconds)
pytest tests/smoke/ -v

# Unit tests - Individual functions (2 minutes)
pytest tests/unit/ -v

# Docker tests - Container health (1 minute)
pytest tests/docker/ -v

# Integration tests - Workflows (3-4 minutes)
pytest tests/integration/ -v

# All tests (5-7 minutes)
pytest tests/ -v

By Test File

# Run all tests in a file
pytest tests/unit/test_cwa_db.py -v

# Run specific test class
pytest tests/unit/test_cwa_db.py::TestCWADBInitialization -v

# Run specific test function
pytest tests/smoke/test_smoke.py::test_python_version -v

By Pattern Matching

# Run tests with "database" in the name
pytest -k "database" -v

# Run tests with "import" or "export" in name
pytest -k "import or export" -v

# Exclude slow tests
pytest -k "not slow" -v

By Marker

# Run only smoke tests
pytest -m smoke -v

# Run only unit tests
pytest -m unit -v

# Run tests that don't require Docker
pytest -m "not requires_docker" -v

Available markers:

  • @pytest.mark.smoke - Quick sanity checks
  • @pytest.mark.unit - Unit tests
  • @pytest.mark.integration - Integration tests
  • @pytest.mark.requires_docker - Needs Docker
  • @pytest.mark.requires_calibre - Needs Calibre tools
  • @pytest.mark.slow - Takes >10 seconds
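
Markers are applied as standard pytest decorators. A minimal sketch of how a test might carry several of them (the test itself is hypothetical, not one from the suite):

import pytest

@pytest.mark.integration
@pytest.mark.requires_docker
@pytest.mark.slow
def test_epub_import_completes():
    """Illustrative only: an integration test that needs Docker and takes a while."""
    ...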

Testing Modes

CWA supports two testing modes so the same integration tests can run both on a normal host and in Docker-in-Docker environments such as dev containers.

Bind Mount Mode (Default)

How to use:

# Just run tests normally
pytest tests/integration/ -v

How it works:

  • Creates temporary directories in /tmp/pytest-xxx
  • Mounts them into test container as bind mounts
  • Uses standard Path operations
  • Fast and reliable
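
As a rough sketch of this pattern (the fixture, image name, and container path below are illustrative assumptions, not the suite's actual code):

import subprocess

def start_bind_mount_container(tmp_path):
    """Illustrative: bind-mount a pytest temp directory into a throwaway test container."""
    ingest_dir = tmp_path / "ingest"
    ingest_dir.mkdir()
    subprocess.run([
        "docker", "run", "-d", "--rm",
        "-v", f"{ingest_dir}:/cwa-book-ingest",              # bind mount the temp dir
        "crocodilestick/calibre-web-automated:latest",       # image name is an assumption
    ], check=True)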

Best for:

  • CI/CD (GitHub Actions)
  • Local development on host machine
  • macOS, Linux, Windows WSL2

Results:

  • ✅ 20/20 integration tests pass
  • ✅ All features work
  • ✅ Full database access

Docker Volume Mode

How to use:

# Set environment variable
export USE_DOCKER_VOLUMES=true
pytest tests/integration/ -v

# Or one-liner
USE_DOCKER_VOLUMES=true pytest tests/integration/ -v

How it works:

  • Creates Docker volumes for test data
  • Transfers files using docker cp commands
  • Uses VolumeHelper class for file operations
  • Extracts databases to temp files for queries
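
The VolumeHelper itself lives in the test suite; the general pattern it implements looks roughly like this (function names and paths here are illustrative, not the real helper API):

import subprocess

def copy_into_container(container: str, local_path, container_path: str):
    """Illustrative: push a test file into a running container with `docker cp`."""
    subprocess.run(["docker", "cp", str(local_path), f"{container}:{container_path}"], check=True)

def extract_db_for_query(container: str, db_path: str, tmp_path):
    """Illustrative: pull a database out of the container so it can be queried locally."""
    local_copy = tmp_path / "cwa.db"
    subprocess.run(["docker", "cp", f"{container}:{db_path}", str(local_copy)], check=True)
    return local_copy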

Best for:

  • Dev containers
  • Remote containers
  • Docker-in-Docker scenarios
  • Codespaces, Gitpod

Results:

  • ✅ 19/20 integration tests pass
  • ⏭️ 1 test skipped (cwa_db_tracks_import - needs config volume)
  • ✅ All core features work

Auto-detection:

The interactive test runner auto-detects your environment:

./run_tests.sh
# Shows: "Environment: docker" if in container
# Shows: "Environment: host" if on host

Mode Comparison

Feature            Bind Mount        Docker Volume
Speed              Fast              Slightly slower
Setup              Automatic         Requires env var
CI/CD              ✅ Perfect        ⚠️ Works but overkill
Dev Container      ❌ Fails          ✅ Works
Tests Passing      20/20             19/20
Database Access    Direct            Extract to temp
File Operations    Path              VolumeHelper

Test Categories

🔥 Smoke Tests (13 tests, ~30 seconds)

Purpose: Verify critical functionality isn't broken

pytest tests/smoke/ -v

What they test:

  • Python version compatibility
  • Flask app can be imported
  • Required modules are available
  • Database connections work
  • Configuration loads
  • Critical paths accessible

When to run: Before every commit

🧪 Unit Tests (83 tests, ~2 minutes)

Purpose: Test individual functions in isolation

pytest tests/unit/ -v

What they test:

CWA Database (test_cwa_db.py - 20 tests):

  • Database initialization
  • Table creation
  • Insert/query operations
  • Statistics retrieval
  • Error handling

Helper Functions (test_helper.py - 63 tests):

  • Email validation
  • Password validation
  • ISBN validation
  • File format detection
  • String manipulation
  • Date/time formatting

When to run: Before every commit
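
For orientation, a unit test here exercises a single function directly; a minimal sketch (the validator below is a hypothetical stand-in, not the actual helper API):

def is_valid_isbn13(isbn: str) -> bool:
    """Hypothetical helper used only for this sketch: ISBN-13 checksum validation."""
    digits = [int(c) for c in isbn if c.isdigit()]
    if len(digits) != 13:
        return False
    checksum = sum(d * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits))
    return checksum % 10 == 0

def test_isbn_validation_accepts_known_good_value():
    assert is_valid_isbn13("9780306406157")

def test_isbn_validation_rejects_short_value():
    assert not is_valid_isbn13("12345")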

🐋 Docker Tests (9 tests, ~1 minute)

Purpose: Verify container functionality

pytest tests/docker/ -v

What they test:

  • Container starts successfully
  • Web server is accessible
  • Health checks pass
  • Environment variables work
  • Port bindings correct
  • API endpoints respond

When to run: When changing Docker configuration
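
As a hedged sketch of what one of these checks can look like (the URL assumes the default CWA web port; the real tests may go through a fixture instead):

import requests

def test_web_server_responds():
    """Illustrative health check: the running test container should answer on its web port."""
    response = requests.get("http://localhost:8083", timeout=10)
    assert response.status_code == 200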

🔗 Integration Tests (20 tests, ~3-4 minutes)

Purpose: Test complete workflows

pytest tests/integration/ -v

What they test:

Ingest Pipeline:

  • EPUB import to library
  • Format conversion (MOBI, TXT → EPUB)
  • Multiple file processing
  • International filenames
  • Empty/corrupted file handling
  • File backup creation
  • Lock mechanism

Database Tracking:

  • Import logging
  • Conversion logging
  • Metadata database updates
  • Statistics updates

File Management:

  • Filename truncation
  • Directory cleanup
  • Ignored format handling
  • Zero-byte file handling

When to run: Before merging PRs
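
To make the shape of these workflow tests concrete, a simplified sketch (the ingest_dir and library_dir fixtures and the sample file path are placeholders, not the suite's actual fixtures):

import shutil
import time
from pathlib import Path

def test_epub_is_ingested(ingest_dir: Path, library_dir: Path):
    """Illustrative: drop an EPUB into the ingest folder, then wait for it to reach the library."""
    shutil.copy("tests/data/sample.epub", ingest_dir / "sample.epub")  # sample path is hypothetical
    deadline = time.time() + 120
    while time.time() < deadline:
        if any(library_dir.rglob("*.epub")):
            return
        time.sleep(5)
    raise AssertionError("EPUB was not imported into the library within 120 seconds")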


Advanced Usage

Parallel Execution

Run tests faster across multiple CPUs (parallel execution uses the pytest-xdist plugin):

# Auto-detect CPU count
pytest tests/unit/ -n auto

# Use specific number of workers
pytest tests/unit/ -n 4

Note: Don't run integration tests in parallel (the Docker containers conflict).

Coverage Reports

Generate code coverage reports:

# Terminal report
pytest tests/unit/ --cov=scripts --cov=cps --cov-report=term

# HTML report
pytest tests/unit/ --cov=scripts --cov=cps --cov-report=html

# Open HTML report
open htmlcov/index.html  # macOS
xdg-open htmlcov/index.html  # Linux

Coverage goals:

  • Critical modules (ingest, enforcer): 80%+
  • Core application: 70%+
  • Overall project: 50%+

Verbose Output

Control how much detail you see:

# Standard verbosity
pytest tests/smoke/ -v

# Extra verbose (shows test docstrings)
pytest tests/smoke/ -vv

# Quiet mode (just results)
pytest tests/smoke/ -q

# Show print statements
pytest tests/smoke/ -v -s

Debugging Failed Tests

# Stop at first failure
pytest tests/ -x

# Drop into debugger on failure
pytest tests/ --pdb

# Show local variables in tracebacks
pytest tests/ -l

# Re-run only failed tests
pytest --lf

# Re-run failed tests first, then others
pytest --ff

Timeout Control

Prevent hanging tests (the --timeout option comes from the pytest-timeout plugin):

# 5 second timeout per test
pytest tests/ --timeout=5

# 30 second timeout
pytest tests/integration/ --timeout=30

Default timeout: 300 seconds (5 minutes) per test
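
Individual tests can also override the limit with the marker provided by pytest-timeout:

import pytest

@pytest.mark.timeout(60)  # this test alone gets a 60-second limit
def test_long_running_conversion():
    ...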

Output Customization

# Short traceback format
pytest tests/ --tb=short

# No traceback
pytest tests/ --tb=no

# Show summary of all test outcomes
pytest tests/ -ra

# Show details only for failed tests
pytest tests/ -rf

Selecting Tests by Marks

# List all available markers
pytest --markers

# Run only smoke tests
pytest -m smoke

# Run smoke OR unit tests
pytest -m "smoke or unit"

# Run integration but not slow tests
pytest -m "integration and not slow"

CI/CD Testing

GitHub Actions

Tests run automatically on:

  • Every pull request
  • Every push to main branch
  • Nightly builds

CI Test Stages:

  1. Smoke Tests (runs first, ~30s)

    • Quick sanity checks
    • Fails fast on critical issues
  2. Unit Tests (runs in parallel, ~2m)

    • Individual function tests
    • Coverage reporting
  3. Docker Tests (after unit tests, ~1m)

    • Container health checks
    • Integration validation
  4. Integration Tests (after Docker tests, ~4m)

    • Complete workflow validation
    • Uses bind mount mode

View CI results:

  • GitHub PR page → "Checks" tab
  • Click on "CWA Tests" for details

Running CI Tests Locally

Simulate CI environment:

# Run exactly what CI runs
pytest tests/smoke/ -v
pytest tests/unit/ -v --cov=scripts --cov=cps
pytest tests/docker/ -v
pytest tests/integration/ -v

Or use the interactive runner:

./run_tests.sh
# Choose option 4 (All Tests)

Pre-commit Checks

Run before committing:

# Quick validation (1 minute)
pytest tests/smoke/ -v

# Full validation (5-7 minutes)
./run_tests.sh  # Choose option 4

Pre-release Checklist

Before tagging a release:

  • All smoke tests pass
  • All unit tests pass
  • All Docker tests pass
  • All integration tests pass
  • Coverage hasn't decreased
  • No new warnings
  • Manual testing of critical paths

Performance Benchmarks

Expected durations on modern hardware:

Test Suite                Expected Time      Notes
Single smoke test         <1 second          Very fast
All smoke tests           15-30 seconds      Quick validation
All unit tests            1-2 minutes        Isolated tests
Docker tests              45-60 seconds      Container startup
Integration tests         3-4 minutes        Workflow validation
Full suite                5-7 minutes        Everything
Quick test (option 5)     20-40 seconds      Single integration test

First run? Add 2-3 minutes for Docker image download.

Slower than expected?

  • Run in parallel: pytest -n auto
  • Use SSD instead of HDD
  • Close other Docker containers
  • Check Docker Desktop settings (increase CPU/RAM)

Troubleshooting

Tests Won't Start

"pytest: command not found"

pip install pytest
# Or install all dev dependencies
pip install -r requirements-dev.txt

"Docker is required but not found"

# Check Docker is running
docker ps

# If not, start Docker Desktop

"Permission denied" on run_tests.sh

chmod +x run_tests.sh

Tests Are Failing

"Container failed to start"

# Check Docker daemon
docker ps

# Check for port conflicts
docker ps | grep 8083

# Clean up old containers
docker container prune -f

"Bind mount not working"

You're probably in a dev container. Use Docker volume mode:

USE_DOCKER_VOLUMES=true pytest tests/integration/ -v

"Database locked" errors

# Clean up lock files
rm /tmp/*.lock

# Make sure no other tests are running
pkill pytest

Tests Are Slow

First run? Docker downloads images (~1-2 GB). This happens once.

Still slow?

# Run in parallel (unit tests only)
pytest tests/unit/ -n auto

# Skip slow tests
pytest -m "not slow" tests/

Tests Pass Locally But Fail in CI

Common causes:

  1. Missing dependency in requirements-dev.txt
  2. Docker-specific behavior - Test needs @pytest.mark.requires_docker
  3. Calibre not available - Test needs @pytest.mark.requires_calibre
  4. Timing issue - Add timeout or wait for container readiness

Check the CI logs in GitHub Actions for the specific error.



Happy Testing! 🎉

Questions? Ask on Discord or open a GitHub issue.
