
Testing Overview

Welcome to the Calibre-Web Automated (CWA) testing documentation! This page provides an overview of the test suite and links to detailed guides.

📚 Documentation Index

🎯 Why We Test

CWA is a complex application with:

  • Multiple background services (s6-overlay)
  • Three separate SQLite databases
  • 27+ ebook import formats
  • Docker deployment across different architectures
  • Integration with Calibre CLI tools

Automated tests help us:

  • ✅ Catch bugs before they reach users
  • ✅ Prevent regressions when adding new features
  • ✅ Give contributors confidence their changes work
  • ✅ Speed up the review process
  • ✅ Serve as living documentation

📊 Current Status

Test Coverage

Category            Tests   Status       Coverage
Smoke Tests         13      ✅ Passing   Critical paths
Unit Tests          83      ✅ Passing   Core utilities
Docker Tests        9       ✅ Passing   Container health
Integration Tests   20      ✅ Passing   Workflows
Total               125+    Working      ~30%

Test Modes

CWA supports two testing modes:

1. Bind Mount Mode (Default)

  • Used by CI/CD (GitHub Actions)
  • Fast and reliable
  • Works on host systems
  • Result: 25/25 integration tests passing

2. Docker Volume Mode

  • For development containers (Docker-in-Docker)
  • Automatic volume management
  • Handles bind mount limitations
  • Result: 19/20 tests passing (1 documented skip)

🚀 Quick Start

Run the Interactive Test Runner

The easiest way to run tests:

./run_tests.sh

This gives you a friendly menu with 7 options:

  1. Integration Tests (Bind Mount)
  2. Integration Tests (Docker Volume)
  3. Docker Startup Tests
  4. All Tests
  5. Quick Test (30 seconds)
  6. Custom Selection
  7. Info & Status

Run Tests Manually

Quick verification:

pytest tests/smoke/ -v          # 30 seconds

Full integration suite:

pytest tests/integration/ -v    # 3-4 minutes

In a dev container:

USE_DOCKER_VOLUMES=true pytest tests/integration/ -v

📖 Test Categories

🔥 Smoke Tests

Purpose: Verify basic functionality isn't broken
Duration: <30 seconds
Run: Every commit

Tests critical paths like:

  • App starts successfully
  • Databases are accessible
  • Required binaries exist
  • Config loads correctly
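
The real smoke tests live in tests/smoke/; the minimal sketch below only illustrates the shape they take, assuming the binaries being checked are the Calibre CLI tools `calibredb` and `ebook-convert` and using a throwaway SQLite file as a stand-in for CWA's databases:

```python
# Illustrative smoke tests; the actual checks in tests/smoke/ may differ.
import shutil
import sqlite3

import pytest

REQUIRED_BINARIES = ["calibredb", "ebook-convert"]  # assumed Calibre CLI tools


@pytest.mark.parametrize("binary", REQUIRED_BINARIES)
def test_required_binary_exists(binary):
    """Each required binary must be discoverable on PATH."""
    assert shutil.which(binary) is not None, f"{binary} not found on PATH"


def test_sqlite_database_is_accessible(tmp_path):
    """A SQLite file can be created, written, and queried."""
    db_path = tmp_path / "app.db"  # stand-in for one of CWA's three databases
    with sqlite3.connect(db_path) as conn:
        conn.execute("CREATE TABLE IF NOT EXISTS healthcheck (id INTEGER)")
        assert conn.execute("SELECT 1").fetchone() == (1,)
```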

🧪 Unit Tests

Purpose: Test individual functions in isolation
Duration: ~2 minutes
Run: Every commit

Tests components like:

  • Database operations (CWA_DB)
  • Helper functions
  • File validation
  • Format detection
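
To show the style of test this category expects, here is a minimal sketch built around a toy file-validation helper; CWA's real helper names and signatures will differ:

```python
# Illustrative unit test; `is_valid_ebook_file` is a hypothetical stand-in
# for CWA's actual file-validation helper.
from pathlib import Path


def is_valid_ebook_file(path: Path) -> bool:
    """A candidate ebook file must exist and be non-empty."""
    return path.is_file() and path.stat().st_size > 0


def test_non_empty_file_passes(tmp_path):
    book = tmp_path / "book.epub"
    book.write_bytes(b"PK\x03\x04")  # EPUBs are zip archives; this is the zip magic number
    assert is_valid_ebook_file(book)


def test_empty_file_fails(tmp_path):
    empty = tmp_path / "empty.epub"
    empty.touch()
    assert not is_valid_ebook_file(empty)
```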

🐋 Docker Tests

Purpose: Verify container health
Duration: ~1 minute
Run: Pre-merge

Tests Docker-specific behavior:

  • Container starts successfully
  • Services are running
  • Health checks pass
  • Volumes mounted correctly
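
A hedged sketch of how such a check can be written with pytest and the Docker CLI — the container name `cwa-test` is an assumption, and the real Docker tests may inspect different details:

```python
# Illustrative container-health checks using `docker inspect`.
import json
import subprocess

import pytest

CONTAINER = "cwa-test"  # assumed name of the container started by the test harness


def docker_inspect(name):
    """Return the `docker inspect` record for a container, or skip if it is absent."""
    result = subprocess.run(["docker", "inspect", name], capture_output=True, text=True)
    if result.returncode != 0:
        pytest.skip(f"container {name!r} is not running")
    return json.loads(result.stdout)[0]


def test_container_is_running():
    assert docker_inspect(CONTAINER)["State"]["Running"] is True


def test_expected_volumes_are_mounted():
    mounts = {m["Destination"] for m in docker_inspect(CONTAINER)["Mounts"]}
    # /config and /cwa-book-ingest are CWA's documented mount points
    assert {"/config", "/cwa-book-ingest"} <= mounts
```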

🔗 Integration Tests

Purpose: Test multi-component workflows
Duration: ~3-4 minutes
Run: Pre-merge

Tests complete workflows:

  • Book import pipeline
  • File conversion
  • Metadata enforcement
  • EPUB fixing
  • Database tracking
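
As a simplified sketch of an ingest-pipeline test, the snippet below drops a file into the ingest folder and polls for it to appear in the library. The mount paths and fixture location are assumptions; the real integration tests configure these through fixtures:

```python
# Illustrative ingest-pipeline test; paths and timings are assumptions.
import shutil
import time
from pathlib import Path

INGEST_DIR = Path("/cwa-book-ingest")             # assumed bind-mounted ingest folder
LIBRARY_DIR = Path("/calibre-library")            # assumed bind-mounted Calibre library
SAMPLE_EPUB = Path("tests/fixtures/sample.epub")  # assumed sample fixture


def wait_for(predicate, timeout=120, interval=2):
    """Poll until predicate() returns truthy or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False


def test_epub_dropped_in_ingest_lands_in_library():
    shutil.copy(SAMPLE_EPUB, INGEST_DIR / SAMPLE_EPUB.name)
    # the ingest service should pick the file up, process it, and add it to the library
    assert wait_for(lambda: any(LIBRARY_DIR.rglob("*.epub")))
```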

🔧 Requirements

Minimal requirements:

  • Python 3.8+
  • Docker (for integration tests)
  • Bash shell

Test dependencies:

pip install -r requirements-dev.txt

The interactive test runner will auto-install pytest if needed.

🎨 Features

Interactive Test Runner

  • Color-coded output - Easy to read results
  • Auto-detection - Chooses the right mode for your environment
  • Progress indicators - Know what's happening
  • Error handling - Clear error messages with fixes
  • Menu-driven - No need to remember commands

Dual-Mode Architecture

  • Transparent switching - Single environment variable toggles modes
  • CI optimized - Bind mounts for maximum speed
  • DinD compatible - Docker volumes when needed
  • Zero conflicts - Modes don't interfere with each other
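
The real switching logic lives in tests/conftest.py; the sketch below only illustrates the idea of a single USE_DOCKER_VOLUMES toggle, with a hypothetical fixture name and throwaway volume names:

```python
# Illustrative conftest-style fixture; the actual fixtures in tests/conftest.py
# may be structured differently.
import os
import subprocess
import uuid

import pytest

USE_DOCKER_VOLUMES = os.environ.get("USE_DOCKER_VOLUMES", "").lower() == "true"


@pytest.fixture
def ingest_mount(tmp_path):
    """Yield a Docker mount argument for the ingest directory in whichever mode is active."""
    if USE_DOCKER_VOLUMES:
        # Docker volume mode: create a throwaway named volume, remove it afterwards
        volume = f"cwa-test-ingest-{uuid.uuid4().hex[:8]}"
        subprocess.run(["docker", "volume", "create", volume], check=True)
        yield f"{volume}:/cwa-book-ingest"
        subprocess.run(["docker", "volume", "rm", volume], check=True)
    else:
        # bind mount mode: mount a host temp directory straight into the container
        yield f"{tmp_path}:/cwa-book-ingest"
```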

Smart Container Management

  • Log polling - Detects when the container is ready (~12 s instead of a fixed 60 s wait)
  • Auto-cleanup - Removes containers after tests
  • Volume management - Creates and destroys test volumes
  • Error recovery - Handles container failures gracefully
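
A rough sketch of the log-polling idea (the readiness marker string here is an assumption; the real suite waits for whatever line CWA prints once startup completes):

```python
# Illustrative readiness check that polls container logs.
import subprocess
import time


def wait_until_ready(container, marker="Server started", timeout=60):
    """Return once the marker string appears in the container logs."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = subprocess.run(
            ["docker", "logs", container], capture_output=True, text=True
        )
        if marker in result.stdout + result.stderr:
            return True
        time.sleep(1)
    raise TimeoutError(f"{container} not ready after {timeout}s")
```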

📈 Testing Roadmap

✅ Completed (Weeks 1-3)

  • Test infrastructure and documentation
  • Smoke tests for critical paths
  • Unit tests for core utilities (CWA_DB, helper functions)
  • Docker container health tests
  • Integration tests for ingest pipeline
  • Dual-mode architecture (bind mount + Docker volume)
  • Interactive test runner script

🚧 In Progress (Weeks 4-6)

  • Additional unit tests (ingest_processor, cover_enforcer)
  • OAuth flow integration tests
  • Kobo sync integration tests
  • Metadata provider tests
  • Sample test fixtures for all formats

📋 Planned (Weeks 7-12)

  • End-to-end workflow tests
  • Network share mode tests
  • Performance and load tests
  • Browser-based UI tests (Playwright)
  • Comprehensive CI/CD pipeline

Coverage Goals:

  • Week 6: 40% coverage
  • Week 12: 60% coverage
  • Long term: 70%+ coverage

🤝 Contributing

We welcome test contributions! Tests are one of the most valuable contributions you can make.

Easy first contributions:

  • Add unit tests for utility functions
  • Add parameterized tests for format detection (see the sketch after this list)
  • Create minimal sample ebook fixtures
  • Improve test documentation
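
For example, a parameterized format-detection test might look like the sketch below; the helper and extension list are hypothetical stand-ins for CWA's real code:

```python
# Illustrative parameterized test; swap in the real format-detection helper.
import pytest

SUPPORTED = {".epub", ".mobi", ".azw3", ".pdf"}  # illustrative subset of the 27+ formats


def is_supported_format(filename: str) -> bool:
    """Toy stand-in for the real format-detection helper."""
    return any(filename.lower().endswith(ext) for ext in SUPPORTED)


@pytest.mark.parametrize("filename,expected", [
    ("book.epub", True),
    ("book.MOBI", True),
    ("book.azw3", True),
    ("notes.txt", False),
])
def test_format_detection(filename, expected):
    assert is_supported_format(filename) is expected
```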

Getting started:

  1. Read the Testing Guide for Contributors
  2. Pick an untested area (check coverage report)
  3. Write tests following our patterns
  4. Submit a PR

Questions?

📚 Additional Resources

Documentation

Files

  • run_tests.sh - Interactive test runner
  • pytest.ini - Test configuration
  • requirements-dev.txt - Test dependencies
  • tests/conftest.py - Shared fixtures

Reports

# Generate coverage report
pytest --cov=cps --cov=scripts --cov-report=html

# View in browser
open htmlcov/index.html

Happy Testing! 🎉

Every test makes CWA more reliable and easier to maintain. Thank you for contributing!
