Testing Overview
Welcome to the Calibre-Web Automated (CWA) testing documentation! This page provides an overview of the test suite and links to detailed guides.
- Quick Start Guide - Get up and running with tests in 5 minutes
- Running Tests - Interactive test runner and execution modes
- Docker-in-Docker Mode - Testing inside dev containers
- Writing Tests - Comprehensive guide for contributors
- Implementation Status - What's done and what's planned
CWA is a complex application with:
- Multiple background services (s6-overlay)
- Three separate SQLite databases
- 27+ ebook import formats
- Docker deployment across different architectures
- Integration with Calibre CLI tools
Automated tests help us:
- ✅ Catch bugs before they reach users
- ✅ Prevent regressions when adding new features
- ✅ Give contributors confidence their changes work
- ✅ Speed up the review process
- ✅ Serve as living documentation
| Category | Tests | Status | Coverage |
|---|---|---|---|
| Smoke Tests | 19 | ✅ Passing | Critical paths |
| Unit Tests | 83 | ✅ Passing | Core utilities |
| Docker Tests | 9 | ✅ Passing | Container health |
| Integration Tests | 20 | ✅ Passing | Auto-ingest workflows |
| Total | 131+ | ✅ Working | ~14% code, 100% critical paths |
Note: Tests intelligently skip when dependencies are unavailable (e.g., no container, no Calibre tools). This means tests run cleanly in all environments without false failures.
CWA supports two testing modes:
1. Bind Mount Mode (Default)
- Used by CI/CD (GitHub Actions) and local development
- Fast and reliable
- Works on host systems
- Result: All tests pass, with smart skipping when no container is available
2. Docker Volume Mode
- For development containers (Docker-in-Docker)
- Automatic volume management via `docker cp`
- Handles bind mount limitations
- Result: All tests passing with expected skips for volume-incompatible features
The easiest way to run tests:
```bash
./run_tests.sh
```
This gives you a friendly menu with 7 options:
- Integration Tests (Bind Mount)
- Integration Tests (Docker Volume)
- Docker Startup Tests
- All Tests
- Quick Test (30 seconds)
- Custom Selection
- Info & Status
Quick verification:
```bash
pytest tests/smoke/ -v        # 30 seconds
```
Full integration suite:
```bash
pytest tests/integration/ -v  # 3-4 minutes
```
In a dev container:
```bash
USE_DOCKER_VOLUMES=true pytest tests/integration/ -v
```
Smoke Tests
Purpose: Verify basic functionality isn't broken
Duration: <10 seconds
Run: Every commit
Tests critical paths like:
- App starts successfully
- Databases are accessible
- Required binaries exist (when in container)
- Config loads correctly
- Lock mechanisms work
Smart Skipping: Tests automatically skip when not in container environment instead of failing.
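The smart-skipping pattern can be sketched with a `pytest.mark.skipif` marker. The environment checks below (`/run/s6`, a `CWA_IN_CONTAINER` variable) are illustrative assumptions, not CWA's actual detection logic:

```python
import os

import pytest


def in_container() -> bool:
    """Heuristic container check (illustrative; the real suite may differ)."""
    return os.path.exists("/run/s6") or os.environ.get("CWA_IN_CONTAINER") == "1"


# Reusable marker: tests skip with a clear reason instead of failing.
requires_container = pytest.mark.skipif(
    not in_container(), reason="not running inside a CWA container"
)


@requires_container
def test_required_binaries_exist():
    # Calibre CLI tools are only present inside the container image.
    assert os.path.exists("/usr/bin/ebook-convert")
```

Applied this way, a host-only `pytest` run reports the test as skipped rather than failed, which is what keeps the suite green in every environment.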
Unit Tests
Purpose: Test individual functions in isolation
Duration: ~2 minutes
Run: Every commit
Tests components like:
- Database operations (CWA_DB)
- Helper functions
- File validation
- Format detection
Docker Tests
Purpose: Verify container health and startup
Duration: ~1 minute (or <1 second if no container)
Run: Pre-merge
Tests Docker-specific behavior:
- Container starts successfully
- Services are running
- Health checks pass
- Volumes mounted correctly
- Web interface accessible
Smart Skipping: Tests skip gracefully when no container is available on configured port (default 8085 for local, 8083 for CI).
Integration Tests
Purpose: Test multi-component workflows with real Calibre tools
Duration: ~3-4 minutes
Run: Pre-merge, every PR
Tests complete auto-ingest workflows:
- Book import pipeline - All 27+ supported formats
- File conversion - MOBI→EPUB, TXT→EPUB, etc.
- Format detection - Accurate file type identification
- Error handling - Corrupted/empty files handled gracefully
- Database tracking - Both metadata.db and cwa.db updated
- Backup system - Files archived to processed_books/
- International support - Unicode filenames work correctly
- Lock mechanism - Prevents concurrent processing issues
- Configuration respect - Settings like ignored formats honored
- Stability - Bulk imports don't crash system
These are the most important tests - they verify the core auto-ingest feature that makes CWA valuable.
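The lock mechanism mentioned above can be sketched as an exclusive file lock. The lock path and helper name are hypothetical (and `fcntl` is Unix-only); CWA's actual implementation may differ:

```python
import fcntl
import os
from contextlib import contextmanager


@contextmanager
def ingest_lock(lock_path: str = "/tmp/cwa_ingest.lock"):
    """Hold an exclusive, non-blocking flock for the duration of the block."""
    fd = os.open(lock_path, os.O_CREAT | os.O_RDWR)
    try:
        # Raises BlockingIOError immediately if another process holds the lock.
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        yield
    finally:
        fcntl.flock(fd, fcntl.LOCK_UN)
        os.close(fd)
```

A second ingest run entering `ingest_lock()` while the first holds it fails fast instead of processing the same drop folder twice.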
Minimal requirements:
- Python 3.10+
- Docker (for integration tests)
- Bash shell
Test dependencies:
```bash
pip install -r requirements-dev.txt
# Or manually:
pip install pytest pytest-timeout pytest-flask pytest-mock faker testcontainers requests
```
The interactive test runner will check dependencies and guide you through installation if needed.
- Color-coded output - Easy to read results
- Auto-detection - Chooses right mode for your environment
- Progress indicators - Know what's happening
- Error handling - Clear error messages with fixes
- Menu-driven - No need to remember commands
- Transparent switching - Single environment variable toggles modes
- CI optimized - Bind mounts for maximum speed
- DinD compatible - Docker volumes when needed
- Zero conflicts - Modes don't interfere with each other
- Container availability detection - Tests check if container is running before attempting connection
- Graceful skipping - Tests skip with clear messages instead of failing
- Port configuration - Default 8085 for local (avoids conflicts), 8083 for CI
- Log polling - Detects when container is ready (~12s vs 60s)
- Auto-cleanup - Removes containers after tests
- Volume management - Creates and destroys test volumes
- Error recovery - Handles container failures gracefully
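The log-polling idea above can be sketched as a loop over `docker logs`; the readiness marker string and timings here are assumptions, not the runner's actual values:

```python
import subprocess
import time


def wait_for_ready(container: str, marker: str = "[services.d] done.",
                   timeout: float = 60.0, interval: float = 1.0) -> bool:
    """Poll container logs until a readiness marker appears (or time out)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = subprocess.run(
            ["docker", "logs", container],
            capture_output=True, text=True,
        )
        if marker in result.stdout + result.stderr:
            return True
        time.sleep(interval)
    return False
```

Returning as soon as the marker appears is what cuts startup waits from a fixed 60-second sleep to roughly 12 seconds in practice.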
- Test infrastructure and documentation
- Smoke tests for critical paths (19 tests)
- Unit tests for core utilities (83 tests - CWA_DB, helper functions)
- Docker container health tests (9 tests)
- Integration tests for auto-ingest pipeline (20 comprehensive tests)
- Dual-mode architecture (bind mount + Docker volume)
- Interactive test runner script
- Container availability detection and smart skipping
- Port conflict resolution (8085 default for local dev)
- CI/CD integration with GitHub Actions
Current Status: 131+ tests, all passing with intelligent environment detection. Core auto-ingest system thoroughly tested.
- Additional unit tests (ingest_processor modules, cover_enforcer)
- OAuth flow integration tests
- Kobo sync integration tests
- Metadata provider tests (Google Books, Hardcover, etc.)
- EPUB fixer integration tests
- Cover enforcer integration tests
- Auto-metadata fetch tests
- Sample test fixtures for all 27+ supported formats
- End-to-end workflow tests
- Network share mode tests
- Performance and load tests
- Browser-based UI tests (Playwright)
Coverage Goals:
- Current: ~14% code coverage, 100% critical path coverage
- Short term: 25% code coverage (core features fully tested)
- Long term: 50%+ code coverage (comprehensive test suite)
Focus Areas: The auto-ingest system is the most critical feature and is already comprehensively tested. Future work will expand to other CWA features.
We welcome test contributions! Tests are one of the most valuable contributions you can make.
Easy first contributions:
- Add unit tests for utility functions
- Add parameterized tests for format detection
- Create minimal sample ebook fixtures
- Improve test documentation
Getting started:
- Read the Testing Guide for Contributors
- Pick an untested area (check coverage report)
- Write tests following our patterns
- Submit a PR
Questions?
- Discord: https://discord.gg/EjgSeek94R
- GitHub Issues: Tag with the `testing` label
- Quick Start Guide - 5-minute setup
- Running Tests - All execution modes
- Docker-in-Docker Mode - Dev container testing
- Writing Tests - Complete guide
- Implementation Status - Progress tracking
- `run_tests.sh` - Interactive test runner
- `pytest.ini` - Test configuration
- `requirements-dev.txt` - Test dependencies
- `tests/conftest.py` - Shared fixtures
```bash
# Generate coverage report
pytest --cov=cps --cov=scripts --cov-report=html

# View in browser
open htmlcov/index.html
```
Happy Testing! 🎉
Every test makes CWA more reliable and easier to maintain. Thank you for contributing!