# Running Tests
This guide covers all the ways to run CWA tests, from quick verification to comprehensive test suites.
- Interactive Test Runner
- Manual Test Execution
- Testing Modes
- Test Categories
- Advanced Usage
- CI/CD Testing
## Interactive Test Runner

The easiest way to run tests is the interactive runner, which presents a friendly menu interface:

```bash
./run_tests.sh
```

### Integration Tests (Bind Mount Mode)

- Best for: Local development on host machine, CI/CD
- Technology: Temporary directories with bind mounts
- Result: 20/20 tests pass
- Duration: ~3-4 minutes
- Use when: Running on macOS, Linux, or Windows WSL2 host
What it tests:
- Complete book import workflow
- File format conversion
- Metadata database tracking
- EPUB fixing for Kindle compatibility
- File backup and cleanup
- Lock mechanism
- Error handling
### Integration Tests (Docker Volume Mode)

- Best for: Development containers, Docker-in-Docker scenarios
- Technology: Docker volumes with `docker cp` operations
- Result: 19/20 tests pass (1 skipped: `cwa_db_tracks_import`)
- Duration: ~3-4 minutes
- Use when: Running inside a dev container or remote container
Differences from bind mount mode:
- Uses Docker volumes instead of temporary directories
- Automatically detected when running in a container
- One test skipped (requires config volume mount)
- Slightly different file access patterns
### Docker Tests

- Purpose: Verify container health and initialization
- Result: 9/9 tests pass
- Duration: ~1 minute
- Use when: Validating Docker configuration changes
What it tests:
- Container starts successfully
- Web server responds to health checks
- Environment variables are read correctly
- Port binding works
- Basic API endpoints are accessible
### All Tests

- Purpose: Complete validation
- Result: Varies by environment (105+ tests total)
- Duration: ~5-7 minutes
- Use when: Pre-release validation, major changes
Includes:
- 13 smoke tests (basic functionality)
- 83 unit tests (individual functions)
- 9 Docker tests (container health)
- 20 integration tests (workflows)
### Quick Test

- Purpose: Fast verification that tests work
- Result: 1/1 test passes
- Duration: ~30 seconds
- Use when: First-time setup, quick validation
Runs a single representative integration test to verify:
- Docker is working
- Pytest is installed
- Container can start
- Basic file operations work
### Custom Tests

- Purpose: Let advanced users run specific tests
- Duration: Varies
- Use when: Debugging specific functionality
Examples of what you can enter:

```text
tests/unit/test_cwa_db.py          # Single file
tests/integration/ -k metadata     # Pattern match
tests/ -m "not slow"               # Marker-based
tests/smoke/ tests/unit/           # Multiple dirs
```
### Environment Info

- Purpose: Display environment details

Shows:
- Current environment (host vs Docker)
- Python version
- Docker availability
- Test modes available
- Documentation links
- Test statistics
## Manual Test Execution

### Run by Category

```bash
# Smoke tests - quick sanity checks (30 seconds)
pytest tests/smoke/ -v

# Unit tests - individual functions (2 minutes)
pytest tests/unit/ -v

# Docker tests - container health (1 minute)
pytest tests/docker/ -v

# Integration tests - workflows (3-4 minutes)
pytest tests/integration/ -v

# All tests (5-7 minutes)
pytest tests/ -v
```

### Run Specific Tests

```bash
# Run all tests in a file
pytest tests/unit/test_cwa_db.py -v
# Run specific test class
pytest tests/unit/test_cwa_db.py::TestCWADBInitialization -v
# Run specific test function
pytest tests/smoke/test_smoke.py::test_python_version -v
```

### Filter by Keyword

```bash
# Run tests with "database" in the name
pytest -k "database" -v
# Run tests with "import" or "export" in name
pytest -k "import or export" -v
# Exclude slow tests
pytest -k "not slow" -v# Run only smoke tests
pytest -m smoke -v
# Run only unit tests
pytest -m unit -v
# Run tests that don't require Docker
pytest -m "not requires_docker" -vAvailable markers:
-
@pytest.mark.smoke- Quick sanity checks -
@pytest.mark.unit- Unit tests -
@pytest.mark.integration- Integration tests -
@pytest.mark.requires_docker- Needs Docker -
@pytest.mark.requires_calibre- Needs Calibre tools -
@pytest.mark.slow- Takes >10 seconds
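Markers are plain pytest decorators and can be stacked on a single test. A minimal sketch of how they combine (the test name and body here are hypothetical):

```python
import pytest

@pytest.mark.integration
@pytest.mark.requires_docker
@pytest.mark.slow
def test_large_library_import():
    """Selected by -m integration; excluded by -m "not slow"."""
    ...
```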
## Testing Modes

CWA supports two testing modes for handling Docker-in-Docker scenarios.
### Bind Mount Mode (Default)

How to use:

```bash
# Just run tests normally
pytest tests/integration/ -v
```

How it works (see the fixture sketch below):

- Creates temporary directories in `/tmp/pytest-xxx`
- Mounts them into the test container as bind mounts
- Uses standard `Path` operations
- Fast and reliable
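Conceptually, the bind mount fixtures behave like the sketch below. The fixture names, ingest path, and image tag are illustrative assumptions, not the suite's actual code:

```python
import subprocess
import pytest

@pytest.fixture
def ingest_dir(tmp_path_factory):
    # pytest places this under /tmp/pytest-xxx on the host
    return tmp_path_factory.mktemp("ingest")

@pytest.fixture
def cwa_container(ingest_dir):
    # Bind-mount the host temp directory directly into the container
    container_id = subprocess.check_output(
        ["docker", "run", "-d",
         "-v", f"{ingest_dir}:/cwa-book-ingest",
         "crocodilestick/calibre-web-automated:latest"],
        text=True,
    ).strip()
    yield container_id
    subprocess.run(["docker", "rm", "-f", container_id], check=False)
```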
Best for:
- CI/CD (GitHub Actions)
- Local development on host machine
- macOS, Linux, Windows WSL2
Results:
- ✅ 20/20 integration tests pass
- ✅ All features work
- ✅ Full database access
### Docker Volume Mode

How to use:

```bash
# Set the environment variable
export USE_DOCKER_VOLUMES=true
pytest tests/integration/ -v

# Or as a one-liner
USE_DOCKER_VOLUMES=true pytest tests/integration/ -v
```

How it works (see the `VolumeHelper` sketch below):

- Creates Docker volumes for test data
- Transfers files using `docker cp` commands
- Uses a `VolumeHelper` class for file operations
- Extracts databases to temp files for queries
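The idea behind `VolumeHelper` can be sketched as follows; the method names and exact mechanics are assumptions based on the description above, not the real class:

```python
import subprocess
import tempfile

class VolumeHelper:
    """Move files in and out of Docker-managed storage via docker cp."""

    def __init__(self, container: str):
        self.container = container  # a running container that mounts the test volume

    def copy_in(self, host_path: str, container_path: str) -> None:
        # Replaces the direct Path writes used in bind mount mode
        subprocess.run(
            ["docker", "cp", host_path, f"{self.container}:{container_path}"],
            check=True,
        )

    def extract_db(self, container_path: str) -> str:
        # Pull a SQLite database out to a host temp file so tests can query it
        tmp = tempfile.NamedTemporaryFile(suffix=".db", delete=False)
        tmp.close()
        subprocess.run(
            ["docker", "cp", f"{self.container}:{container_path}", tmp.name],
            check=True,
        )
        return tmp.name
```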
Best for:
- Dev containers
- Remote containers
- Docker-in-Docker scenarios
- Codespaces, Gitpod
Results:
- ✅ 19/20 integration tests pass
- ⏭️ 1 test skipped (`cwa_db_tracks_import` - needs config volume)
- ✅ All core features work
Auto-detection:

The interactive test runner auto-detects your environment:

```bash
./run_tests.sh
# Shows: "Environment: docker" if in a container
# Shows: "Environment: host" if on the host
```
Mode comparison:

| Feature | Bind Mount | Docker Volume |
|---|---|---|
| Speed | Fast | Slightly slower |
| Setup | Automatic | Requires env var |
| CI/CD | ✅ Perfect | ✅ Works |
| Dev Container | ❌ Fails | ✅ Works |
| Tests Passing | 20/20 | 19/20 |
| Database Access | Direct | Extract to temp |
| File Operations | Path | VolumeHelper |
## Test Categories

### Smoke Tests

Purpose: Verify critical functionality isn't broken

```bash
pytest tests/smoke/ -v
```

What they test:
- Python version compatibility
- Flask app can be imported
- Required modules are available
- Database connections work
- Configuration loads
- Critical paths accessible
When to run: Before every commit
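A representative smoke test is a quick import-and-assert. This sketch is illustrative (the exact version floor is an assumption), though `test_python_version` itself is a real test in `tests/smoke/test_smoke.py`:

```python
import sys
import pytest

@pytest.mark.smoke
def test_python_version():
    # Fail fast on unsupported interpreters (version floor assumed here)
    assert sys.version_info >= (3, 8)
```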
### Unit Tests

Purpose: Test individual functions in isolation

```bash
pytest tests/unit/ -v
```

What they test:
CWA Database (`test_cwa_db.py` - 20 tests):
- Database initialization
- Table creation
- Insert/query operations
- Statistics retrieval
- Error handling
Helper Functions (`test_helper.py` - 63 tests):
- Email validation
- Password validation
- ISBN validation
- File format detection
- String manipulation
- Date/time formatting
When to run: Before every commit
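A typical validation-style unit test looks like the sketch below. The validator is a hypothetical stand-in for the helpers exercised by `test_helper.py`:

```python
import re
import pytest

def is_valid_isbn13(value: str) -> bool:
    # Hypothetical stand-in for the ISBN validation tested in test_helper.py
    digits = re.sub(r"[^0-9]", "", value)
    if len(digits) != 13:
        return False
    checksum = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits))
    return checksum % 10 == 0

@pytest.mark.unit
def test_isbn13_validation():
    assert is_valid_isbn13("978-3-16-148410-0")
    assert not is_valid_isbn13("978-3-16-148410-1")
```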
### Docker Tests

Purpose: Verify container functionality

```bash
pytest tests/docker/ -v
```

What they test:
- Container starts successfully
- Web server is accessible
- Health checks pass
- Environment variables work
- Port bindings correct
- API endpoints respond
When to run: When changing Docker configuration
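A container health check usually boils down to an HTTP probe like this sketch (the fixture name is hypothetical; port 8083 is CWA's default web port):

```python
import pytest
import requests

@pytest.mark.requires_docker
def test_web_server_responds(cwa_base_url):
    # cwa_base_url is a hypothetical fixture yielding e.g. "http://localhost:8083"
    resp = requests.get(cwa_base_url, timeout=10)
    assert resp.status_code == 200
```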
### Integration Tests

Purpose: Test complete workflows

```bash
pytest tests/integration/ -v
```

What they test:
Ingest Pipeline:
- EPUB import to library
- Format conversion (MOBI, TXT → EPUB)
- Multiple file processing
- International filenames
- Empty/corrupted file handling
- File backup creation
- Lock mechanism
Database Tracking:
- Import logging
- Conversion logging
- Metadata database updates
- Statistics updates
File Management:
- Filename truncation
- Directory cleanup
- Ignored format handling
- Zero-byte file handling
When to run: Before merging PRs
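An ingest-pipeline test drops a book into the watched ingest directory and polls until it appears in the library, roughly like this sketch (the directory fixtures and sample file path are illustrative):

```python
import shutil
import time
import pytest

@pytest.mark.integration
@pytest.mark.requires_docker
def test_epub_import(ingest_dir, library_dir):
    # ingest_dir / library_dir are hypothetical Path fixtures for the mounted dirs
    shutil.copy("tests/data/sample.epub", ingest_dir)
    deadline = time.time() + 60
    while time.time() < deadline:
        if any(library_dir.rglob("*.epub")):
            break  # the ingest processor moved the book into the library
        time.sleep(2)
    else:
        pytest.fail("Book was not imported within 60 seconds")
```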
## Advanced Usage

### Parallel Execution

Run tests faster using multiple CPUs (requires the `pytest-xdist` plugin):
```bash
# Auto-detect CPU count
pytest tests/unit/ -n auto

# Use a specific number of workers
pytest tests/unit/ -n 4
```

Note: Don't use parallel execution with integration tests (Docker container conflicts).
### Coverage Reports

Generate code coverage reports (requires the `pytest-cov` plugin):
```bash
# Terminal report
pytest tests/unit/ --cov=scripts --cov=cps --cov-report=term

# HTML report
pytest tests/unit/ --cov=scripts --cov=cps --cov-report=html

# Open the HTML report
open htmlcov/index.html      # macOS
xdg-open htmlcov/index.html  # Linux
```

Coverage goals:
- Critical modules (ingest, enforcer): 80%+
- Core application: 70%+
- Overall project: 50%+
### Verbosity

Control how much detail you see:

```bash
# Standard verbosity
pytest tests/smoke/ -v

# Extra verbose (shows test docstrings)
pytest tests/smoke/ -vv

# Quiet mode (just results)
pytest tests/smoke/ -q

# Show print statements
pytest tests/smoke/ -v -s
```

### Debugging

```bash
# Stop at first failure
pytest tests/ -x
# Drop into debugger on failure
pytest tests/ --pdb
# Show local variables in tracebacks
pytest tests/ -l
# Re-run only failed tests
pytest --lf
# Re-run failed tests first, then others
pytest --ff
```

### Timeouts

Prevent hanging tests (requires the `pytest-timeout` plugin):

```bash
# 5 second timeout per test
pytest tests/ --timeout=5
# 30 second timeout
pytest tests/integration/ --timeout=30
```

The default timeout is 300 seconds (5 minutes) per test.

### Output Control

```bash
# Short traceback format
pytest tests/ --tb=short
# No traceback
pytest tests/ --tb=no
# Show summary of all test outcomes
pytest tests/ -ra
# Show details only for failed tests
pytest tests/ -rf
```

### Selecting Tests by Marker

```bash
# List all available markers
pytest --markers
# Run only smoke tests
pytest -m smoke
# Run smoke OR unit tests
pytest -m "smoke or unit"
# Run integration but not slow tests
pytest -m "integration and not slow"Tests run automatically on:
- Every pull request
- Every push to main branch
- Nightly builds
CI Test Stages:

1. Smoke Tests (run first, ~30s)
   - Quick sanity checks
   - Fail fast on critical issues
2. Unit Tests (run in parallel, ~2m)
   - Individual function tests
   - Coverage reporting
3. Docker Tests (after unit tests, ~1m)
   - Container health checks
   - Integration validation
4. Integration Tests (after Docker tests, ~4m)
   - Complete workflow validation
   - Uses bind mount mode
View CI results:
- GitHub PR page → "Checks" tab
- Click on "CWA Tests" for details
Simulate the CI environment locally:

```bash
# Run exactly what CI runs
pytest tests/smoke/ -v
pytest tests/unit/ -v --cov=scripts --cov=cps
pytest tests/docker/ -v
pytest tests/integration/ -v
```

Or use the interactive runner:

```bash
./run_tests.sh
# Choose option 4 (All Tests)
```

### Pre-Commit Checks

Run before committing:
```bash
# Quick validation (1 minute)
pytest tests/smoke/ -v

# Full validation (5-7 minutes)
./run_tests.sh  # Choose option 4
```

### Release Checklist

Before tagging a release:
- All smoke tests pass
- All unit tests pass
- All Docker tests pass
- All integration tests pass
- Coverage hasn't decreased
- No new warnings
- Manual testing of critical paths
## Performance Expectations

Expected durations on modern hardware:
| Test Suite | Expected Time | Notes |
|---|---|---|
| Single smoke test | <1 second | Very fast |
| All smoke tests | 15-30 seconds | Quick validation |
| All unit tests | 1-2 minutes | Isolated tests |
| Docker tests | 45-60 seconds | Container startup |
| Integration tests | 3-4 minutes | Workflow validation |
| Full suite | 5-7 minutes | Everything |
| Quick test (option 5) | 20-40 seconds | Single integration test |
First run? Add 2-3 minutes for Docker image download.
Slower than expected?
- Run in parallel: `pytest -n auto`
- Use an SSD instead of an HDD
- Close other Docker containers
- Check Docker Desktop settings (increase CPU/RAM)
"pytest: command not found"
pip install pytest
# Or install all dev dependencies
pip install -r requirements-dev.txt"Docker is required but not found"
# Check Docker is running
docker ps
# If not, start Docker Desktop"Permission denied" on run_tests.sh
chmod +x run_tests.sh"Container failed to start"
# Check Docker daemon
docker ps
# Check for port conflicts
docker ps | grep 8083
# Clean up old containers
docker container prune -f"Bind mount not working"
You're probably in a dev container. Use Docker volume mode:
USE_DOCKER_VOLUMES=true pytest tests/integration/ -v"Database locked" errors
# Clean up lock files
rm /tmp/*.lock
# Make sure no other tests are running
pkill pytestFirst run? Docker downloads images (~1-2 GB). This happens once.
Still slow?

```bash
# Run in parallel (unit tests only)
pytest tests/unit/ -n auto

# Skip slow tests
pytest -m "not slow" tests/
```

"Tests pass locally but fail in CI"

Common causes:
- Missing dependency in `requirements-dev.txt`
- Docker-specific behavior - test needs `@pytest.mark.requires_docker`
- Calibre not available - test needs `@pytest.mark.requires_calibre`
- Timing issue - add a timeout or wait for container readiness
Check the CI logs in GitHub Actions for the specific error.
## Next Steps

- Writing Tests - Learn to write your own tests
- Docker-in-Docker Mode - Deep dive into volume mode
- Implementation Status - See what's tested and what's planned
Happy Testing! 🎉
Questions? Ask on Discord or open a GitHub issue.