# Testing Guide for Contributors
Welcome to the Calibre-Web Automated testing guide! This comprehensive guide will help you write effective tests for new features and bug fixes.
> **New to testing?** Start with the [Quick Start Guide](Testing-Quick-Start.md) for a 5-minute introduction.
## 📚 Documentation
This is part of a complete testing documentation set:
- **[Testing Overview](Testing-Overview.md)** - Complete testing system overview
- **[Quick Start Guide](Testing-Quick-Start.md)** - Get running in 5 minutes
- **[Running Tests](Testing-Running-Tests.md)** - All execution modes and options
- **[Docker-in-Docker Mode](Testing-Docker-in-Docker-Mode.md)** - Testing in dev containers
- **This Guide** - Writing and contributing tests
- **[Implementation Status](Testing-Implementation-Status.md)** - Progress tracking
## Table of Contents
- [Why We Test](#why-we-test)
- [Getting Started](#getting-started)
- [Test Categories](#test-categories)
- [Writing Your First Test](#writing-your-first-test)
- [Common Testing Patterns](#common-testing-patterns)
- [Testing Checklist for PRs](#testing-checklist-for-prs)
- [Advanced Topics](#advanced-topics)
- [Getting Help](#getting-help)
---
## Why We Test
CWA is a complex application with:
- Multiple background services (s6-overlay)
- Three separate SQLite databases
- 27+ ebook import formats
- Docker deployment across different architectures
- Integration with Calibre CLI tools
**Manual testing is time-consuming and error-prone.** Automated tests help us:
- ✅ Catch bugs before they reach users
- ✅ Prevent regressions when adding new features
- ✅ Give contributors confidence their changes work
- ✅ Speed up the review process
- ✅ Serve as living documentation
---
## Getting Started
### 1. Install Test Dependencies
From the project root directory:
```bash
pip install -r requirements-dev.txt
```
This installs pytest and related testing tools.
### 2. Verify Installation
Use the interactive test runner:
```bash
./run_tests.sh
# Choose option 5 (Quick Test)
```
Or run a single smoke test:
```bash
pytest tests/smoke/test_smoke.py::test_smoke_suite_itself -v
```
You should see: ✅ **PASSED**
### 3. Explore the Test Structure
```
tests/
├── conftest.py               # Shared fixtures (bind mount mode)
├── conftest_volumes.py       # Docker volume mode fixtures
├── smoke/                    # Fast sanity checks (~30 seconds)
│   └── test_smoke.py         # 13 tests
├── unit/                     # Isolated component tests (~2 minutes)
│   ├── test_cwa_db.py        # 20 tests
│   └── test_helper.py        # 63 tests
├── docker/                   # Container health (~1 minute)
│   └── test_container_startup.py   # 9 tests
├── integration/              # Multi-component tests (~3-4 minutes)
│   └── test_ingest_pipeline.py     # 20 tests
└── fixtures/                 # Sample test data
    └── sample_books/
```
**Total**: 125+ working tests
---
## Test Categories
### 🔥 Smoke Tests (Priority: CRITICAL)
**Location**: `tests/smoke/`
**Run Time**: <30 seconds
**Purpose**: Verify basic functionality isn't broken
**When to add smoke tests:**
- Core application startup
- Database connectivity
- Required binaries are present
- Critical configuration loading
**Example:**
```python
@pytest.mark.smoke
def test_app_can_start():
    """Verify Flask app initializes without errors."""
    from cps import create_app

    app = create_app()
    assert app is not None
```
### 🧪 Unit Tests (Priority: HIGH)
**Location**: `tests/unit/`
**Run Time**: ~2 minutes
**Purpose**: Test individual functions in isolation
**When to add unit tests:**
- New utility functions
- Data validation logic
- Format detection/parsing
- Database operations
- File handling logic
**Example:**
```python
@pytest.mark.unit
def test_file_format_detection():
    """Verify supported and unsupported formats are correctly identified."""
    from scripts.ingest_processor import is_supported_format

    assert is_supported_format("book.epub") is True
    assert is_supported_format("book.txt") is True
    assert is_supported_format("book.exe") is False
```
### 🔗 Integration Tests (Priority: MEDIUM)
**Location**: `tests/integration/`
**Run Time**: ~10 minutes
**Purpose**: Test multiple components working together
**When to add integration tests:**
- Ingest pipeline workflows
- Database + file system interactions
- Calibre CLI integration
- OAuth/LDAP authentication flows
**Example:**
```python
@pytest.mark.integration
def test_book_import_workflow(temp_library, sample_epub):
    """Verify complete book import process."""
    result = import_book(sample_epub, temp_library)

    assert result['success'] is True
    assert book_exists_in_library(temp_library, result['book_id'])
```
### 🎯 E2E Tests (Priority: LOW initially)
**Location**: `tests/e2e/`
**Run Time**: ~30 minutes
**Purpose**: Test complete user workflows in Docker
**When to add E2E tests:**
- Major feature releases
- Multi-service interactions
- Docker-specific behavior
- Network share mode testing
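**Example** (a hedged sketch; the `e2e` marker, sample file path, and `book_visible_in_web_ui` helper are illustrative assumptions, not existing project code):
```python
import shutil
import time

import pytest


@pytest.mark.e2e
@pytest.mark.requires_docker
def test_ingest_folder_import(tmp_path):
    """Drop an EPUB into a watched ingest folder and wait for import."""
    # Hypothetical ingest directory mounted into the running container
    ingest_dir = tmp_path / "ingest"
    ingest_dir.mkdir()
    shutil.copy("tests/fixtures/sample_books/sample.epub", ingest_dir)

    # Poll until the book appears in the web UI or a timeout is reached
    deadline = time.time() + 120
    while time.time() < deadline:
        if book_visible_in_web_ui("sample"):  # hypothetical helper
            return
        time.sleep(5)
    pytest.fail("Book was not imported within 2 minutes")
```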
---
## Writing Your First Test
### Step 1: Choose the Right Category
Ask yourself:
1. Does this test require Docker? → **E2E**
2. Does it test multiple components? → **Integration**
3. Does it test one function? → **Unit**
4. Does it verify basic functionality? → **Smoke**
### Step 2: Create Your Test File
Create a new file following the naming convention:
```bash
# Unit test example
touch tests/unit/test_my_feature.py
```
### Step 3: Write Your Test
```python
"""
Unit tests for my new feature.
Brief description of what this module tests.
"""
import pytest
@pytest.mark.unit
class TestMyFeature:
"""Test suite for MyFeature functionality."""
def test_feature_with_valid_input(self):
"""Test that feature handles valid input correctly."""
# Arrange
input_data = "valid input"
# Act
result = my_function(input_data)
# Assert
assert result is not None
assert result == "expected output"
def test_feature_with_invalid_input(self):
"""Test that feature handles invalid input gracefully."""
with pytest.raises(ValueError):
my_function(None)
```
### Step 4: Use Fixtures for Setup
Instead of creating test data manually, use fixtures from `conftest.py`:
```python
def test_with_database(temp_cwa_db):
    """The temp_cwa_db fixture is automatically available."""
    # The test uses a temporary database that is cleaned up automatically
    temp_cwa_db.insert_import_log(1, "Test Book", "EPUB", "/path")
    assert temp_cwa_db.get_total_imports() == 1
```
**Available fixtures:**
- `temp_dir` - Temporary directory
- `temp_cwa_db` - Temporary CWA database
- `temp_library_dir` - Temporary Calibre library
- `sample_book_data` - Sample book metadata
- `sample_user_data` - Sample user data
- `mock_calibre_tools` - Mocked Calibre binaries
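Fixtures can be combined freely by listing more than one parameter (a minimal sketch; the exact contents of these fixtures depend on `tests/conftest.py`):
```python
def test_fixtures_can_be_combined(temp_library_dir, sample_book_data):
    """Request any number of fixtures by name; pytest injects them all."""
    # Both fixtures come from tests/conftest.py; their exact shape may vary
    assert temp_library_dir is not None
    assert sample_book_data is not None
```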
### Step 5: Run Your Test
```bash
pytest tests/unit/test_my_feature.py -v
```
### Step 6: Check Coverage
```bash
pytest tests/unit/test_my_feature.py --cov=my_module --cov-report=term
```
Aim for **>80% coverage** on new code.
---
## Common Testing Patterns
### Pattern 1: Testing Database Operations
```python
def test_database_insert(temp_cwa_db):
    """Test inserting data into database."""
    # Insert
    temp_cwa_db.insert_import_log(
        book_id=1,
        title="Test Book",
        format="EPUB",
        file_path="/path/to/book.epub"
    )

    # Query and verify
    logs = temp_cwa_db.query_import_logs(limit=1)
    assert len(logs) == 1
    assert logs[0]['title'] == "Test Book"
```
### Pattern 2: Testing File Operations
```python
def test_file_processing(tmp_path):
    """Test file is processed correctly."""
    # Create test file
    test_file = tmp_path / "test.epub"
    test_file.write_text("epub content")

    # Process
    result = process_file(str(test_file))

    # Verify
    assert result['success'] is True
    assert test_file.exists()  # Or doesn't exist, depending on logic
```
### Pattern 3: Testing with Mock Calibre Tools
```python
def test_calibre_import(mock_calibre_tools, sample_epub):
    """Test book import using Calibre."""
    # mock_calibre_tools automatically mocks subprocess.run
    result = import_with_calibredb(sample_epub)

    assert result is True
    mock_calibre_tools['calibredb'].assert_called_once()
```
### Pattern 4: Parameterized Tests (Test Multiple Inputs)
```python
@pytest.mark.parametrize("fmt,expected", [
    ("epub", True),
    ("mobi", True),
    ("pdf", True),
    ("exe", False),
    ("txt", True),
])
def test_format_detection(fmt, expected):
    """Test format detection for multiple file types."""
    result = is_supported_format(f"book.{fmt}")
    assert result == expected
```
### Pattern 5: Testing Error Handling
```python
def test_handles_missing_file_gracefully():
    """Test that missing files don't crash the app."""
    result = process_file("/nonexistent/file.epub")

    assert result['success'] is False
    assert 'error' in result
    assert "not found" in result['error'].lower()
```
### Pattern 6: Testing Async/Background Tasks
```python
@pytest.mark.timeout(10)  # Fail if the test takes >10 seconds
def test_background_task_completes():
    """Test background task runs to completion."""
    import time

    task = start_background_task()

    # Wait for completion with a timeout
    max_wait = 5
    start = time.time()
    while not task.is_complete() and (time.time() - start) < max_wait:
        time.sleep(0.1)

    assert task.is_complete()
    assert task.success is True
```
---
## Testing Checklist for PRs
Before submitting a pull request, verify:
### ✅ Tests Added
- [ ] New features have corresponding tests
- [ ] Bug fixes have regression tests
- [ ] At least one test per new function/method
### ✅ Tests Pass
- [ ] All smoke tests pass: `pytest tests/smoke/ -v`
- [ ] All unit tests pass: `pytest tests/unit/ -v`
- [ ] New tests pass individually
- [ ] Tests pass in CI/CD pipeline
### ✅ Code Coverage
- [ ] New code has >70% test coverage
- [ ] Critical functions have >80% coverage
- [ ] Check with: `pytest --cov=. --cov-report=term`
### ✅ Test Quality
- [ ] Tests have descriptive names
- [ ] Tests have docstrings explaining what they verify
- [ ] Tests use fixtures instead of manual setup
- [ ] Tests clean up after themselves (automatic with fixtures)
- [ ] Tests are independent (don't rely on other tests)
### ✅ Documentation
- [ ] Complex test logic is commented
- [ ] Test file has module-level docstring
- [ ] Non-obvious test behavior is explained
---
## Troubleshooting
### "Module not found" errors
```bash
# Make sure you're in the project root
cd /app/calibre-web-automated
# Reinstall dependencies
pip install -r requirements.txt
pip install -r requirements-dev.txt
```
### Tests pass locally but fail in CI
This usually means:
- Missing dependency in `requirements-dev.txt`
- Docker-specific behavior (test needs `@pytest.mark.requires_docker`)
- Calibre tools not available (test needs `@pytest.mark.requires_calibre`)
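Applying these markers is a one-line change (a minimal sketch; assumes the markers are registered in the project's pytest configuration and skipped by `conftest.py` when the environment lacks the tools):
```python
import pytest


@pytest.mark.requires_calibre
def test_conversion_needs_calibre(tmp_path):
    """Only meaningful where the Calibre binaries are installed."""
    # convert_to_epub is a hypothetical helper used purely for illustration
    result = convert_to_epub(str(tmp_path / "book.mobi"))
    assert result is not None
```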
### Database locked errors
```bash
# Clear lock files
rm /tmp/*.lock
# Tests should use temp_cwa_db fixture to avoid conflicts
```
### Tests are slow
```bash
# Run tests in parallel
pytest -n auto tests/unit/
# Skip slow tests during development
pytest -m "not slow" tests/
```
### "Permission denied" errors
Tests should use `tmp_path` or `temp_dir` fixtures, not system directories:
```python
# ❌ Bad - uses system directory
def test_bad():
    with open('/config/test.txt', 'w') as f:
        f.write('test')


# ✅ Good - uses temporary directory
def test_good(tmp_path):
    test_file = tmp_path / "test.txt"
    test_file.write_text('test')
```
### "Fixture not found" errors
Common fixtures are in `tests/conftest.py` and automatically available. If you see this error:
1. Check spelling of fixture name
2. Verify fixture exists in `conftest.py`
3. Check that test file is in `tests/` directory
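You can also ask pytest to list every fixture visible to a test file:
```bash
pytest --fixtures tests/unit/test_my_feature.py
```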
---
## Advanced Topics
### Running Tests in Docker
```bash
# Start container
docker compose up -d
# Run tests inside container
docker exec -it calibre-web-automated pytest tests/smoke/ -v
```
### Creating Custom Fixtures
Add to `tests/conftest.py`:
```python
@pytest.fixture
def my_custom_fixture():
    """Provide custom test data."""
    # Setup
    data = create_test_data()

    yield data

    # Cleanup (optional)
    cleanup_test_data(data)
```
Then use in tests:
```python
def test_with_custom_fixture(my_custom_fixture):
    assert my_custom_fixture is not None
```
### Mocking External Services
```python
def test_with_mocked_api(requests_mock):
    """Test API integration with mocked responses."""
    # Mock API response
    requests_mock.get(
        'https://api.example.com/metadata',
        json={'title': 'Mocked Book'}
    )

    # Test function that calls API
    result = fetch_metadata('123')
    assert result['title'] == 'Mocked Book'
```
### Testing with Time Travel
```python
from freezegun import freeze_time


@freeze_time("2024-01-01 12:00:00")
def test_scheduled_task():
    """Test task runs at scheduled time."""
    # Code under test thinks it's 2024-01-01 at noon
    result = should_run_daily_task()
    assert result is True
```
---
## Getting Help
- **Full documentation**: See `TESTING_STRATEGY.md` in project root
- **Example tests**: Browse `tests/smoke/` and `tests/unit/` directories
- **Ask questions**: Discord server: https://discord.gg/EjgSeek94R
- **Report issues**: GitHub Issues
---
## Contributing Tests
We appreciate test contributions! Here's how to help:
1. **Pick an untested area**: Check coverage report to find gaps
2. **Write tests**: Follow patterns in this guide
3. **Run tests locally**: Verify they pass
4. **Submit PR**: Include tests with your feature/bugfix
5. **Respond to feedback**: Reviewers may suggest improvements
**Good first test contributions:**
- Add missing unit tests for utility functions
- Add parameterized tests for format detection
- Add edge case tests for existing functions
- Improve test coverage of core modules
---
## Test Coverage Goals
- **Critical modules** (ingest_processor, cwa_db, helper): **80%+**
- **Core application**: **70%+**
- **Overall project**: **50%+**
Check current coverage:
```bash
pytest --cov=cps --cov=scripts --cov-report=term --cov-report=html
```
View detailed report: Open `htmlcov/index.html` in your browser
---
## Quick Reference
```bash
# Most common commands
pytest tests/smoke/ -v              # Fast sanity check
pytest tests/unit/ -v               # Unit tests
pytest -k "test_name" -v            # Run specific test
pytest --cov=. --cov-report=html    # Coverage report
pytest -n auto                      # Parallel execution
pytest --lf                         # Run last failed tests
pytest -x                           # Stop on first failure
pytest -vv                          # Extra verbose
```
---
**Thank you for contributing to CWA's test suite!** 🎉 Every test makes the project more reliable and easier to maintain.