Comprehensive guide to running, writing, and maintaining tests for the contextify project.
First, install development dependencies:
```bash
cd contextify
uv venv
source .venv/bin/activate  # On Windows: .venv\Scripts\Activate.ps1
uv pip install -e ".[dev]"
```

Then run the tests:

```bash
# Run all tests
pytest

# Run with coverage report
pytest --cov=src/contextify --cov-report=html

# Run with verbose output
pytest -v

# Run tests in parallel (faster)
pytest -n auto
```

The test suite is organized into logical modules:
```
tests/
├── __init__.py          # Test package marker
├── conftest.py          # Shared fixtures and configuration
├── test_logger.py       # Logging configuration tests (14 tests)
├── test_patterns.py     # Pattern matching tests (18 tests)
├── test_core.py         # Traversal and aggregation tests (14 tests)
├── test_main.py         # CLI argument parsing tests (12 tests)
└── test_integration.py  # End-to-end workflow tests (9 tests)
```

Total: ~80 tests covering all major components.
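The shared fixtures live in `conftest.py`; a minimal sketch of one of them might look like the following. The file names are illustrative, and the fixture is built directly on pytest's `tmp_path` here for self-containment rather than on the suite's own `temp_project_dir`:

```python
from pathlib import Path
from typing import List

import pytest


def make_python_files(root: Path) -> List[Path]:
    """Write a few small .py files under root and return their paths."""
    created = []
    for name in ("main.py", "utils.py", "models.py"):  # illustrative names
        path = root / name
        path.write_text(f'"""Sample module: {name}"""\n', encoding="utf-8")
        created.append(path)
    return created


@pytest.fixture
def sample_python_files(tmp_path: Path) -> List[Path]:
    """Create sample Python files inside pytest's per-test temp directory."""
    return make_python_files(tmp_path)
```

Keeping the file-creation logic in a plain helper (`make_python_files`) makes it reusable outside the fixture, e.g. in one-off scripts or other fixtures.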
Run a subset of the suite:

```bash
# Run one test file
pytest tests/test_patterns.py

# Run one test class
pytest tests/test_patterns.py::TestPatternMatcher

# Run one test
pytest tests/test_patterns.py::TestPatternMatcher::test_pattern_matcher_simple_wildcard

# Run tests matching a keyword expression
pytest -k "pattern_matcher"

# Run only the integration tests
pytest tests/test_integration.py
```

Purpose: Test individual components in isolation
Files: tests/test_logger.py, tests/test_patterns.py, tests/test_core.py, tests/test_main.py
Example:

```python
def test_pattern_matcher_simple_wildcard(self):
    """Test basic wildcard pattern matching."""
    matcher = PatternMatcher(["*.pyc"])
    assert matcher.is_match(Path("test.pyc"))
    assert not matcher.is_match(Path("test.py"))
```

Purpose: Test complete workflows combining multiple components
File: tests/test_integration.py
Example:

```python
def test_integration_basic_workflow(self, sample_python_files, temp_project_dir):
    """Test basic end-to-end workflow."""
    strategies = configure_ignore_strategies(temp_project_dir)
    rule_manager = IgnoreRuleManager(strategies)
    matcher = rule_manager.build_matcher()
    # ... continue workflow
```

Shared fixtures defined in `conftest.py`:

```python
@pytest.fixture
def temp_project_dir(tmp_path: Path) -> Path:
    """Isolated temporary directory for file system tests."""

@pytest.fixture
def sample_python_files(temp_project_dir: Path) -> List[Path]:
    """Create sample Python files in temporary directory."""

@pytest.fixture
def sample_mixed_files(temp_project_dir: Path) -> List[Path]:
    """Create mixed file types (Python, JavaScript, Markdown)."""

@pytest.fixture
def sample_ignored_dirs(temp_project_dir: Path) -> Path:
    """Create common ignored directories."""

@pytest.fixture
def sample_gitignore(temp_project_dir: Path) -> Path:
    """Create a .gitignore file with common patterns."""

@pytest.fixture
def sample_aicontextignore(temp_project_dir: Path) -> Path:
    """Create a .aicontextignore file for custom patterns."""

@pytest.fixture
def sample_patterns() -> List[str]:
    """Standard list of ignore patterns for testing."""

@pytest.fixture
def complex_patterns() -> List[str]:
    """Complex patterns including negations and directory rules."""

@pytest.fixture
def unicode_content() -> str:
    """Unicode-heavy content for encoding tests."""

@pytest.fixture
def caplog_setup(caplog):
    """Configure caplog for structured logging assertions."""
```

Generate HTML coverage report:
```bash
pytest --cov=src/contextify --cov-report=html
```

Then open htmlcov/index.html in your browser.
Target coverage: 85%+
View coverage in terminal:
```bash
pytest --cov=src/contextify --cov-report=term-missing
```

Every test should follow Arrange-Act-Assert:
```python
def test_something_specific(self, fixture):
    """
    Clear description of what is being tested.

    Arrange:
        Set up test data and conditions
    Act:
        Execute the code being tested
    Assert:
        Verify the results
    """
    # Arrange
    test_data = fixture

    # Act
    result = some_function(test_data)

    # Assert
    assert result == expected_value
```

Naming conventions:

- Class names: `Test<ComponentName>` (e.g., `TestPatternMatcher`)
- Test method names: `test_<class>_<method>_<scenario>` (e.g., `test_pattern_matcher_is_match_ignores_directory`)
For testing multiple scenarios:
```python
@pytest.mark.parametrize(
    "pattern,path,should_match",
    [
        ("*.log", "debug.log", True),
        ("*.log", "debug.txt", False),
        ("__pycache__/", "__pycache__", True),
    ],
)
def test_pattern_matcher_parametrized(self, pattern, path, should_match):
    """Test pattern matching with various patterns."""
    matcher = PatternMatcher([pattern])
    assert matcher.is_match(Path(path)) == should_match
```

Mock external dependencies:
```python
from unittest.mock import patch, MagicMock


@patch("contextify.patterns.Path.read_text")
def test_file_ignore_strategy_reads_file(mock_read, sample_patterns):
    """Test file reading."""
    mock_read.return_value = "\n".join(sample_patterns)
    # ... rest of test
```

Create `.github/workflows/tests.yml`:
```yaml
name: Tests

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.9", "3.10", "3.11", "3.12"]
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install -e ".[dev]"
      - run: pytest --cov=src/contextify
      - run: ruff check src/
```

Do:

- ✅ Write one assertion per test (or related assertions)
- ✅ Use descriptive test names
- ✅ Use fixtures for setup
- ✅ Mock external dependencies
- ✅ Test error conditions
- ✅ Use parametrization for multiple scenarios
- ✅ Keep tests fast
- ✅ Test public APIs
Don't:

- ❌ Use bare `except:` in tests
- ❌ Create external files outside of temp directories
- ❌ Test implementation details
- ❌ Make tests dependent on execution order
- ❌ Skip error handling tests
- ❌ Create tests that are slower than necessary
- ❌ Mix concerns in a single test
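The "mock external dependencies" rule above can be made concrete with a self-contained sketch. `read_patterns` is a hypothetical helper, not contextify's actual API; its file access is patched so the test never touches the disk:

```python
from pathlib import Path
from typing import List
from unittest.mock import patch


def read_patterns(ignore_file: Path) -> List[str]:
    """Hypothetical helper: return non-blank, non-comment lines of an ignore file."""
    lines = ignore_file.read_text(encoding="utf-8").splitlines()
    return [
        line.strip()
        for line in lines
        if line.strip() and not line.lstrip().startswith("#")
    ]


def test_read_patterns_skips_comments_and_blanks():
    """Patching Path.read_text keeps the test fully in memory."""
    fake_content = "# build artifacts\n*.pyc\n\n__pycache__/\n"
    with patch.object(Path, "read_text", return_value=fake_content):
        patterns = read_patterns(Path(".gitignore"))
    assert patterns == ["*.pyc", "__pycache__/"]
```

Using `patch.object` as a context manager keeps the patch scoped to the lines that need it, so unrelated assertions in the same test still see the real `Path`.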
Useful debugging commands:

```bash
# Maximum verbosity for a single test
pytest -vv tests/test_patterns.py::TestPatternMatcher::test_pattern_matcher_simple_wildcard

# Stop at the first failure
pytest -x

# Drop into the debugger on failure
pytest --pdb tests/test_patterns.py

# Show local variables in tracebacks
pytest -l

# Enable live logging output
pytest -o log_cli=true -o log_cli_level=DEBUG
```

Ensure the virtual environment is activated:
```bash
source .venv/bin/activate    # macOS/Linux
# or
.venv\Scripts\Activate.ps1   # Windows PowerShell
```

Reinstall the package in editable mode:
```bash
uv pip install -e .
```

Use parallel execution:
```bash
pytest -n auto
```

Install pytest-xdist:
```bash
uv pip install pytest-xdist
```

This is usually handled by pytest's tmp_path fixture. If you create custom temp files, ensure they're cleaned up:
```python
def test_something(tmp_path):
    test_file = tmp_path / "file.txt"
    # pytest automatically cleans up tmp_path after the test
```

To add tests for a new module:

- Create `tests/test_<module>.py`
- Import the module to test
- Create a test class `Test<Module>`
- Add test methods following the naming convention
- Use shared fixtures from `conftest.py`
- Document each test with a docstring
Example:

```python
"""Tests for contextify.<module>."""
import pytest

from contextify.<module> import SomeClass


class TestSomeClass:
    """Test suite for SomeClass."""

    def test_something(self, sample_patterns):
        """Test description in docstring."""
        # AAA pattern
        pass
```

Maintenance cadence:

- Weekly: Run tests locally before pushing
- Before PR: Run full test suite with coverage
- After merge: Verify CI/CD tests pass
- Quarterly: Review coverage report for gaps
When modifying source code:
- Run affected tests: `pytest tests/test_<module>.py`
- Run the full suite: `pytest`
- Check coverage: `pytest --cov=src/contextify`
- Update tests if behavior changes
Questions? Open an issue or discussion on GitHub.