Thank you for your interest in contributing to pytest-gremlins! This document provides guidelines and instructions for contributing.
This project adheres to a Code of Conduct. By participating, you are expected to uphold this code.
Before creating a bug report, please check existing issues to avoid duplicates. When creating a bug report, include:
- A clear, descriptive title
- Steps to reproduce the behavior
- Expected behavior
- Actual behavior
- Python version and OS
- pytest-gremlins version
- Minimal code example if possible
Feature requests are welcome! Please:
- Check existing issues and discussions first
- Describe the problem you're trying to solve
- Explain your proposed solution
- Consider alternatives you've thought about
We love pull requests! Here's how to contribute code:
- Fork the repository
- Create a feature branch from `main`
- Follow our development workflow (see below)
- Submit a pull request
```bash
# Clone your fork
git clone https://github.com/YOUR_USERNAME/pytest-gremlins.git
cd pytest-gremlins

# Install dependencies
uv sync --dev

# Install pre-commit hooks
uv run pre-commit install
uv run pre-commit install --hook-type commit-msg

# Verify setup
uv run pytest tests/small
```

We follow strict Test-Driven Development. This is non-negotiable:
- Write a failing test first - Before any production code
- Write minimal code to pass - No more, no less
- Refactor while green - Clean up only when tests pass
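The cycle above can be sketched in miniature. `mutate_comparison` is a hypothetical helper invented for this illustration, not part of the pytest-gremlins API:

```python
# Red-green-refactor sketch (mutate_comparison is a hypothetical
# helper, not part of the pytest-gremlins API).

# Step 1 (red): the test is written first; it fails because the
# function does not exist yet.
def test_less_than_flips_to_greater_or_equal():
    assert mutate_comparison('<') == '>='


# Step 2 (green): the minimal implementation that makes the test pass.
def mutate_comparison(op: str) -> str:
    flips = {'<': '>=', '>': '<=', '==': '!='}
    return flips[op]
```

Step 3 (refactor) happens only once the test is green, without changing observable behavior.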
For features, we write Gherkin scenarios first:
```gherkin
# features/my_feature.feature
Feature: Description of the feature

  Scenario: Specific behavior
    Given some precondition
    When I take some action
    Then I expect some outcome
```

Then implement step definitions in `tests/` before writing production code.
All development happens in git worktrees:
```bash
# Create a worktree for your feature
git worktree add ../pytest-gremlins-my-feature feature/my-feature

# Work in the worktree
cd ../pytest-gremlins-my-feature

# When done, clean up
git worktree remove ../pytest-gremlins-my-feature
```

We use Conventional Commits:
```
type(scope): description

[optional body]

[optional footer]
```
Types: feat, fix, docs, style, refactor, perf, test, chore, ci
Examples:

```
feat(operators): add string mutation operator
fix(instrumentation): handle async functions correctly
docs(readme): add installation instructions
test(operators): add tests for boundary mutations
```
```bash
# Run small (unit) tests - always fast
uv run pytest tests/small

# Run small + medium tests
uv run pytest tests/small tests/medium

# Run all tests
uv run pytest

# Run with coverage
uv run pytest --cov=pytest_gremlins --cov-report=html

# Run specific test file
uv run pytest tests/small/pytest_gremlins/test_operators.py

# Run specific test
uv run pytest tests/small/pytest_gremlins/test_operators.py::test_comparison_mutation
```

We target 69% code coverage, which may seem low for a TDD project. This is intentional due to inherent limitations of measuring coverage for pytest plugins.
When pytest loads a plugin, it imports the entire module tree before coverage.py starts instrumenting code. This creates unavoidable measurement gaps:
- Import-time code isn't measured - Module-level imports, constants, and function definitions execute during plugin loading, before coverage starts
- Multiprocessing code is difficult to measure - Worker pool code runs in subprocesses where coverage collection is complex
- Integration tests run in subprocesses - pytester (our integration test fixture) runs pytest in isolated subprocesses
- Files like `switcher.py`, `results.py`, and `score.py` show 0% coverage even though they have comprehensive test suites - function bodies are measured when tests call them, but function definitions aren't
- The 69% threshold reflects what's actually measurable, not test quality
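The import-time gap can be seen in a toy module. This is a hypothetical plugin file invented for illustration, not project code:

```python
# Hypothetical plugin module illustrating the measurement gap.
# Everything at module level runs when pytest imports the plugin,
# which can happen before coverage.py starts recording.

DEFAULT_TIMEOUT = 30  # executed at import time: may appear unmeasured


def score(killed: int, total: int) -> float:
    # The `def` statement itself executes at import time, but this
    # body only runs (and is only measured) when a test calls score().
    if total == 0:
        return 0.0
    return killed / total
```

A test suite that calls `score()` covers its body, yet the module-level lines and the `def` statement may still show as unexecuted if the import happened before coverage started.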
We exclude from measurement files that can't be properly measured:

- `__init__.py` files (re-exports only)
- `protocol.py` (type definitions)
- `gremlin.py` (pure dataclass)
- Files affected by import timing (`switcher.py`, `results.py`, `score.py`)
See pyproject.toml [tool.coverage] for full configuration.
- Don't skip tests to improve coverage numbers
- Do write tests for all new code (TDD is still mandatory)
- If coverage drops significantly, investigate whether it's a measurement issue or missing tests
```bash
# Linting
uv run ruff check src tests
uv run ruff format --check src tests

# Type checking
uv run mypy src/pytest_gremlins

# Run all checks (pre-commit runs these)
uv run pre-commit run --all-files
```

We use pytest-test-categories:
| Category | Location | Characteristics | Timeout |
|---|---|---|---|
| Small | `tests/small/` | Pure functions, no I/O, mocked deps | < 100ms |
| Medium | `tests/medium/` | Real filesystem, database, multiple components | < 10s |
| Large | `tests/large/` | End-to-end, external services | < 60s |
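The small/medium boundary can be illustrated with two tests of the same function, using only the standard library (the functions here are invented for illustration; the pytest-test-categories marker syntax is not shown):

```python
# Small vs. medium test sketch (illustrative code, not project code).
import tempfile
from pathlib import Path


def normalize(name: str) -> str:
    return name.strip().lower()


# tests/small/: pure function, no I/O, finishes in microseconds.
def test_normalize_strips_and_lowercases():
    assert normalize('  Gremlin ') == 'gremlin'


# tests/medium/: touches the real filesystem, so it belongs one tier up.
def test_roundtrip_through_real_file():
    with tempfile.TemporaryDirectory() as tmp:
        path = Path(tmp) / 'name.txt'
        path.write_text(normalize('  Gremlin '))
        assert path.read_text() == 'gremlin'
```

The same assertion moves categories the moment real I/O enters the test, which is why location (not intent) determines the timeout budget.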
- All public APIs need docstrings (Google style)
- Include doctests for examples
- Update user guide for behavior changes
- Run doctests: `uv run pytest --doctest-modules src/pytest_gremlins`
Before submitting a PR, ensure:

- Tests written FIRST (TDD)
- All tests pass (`uv run pytest`)
- Type checking passes (`uv run mypy src/pytest_gremlins`)
- Linting passes (`uv run ruff check .`)
- Formatting correct (`uv run ruff format --check .`)
- Docstrings added/updated for public APIs
- Doctests pass (`uv run pytest --doctest-modules src/pytest_gremlins`)
- Documentation updated if behavior changed
- Commit messages follow conventional commits
- PR description explains the change
- Quotes: Single quotes for strings
- Line length: 120 characters
- Imports: Sorted with vertical hanging indent
- Type hints: Required for all public functions
- Docstrings: Google style, required for public APIs
Example:

```python
from __future__ import annotations

from typing import TYPE_CHECKING

from pytest_gremlins.operators import (
    ArithmeticOperator,
    BooleanOperator,
    ComparisonOperator,
)

if TYPE_CHECKING:
    from collections.abc import Sequence


def process_gremlins(
    operators: Sequence[str],
    *,
    parallel: bool = True,
) -> list[str]:
    """Process gremlins using the specified operators.

    Args:
        operators: Names of operators to use.
        parallel: Whether to run in parallel.

    Returns:
        List of gremlin IDs that were processed.

    Raises:
        ValueError: If no operators are specified.

    Examples:
        >>> process_gremlins(['comparison', 'boolean'])
        ['gremlin_001', 'gremlin_002', ...]
    """
    if not operators:
        raise ValueError('At least one operator must be specified')
    # Implementation...
```

- Open an issue for bugs or feature requests
- Start a discussion for questions
- Check existing issues and discussions first
Contributors are recognized in our release notes. Thank you for helping make pytest-gremlins better!