Comprehensive testing documentation for the MCP Gateway Registry project.
- Quick Start
- Test Structure
- Running Tests
- Test Categories
- Coverage Requirements
- CI/CD Integration
- Troubleshooting
## Quick Start

Run all tests:

```bash
make test
```

Run specific test categories:
```bash
# Unit tests only (fast)
make test-unit

# Integration tests
make test-integration

# E2E tests (slow)
make test-e2e

# With coverage report
make test-coverage
```

Run tests using pytest directly:
```bash
# All tests
uv run pytest

# Specific test file
uv run pytest tests/unit/test_server_service.py

# Specific test class
uv run pytest tests/unit/test_server_service.py::TestServerService

# Specific test function
uv run pytest tests/unit/test_server_service.py::TestServerService::test_register_server

# With verbose output
uv run pytest -v

# With coverage
uv run pytest --cov=registry --cov-report=html
```

## Test Structure

The test suite is organized into three main categories:
```text
tests/
├── unit/                     # Unit tests (fast, isolated)
│   ├── services/             # Service layer tests
│   ├── api/                  # API endpoint tests
│   ├── core/                 # Core functionality tests
│   └── agents/               # Agent-specific tests
├── integration/              # Integration tests (slower)
│   ├── test_server_integration.py
│   ├── test_api_integration.py
│   └── test_e2e_workflows.py
├── fixtures/                 # Shared test fixtures
│   └── factories.py          # Factory functions for test data
├── conftest.py               # Shared pytest configuration
└── reports/                  # Test reports and coverage data
```
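The factory functions in `tests/fixtures/factories.py` build test data with sensible defaults that individual tests can override. A minimal sketch of the pattern (the `ServerRecord` model and its fields are hypothetical; the real registry models will differ):

```python
from dataclasses import dataclass, field
import uuid


# Hypothetical model for illustration only; the real registry
# defines its own server model with different fields.
@dataclass
class ServerRecord:
    name: str
    url: str
    server_id: str = field(default_factory=lambda: uuid.uuid4().hex)


def make_server(**overrides) -> ServerRecord:
    """Build a ServerRecord with defaults, overridable per test."""
    defaults = {"name": "test-server", "url": "http://localhost:8000"}
    defaults.update(overrides)
    return ServerRecord(**defaults)
```

A test then calls `make_server(name="custom")` and only spells out the fields it actually cares about, which keeps test bodies short and resilient to model changes.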
- **Unit tests**: Test individual components in isolation
    - Mock external dependencies
    - Fast execution (< 1 second per test)
    - High coverage of edge cases
- **Integration tests**: Test component interactions
    - May use real services (databases, files)
    - Moderate execution time (< 5 seconds per test)
    - Test realistic workflows
- **E2E tests**: Test complete user workflows
    - Test entire system end-to-end
    - Slower execution (5-30 seconds per test)
    - Marked with `@pytest.mark.slow`
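A unit test in this style mocks the external dependency and asserts on the interaction. A minimal sketch using stdlib `unittest.mock` (the `ServerService` shown here is a simplified stand-in, not the registry's real service API):

```python
from unittest.mock import Mock


# Simplified stand-in for the service under test; the real
# ServerService in registry/ has a different, richer API.
class ServerService:
    def __init__(self, store):
        self.store = store  # external dependency, mocked in tests

    def register_server(self, name: str) -> bool:
        if not name:
            return False
        self.store.save(name)
        return True


def test_register_server_saves_to_store():
    store = Mock()  # stands in for the real persistence layer
    service = ServerService(store)
    assert service.register_server("demo") is True
    store.save.assert_called_once_with("demo")


def test_register_server_rejects_empty_name():
    store = Mock()
    assert ServerService(store).register_server("") is False
    store.save.assert_not_called()
```

Because nothing touches a real database or network, tests like these stay well under the one-second budget.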
## Running Tests

The project includes convenient Make targets for running tests:
```bash
# Run all tests
make test

# Run only unit tests (fast)
make test-unit

# Run only integration tests
make test-integration

# Run E2E tests
make test-e2e

# Run with coverage report
make test-coverage

# Run and open HTML coverage report
make test-coverage-html
```

For more control, use pytest commands:
```bash
# Run all tests
uv run pytest

# Run tests with specific markers
uv run pytest -m unit              # Only unit tests
uv run pytest -m integration       # Only integration tests
uv run pytest -m "not slow"        # Skip slow tests

# Run tests in parallel (requires the pytest-xdist plugin)
uv run pytest -n auto              # Auto-detect CPU count

# Run with verbose output
uv run pytest -v

# Show print statements
uv run pytest -s

# Run specific tests by keyword
uv run pytest -k "server"          # All tests with "server" in name

# Stop on first failure
uv run pytest -x

# Run last failed tests
uv run pytest --lf

# Run failed tests first
uv run pytest --ff
```

Integration and E2E tests may require:
- **Authentication tokens**: Generate tokens before running:

  ```bash
  ./keycloak/setup/generate-agent-token.sh admin-bot
  ./keycloak/setup/generate-agent-token.sh lob1-bot
  ./keycloak/setup/generate-agent-token.sh lob2-bot
  ```

- **Running services**: Ensure Docker containers are running:

  ```bash
  docker-compose up -d
  ```

- **Environment variables**:

  ```bash
  export BASE_URL="http://localhost"
  export TOKEN_FILE=".oauth-tokens/admin-bot-token.json"
  ```
## Test Categories

Tests are organized using pytest markers:

- `@pytest.mark.unit` - Unit tests (fast, isolated)
- `@pytest.mark.integration` - Integration tests
- `@pytest.mark.e2e` - End-to-end tests
- `@pytest.mark.slow` - Slow tests (> 5 seconds)
- `@pytest.mark.auth` - Authentication/authorization tests
- `@pytest.mark.servers` - Server management tests
- `@pytest.mark.agents` - Agent-specific tests
- `@pytest.mark.search` - Search functionality tests
- `@pytest.mark.health` - Health monitoring tests
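pytest warns about (and with `--strict-markers`, rejects) marks that are not registered, so these custom markers would be declared under `[tool.pytest.ini_options]` alongside the coverage settings. A sketch of that registration; the marker names come from the list above, the descriptions here are illustrative:

```toml
[tool.pytest.ini_options]
markers = [
    "unit: fast, isolated unit tests",
    "integration: tests of component interactions",
    "e2e: end-to-end workflow tests",
    "slow: tests taking more than 5 seconds",
    "auth: authentication/authorization tests",
    "servers: server management tests",
    "agents: agent-specific tests",
    "search: search functionality tests",
    "health: health monitoring tests",
]
```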
```bash
# Run only unit tests
uv run pytest -m unit

# Run integration tests
uv run pytest -m integration

# Run E2E tests
uv run pytest -m e2e

# Skip slow tests
uv run pytest -m "not slow"

# Run auth and agent tests
uv run pytest -m "auth or agents"

# Run integration but not slow tests
uv run pytest -m "integration and not slow"
```

## Coverage Requirements

The project maintains 80% minimum code coverage.
```bash
# Run tests with coverage report
uv run pytest --cov=registry --cov-report=term-missing

# Generate HTML coverage report
uv run pytest --cov=registry --cov-report=html

# Open HTML report
open htmlcov/index.html      # macOS
xdg-open htmlcov/index.html  # Linux
```

Coverage settings are configured in `pyproject.toml`:
```toml
[tool.pytest.ini_options]
addopts = [
    "--cov=registry",
    "--cov-report=term-missing",
    "--cov-report=html",
    "--cov-fail-under=80",
]
```

Coverage includes:
- All source code in the `registry/` directory
- Excludes: tests, migrations, `__init__.py` files
- Reports missing lines for easy identification
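Exclusions like these are typically expressed through coverage.py's `omit` setting. A sketch, assuming the standard `[tool.coverage.run]` table; the exact patterns in this project's `pyproject.toml` may differ:

```toml
[tool.coverage.run]
source = ["registry"]
omit = [
    "tests/*",
    "*/migrations/*",
    "*/__init__.py",
]
```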
## CI/CD Integration

Tests run automatically in CI/CD pipelines on:
- Every pull request
- Every push to main branch
- Nightly scheduled runs
The project uses GitHub Actions for CI/CD. Test workflows are defined in:
```text
.github/workflows/
├── test.yml           # Main test workflow
├── coverage.yml       # Coverage reporting
└── integration.yml    # Integration test workflow
```
Install pre-commit hooks to run tests before commits:
```bash
# Install pre-commit
pip install pre-commit

# Install hooks
pre-commit install

# Run hooks manually
pre-commit run --all-files
```

## Troubleshooting

**Error**: `Token file not found: .oauth-tokens/admin-bot-token.json`
**Solution**: Generate authentication tokens:

```bash
./keycloak/setup/generate-agent-token.sh admin-bot
```

**Error**: `Cannot connect to gateway at http://localhost`
**Solution**: Start Docker containers:

```bash
docker-compose up -d
```

**Error**: `ModuleNotFoundError: No module named 'registry'`
**Solution**: Ensure you're using `uv run`:

```bash
uv run pytest   # Correct
pytest          # May fail if the environment is not activated
```

**Error**: `fixture 'some_fixture' not found`
**Solution**: Check that the fixture is defined in one of:

- `tests/conftest.py` (shared fixtures)
- The test file's own `conftest.py`
- A module imported from `tests/fixtures/`
**Issue**: Tests taking too long

**Solution**: Skip slow tests during development:

```bash
uv run pytest -m "not slow"
```

**Error**: `RuntimeError: Event loop is closed`
**Solution**: Check that async fixtures are properly defined, with the pytest-asyncio plugin installed (the `AsyncClient` here is assumed to be httpx's):

```python
import pytest_asyncio
from httpx import AsyncClient

@pytest_asyncio.fixture
async def async_client():
    async with AsyncClient() as client:
        yield client
```

**Error**: `FAIL Required test coverage of 80% not reached`
**Solution**: Add tests for uncovered code:

```bash
# Check which lines are missing
uv run pytest --cov=registry --cov-report=term-missing

# Generate detailed HTML report
uv run pytest --cov=registry --cov-report=html
open htmlcov/index.html
```

Run tests in debug mode for detailed output:
```bash
# Show print statements
uv run pytest -s

# Verbose output
uv run pytest -v

# Very verbose (shows fixtures)
uv run pytest -vv

# Show local variables on failure
uv run pytest -l

# Enter debugger on failure
uv run pytest --pdb
```

Enable logging output:
```bash
# Show all logs
uv run pytest --log-cli-level=DEBUG

# Show only INFO and above
uv run pytest --log-cli-level=INFO

# Log to file
uv run pytest --log-file=tests/reports/test.log
```

Related documentation:

- Writing Tests Guide - How to write effective tests
- Test Maintenance Guide - Maintaining test suite health
- Pytest Documentation - Official pytest docs
- Coverage.py Documentation - Coverage tool docs
If you encounter issues:
- Check this troubleshooting guide
- Review test output for error messages
- Check relevant documentation
- Ask in team chat or create an issue
Key commands to remember:
```bash
# Development workflow
make test-unit                  # Quick unit tests
make test-coverage              # Full test with coverage
uv run pytest -m "not slow"     # Skip slow tests

# Before committing
make test                       # Run all tests
pre-commit run --all-files      # Run all checks

# Debugging
uv run pytest -v -s             # Verbose with prints
uv run pytest --pdb             # Debug on failure
```