This comprehensive guide covers all aspects of testing the MCP Gateway, from unit tests to end-to-end integration testing.
- Quick Start
- Prerequisites
- Test Categories
- Running Tests
- Coverage Reports
- Writing Tests
- Test Environments
- Continuous Integration
- Troubleshooting
- Best Practices
## Quick Start

```bash
# Complete test suite with coverage
make doctest test htmlcov

# Quick smoke test
make smoketest

# Full quality check pipeline
make doctest test htmlcov smoketest lint-web flake8 bandit interrogate pylint verify
```

## Prerequisites

- Python 3.11+ (3.10 minimum)
- uv (recommended) or pip/virtualenv
- Docker/Podman (for container tests)
- Make (for automation)
- Node.js 18+ (for Playwright UI tests)
```bash
# Setup with uv (recommended)
make venv install-dev

# Alternative: traditional pip
python3 -m venv .venv
source .venv/bin/activate
pip install -e ".[dev,test]"
```

## Test Categories

### Unit Tests

Fast, isolated tests for individual components.
```bash
# Run all unit tests
make test

# Run specific module tests
pytest tests/unit/mcpgateway/test_config.py -v

# Run with coverage
pytest --cov=mcpgateway --cov-report=term-missing tests/unit/
```

### Integration Tests

Tests for API endpoints and service interactions.
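Conceptually, an integration test drives a live endpoint end to end rather than mocking it out. A stdlib-only sketch of that pattern (the real suite presumably uses the FastAPI test client; the handler below merely simulates a `/health` route):

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer


class HealthHandler(BaseHTTPRequestHandler):
    """Minimal stand-in for the gateway's /health endpoint."""

    def do_GET(self):
        body = json.dumps({"status": "healthy"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Keep test output quiet.
        pass


def test_health_endpoint():
    # Bind an ephemeral port so parallel test runs don't collide.
    server = HTTPServer(("127.0.0.1", 0), HealthHandler)
    port = server.server_address[1]
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        with urllib.request.urlopen(f"http://127.0.0.1:{port}/health") as resp:
            assert resp.status == 200
            assert json.loads(resp.read())["status"] == "healthy"
    finally:
        server.shutdown()
```

The same request/assert shape applies when the server is the gateway itself, started by a fixture instead of a thread.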
```bash
# Run integration tests
pytest tests/integration/ -v

# Test specific endpoints
pytest tests/integration/test_api.py::test_tools_endpoint -v
```

### End-to-End Tests

Complete workflow tests with real services.
```bash
# Run E2E tests
pytest tests/e2e/ -v

# Container-based smoke test
make smoketest
```

### Security Tests

Security validation and vulnerability testing.
```bash
# Security test suite
pytest tests/security/ -v

# Static security analysis
make bandit

# Dependency vulnerability scan
make security-scan
```

### UI Tests

Browser-based Admin UI testing with Playwright.
```bash
# Install Playwright browsers
make playwright-install

# Run UI tests
make test-ui           # With browser UI
make test-ui-headless  # Headless mode
make test-ui-debug     # Debug mode with inspector
make test-ui-parallel  # Parallel execution

# Generate test report
make test-ui-report
```

### Async Tests

Asynchronous operation and WebSocket testing.
```bash
# Run async tests (pytest-asyncio)
pytest tests/async/ -v --asyncio-mode=auto
```

### Fuzz Tests

Property-based and fuzz testing for robustness.
```bash
# Run fuzz tests
pytest tests/fuzz/ -v

# Show Hypothesis statistics
pytest tests/fuzz/ --hypothesis-show-statistics
```

### Migration Tests

Database migration and upgrade testing.
```bash
# Test migrations
pytest tests/migration/ -v

# Test specific migration
pytest tests/migration/test_v0_7_0_migration.py -v
```

## Running Tests
```bash
# Full test suite (recommended before commits)
make doctest test htmlcov smoketest

# Quick validation
make test

# With code quality checks
make test flake8 pylint
```
```bash
# Run all doctests
make doctest

# Verbose output
make doctest-verbose

# With coverage
make doctest-coverage

# Check docstring coverage
make interrogate
```
```bash
# Activate virtual environment first
source ~/.venv/mcpgateway/bin/activate

# Run tests matching a pattern
pytest -k "test_auth" -v

# Run tests with specific markers
pytest -m "asyncio" -v
pytest -m "not slow" -v

# Run failed tests from last run
pytest --lf -v

# Run tests in parallel (pytest-xdist)
pytest -n auto tests/unit/
```
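The `-m` filters above select tests by marker; markers are applied in test code and should be registered so pytest does not warn about unknown names. A minimal sketch (the `slow` marker name and test scenario are illustrative):

```python
import pytest

# Register the marker, e.g. in pyproject.toml, to silence
# "unknown marker" warnings:
#   [tool.pytest.ini_options]
#   markers = ["slow: long-running tests"]


@pytest.mark.slow
def test_bulk_tool_registration():
    """Deselected by `pytest -m "not slow"`."""
    # Stand-in for an expensive scenario.
    assert sum(range(1_000)) == 499_500
```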
```bash
# Test a specific file with coverage
source ~/.venv/mcpgateway/bin/activate
pytest --cov-report=annotate tests/unit/mcpgateway/test_translate.py

# Test with detailed output
pytest -vvs tests/unit/mcpgateway/services/test_gateway_service.py

# Test specific class or method
pytest tests/unit/mcpgateway/test_config.py::TestSettings -v
pytest tests/unit/mcpgateway/test_auth.py::test_jwt_creation -v
```

## Coverage Reports
```bash
# Generate HTML coverage report
make htmlcov

# View report (opens in browser)
open docs/docs/coverage/index.html      # macOS
xdg-open docs/docs/coverage/index.html  # Linux
```
```bash
# Simple coverage summary
make coverage

# Detailed line-by-line coverage
pytest --cov=mcpgateway --cov-report=term-missing tests/

# Coverage for specific modules
pytest --cov=mcpgateway.services --cov-report=term tests/unit/mcpgateway/services/
```
```bash
# Enforce minimum coverage (fails if below 80%)
pytest --cov=mcpgateway --cov-fail-under=80 tests/

# Inspect missing lines and regenerate the HTML report
coverage report --show-missing
coverage html --directory=htmlcov
```

## Writing Tests
```python
# tests/unit/mcpgateway/test_example.py
import pytest
from unittest.mock import Mock, patch

from mcpgateway.services import ExampleService


class TestExampleService:
    """Test suite for ExampleService."""

    @pytest.fixture
    def service(self, db_session):
        """Create service instance with mocked dependencies."""
        return ExampleService(db=db_session)

    def test_basic_operation(self, service):
        """Test basic service operation."""
        result = service.do_something("test")
        assert result.status == "success"

    @pytest.mark.asyncio
    async def test_async_operation(self, service):
        """Test async service operation."""
        result = await service.async_operation()
        assert result is not None

    @patch("mcpgateway.services.external_api")
    def test_with_mock(self, mock_api, service):
        """Test with mocked external dependency."""
        mock_api.return_value = {"status": "ok"}
        result = service.call_external()
        mock_api.assert_called_once()
```
```python
# Common fixtures are imported automatically from conftest.py
def test_with_database(db_session):
    """Test using the database session fixture."""
    from mcpgateway.common.models import Tool

    tool = Tool(name="test_tool")
    db_session.add(tool)
    db_session.commit()
    assert tool.id is not None


def test_with_client(test_client):
    """Test using the FastAPI test client."""
    response = test_client.get("/health")
    assert response.status_code == 200
```
```python
import pytest


@pytest.mark.asyncio
async def test_websocket_connection():
    """Test WebSocket connection handling."""
    from mcpgateway.transports import WebSocketTransport

    transport = WebSocketTransport()
    async with transport.connect("ws://localhost:4444/ws") as conn:
        await conn.send_json({"method": "ping"})
        response = await conn.receive_json()
        assert response["result"] == "pong"
```
```python
from hypothesis import given, strategies as st


@given(st.text(min_size=1, max_size=255))
def test_name_validation(name):
    """Test name validation with random inputs."""
    from mcpgateway.validation import validate_name

    if validate_name(name):
        assert len(name) <= 255
        assert not name.startswith(" ")
```

## Test Environments
```bash
# SQLite (default)
make test

# PostgreSQL
DATABASE_URL=postgresql://user:pass@localhost/test_mcp make test

# MySQL/MariaDB
DATABASE_URL=mysql+pymysql://user:pass@localhost/test_mcp make test
```
```bash
# Test with production settings
ENVIRONMENT=production AUTH_REQUIRED=true make test

# Test with Redis caching
CACHE_TYPE=redis REDIS_URL=redis://localhost:6379 make test

# Test with federation enabled
FEDERATION_ENABLED=true FEDERATION_PEERS='["http://peer1:4444"]' make test
```
```bash
# Using hey (HTTP load generator)
make test-hey

# Custom load test
hey -n 1000 -c 10 -H "Authorization: Bearer $TOKEN" http://localhost:4444/health
```
```bash
# Run tests with profiling (requires pytest-profiling)
pytest --profile tests/unit/

# Generate a profile report
python -m cProfile -o profile.stats $(which pytest) tests/
python -m pstats profile.stats
```

## Continuous Integration

Tests run automatically on:
- Pull requests
- Push to main branch
- Nightly schedule
```yaml
# .github/workflows/test.yml example
- name: Run test suite
  run: |
    make venv install-dev
    make doctest test htmlcov
    make smoketest
```
```bash
# Install pre-commit hooks
make pre-commit-install

# Run manually
make pre-commit

# Skip hooks (emergency only)
git commit --no-verify
```

## Troubleshooting
```bash
# Maximum verbosity
pytest -vvs tests/unit/

# Show print statements
pytest -s tests/unit/

# Show local variables on failure
pytest -l tests/unit/
```
```python
# Add a breakpoint in a test
def test_complex_logic():
    result = complex_function()
    import pdb; pdb.set_trace()  # Debugger breakpoint
    assert result == expected
```
```bash
# Run with pdb on failure
pytest --pdb tests/unit/

# Run with ipdb (if installed)
pytest --pdbcls=IPython.terminal.debugger:TerminalPdb tests/unit/
```
```bash
# Capture logs during tests
pytest --log-cli-level=DEBUG tests/unit/

# Save logs to file
pytest --log-file=test.log --log-file-level=DEBUG tests/unit/
```

### Import Errors
```bash
# Ensure the package is installed in editable mode
pip install -e .

# Verify the Python path
python -c "import sys; print(sys.path)"
```

### Database Issues
```bash
# Reset the test database
rm -f test_mcp.db
alembic upgrade head

# Use an in-memory database for tests
DATABASE_URL=sqlite:///:memory: pytest tests/unit/
```

### Async Test Issues
```bash
# Install async test dependencies
pip install pytest-asyncio pytest-aiohttp

# Use the proper event loop scope
pytest --asyncio-mode=auto tests/async/
```

### Coverage Issues
```bash
# Clear stale coverage data
coverage erase

# Regenerate coverage
make htmlcov
```

### Playwright Issues
```bash
# Reinstall browsers
npx playwright install --with-deps

# Use a specific browser
BROWSER=firefox make test-ui
```

### Flaky Tests
```bash
# Run tests in random order to detect hidden dependencies (pytest-random-order)
pytest --random-order tests/unit/

# Run each test in a subprocess (pytest-forked)
pytest --forked tests/unit/

# Clear cache between runs
pytest --cache-clear tests/
```

## Best Practices

- Keep tests fast: unit tests should run in under 1 second
- Use fixtures: Leverage conftest.py for common setup
- Mock external dependencies: Don't rely on network services
- Test edge cases: Include boundary and error conditions
- Maintain test coverage: Aim for > 80% coverage
- Write descriptive test names: `test_auth_fails_with_invalid_token`
- Group related tests: Use test classes for organization
- Clean up resources: Use fixtures with proper teardown
- Document complex tests: Add docstrings explaining the test purpose
- Run tests before committing: Use pre-commit hooks
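Several of these practices fit in one short test: a descriptive name, a docstring, and parametrized edge cases. A sketch (the `normalize_name` helper is hypothetical, standing in for project code under test):

```python
import pytest


def normalize_name(name: str) -> str:
    """Hypothetical helper standing in for the code under test."""
    return name.strip().lower()


@pytest.mark.parametrize(
    ("raw", "expected"),
    [
        ("Tool", "tool"),          # typical input
        ("  Padded  ", "padded"),  # boundary: surrounding whitespace
        ("", ""),                  # edge case: empty string
    ],
)
def test_normalize_name_handles_edge_cases(raw, expected):
    """Names are trimmed and lowercased, including empty input."""
    assert normalize_name(raw) == expected
```

Each tuple runs as a separate test case, so a failing boundary condition is reported individually.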