A comprehensive validation tool for Model Context Protocol (MCP) servers to ensure protocol compliance, security, and proper implementation.
This tool validates MCP servers by:
- Protocol Compliance: Tests the complete MCP initialization handshake
- Standard Conformance: Validates JSON-RPC 2.0 format and required fields
- Capability Testing: Verifies advertised capabilities (resources, tools, prompts)
- Security Analysis: Integrates with mcp-scan for vulnerability detection
- Registry Validation: Ensures servers match their registry schema definitions
- Detailed Reporting: Exports comprehensive JSON reports with validation checklists
- Automated Testing: Provides programmatic validation for CI/CD pipelines
- ✅ Protocol Validation: Complete MCP handshake and capability testing
- ✅ Security Scanning: Integrated mcp-scan vulnerability analysis
- ✅ JSON Reports: Comprehensive validation reports with linked security scans
- ✅ Step-by-Step Logging: Real-time validation progress with detailed feedback
- ✅ Tool Discovery: Lists all available tools, prompts, and resources
- ✅ Environment Variables: Configurable environment setup
- ✅ Timeout Handling: Configurable validation timeouts
- ✅ Exit Codes: Proper exit codes for automation
- ✅ Verbose Mode: Optional detailed output
```bash
# Clone and install
git clone https://github.com/modelcontextprotocol/mcp-validation
cd mcp-validation
uv sync
```

Or install directly:

```bash
pip install mcp-validation
```
```bash
# Validate a Python MCP server
mcp-validate python server.py

# Validate a Node.js MCP server
mcp-validate node server.js

# Validate npx packages (use -- separator for flags)
mcp-validate -- npx -y kubernetes-mcp-server@latest

# Validate servers via container runtime (podman/docker)
mcp-validate -- podman run -i --rm hashicorp/terraform-mcp-server

# IoTDB MCP server example
mcp-validate \
  --env IOTDB_HOST=127.0.0.1 \
  --env IOTDB_PORT=6667 \
  --env IOTDB_USER=root \
  --env IOTDB_PASSWORD=root \
  python src/iotdb_mcp_server/server.py
```

```bash
# Generate comprehensive JSON report
mcp-validate --json-report validation-report.json python server.py

# With security analysis and custom timeout
mcp-validate \
  --timeout 60 \
  --json-report full-report.json \
  --env API_KEY=secret \
  -- npx -y some-mcp-server@latest

# Skip mcp-scan for faster validation
mcp-validate --skip-mcp-scan python server.py

# Full validation with security scan
mcp-validate --timeout 120 --json-report report.json python server.py
```
```python
import asyncio

from mcp_validation import validate_mcp_server_command


async def test_server():
    result = await validate_mcp_server_command(
        command_args=["python", "server.py"],
        env_vars={"API_KEY": "secret"},
        timeout=30.0,
        use_mcp_scan=True,
    )

    if result.is_valid:
        print("✅ Server is MCP compliant!")
        print(f"Tools: {result.tools}")
        print(f"Capabilities: {list(result.capabilities.keys())}")
        if result.mcp_scan_results:
            print(f"Security scan: {result.mcp_scan_file}")
    else:
        print("❌ Validation failed:")
        for error in result.errors:
            print(f"  - {error}")


asyncio.run(test_server())
```
| Option | Description | Example |
|---|---|---|
| `command` | Command and arguments to run the MCP server | `python server.py` |
| `--env KEY=VALUE` | Set environment variables (repeatable) | `--env HOST=localhost` |
| `--timeout SECONDS` | Validation timeout in seconds (default: 30) | `--timeout 60` |
| `--verbose` | Show detailed output including warnings | `--verbose` |
| `--skip-mcp-scan` | Skip mcp-scan security analysis | `--skip-mcp-scan` |
| `--json-report FILE` | Export detailed JSON report to file | `--json-report report.json` |
The tool performs these validation steps:
- Process Execution: Starts the server with provided arguments and environment
- Initialize Handshake: Sends MCP `initialize` request with protocol version
- Protocol Compliance: Validates JSON-RPC 2.0 format and required response fields
- Capability Discovery: Tests advertised capabilities (resources, tools, prompts)
- Security Analysis: Runs mcp-scan vulnerability detection (optional)
- Report Generation: Creates detailed JSON reports with validation checklist
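As a rough illustration of what the protocol-compliance step checks, here is a minimal sketch (not the validator's actual code) that builds an `initialize` request and flags a response that is missing required JSON-RPC 2.0 fields. The protocol version string and field names follow the MCP specification:

```python
import json

# Minimal JSON-RPC 2.0 "initialize" request, as sent in the handshake step.
initialize_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "mcp-validate", "version": "0.1.0"},
    },
}


def check_initialize_response(raw: str) -> list[str]:
    """Return a list of compliance errors for an initialize response."""
    errors = []
    msg = json.loads(raw)
    if msg.get("jsonrpc") != "2.0":
        errors.append("missing or wrong 'jsonrpc' version")
    if msg.get("id") != initialize_request["id"]:
        errors.append("response id does not match request id")
    result = msg.get("result", {})
    for field in ("protocolVersion", "capabilities", "serverInfo"):
        if field not in result:
            errors.append(f"result missing required field '{field}'")
    return errors


# A well-formed response produces no errors.
ok = '{"jsonrpc": "2.0", "id": 1, "result": {"protocolVersion": "2024-11-05", "capabilities": {"tools": {}}, "serverInfo": {"name": "demo", "version": "0.1"}}}'
print(check_initialize_response(ok))  # []
```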
```
Testing MCP server: npx -y kubernetes-mcp-server@latest

🔍 Step 1: Sending initialize request...
✅ Initialize request successful
🔍 Step 2: Sending initialized notification...
✅ Initialized notification sent
🔍 Step 3: Testing capabilities...
  🔍 Testing tools...
  ✅ Found 18 tools
  📋 Names: configuration_view, events_list, helm_install, helm_list, helm_uninstall (and 13 more)
  🔍 Testing prompts...
  ✅ Found 0 prompts
  🔍 Testing resources...
  ✅ Found 0 resources
✅ Capability testing complete
🔍 Step 4: Running mcp-scan security analysis...
  🔍 Running: uvx mcp-scan@latest --json...
  📋 Scanned 18 tools
  ✅ No security issues detected
  💾 Scan results saved to: mcp-scan-results_20250730_120203.json
✅ mcp-scan analysis complete

✅ Valid: True
⏱ Execution time: 10.49s
📥 Server: kubernetes-mcp-server vv0.0.46
🔧 Capabilities: logging, prompts, resources, tools
🔨 Tools (18): configuration_view, events_list, helm_install, helm_list, helm_uninstall, namespaces_list, pods_delete, pods_exec, pods_get, pods_list, pods_list_in_namespace, pods_log, pods_run, pods_top, resources_create_or_update, resources_delete, resources_get, resources_list
🔒 Security Scan: No issues found in 18 tools
📄 JSON report saved to: validation-report.json
```
The `--json-report` option generates comprehensive validation reports:
```json
{
  "report_metadata": {
    "generated_at": "2025-07-30T12:02:03.456789",
    "validator_version": "0.1.0",
    "command": "npx -y kubernetes-mcp-server@latest",
    "environment_variables": {}
  },
  "validation_summary": {
    "is_valid": true,
    "execution_time_seconds": 10.49,
    "total_errors": 0,
    "total_warnings": 0
  },
  "validation_checklist": {
    "protocol_validation": {
      "initialize_request": {"status": "passed", "details": "..."},
      "initialize_response": {"status": "passed", "details": "..."},
      "protocol_version": {"status": "passed", "details": "..."}
    },
    "capability_testing": {
      "tools_capability": {"status": "passed", "details": "..."},
      "resources_capability": {"status": "skipped", "details": "..."}
    },
    "security_analysis": {
      "mcp_scan_execution": {"status": "passed", "details": "..."}
    }
  },
  "server_information": {
    "server_info": {"name": "kubernetes-mcp-server", "version": "v0.0.46"},
    "capabilities": {"logging": {}, "tools": {"listChanged": true}},
    "discovered_items": {
      "tools": {"count": 18, "names": ["configuration_view", "..."]}
    }
  },
  "security_analysis": {
    "mcp_scan_executed": true,
    "mcp_scan_file": "mcp-scan-results_20250730_120203.json",
    "summary": {
      "tools_scanned": 18,
      "vulnerabilities_found": 0,
      "vulnerability_types": [],
      "risk_levels": []
    }
  },
  "issues": {
    "errors": [],
    "warnings": []
  }
}
```
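Because the report is plain JSON, it is straightforward to consume downstream. Here is a hypothetical CI helper (not part of the package); the field names are taken from the example report above:

```python
# Hypothetical CI helper: inspect a parsed --json-report document and
# fail loudly when validation did not pass.
def summarize_report(report: dict) -> str:
    summary = report["validation_summary"]
    if not summary["is_valid"]:
        errors = report["issues"]["errors"]
        raise SystemExit(f"MCP validation failed with {len(errors)} error(s)")
    tools = report["server_information"]["discovered_items"]["tools"]["count"]
    return f"valid in {summary['execution_time_seconds']}s, {tools} tools discovered"


# Usage:
# import json
# with open("validation-report.json") as f:
#     print(summarize_report(json.load(f)))
```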
- `0`: Server is MCP compliant
- `1`: Validation failed or server is non-compliant
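These exit codes make the validator easy to script. A sketch of gating a deploy step from Python; `server_is_compliant` is an illustrative helper, not part of the package:

```python
import subprocess


def server_is_compliant(command: list[str], validator: str = "mcp-validate") -> bool:
    """Run the validator and gate on its exit code (0 = compliant)."""
    proc = subprocess.run([validator, "--"] + command)
    return proc.returncode == 0


# Usage in a deploy script:
# if not server_is_compliant(["python", "server.py"]):
#     raise SystemExit("refusing to deploy a non-compliant server")
```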
For servers listed in the MCP Registry, this tool can validate:
- Package installation requirements
- Environment variable specifications
- Argument format compliance
- Protocol implementation correctness
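As an illustration of the environment-variable check, here is a sketch against a hypothetical registry entry shape (the actual registry schema may differ):

```python
# Verify that every environment variable a registry entry declares as
# required is actually provided before launching the server.
def missing_env_vars(registry_env_spec: list[dict], env: dict) -> list[str]:
    return [
        spec["name"]
        for spec in registry_env_spec
        if spec.get("required", False) and spec["name"] not in env
    ]


spec = [
    {"name": "IOTDB_HOST", "required": True},
    {"name": "IOTDB_PORT", "required": True},
    {"name": "IOTDB_DATABASE", "required": False},
]
print(missing_env_vars(spec, {"IOTDB_HOST": "127.0.0.1"}))  # ['IOTDB_PORT']
```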
This project uses uv for dependency management and development workflows.
```bash
# Install uv (if not already installed)
curl -LsSf https://astral.sh/uv/install.sh | sh

# Clone the repository
git clone https://github.com/modelcontextprotocol/mcp-validation
cd mcp-validation
```
For convenience, this project includes a Makefile with common development tasks:
```bash
# Setup development environment
make install

# Run the full pre-commit workflow (format, lint, test)
make pre-commit

# Run tests
make test

# Format code
make format

# See all available commands
make help
```

```bash
# Install all dependencies including dev extras
uv sync --extra dev

# Alternatively, install the package in development mode
uv pip install -e ".[dev]"
```
| Command | Description |
|---|---|
| `make help` | Show all available commands |
| `make install` | Install dependencies with dev extras |
| `make dev-setup` | Complete development environment setup |
| `make test` | Run all tests (excluding partner repos) |
| `make test-cov` | Run tests with coverage report |
| `make test-fast` | Run tests with fail-fast (`-x` flag) |
| `make debug-test` | Run tests with debug output and registry logging |
| `make format` | Format code with Black |
| `make check` | Check formatting without making changes |
| `make lint` | Check code with Ruff (no fixes) |
| `make lint-fix` | Check and fix code issues with Ruff |
| `make pre-commit` | Run full pre-commit workflow (format, lint, test) |
| `make ci` | Run CI-like checks (no automatic fixes) |
| `make clean` | Clean up cache and temporary files |
```bash
# Run all tests
make test
# OR manually:
uv run --extra dev pytest tests/ -v

# Run tests with coverage
make test-cov
# OR manually:
uv run --extra dev pytest tests/ --cov=mcp_validation --cov-report=term-missing

# Run specific test file
uv run --extra dev pytest tests/test_enhanced_registry.py -v

# Run tests and stop on first failure
make test-fast
# OR manually:
uv run --extra dev pytest tests/ -x
```
```bash
# Format code with Black
make format
# OR manually:
uv run --extra dev black mcp_validation/

# Check code formatting (without making changes)
make check
# OR manually:
uv run --extra dev black --check mcp_validation/

# Lint with Ruff (with fixes)
make lint-fix
# OR manually:
uv run --extra dev ruff check --fix mcp_validation/

# Lint with Ruff (check only)
make lint
# OR manually:
uv run --extra dev ruff check mcp_validation/

# Type checking with mypy
uv run --extra dev mypy mcp_validation/
```

```bash
# Pre-commit workflow (format, lint, test)
make pre-commit

# CI-style checks (no automatic fixes)
make ci

# Manual pre-commit workflow
uv run --extra dev black mcp_validation/ && \
uv run --extra dev ruff check --fix mcp_validation/ && \
uv run --extra dev pytest tests/ -v
```
- Testing: All new features must include tests
- Code Style: Use Black for formatting and Ruff for linting
- Type Hints: Add type hints for all public APIs
- Documentation: Update README and docstrings for new features
The project uses pytest with the following configuration in `pyproject.toml`:

- Test Discovery: Looks for tests in the `tests/` directory
- Async Support: Configured for async/await testing
- Exclusions: Automatically excludes partner repositories and build directories
- Markers: Strict marker checking enabled
```bash
# Run tests with debug output and registry logging
make debug-test

# Run tests with verbose output and debug information
uv run --extra dev pytest -v -s

# Run specific test with debugging
uv run --extra dev pytest tests/test_enhanced_registry.py::test_enhanced_registry_validator -v -s

# Run registry tests with debug output
mcp-validate --debug -- npm test
```
The tool provides comprehensive debug output to track server execution progress:
```bash
# Enable debug output for detailed execution tracking
mcp-validate --debug -- python server.py
```
Debug output includes:
- Execution Context: Working directory, Python version, platform, user, shell
- Command Details: Full command, arguments, executable path
- Environment Variables: Custom variables (with sensitive value masking)
- Process Information: PID, process lifecycle events
- Validator Progress: Individual validator execution with timing and results
- Validation Summary: Overall statistics and execution time
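The sensitive-value masking can be pictured as keeping only the first and last two characters of a value, as in the `API_KEY=ab*****ef` line of the sample output; the exact rule below is an assumption, not the tool's verified implementation:

```python
# Illustrative secret masking: keep the first and last two characters,
# star everything in between; very short values are fully starred.
def mask_secret(value: str) -> str:
    if len(value) <= 4:
        return "*" * len(value)
    return value[:2] + "*" * (len(value) - 4) + value[-2:]


print(mask_secret("abcdef1234"))  # ab******34
```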
Example debug output:
```
[10:19:29.872] [EXEC-INFO] 🚀 Starting MCP Server Process
[10:19:29.872] [EXEC-INFO] 📁 Working Directory: /path/to/project
[10:19:29.872] [EXEC-INFO] 🐍 Python: /usr/bin/python3 (v3.11.0)
[10:19:29.872] [EXEC-INFO] 🔧 Command: npx @dynatrace-oss/dynatrace-mcp-server
[10:19:29.872] [EXEC-INFO] 🌍 Environment Variables:
[10:19:29.872] [EXEC-INFO]   API_KEY=ab*****ef
[10:19:29.877] [VALIDATOR-INFO] 🔍 [registry] STARTING: (1/6)
[10:19:30.727] [VALIDATOR-INFO] ✅ [registry] PASSED: Time: 0.85s
```
```bash
# Apache IoTDB MCP Server from registry
mcp-validate \
  --env IOTDB_HOST=127.0.0.1 \
  --env IOTDB_PORT=6667 \
  --env IOTDB_USER=root \
  --env IOTDB_PASSWORD=root \
  --env IOTDB_DATABASE=test \
  --env IOTDB_SQL_DIALECT=table \
  python src/iotdb_mcp_server/server.py
```
```yaml
# GitHub Actions example
- name: Validate MCP Server
  run: |
    mcp-validate --json-report validation-report.json python server.py
  env:
    DATABASE_URL: sqlite:///test.db

- name: Upload validation report
  uses: actions/upload-artifact@v3
  if: always()
  with:
    name: mcp-validation-report
    path: |
      validation-report.json
      mcp-scan-results_*.json
```
The tool integrates with mcp-scan for comprehensive security analysis:
- Automatic Detection: Checks for `uvx` or `mcp-scan` availability
- Vulnerability Scanning: Analyzes tools for potential security issues
- Separate Reports: Security results saved to timestamped JSON files
- Linked Reports: Main validation report references security scan files
- Skip Option: Use `--skip-mcp-scan` for faster validation without security analysis
Contributions are welcome! Please see our contributing guidelines for details.
MIT License - see LICENSE file for details.
- MCP Specification
- MCP Registry
- mcp-scan - Security vulnerability scanner for MCP servers
- MCP Python SDK
- MCP TypeScript SDK