Thank you for your interest in contributing. This document covers how to set up your development environment, coding standards, and the pull request process.
- Development Setup
- Project Structure
- Running Tests
- Coding Standards
- Adding a New MCP Tool
- Submitting a Pull Request
- Reporting Issues
## Development Setup

### Prerequisites

- Python 3.10 or newer
- Git
- (Optional) An Azure OpenAI API key to test LLM analysis tools
```bash
# 1. Fork and clone the repo
git clone https://github.com/<your-username>/LISA_MCP_Server.git
cd LISA_MCP_Server

# 2. Create a virtual environment
python3 -m venv .venv
source .venv/bin/activate  # Windows: .venv\Scripts\activate

# 3. Install the package in editable mode with dev dependencies
pip install -e ".[dev]"

# 4. Verify the server starts
lisa-mcp --help
```

The `[dev]` extra installs `ruff`, `mypy`, `pytest`, and `pytest-asyncio`.
## Project Structure

```
lisa_mcp/
├── server.py                ← FastMCP server — all tool/resource/prompt definitions
├── models.py                ← Pydantic v2 data models (no business logic)
└── tools/
    ├── test_discovery.py    ← AST-based LISA repo scanner
    ├── test_generator.py    ← Python + YAML code generation
    ├── runbook_builder.py   ← Runbook read/write/validate
    ├── test_runner.py       ← lisa CLI subprocess wrapper
    ├── result_parser.py     ← JUnit XML + console output parser
    ├── log_collector.py     ← Memory-safe log tail and error extraction
    ├── llm_analyzer.py      ← Azure OpenAI API calls (structured tool_use)
    └── report_generator.py  ← HTML + Markdown report generation
```

Rule of thumb: business logic lives in `tools/`; `server.py` only wires tools to MCP and handles JSON serialization; `models.py` holds pure data structures.
## Running Tests

```bash
# Run all tests
pytest

# Run with verbose output
pytest -v

# Run a specific module
pytest tests/test_result_parser.py

# Run with coverage
pytest --cov=lisa_mcp --cov-report=term-missing
```

Tests live in `tests/`. Each module in `tools/` should have a corresponding `tests/test_<module>.py`.
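As an illustration of the expected test layout, a module like `tools/result_parser.py` would be exercised by plain `pytest` functions in `tests/test_result_parser.py`. This is a hedged sketch: `parse_status` is a made-up helper used only to show the shape of a test, not a real function in this repo.

```python
# tests/test_result_parser.py (illustrative layout only)


def parse_status(line: str) -> str:
    """Toy stand-in for a result_parser helper: extract and normalize a status."""
    return line.rsplit(":", 1)[-1].strip().upper()


def test_parse_status_normalizes_case():
    assert parse_status("smoke_test: passed") == "PASSED"


def test_parse_status_handles_missing_colon():
    assert parse_status("FAILED") == "FAILED"
```

In the real test file you would import the helper from `lisa_mcp.tools.result_parser` instead of defining it inline.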
## Coding Standards

### Linting and Formatting

We use Ruff for linting and formatting.

```bash
# Check for issues
ruff check lisa_mcp/

# Auto-fix safe issues
ruff check --fix lisa_mcp/

# Format code
ruff format lisa_mcp/
```

The configuration is in `pyproject.toml` (`line-length = 100`, `target-version = "py310"`).
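The corresponding `pyproject.toml` fragment might look like this. Only the two settings named above are grounded in this document; the `[tool.ruff]` table name follows Ruff's standard configuration layout.

```toml
[tool.ruff]
line-length = 100
target-version = "py310"
```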
### Type Checking

```bash
mypy lisa_mcp/
```

All public functions should have type annotations. Use `from __future__ import annotations` at the top of every file.
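For instance, a fully annotated module following these rules might look like the sketch below (`summarize` is a hypothetical function, not part of this codebase; the future import lets annotations use modern syntax even on older interpreters).

```python
from __future__ import annotations


def summarize(values: list[int] | None = None) -> dict[str, int]:
    """Return basic statistics for a list of integers (illustrative example)."""
    values = values or []
    return {"count": len(values), "total": sum(values)}
```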
### Style Rules

- Docstrings: every public function and class needs a docstring. MCP tool docstrings are user-visible — write them clearly.
- No magic numbers: define constants at the module level with a descriptive name.
- Error handling: catch specific exceptions rather than bare `except Exception` where possible. Always return a JSON error payload from MCP tools rather than raising.
- No print statements: use Python's `logging` module or the MCP server's logging facilities.
- File I/O: never load an entire log file into memory. Use seek-based reading (see `log_collector._tail_read_lines`).
## Adding a New MCP Tool

- Implement the logic in the appropriate `lisa_mcp/tools/*.py` module (or create a new one if it doesn't fit).
- Add data models to `lisa_mcp/models.py` if the tool returns structured data.
- Register the tool in `lisa_mcp/server.py` with the `@mcp.tool()` decorator:

  ```python
  @mcp.tool()
  def my_new_tool(param_a: str, param_b: int = 10) -> str:
      """
      One-line description shown to the MCP client.

      Parameters
      ----------
      param_a : Explanation of param_a.
      param_b : Explanation of param_b (default 10).

      Returns JSON with ...
      """
      try:
          result = do_the_work(param_a, param_b)
          return json.dumps(result, indent=2)
      except Exception as exc:
          return json.dumps({"error": str(exc), "type": type(exc).__name__}, indent=2)
  ```

- Write tests in `tests/test_<module>.py`.
- Document the tool in `docs/tools-reference.md` following the existing format.
- Update the tool count in `README.md` if you've added a new tool.
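Because every tool returns a JSON string (a result on success, an error payload on failure), its tests can call the function directly and assert on the parsed output. A hedged sketch, using a self-contained stand-in that mirrors the registration example above (`my_new_tool` here is defined inline so the test runs on its own; in a real test you would import the function from `lisa_mcp.server` or the relevant `tools/` module):

```python
import json


def my_new_tool(param_a: str, param_b: int = 10) -> str:
    """Inline stand-in mirroring the example tool above."""
    try:
        if not param_a:
            raise ValueError("param_a must be non-empty")
        return json.dumps({"param_a": param_a, "param_b": param_b}, indent=2)
    except Exception as exc:
        return json.dumps({"error": str(exc), "type": type(exc).__name__}, indent=2)


def test_my_new_tool_returns_json():
    # Success path: the tool returns a JSON document, never raises.
    payload = json.loads(my_new_tool("foo"))
    assert payload == {"param_a": "foo", "param_b": 10}


def test_my_new_tool_error_payload():
    # Failure path: errors come back as a structured JSON payload.
    payload = json.loads(my_new_tool(""))
    assert payload["type"] == "ValueError"
```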
## Submitting a Pull Request

- Create a branch from `main`:

  ```bash
  git checkout -b feat/my-feature
  ```

- Make your changes following the coding standards above.
- Run checks before pushing:

  ```bash
  ruff check lisa_mcp/
  mypy lisa_mcp/
  pytest
  ```

- Update `CHANGELOG.md` — add your changes under the `[Unreleased]` section.
- Push and open a PR:

  ```bash
  git push -u origin feat/my-feature
  ```

  Then open a pull request on GitHub. The PR description should explain:

  - What the change does
  - Why it is needed
  - How it was tested

### PR Checklist

- `ruff check` passes with no errors
- `mypy` passes with no errors
- New tests added for new functionality
- `CHANGELOG.md` updated under `[Unreleased]`
- `docs/tools-reference.md` updated if a tool was added/changed
- `README.md` tool count updated if applicable
## Reporting Issues

Please use GitHub Issues to report bugs or request features. Include:
- Your OS and Python version
- The exact tool call or command that failed
- The full error message or stack trace
- Steps to reproduce
For security vulnerabilities, please do not open a public issue — contact the maintainer directly.