# Contributing to SCAK

Thank you for your interest in contributing to the Self-Correcting Agent Kernel (SCAK)! This document provides guidelines for contributions.

## Table of Contents
- Code of Conduct
- Getting Started
- Development Setup
- Coding Standards
- Testing
- Pull Request Process
- Research Contributions
- Documentation
## Code of Conduct

We are committed to providing a welcoming and inclusive environment. Please:
- Be respectful and considerate
- Focus on constructive feedback
- Welcome newcomers and help them learn
- No harassment, discrimination, or trolling
## Getting Started

We welcome:
- Bug Fixes: Fix existing issues in the codebase
- Feature Enhancements: Improve existing features
- New Features: Add new capabilities (discuss first in an issue)
- Documentation: Improve README, wiki, or code comments
- Tests: Add or improve test coverage
- Research: Add benchmarks, datasets, or experiments
- Performance: Optimize latency or resource usage
Before you start:

- Check existing issues: Look for related issues or PRs
- Open a discussion: For large changes, create an issue first
- Read the architecture: Understand the self-correction design (see `wiki/`)
- Review coding standards: See below
## Development Setup

Prerequisites:

- Python 3.8+ (recommended: 3.10)
- Git
- Virtual environment tool (venv, conda, etc.)
```bash
# Clone the repository
git clone https://github.com/imran-siddique/self-correcting-agent-kernel.git
cd self-correcting-agent-kernel

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -e ".[dev]"  # Includes testing and development tools
```

Run the test suite:

```bash
# Run all tests
pytest tests/ -v

# Run specific test file
pytest tests/test_triage.py -v

# Run with coverage
pytest tests/ --cov=src --cov-report=html
```

Check types and style:

```bash
# Run type checking (if mypy is installed)
mypy src/

# Run linting (if flake8 is installed)
flake8 src/ --max-line-length=120
```

## Coding Standards

We follow Partner-level coding standards (see `.github/copilot-instructions.md`).
- **Type Safety**: All functions must have type hints

  ```python
  def compute_score(value: float, threshold: float = 0.5) -> bool:
      return value >= threshold
  ```
- **Async-First**: All I/O operations must be async

  ```python
  async def call_llm(prompt: str) -> str:
      return await llm_client.generate(prompt)
  ```
- **No Silent Failures**: Every `try/except` must emit telemetry

  ```python
  try:
      result = risky_operation()
  except Exception as e:
      telemetry.emit_failure_detected(
          agent_id=agent_id,
          error_message=str(e)
      )
      raise
  ```
- **Pydantic Models**: Use Pydantic for data exchange

  ```python
  from pydantic import BaseModel

  class PatchRequest(BaseModel):
      agent_id: str
      patch_content: str
      patch_type: str
  ```
- **Structured Telemetry**: JSON logs, not print statements

  ```python
  telemetry.emit_patch_applied(
      agent_id=agent_id,
      patch_id=patch.patch_id
  )
  ```
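To illustrate the JSON-logs rule, here is a minimal sketch of what a telemetry emitter could do under the hood; `emit_event` and the `scak.telemetry` logger name are hypothetical illustrations, not part of the actual SCAK API:

```python
import json
import logging

logger = logging.getLogger("scak.telemetry")

def emit_event(event_type: str, **fields: object) -> str:
    """Serialize a telemetry event as one machine-parseable JSON log line."""
    record = {"event": event_type, **fields}
    line = json.dumps(record, sort_keys=True)
    logger.info(line)  # structured log, not a bare print statement
    return line

# A patch-applied event rendered as structured JSON:
print(emit_event("patch_applied", agent_id="agent-001", patch_id="p-42"))
# → {"agent_id": "agent-001", "event": "patch_applied", "patch_id": "p-42"}
```

Because every field lives in a single JSON object, downstream tooling can filter and aggregate events without parsing free-form text.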
- `src/kernel/`: Core correction engine
- `src/agents/`: Agent implementations
- `src/interfaces/`: External interfaces (telemetry, LLM clients, etc.)
- `tests/`: Test suite
- `experiments/`: Benchmarks and validation
- `examples/`: Demos and usage examples
- Functions: `snake_case` (e.g., `handle_failure`, `compute_score`)
- Classes: `PascalCase` (e.g., `ShadowTeacher`, `MemoryController`)
- Constants: `UPPER_SNAKE_CASE` (e.g., `GLOBAL_SEED`, `MAX_RETRIES`)
- Modules: `snake_case` (e.g., `triage.py`, `memory.py`)
## Testing

Every PR must include:
- Unit tests for new functions
- Integration tests for new features
- Docstrings explaining test purpose
- Assertions with clear failure messages
```python
import pytest

from src.kernel.triage import FailureTriage, FixStrategy


class TestFailureTriage:
    """Test the Failure Triage Engine."""

    @pytest.fixture
    def triage(self):
        """Create triage instance."""
        return FailureTriage()

    def test_critical_operations_go_sync(self, triage):
        """Test that critical operations route to SYNC_JIT."""
        strategy = triage.decide_strategy(
            user_prompt="Process refund for customer",
            context={"action": "execute_payment"}
        )
        assert strategy == FixStrategy.SYNC_JIT, \
            "Payment operations must be sync for safety"
```

```bash
# Before submitting PR, run:
pytest tests/ -v --cov=src
```

Expected: All tests pass, >80% coverage.
## Pull Request Process

```bash
# Fork the repository on GitHub, then:
git clone https://github.com/YOUR_USERNAME/self-correcting-agent-kernel.git
cd self-correcting-agent-kernel

# Create feature branch
git checkout -b feature/your-feature-name
```

Before opening the PR:

- Follow coding standards (see above)
- Write tests
- Update documentation
- Add yourself to `.github/CONTRIBUTORS.md` (if it exists)
**Commit Message Format:**

```
<type>: <description>

<optional body>

<optional footer>
```

Types:

- `feat`: New feature
- `fix`: Bug fix
- `docs`: Documentation only
- `test`: Adding or updating tests
- `refactor`: Code restructuring (no behavior change)
- `perf`: Performance improvement
- `chore`: Maintenance (dependencies, config, etc.)
Example:

```bash
git add .
git commit -m "feat: add multi-turn laziness detection

- Extend Completeness Auditor to handle multi-turn context
- Add new test suite for multi-turn scenarios
- Update GAIA benchmark with multi-turn queries

Closes #123"

git push origin feature/your-feature-name
```

Then open a pull request on GitHub with:
- Title: Clear, concise summary
- Description: What changes, why, and how to test
- Linked Issues: `Closes #123` or `Relates to #456`
- Checklist:
  - [ ] Tests pass
  - [ ] Documentation updated
  - [ ] Coding standards followed
- Respond to feedback promptly
- Make requested changes
- Push updates to the same branch (PR auto-updates)
Once approved by maintainers:
- PR will be squashed and merged
- Feature branch can be deleted
## Research Contributions

To add a new benchmark:
1. **Create dataset**: Add to `datasets/<benchmark_name>/`

   ```json
   {
     "id": "query_001",
     "category": "laziness",
     "query": "Find recent errors",
     "ground_truth": {"data_exists": true, ...}
   }
   ```

2. **Create benchmark script**: Add to `experiments/<benchmark_name>/`

   ```python
   def run_benchmark(queries: List[Dict]) -> Dict:
       # Implementation
       pass
   ```
3. **Document**: Add a README.md explaining:
   - Purpose of benchmark
   - How to run
   - Expected results
   - Citation (if based on prior work)

4. **Cite prior work**: Add references to the project documentation
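To make the benchmark-script template concrete, here is a hedged sketch of a minimal scorer; the `predict` callable and the result keys are illustrative assumptions, not a fixed SCAK interface:

```python
from typing import Callable, Dict, List

def run_benchmark(queries: List[Dict], predict: Callable[[Dict], Dict]) -> Dict:
    """Score a prediction function against each query's ground_truth field."""
    correct = sum(1 for q in queries if predict(q) == q["ground_truth"])
    total = len(queries)
    return {
        "total": total,
        "correct": correct,
        "accuracy": correct / total if total else 0.0,
    }

# Two toy queries in the dataset format shown above (ground_truth simplified).
queries = [
    {"id": "query_001", "query": "Find recent errors", "ground_truth": {"data_exists": True}},
    {"id": "query_002", "query": "Summarize logs", "ground_truth": {"data_exists": False}},
]

# A naive predictor that always claims data exists gets one of two right.
result = run_benchmark(queries, predict=lambda q: {"data_exists": True})
print(result)  # {'total': 2, 'correct': 1, 'accuracy': 0.5}
```

Returning a plain dict of counts keeps the script easy to serialize into the benchmark README's expected-results table.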
To add a new paper citation:
1. **Add to documentation**: In the relevant section, add:

   ```markdown
   ### [Section Name]

   1. **Authors (Year).** *"Paper Title"* Venue. DOI/arXiv
      - **Core Contribution**: What they did
      - **Our Implementation**: How we use it
      - **Connection**: Why it's relevant
   ```
2. **Add to `paper/bibliography.bib`** (for LaTeX):

   ```bibtex
   @inproceedings{author2023title,
     title={Paper Title},
     author={Author, A. and Author, B.},
     booktitle={Venue},
     year={2023},
     url={https://arxiv.org/abs/...}
   }
   ```
## Documentation

**Docstrings** (Google style):
```python
def handle_failure(
    agent_id: str,
    error_message: str,
    context: dict
) -> dict:
    """
    Handle agent failure with self-correction architecture.

    Args:
        agent_id: Unique agent identifier
        error_message: Error description
        context: Additional context (tool trace, user prompt, etc.)

    Returns:
        Dict with patch_applied, patch_id, strategy

    Raises:
        ValueError: If agent_id is invalid

    Example:
        >>> result = handle_failure("agent-001", "Timeout", {})
        >>> print(result["patch_applied"])
        True
    """
    # Implementation
    pass
```

If your change affects usage:
- Update the relevant section in `README.md`
- Add an example if the change introduces a new feature
- Update the table of contents if you add a new section
For architectural changes:
- Update the relevant wiki page (`wiki/*.md`)
- Add diagrams if helpful (Mermaid or ASCII)
- Link from main wiki README
- Issues: Open a GitHub issue for bugs or questions
- Discussions: Use GitHub Discussions for general questions
- Email: research@scak.ai (for sensitive or private matters)
Contributors will be:
- Listed in `.github/CONTRIBUTORS.md`
- Acknowledged in paper (if research contribution)
- Invited to co-author follow-up papers (for significant contributions)
Thank you for contributing to SCAK!
Last Updated: 2026-01-18
Version: 1.0