
Contributing to Claude Code Hooks Security Research

Thank you for your interest in contributing! This project aims to provide comprehensive research and practical implementations for replacing MCP servers with native Claude Code hooks.

🎯 Ways to Contribute

1. New Hook Implementations

  • Security hooks (XSS detection, LDAP injection, etc.)
  • Automation hooks (linting, testing, deployment)
  • Integration hooks (CI/CD, monitoring, notifications)
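To make the security-hook category concrete, here is a minimal sketch of an XSS-detection core. The patterns are illustrative, not this repository's actual rule set; a real hook would wire this into the stdin/exit-code skeleton shown under Coding Standards below.

```python
import re

# Illustrative XSS indicators only; a production hook needs a far more
# complete (and regularly tuned) pattern set.
XSS_PATTERNS = [
    re.compile(r"<script\b", re.IGNORECASE),
    re.compile(r"\bjavascript:", re.IGNORECASE),
    re.compile(r"\bon(?:load|error|click)\s*=", re.IGNORECASE),
]

def find_xss(content):
    """Return the first matching pattern string, or None if content looks clean."""
    for pattern in XSS_PATTERNS:
        if pattern.search(content):
            return pattern.pattern
    return None
```

A hook built on this would print a `🚨 BLOCKED:` message to stderr and exit with code 2 when `find_xss` returns a match, following the conventions used throughout this guide.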

2. Documentation Improvements

  • Fix typos and clarify instructions
  • Add examples and use cases
  • Improve installation guides
  • Translate documentation

3. Performance Optimization

  • Reduce hook latency
  • Optimize detection patterns
  • Improve context loading efficiency
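As one example of latency reduction, detection regexes can be compiled once at module import rather than on every hook invocation. The pattern below is illustrative, not this repository's actual rule:

```python
import re

# Compiled once at import time; passing a raw string to re.search() on every
# call would re-parse the pattern (or hit re's internal cache) per invocation.
DANGEROUS_COMMAND = re.compile(r"\brm\s+-[a-z]*r[a-z]*f\b|\brm\s+-[a-z]*f[a-z]*r\b")

def is_dangerous(command):
    """Return True if the command matches the precompiled pattern."""
    return DANGEROUS_COMMAND.search(command) is not None
```

The same idea applies to context loading: read shared data (allowlists, config) once per process at import time instead of inside the per-call code path.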

4. Detection Patterns

  • Add new threat patterns
  • Reduce false positives
  • Improve detection accuracy
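False-positive reduction often amounts to pairing each threat pattern with an allowlist of known-benign matches. A hypothetical sketch (the pattern and placeholder values are invented for illustration):

```python
import re

API_KEY_PATTERN = re.compile(r"sk-[A-Za-z0-9]{10,}")

# Known-benign strings that match the pattern, e.g. documentation placeholders
# or test fixtures. Values here are hypothetical.
ALLOWLIST = {
    "sk-TESTPLACEHOLDER0",
}

def is_real_threat(content):
    """Flag content only when a match is not a known-benign placeholder."""
    for match in API_KEY_PATTERN.finditer(content):
        if match.group(0) not in ALLOWLIST:
            return True
    return False
```

Contributions that tighten a pattern or extend an allowlist should come with test cases showing which previously misclassified inputs now pass.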

5. Testing

  • Add unit tests
  • Add integration tests
  • Add performance benchmarks

6. Research

  • Document new hook use cases
  • Analyze MCP security patterns
  • Study real-world deployments

🚀 Getting Started

1. Fork the Repository

Click the "Fork" button at the top right of this page.

2. Clone Your Fork

git clone https://github.com/YOUR-USERNAME/claude-hooks-security-research.git
cd claude-hooks-security-research

3. Create a Branch

git checkout -b feature/your-feature-name
# or
git checkout -b fix/your-fix-name

4. Make Your Changes

Follow the coding standards below.

5. Test Your Changes

# Test specific hook
echo '{"tool_type":"test","tool_input":{}}' | python3 hooks/pre-tool-use/your_hook.py

# Run all tests
./automation/test_all_hooks.sh

6. Commit Your Changes

git add .
git commit -m "Add: Description of your changes"

Prefix commit messages with one of the following:

  • Add: - New feature or hook
  • Fix: - Bug fix
  • Docs: - Documentation changes
  • Test: - Test additions/changes
  • Perf: - Performance improvements
  • Refactor: - Code refactoring

7. Push and Create Pull Request

git push origin feature/your-feature-name

Then go to GitHub and create a Pull Request.


📝 Coding Standards

Python Code Style

Follow PEP 8:

  • 4 spaces for indentation
  • Max line length: 100 characters
  • Docstrings for all functions

Example:

#!/usr/bin/env python3
"""
Brief description of the hook

Longer description if needed.
"""

import sys
import json

def main():
    """Main entry point for hook"""
    try:
        hook_input = json.load(sys.stdin)
        # Process input
        result = process(hook_input)

        if should_block(result):
            print(f"🚨 BLOCKED: {result['reason']}", file=sys.stderr)
            sys.exit(2)

        sys.exit(0)
    except Exception as e:
        print(f"⚠️  Hook error: {e}", file=sys.stderr)
        sys.exit(0)  # Allow on error

if __name__ == "__main__":
    main()

Bash Script Style

Follow best practices:

  • Use set -e for error handling
  • Quote all variables
  • Use meaningful variable names

Example:

#!/bin/bash
set -e

HOOK_INPUT=$(cat)
TOOL_TYPE=$(echo "$HOOK_INPUT" | jq -r '.tool_type')

if [[ "$TOOL_TYPE" != "Write" ]]; then
    exit 0
fi

# Process...

Documentation Style

Markdown formatting:

  • Use headers hierarchically (h1 → h2 → h3)
  • Include code examples
  • Add emojis for visual organization (optional)
  • Link to related documents

🧪 Testing Requirements

Unit Tests

All new hooks should include unit tests:

# tests/unit/test_your_hook.py

import json
import subprocess

def test_blocks_malicious_input():
    """Test that the hook blocks malicious input with exit code 2"""
    input_data = {
        "tool_type": "Bash",
        "tool_input": {"command": "rm -rf /"}
    }

    result = subprocess.run(
        ["python3", "hooks/pre-tool-use/your_hook.py"],
        input=json.dumps(input_data),
        capture_output=True,
        text=True,
    )

    assert result.returncode == 2, "Hook should block with exit code 2"

Integration Tests

Test hooks with realistic scenarios:

# tests/integration/test_security_workflow.sh

# Test full security workflow
echo '{"tool_type":"Write","tool_input":{"content":"sk-1234567890"}}' | \
    python3 hooks/pre-tool-use/security_guard.py

# Should block with exit code 2
if [ $? -eq 2 ]; then
    echo "✅ Integration test passed"
else
    echo "❌ Integration test failed"
    exit 1
fi

Performance Benchmarks

New hooks should meet performance requirements:

# tests/performance/benchmark_hook.py

import importlib.util
import time

def load_hook(path):
    """Import a hook by file path (hook directories contain hyphens)."""
    spec = importlib.util.spec_from_file_location("hook_under_test", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module

def benchmark_hook():
    hook = load_hook("hooks/pre-tool-use/your_hook.py")
    hook_input = {"tool_type": "Bash", "tool_input": {"command": "ls"}}

    start = time.perf_counter()
    hook.process(hook_input)  # time the core logic only, excluding interpreter startup
    elapsed = (time.perf_counter() - start) * 1000

    assert elapsed < 10, f"Hook too slow: {elapsed:.2f}ms"

Requirements:

  • PreToolUse hooks: <10ms
  • PostToolUse hooks: <50ms
  • Session hooks: <100ms

📚 Documentation Requirements

For New Hooks

Each new hook should include:

  1. Docstring explaining purpose and behavior
  2. Usage example in README.md
  3. Configuration options documented
  4. Performance metrics measured
  5. Test cases demonstrating functionality

For Documentation Changes

  • Test all code examples
  • Check all links work
  • Verify formatting renders correctly
  • Update table of contents if needed

🔍 Code Review Process

What We Look For

Functionality:

  • ✅ Does it work as intended?
  • ✅ Are edge cases handled?
  • ✅ Is error handling appropriate?

Performance:

  • ✅ Meets latency requirements?
  • ✅ Efficient algorithms used?
  • ✅ No memory leaks?

Security:

  • ✅ No security vulnerabilities?
  • ✅ Input validation present?
  • ✅ No credential exposure?

Code Quality:

  • ✅ Clean, readable code?
  • ✅ Follows style guidelines?
  • ✅ Well-documented?

Testing:

  • ✅ Tests included?
  • ✅ Tests pass?
  • ✅ Good test coverage?

Review Timeline

  • Initial review: 1-3 days
  • Follow-up reviews: 1-2 days
  • Merge decision: After approval from 1 maintainer

🐛 Bug Reports

Before Submitting

  • Check existing issues for duplicates
  • Test with latest version
  • Collect error logs and details

Bug Report Template

Title: Brief description

Description: Clear description of the bug

Steps to Reproduce:

  1. Step one
  2. Step two
  3. ...

Expected Behavior: What should happen

Actual Behavior: What actually happens

Environment:

  • OS: [e.g., macOS, Linux, Windows]
  • Claude Code version: [e.g., 2.0.10]
  • Python version: [e.g., 3.9]

Logs:

Paste relevant logs here

💡 Feature Requests

Feature Request Template

Title: Feature name

Problem: What problem does this solve?

Proposed Solution: How should it work?

Alternatives Considered: What other approaches did you consider?

Additional Context: Any other relevant information


🤝 Code of Conduct

Our Pledge

We pledge to make participation in our project a harassment-free experience for everyone.

Our Standards

Examples of behavior that contributes to a positive environment:

  • Using welcoming and inclusive language
  • Being respectful of differing viewpoints
  • Gracefully accepting constructive criticism
  • Focusing on what is best for the community

Examples of unacceptable behavior:

  • Trolling, insulting/derogatory comments, personal attacks
  • Public or private harassment
  • Publishing others' private information without permission
  • Other conduct which could reasonably be considered inappropriate

Enforcement

Instances of abusive behavior may be reported to the project maintainers. All complaints will be reviewed and investigated.


📞 Questions?


🎉 Recognition

Contributors will be:

  • Listed in README.md
  • Mentioned in release notes
  • Given credit in documentation

Thank you for contributing to Claude Code Hooks Security Research!

Your contributions help make AI-assisted development more secure for everyone.