Thanks for your interest in contributing! This guide covers everything you need to get started.
```bash
# Clone the repository
git clone https://github.com/aiauthz/llm-authz-audit.git
cd llm-authz-audit

# Create a virtual environment
python3 -m venv .venv
source .venv/bin/activate

# Install with dev dependencies
pip install -e ".[dev,ai]"

# Verify everything works
pytest
```

- Create a branch from `main`:

  ```bash
  git checkout -b feat/my-feature   # features
  git checkout -b fix/my-bugfix     # bug fixes
  git checkout -b analyzer/my-name  # new analyzers
  ```
- Make your changes and add tests.
- Run the test suite and linters before committing.
- Submit a pull request.
Analyzers live in `src/llm_authz_audit/analyzers/`. Each analyzer is a class decorated with `@register_analyzer`.
Create `src/llm_authz_audit/analyzers/my_analyzer.py`:
```python
from __future__ import annotations

from llm_authz_audit.analyzers import register_analyzer
from llm_authz_audit.core.models import Finding, ScanContext


@register_analyzer
class MyAnalyzer:
    """One-line description of what this analyzer detects."""

    name = "MyAnalyzer"

    def analyze(self, context: ScanContext) -> list[Finding]:
        findings: list[Finding] = []
        for entry in context.file_entries:
            if entry.path.suffix != ".py":
                continue
            # Your detection logic here
            ...
        return findings
```

Add your import to `src/llm_authz_audit/analyzers/__init__.py` inside the `_discover_analyzers()` function:

```python
from llm_authz_audit.analyzers import my_analyzer  # noqa: F401
```

Create a YAML rule file in `src/llm_authz_audit/rules/builtin/`:
```yaml
rules:
  - id: MY001
    title: "Short description"
    severity: HIGH    # CRITICAL, HIGH, MEDIUM, LOW
    owasp_llm: LLM06  # OWASP LLM Top 10 mapping
    description: >
      Longer explanation of the issue and why it matters.
    recommendation: >
      How to fix or mitigate the issue.
    suppress_if:
      - "# nosec"
```

Create `tests/unit/test_my_analyzer.py`:
```python
import pytest

from llm_authz_audit.analyzers.my_analyzer import MyAnalyzer


@pytest.fixture
def analyzer():
    return MyAnalyzer()


def test_detects_issue(analyzer, make_scan_context):
    code = '''
    # vulnerable code here
    '''
    ctx = make_scan_context({"app.py": code})
    findings = analyzer.analyze(ctx)
    assert len(findings) >= 1
    assert findings[0].rule_id == "MY001"


def test_skips_safe_code(analyzer, make_scan_context):
    code = '''
    # safe code here
    '''
    ctx = make_scan_context({"app.py": code})
    findings = analyzer.analyze(ctx)
    assert len(findings) == 0
```

Rules are defined in YAML files under `src/llm_authz_audit/rules/builtin/`. Each file can contain multiple rules.
```yaml
rules:
  - id: XX001            # Unique ID matching analyzer prefix
    title: "Short title" # Shown in scan output
    severity: HIGH       # CRITICAL | HIGH | MEDIUM | LOW
    owasp_llm: LLM06     # OWASP LLM Top 10 reference
    description: >       # Detailed explanation
      What the rule detects and why it matters.
    recommendation: >    # How to fix
      Steps to remediate the finding.
    suppress_if:         # Optional: patterns that suppress the finding
      - "# nosec"
      - "os.environ"
```

Rules are loaded automatically by `RuleLoader` — no registration step needed.
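To illustrate the intent of `suppress_if`, here is a hedged sketch of the matching logic (the function name is hypothetical; the real check lives inside the scanner and may be more sophisticated): a finding is dropped when its source line contains any of the rule's suppress patterns.

```python
def is_suppressed(source_line: str, suppress_patterns: list[str]) -> bool:
    """Return True if the line contains any suppress pattern (simple substring match)."""
    return any(pattern in source_line for pattern in suppress_patterns)
```

For example, a trailing `# nosec` comment on the flagged line would silence the finding.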
```bash
# Run all tests
pytest

# Run with coverage
pytest --cov=llm_authz_audit

# Run a specific test file
pytest tests/unit/test_secrets_analyzer.py

# Run verbose
pytest -v
```

- Unit tests go in `tests/unit/`, integration tests in `tests/integration/`.
- Use the `make_scan_context` fixture to create scan contexts from inline code strings.
- Test fixture projects live in `tests/fixtures/`.
- Test output includes Rich formatting — use substring checks (`assert "SEC001" in output`), not exact string matches.
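Conceptually, `make_scan_context` turns a `{filename: code}` mapping into the file entries an analyzer iterates over. A rough stand-in sketch, assuming hypothetical field names (`path`, `content`, `file_entries`) that may not match the real models:

```python
from dataclasses import dataclass, field
from pathlib import Path


@dataclass
class FileEntry:
    path: Path
    content: str


@dataclass
class ScanContext:
    file_entries: list[FileEntry] = field(default_factory=list)


def make_scan_context(files: dict[str, str]) -> ScanContext:
    """Build an in-memory scan context from inline code strings."""
    return ScanContext(
        file_entries=[FileEntry(Path(name), code) for name, code in files.items()]
    )
```

The real fixture is defined in the shared test configuration; the sketch only shows the shape of the data a test hands to `analyzer.analyze()`.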
We use ruff for linting and formatting:

```bash
ruff check src/ tests/
ruff format src/ tests/
```

Type checking with mypy:

```bash
mypy src/
```

Conventions:

- Python 3.11+ features are fine (e.g., `X | Y` union syntax).
- Use `from __future__ import annotations` in all modules.
- Keep imports sorted (ruff handles this).
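A module header following these conventions might look like this (illustrative only; `load_rules` is a made-up name):

```python
from __future__ import annotations  # annotations are stored as strings, evaluated lazily


def load_rules(paths: list[str] | None = None) -> dict[str, str]:
    """Uses the 3.11+-friendly `X | Y` union syntax in its annotations."""
    return {}
```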
Before opening a PR, verify:

- All tests pass (`pytest`)
- Linting passes (`ruff check src/ tests/`)
- New analyzers have corresponding rules and tests
- New rules have valid YAML schema and OWASP mapping
- Commit messages are clear and descriptive
Releases are automated via GitHub Actions. When a version tag is pushed, CI publishes to PyPI and npm automatically.
```bash
# Bump version in pyproject.toml + npm/package.json, commit, and tag
./scripts/bump-version.sh 1.1.0

# Push to trigger the release workflow
git push origin main --tags
```

The release workflow will:

- Run the full test suite
- Publish to PyPI (via OIDC trusted publishing)
- Publish to npm (via the `NODE_AUTH_TOKEN` secret)
- Create a GitHub Release with auto-generated notes
Important: PyPI is published before npm because the npm wrapper installs from PyPI at runtime.
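That ordering can be enforced with GitHub Actions' `needs:` keyword, which makes one job wait for another. An illustrative fragment only, with hypothetical job names, not the project's actual workflow file:

```yaml
jobs:
  test:
    # ... run the full test suite ...
  publish-pypi:
    needs: test          # only publish after tests pass
    # ... OIDC trusted publishing to PyPI ...
  publish-npm:
    needs: publish-pypi  # waits until the PyPI publish succeeds
    # ... npm publish using NODE_AUTH_TOKEN ...
```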
Please include:

- Python version (`python3 --version`)
- llm-authz-audit version (`pip show llm-authz-audit`)
- Minimal reproduction steps
- Expected vs. actual behavior
Describe the security gap or workflow improvement you'd like to see. If proposing a new analyzer, explain:
- What vulnerability it detects
- Which frameworks/libraries it applies to
- Example vulnerable code
- Example safe code
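For instance, a proposal for an analyzer that flags LLM tool handlers missing authorization checks might pair snippets like these (purely illustrative, not from the project):

```python
# Vulnerable: the tool handler acts for any caller, with no permission check.
def delete_user_tool(user_id: str, caller: dict) -> str:
    return f"deleted {user_id}"


# Safe: the handler verifies the caller's role before acting.
def delete_user_tool_safe(user_id: str, caller: dict) -> str:
    if caller.get("role") != "admin":
        raise PermissionError("caller is not authorized to delete users")
    return f"deleted {user_id}"
```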