Thank you for your interest in contributing. These repositories focus on AI governance, release readiness, responsible AI deployment, evaluation, and operational tooling. We welcome contributions such as:
- New use cases with realistic constraints
- Documentation improvements that make guidance clearer or more actionable
- Framework extensions such as new checklists, mappings, or governance patterns
- Worked examples with realistic inputs and outputs
- Tooling improvements for validators, CLIs, templates, and demo apps
- Translations that preserve technical accuracy
Before opening a pull request, please open an issue describing:
- what you want to add or change
- why it would help practitioners
- any relevant industry, regulatory, or implementation context
This avoids duplicated effort and helps keep each repository within scope.
```
git clone https://github.com/simaba/<repo-name>
git checkout -b feature/your-descriptive-branch-name
```

Please prefer:
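Once your changes are ready, a typical flow for submitting them looks like the following. The file path, commit message, and remote name are illustrative; adapt them to your change:

```
git add path/to/your-changes
git commit -m "Describe the change briefly"   # illustrative message
git push origin feature/your-descriptive-branch-name
# then open a pull request on GitHub against the repository's default branch
```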
- concrete examples over abstract claims
- reusable templates over one-off prose
- truthful maturity labels over aspirational wording
- clear boundaries between what is shipped, planned, and illustrative
In the pull request description, include:
- a concise summary of the change
- the reason for the change
- any user-visible behavior change
- linked issue number when relevant
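A pull request description covering those points might look like the following sketch. The details and issue number are hypothetical:

```markdown
**Summary:** Add a worked example to the release-gate checklist.

**Why:** The existing guidance describes gates abstractly; a concrete
example makes the criteria easier to apply.

**User-visible change:** New example section in the checklist docs;
no behavior change to the validator.

Closes #123  <!-- hypothetical issue number -->
```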
| Repository | Strongest contribution types |
|---|---|
| governance-playbook | operating-model patterns, governance artifacts, worked examples |
| release-governance | release-stage criteria, gate definitions, lifecycle examples |
| release-checklist | validator rules, example configs, tests, reporting improvements |
| accountability-patterns | accountability patterns, redress flows, oversight examples |
| ai-prism | high-quality curated resources, broken-link fixes, better categorization |
| nist-rmf-guide | implementation examples, mappings, gap-closure guidance |
| regulated-ai | template improvements, starter workflows, CI validation examples |
| multi-agent-governance | trust models, failure handling, governance controls |
| agent-orchestration | orchestration patterns, runnable examples, control-flow tradeoffs |
| agent-eval | evaluation scenarios, rubric design, measurable pass/fail criteria |
| agent-simulator | new scenarios, simulator logic, evaluation extensions |
| lean-ai-ops | app quality, analytics rigor, examples, export quality, tests |
A contribution is much more likely to be accepted if it:
- matches the repository's actual scope
- improves clarity or practical usefulness
- includes at least one realistic example when adding a new concept
- avoids placeholder content presented as shipped capability
- keeps naming and cross-repo references consistent
By contributing, you agree to abide by the Code of Conduct.
Use GitHub Discussions for broader questions about framework usage, repo direction, or contribution fit.