# Awesome AI Governance [![Awesome](https://awesome.re/badge.svg)](https://awesome.re)

> A curated list of frameworks, tools, regulations, papers, and resources for
> responsible and trustworthy AI deployment in regulated industries.

Maintained by [Sima Bagheri](https://github.com/simaba) · [LinkedIn](https://www.linkedin.com/in/simabagheri) · [Medium](https://medium.com/@simabagheri)

**Focus areas:** Enterprise AI governance · LLM deployment safety · Risk management · Regulatory compliance (NIST AI RMF, EU AI Act, ISO 42001) · Release readiness · Incident response

---

## Contents

- [Regulatory Frameworks](#regulatory-frameworks)
- [Risk Management Frameworks](#risk-management-frameworks)
- [Governance Tools & Platforms](#governance-tools--platforms)
- [AI Testing & Evaluation](#ai-testing--evaluation)
- [Incident Management](#incident-management)
- [Model Cards & Documentation](#model-cards--documentation)
- [Academic Papers](#academic-papers)
- [Datasets & Benchmarks](#datasets--benchmarks)
- [Communities & Organizations](#communities--organizations)
- [Courses & Learning](#courses--learning)
- [My Open-Source Frameworks](#my-open-source-frameworks)

---

## Regulatory Frameworks

### United States
- **[NIST AI Risk Management Framework (AI RMF 1.0)](https://www.nist.gov/system/files/documents/2023/01/26/AI%20RMF%201.0.pdf)** — The U.S. government's voluntary framework for managing risks in the design, development, deployment, and use of AI systems. Organized around four functions: Govern, Map, Measure, Manage.
- **[NIST AI RMF Playbook](https://airc.nist.gov/Docs/2)** — Practical guidance for implementing the AI RMF, with suggested actions for each subcategory.
- **[Executive Order on Safe, Secure, and Trustworthy AI](https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/)** — U.S. Executive Order (Oct 2023) establishing new standards for AI safety and security.
- **[U.S. AI Safety Institute (AISI)](https://www.nist.gov/artificial-intelligence/executive-order-safe-secure-and-trustworthy-artificial-intelligence)** — Federal body, housed within NIST, coordinating AI safety research, testing, and standards.
- **[OMB AI Governance Policy M-24-10](https://www.whitehouse.gov/wp-content/uploads/2024/03/M-24-10-Advancing-Governance-Innovation-and-Risk-Management-for-Agency-Use-of-Artificial-Intelligence.pdf)** — Governance and risk management requirements for federal agency use of AI.

### European Union
- **[EU AI Act](https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689)** — The world's first comprehensive legal framework for AI, using a risk-based tiered approach (unacceptable, high, limited, minimal risk).
- **[EU AI Act Summary](https://artificialintelligenceact.eu/)** — Plain-language guide to the EU AI Act's provisions and timelines.
- **[GDPR & AI](https://edpb.europa.eu/our-work-tools/our-documents/guidelines/guidelines-032021-virtual-voice-assistants_en)** — European Data Protection Board guidance on the intersection of AI and the GDPR.

### International Standards
- **[ISO/IEC 42001:2023](https://www.iso.org/standard/81230.html)** — International standard for AI management systems. Provides requirements and guidance for establishing, implementing, maintaining, and improving an AI management system.
- **[ISO/IEC 23894:2023](https://www.iso.org/standard/77304.html)** — Guidance on risk management for AI systems.
- **[IEEE 7000 Series](https://standards.ieee.org/initiatives/artificial-intelligence-systems/standards/)** — IEEE standards for ethically aligned AI design.
- **[OECD AI Principles](https://oecd.ai/en/ai-principles)** — International principles on trustworthy AI adopted by 46 countries.

---

## Risk Management Frameworks

- **[NIST AI RMF Core](https://airc.nist.gov/home)** — Interactive version of the AI RMF with searchable categories and subcategories.
- **[Microsoft Responsible AI Standard](https://blogs.microsoft.com/wp-content/uploads/prod/sites/5/2022/06/Microsoft-Responsible-AI-Standard-v2-General-Requirements-3.pdf)** — Microsoft's internal responsible AI framework, publicly shared.
- **[Google PAIR Guidebook](https://pair.withgoogle.com/guidebook/)** — People + AI Research guidebook for designing human-centered AI.
- **[IBM AI Fairness 360](https://aif360.mybluemix.net/)** — Open-source toolkit for examining, reporting, and mitigating discrimination in ML models.
- **[MITRE ATLAS](https://atlas.mitre.org/)** — Adversarial Threat Landscape for AI Systems, a knowledge base of AI-specific adversarial tactics.
- **[OWASP Top 10 for LLMs](https://owasp.org/www-project-top-10-for-large-language-model-applications/)** — The 10 most critical security risks for LLM applications.

---

## Governance Tools & Platforms

- **[Microsoft Responsible AI Toolbox](https://github.com/microsoft/responsible-ai-toolbox)** — Integrated suite for responsible AI assessment including error analysis, fairness, causal inference, and counterfactual analysis.
- **[Giskard](https://github.com/Giskard-AI/giskard)** — Open-source AI quality testing platform for detecting biases, vulnerabilities, and performance issues.
- **[VerifyWise](https://github.com/verifywise/verifywise)** — AI compliance platform with direct NIST AI RMF and EU AI Act mappings.
- **[Evidently AI](https://github.com/evidentlyai/evidently)** — Evaluate, test, and monitor ML and LLM models in production.
- **[WhyLabs](https://whylabs.ai/)** — AI observability platform for model monitoring and drift detection.
- **[Fiddler AI](https://www.fiddler.ai/)** — Explainable AI and model performance monitoring for enterprises.
- **[Microsoft PyRIT](https://github.com/Azure/PyRIT)** — Python Risk Identification Toolkit for generative AI red teaming.
- **[Langfuse](https://github.com/langfuse/langfuse)** — Open-source LLM observability and analytics.

---

## AI Testing & Evaluation

- **[Holistic Evaluation of Language Models (HELM)](https://crfm.stanford.edu/helm/)** — Stanford's comprehensive LLM evaluation framework across scenarios, metrics, and models.
- **[EleutherAI LM Evaluation Harness](https://github.com/EleutherAI/lm-evaluation-harness)** — Unified framework for evaluating language models across 200+ tasks.
- **[DeepEval](https://github.com/confident-ai/deepeval)** — LLM evaluation framework with metrics for RAG, hallucination, and safety.
- **[TruLens](https://github.com/truera/trulens)** — Evaluation and tracking for LLM-based applications.
- **[RAGAS](https://github.com/explodinggradients/ragas)** — Evaluation framework for Retrieval Augmented Generation pipelines.
- **[MLflow Model Evaluation](https://mlflow.org/docs/latest/model-evaluation/index.html)** — Built-in model evaluation with support for LLMs and custom metrics.
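Under the hood, most of these harnesses run the same basic loop: send prompts to a model, score the outputs against references, and aggregate a metric. A minimal sketch of that loop, where the model is just any callable from prompt to answer (the `fake_model` lookup is illustrative, not any particular framework's API):

```python
from typing import Callable, Iterable, Tuple

def exact_match_accuracy(
    model: Callable[[str], str],
    cases: Iterable[Tuple[str, str]],
) -> float:
    """Fraction of (prompt, expected) pairs answered exactly (case-insensitive)."""
    cases = list(cases)
    if not cases:
        return 0.0
    hits = sum(
        model(prompt).strip().lower() == expected.strip().lower()
        for prompt, expected in cases
    )
    return hits / len(cases)

def fake_model(prompt: str) -> str:
    # Stand-in for a real model call (e.g. an API request).
    return {"2+2?": "4", "Capital of France?": "Paris"}.get(prompt, "")

score = exact_match_accuracy(
    fake_model,
    [("2+2?", "4"), ("Capital of France?", "paris"), ("Color of sky?", "green")],
)
# score == 2/3: two exact matches out of three cases
```

Real harnesses add normalization rules, per-task metrics, and batching, but the governance-relevant point is the same: the metric is only as trustworthy as the reference set it is scored against.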

---

## Incident Management

- **[AI Incident Database](https://incidentdatabase.ai/)** — Crowdsourced database of AI incidents and failures across industries.
- **[AI Vulnerability Database (AVID)](https://avidml.org/)** — Taxonomy of AI failure modes, biases, and vulnerabilities.
- **[NIST AI Incident Tracking](https://airc.nist.gov/Docs/2)** — NIST guidance on AI incident classification and response.
- **[Weights & Biases Incident Retrospectives](https://wandb.ai/site/articles)** — Real-world ML incident retrospectives from practitioners.

---

## Model Cards & Documentation

- **[Model Cards for Model Reporting (Google)](https://arxiv.org/abs/1810.03993)** — Original paper introducing model cards as a transparency mechanism.
- **[Hugging Face Model Cards](https://huggingface.co/docs/hub/model-cards)** — Standardized model card format with template and auto-generation.
- **[Google Model Card Toolkit](https://github.com/google/model-card-toolkit)** — Python toolkit for generating model cards programmatically.
- **[Datasheets for Datasets](https://arxiv.org/abs/1803.09010)** — Framework for documenting datasets with provenance, composition, and intended use.

---

## Academic Papers

- **[Concrete Problems in AI Safety (Amodei et al., 2016)](https://arxiv.org/abs/1606.06565)** — Foundational paper defining five practical AI safety problems.
- **[Stochastic Parrots (Bender et al., 2021)](https://dl.acm.org/doi/10.1145/3442188.3445922)** — "On the Dangers of Stochastic Parrots": seminal paper on the risks of ever-larger language models.
- **[Model Cards for Model Reporting (Mitchell et al., 2019)](https://arxiv.org/abs/1810.03993)** — Introduced model cards as a documentation standard.
- **[The Alignment Problem (Krakovna et al., 2020)](https://arxiv.org/abs/2009.01148)** — Survey of specification gaming in AI systems.
- **[Trustworthy AI (Varshney, 2022)](https://www.ibm.com/watson/assets/duo/pdf/Trustworthy_AI.pdf)** — Practical guide to building trustworthy ML systems.
- **[Governing AI for Humanity (UN Advisory Body, 2024)](https://www.un.org/sites/un2.un.org/files/governing_ai_for_humanity_final_report_en.pdf)** — UN report on global AI governance frameworks.

---

## Datasets & Benchmarks

- **[BIG-bench](https://github.com/google/BIG-bench)** — Collaborative "Beyond the Imitation Game" benchmark of 200+ tasks probing large language model capabilities.
- **[TruthfulQA](https://github.com/sylinrl/TruthfulQA)** — Benchmark measuring whether LLMs generate truthful answers.
- **[HarmBench](https://github.com/centerforaisafety/HarmBench)** — Standardized evaluation framework for automated red teaming.
- **[MMLU](https://github.com/hendrycks/test)** — Massive Multitask Language Understanding benchmark across 57 subjects.

---

## Communities & Organizations

- **[Partnership on AI](https://partnershiponai.org/)** — Multi-stakeholder organization advancing responsible AI practices.
- **[MLCommons](https://mlcommons.org/)** — Open engineering consortium for ML benchmarks and safety evaluations.
- **[Montreal AI Ethics Institute (MAIEI)](https://montrealethics.ai/)** — Research institute for AI ethics with practitioner community.
- **[Center for AI Safety (CAIS)](https://www.safe.ai/)** — Research organization focused on reducing societal risks from AI.
- **[FINOS (Fintech Open Source Foundation)](https://www.finos.org/ai-readiness)** — AI readiness resources for the financial services industry.
- **[NIST National AI Initiative](https://www.nist.gov/artificial-intelligence)** — U.S. government AI standards and research coordination.
- **[Future of Life Institute](https://futureoflife.org/cause-area/artificial-intelligence/)** — Research on existential and catastrophic AI risks.

---

## Courses & Learning

- **[Responsible AI Practices (Google)](https://ai.google/responsibility/responsible-ai-practices/)** — Google's practical guidance on responsible AI development.
- **[AI Ethics (fast.ai)](https://ethics.fast.ai/)** — Free course on AI ethics and data ethics.
- **[Trustworthy AI (IBM)](https://www.ibm.com/training/badge/trustworthy-ai-foundations)** — IBM's trustworthy AI foundations certification.
- **[NIST AI RMF Workshop Videos](https://www.nist.gov/artificial-intelligence/ai-risk-management-framework)** — Free workshop recordings on implementing the AI RMF.
- **[Human-Centered AI (Stanford HAI)](https://hai.stanford.edu/education)** — Stanford's human-centered AI educational resources.

---

## My Open-Source Frameworks

Frameworks I have built for AI governance and release readiness in regulated industries:

| Repository | Description | Stars |
|---|---|---|
| [enterprise-ai-governance-playbook](https://github.com/simaba/enterprise-ai-governance-playbook) | End-to-end AI governance playbook aligned with NIST AI RMF | ![Stars](https://img.shields.io/github/stars/simaba/enterprise-ai-governance-playbook) |
| [ai-release-readiness-checklist](https://github.com/simaba/ai-release-readiness-checklist) | YAML-based release gate checklist for LLM/ML deployments | ![Stars](https://img.shields.io/github/stars/simaba/ai-release-readiness-checklist) |
| [ai-risk-taxonomy](https://github.com/simaba/ai-risk-taxonomy) | Structured taxonomy of AI risks mapped to NIST AI RMF | ![Stars](https://img.shields.io/github/stars/simaba/ai-risk-taxonomy) |
| [llm-governance-readiness-framework](https://github.com/simaba/llm-governance-readiness-framework) | LLM-specific governance maturity framework | ![Stars](https://img.shields.io/github/stars/simaba/llm-governance-readiness-framework) |
| [regulated-ai-use-case-library](https://github.com/simaba/regulated-ai-use-case-library) | AI use cases with governance context for regulated industries | ![Stars](https://img.shields.io/github/stars/simaba/regulated-ai-use-case-library) |
| [nist-ai-rmf-implementation-guide](https://github.com/simaba/nist-ai-rmf-implementation-guide) | Practitioner guide to implementing NIST AI RMF | ![Stars](https://img.shields.io/github/stars/simaba/nist-ai-rmf-implementation-guide) |
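To give a feel for the release-gate idea behind a YAML-based checklist, here is a hypothetical fragment; the keys, metrics, and thresholds below are illustrative only and do not reflect any repository's actual schema:

```yaml
# Hypothetical release gate for an LLM-backed feature.
# All keys and thresholds are illustrative, not a real schema.
release_gate:
  model: support-assistant-v3
  required_evidence:
    model_card: complete
    data_provenance: documented
  eval_thresholds:
    hallucination_rate_max: 0.02   # block release above 2%
    toxicity_rate_max: 0.001
    task_accuracy_min: 0.90
  sign_offs:
    - risk_owner
    - security_review
  rollback_plan: required
```

The design point such a file illustrates: encoding gates as data rather than prose makes them machine-checkable in CI, so a deployment can be blocked automatically when evidence or metrics are missing.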

---

## Contributing

Contributions are welcome! Please read the [Contributing Guidelines](CONTRIBUTING.md)
and open an issue before submitting a PR.

**How to add a resource:**
1. Verify the resource is publicly accessible and actively maintained
2. Add it to the appropriate section with a one-line description
3. For GitHub repos: add a stars badge using `![Stars](https://img.shields.io/github/stars/OWNER/REPO)`
4. Open a PR with the title `Add: [Resource Name]`
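Putting those steps together, a new GitHub-hosted entry might look like this (the repo name and description are placeholders; the badge uses the common shields.io stars pattern):

```markdown
- **[example-tool](https://github.com/example/example-tool)** ![Stars](https://img.shields.io/github/stars/example/example-tool) — One-line description of what the tool does and why it belongs in this section.
```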

---

## License

[![CC0](https://licensebuttons.net/p/zero/1.0/88x31.png)](https://creativecommons.org/publicdomain/zero/1.0/)

To the extent possible under law, Sima Bagheri has waived all copyright and
related or neighboring rights to this work.