# AI Accountability Design Patterns

[NIST AI RMF](https://airc.nist.gov/home) · [License](LICENSE) · [Discussions](https://github.com/simaba/ai-accountability-design-patterns/discussions)

A catalog of design patterns for building accountable AI systems in regulated industries. Each pattern provides a problem statement, a solution structure, implementation guidance, and a mapping to NIST AI RMF and EU AI Act requirements.

---

## What Is AI Accountability?

AI accountability means that individuals and organizations can be held responsible for the outcomes of AI systems: that there are clear lines of ownership, transparent decision processes, and mechanisms for redress when things go wrong.

The NIST AI RMF identifies accountability, alongside transparency, as one of its seven characteristics of trustworthy AI:

> *"AI actors should be accountable for the development, deployment, and impacts of AI systems, including supporting human oversight."*

---

## Pattern Catalog

### Governance Patterns

| Pattern | Problem | Solution |
|---|---|---|
| **Model Inventory** | No central registry of AI systems in production | Maintain a versioned, owner-assigned inventory of all deployed models |
| **Ownership Assignment** | Unclear who is responsible when an AI system fails | Assign a named technical owner and business owner to every AI system |
| **AI Policy Cascade** | Governance policies not reaching practitioners | Publish policy as code: embed governance rules in CI/CD pipelines |
| **Governance Gate** | AI systems deployed without appropriate review | Require signed-off checklists at defined lifecycle milestones |
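
As a rough illustration of how the **Model Inventory**, **Ownership Assignment**, and **Governance Gate** patterns can combine as policy-as-code, the sketch below validates a hypothetical `model-inventory.yaml` in CI and fails the build when an entry lacks a named owner or review date. The file name, field names, and schema are assumptions for illustration, not part of this repository:

```python
# governance_gate.py -- minimal CI check for a model inventory.
# All file and field names below are illustrative assumptions.
import sys

import yaml  # pip install pyyaml

REQUIRED_FIELDS = ("name", "version", "technical_owner",
                   "business_owner", "review_date")


def check_inventory(path: str) -> list[str]:
    """Return a list of human-readable violations found in the inventory."""
    with open(path) as f:
        inventory = yaml.safe_load(f) or []
    violations = []
    for entry in inventory:
        missing = [field for field in REQUIRED_FIELDS if not entry.get(field)]
        if missing:
            name = entry.get("name", "<unnamed>")
            violations.append(f"{name}: missing {', '.join(missing)}")
    return violations


if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "model-inventory.yaml"
    problems = check_inventory(path)
    for problem in problems:
        print(f"FAIL {problem}")
    sys.exit(1 if problems else 0)  # a non-zero exit blocks the pipeline
```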

### Transparency Patterns

| Pattern | Problem | Solution |
|---|---|---|
| **Model Card** | No documentation of model capabilities and limitations | Create a structured model card for every production model |
| **Decision Log** | AI decisions not auditable after the fact | Log inputs, outputs, model version, and confidence for every decision |
| **Confidence Surfacing** | Users cannot tell when AI is uncertain | Surface confidence scores and uncertainty estimates in the UI |
| **Explanation on Demand** | Stakeholders cannot understand AI decisions | Implement on-demand SHAP/LIME explanations for high-stakes decisions |
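
A minimal sketch of the **Decision Log** pattern, assuming a JSON-lines file as the sink and illustrative record fields; a production system would write to a tamper-evident store and redact sensitive inputs before persisting them:

```python
# decision_log.py -- minimal sketch of the Decision Log pattern.
# The record fields and the JSON-lines sink are illustrative assumptions.
import json
import uuid
from datetime import datetime, timezone


def log_decision(model_id: str, model_version: str, inputs: dict,
                 output, confidence: float,
                 path: str = "decisions.jsonl") -> str:
    """Append one auditable decision record and return its ID."""
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "inputs": inputs,  # redact PII before logging in a real system
        "output": output,
        "confidence": confidence,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["decision_id"]


# Example: log a referral decision from a hypothetical credit-risk model.
decision_id = log_decision(
    "credit-risk", "2.3.1",
    {"income": 52000, "tenure_months": 18},
    output="refer", confidence=0.61,
)
```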

### Human Oversight Patterns

| Pattern | Problem | Solution |
|---|---|---|
| **Human-in-the-Loop Gate** | High-stakes decisions made autonomously | Require human review before action for decisions above a risk threshold |
| **Override Mechanism** | Operators cannot override erroneous AI decisions | Implement a documented, audited override pathway with reason capture |
| **Escalation Ladder** | Edge cases fall through without review | Define a tiered escalation path for low-confidence or novel inputs |
| **Sunset Clause** | Models remain in production past their useful life | Set mandatory model review dates; require affirmative renewal to continue |
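
A minimal sketch of the **Human-in-the-Loop Gate** and **Override Mechanism** patterns, assuming illustrative risk and confidence thresholds, an in-memory review queue, and a simple record shape; none of these names or values are a required interface:

```python
# oversight_gate.py -- sketch of the Human-in-the-Loop Gate and Override
# Mechanism patterns. Thresholds and record shapes are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone

RISK_THRESHOLD = 0.7    # decisions at or above this risk score need review
CONFIDENCE_FLOOR = 0.5  # low-confidence outputs escalate regardless


@dataclass
class Decision:
    decision_id: str
    output: str
    risk_score: float
    confidence: float
    status: str = "auto"  # auto | pending_review | overridden
    audit_trail: list = field(default_factory=list)


def route(decision: Decision, review_queue: list) -> Decision:
    """Hold high-risk or low-confidence decisions for human review."""
    if decision.risk_score >= RISK_THRESHOLD or decision.confidence < CONFIDENCE_FLOOR:
        decision.status = "pending_review"
        review_queue.append(decision)
    return decision


def override(decision: Decision, reviewer: str, new_output: str, reason: str) -> Decision:
    """Apply a human override; a documented reason is mandatory."""
    if not reason.strip():
        raise ValueError("an override must record a reason")
    decision.audit_trail.append({
        "at": datetime.now(timezone.utc).isoformat(),
        "reviewer": reviewer,
        "was": decision.output,
        "now": new_output,
        "reason": reason,
    })
    decision.output, decision.status = new_output, "overridden"
    return decision
```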

### Redress Patterns

| Pattern | Problem | Solution |
|---|---|---|
| **Adverse Action Explanation** | Affected individuals cannot understand why they were denied | Generate plain-language explanations with specific contributing factors |
| **Appeal Pathway** | No mechanism for contesting AI decisions | Implement a formal appeal process with human review and documented outcomes |
| **Impact Audit** | Unknown whether AI system is causing disproportionate harm | Conduct regular disparate-impact audits across protected characteristics |
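
One way to operationalize the **Impact Audit** pattern is a periodic four-fifths-rule screen over logged decision outcomes. The sketch below assumes simple `(group, approved)` pairs and the conventional 0.8 threshold; it is a screening heuristic for flagging disparities, not a legal determination of disparate impact:

```python
# impact_audit.py -- sketch of the Impact Audit pattern using the
# four-fifths rule, a common first screen for disparate impact.
# The 0.8 threshold follows convention; set it with compliance input.
from collections import Counter


def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Map (group, was_approved) pairs to an approval rate per group."""
    totals, approved = Counter(), Counter()
    for group, was_approved in outcomes:
        totals[group] += 1
        approved[group] += was_approved
    return {group: approved[group] / totals[group] for group in totals}


def four_fifths_check(rates: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Flag groups whose rate falls below `threshold` times the best rate."""
    best = max(rates.values())
    if best == 0:
        return []
    return [group for group, rate in rates.items() if rate / best < threshold]


# Example: group B is approved half as often as group A, so it is flagged.
rates = selection_rates([("A", True), ("A", True), ("B", True), ("B", False)])
flagged = four_fifths_check(rates)  # -> ["B"]  (0.5 / 1.0 < 0.8)
```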

---

## NIST AI RMF Mapping

See [docs/nist-rmf-mapping.md](docs/nist-rmf-mapping.md) for a full mapping of each pattern to NIST AI RMF functions and subcategories.

---

## Ecosystem

| Repository | Purpose |
|---|---|
| [enterprise-ai-governance-playbook](https://github.com/simaba/enterprise-ai-governance-playbook) | End-to-end governance playbook |
| [ai-release-readiness-checklist](https://github.com/simaba/ai-release-readiness-checklist) | Release gate framework + CLI |
| [ai-risk-taxonomy](https://github.com/simaba/ai-risk-taxonomy) | Structured AI risk taxonomy |
| [nist-ai-rmf-implementation-guide](https://github.com/simaba/nist-ai-rmf-implementation-guide) | NIST AI RMF practitioner guide |
| [awesome-ai-governance](https://github.com/simaba/awesome-ai-governance) | Curated governance resources |

*Maintained by [Sima Bagheri](https://github.com/simaba) · Connect on [LinkedIn](https://www.linkedin.com/in/simabagheri)*