Commit f081940: Add Annex B for AI Concepts in ISO/IEC 42001:2023

This document serves as an implementation reference guide for ISO/IEC 42001:2023, detailing AI concepts, their characteristics, types, risks, objectives, principles, lifecycle phases, and terminology relevant to AI governance.

1 file changed: 12-ANNEX-B-AI-CONCEPTS.md (171 additions, 0 deletions)
# Annex B — AI Concepts and Their Application to ISO/IEC 42001

## ISO/IEC 42001:2023 | Informative Reference Guide

> **Note:** This document is an implementation reference guide. It is NOT a reproduction of the ISO/IEC 42001:2023 standard. Users must obtain a licensed copy of the standard from ISO (iso.org) for the full normative text.

---
## Purpose

ISO/IEC 42001:2023 Annex B provides guidance on how AI-specific concepts referenced in the standard should be understood and applied in an AI management system (AIMS) context. This document summarises those concepts and provides practical implementation notes to help practitioners apply them correctly.

---
## B.1 — AI Systems and Their Characteristics

### What is an AI System?

An AI system is a machine-based system that, for a given set of objectives, makes predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.

Key characteristics relevant to the AIMS:

- **Autonomy** — AI systems can act without constant human direction. The degree of autonomy varies from none (pure automation) to full (unsupervised decision-making).
- **Adaptivity** — Many AI systems learn from data and change their behaviour over time. This creates ongoing governance requirements that static software systems do not have.
- **Opacity** — Complex AI models (particularly deep learning) may not be fully explainable, creating transparency and accountability challenges.
- **Scale** — AI systems can affect large numbers of people simultaneously, amplifying both beneficial and harmful effects.

### AIMS Implication

The AIMS must account for the full lifecycle of AI systems — not just deployment, but design, development, monitoring, and decommissioning. See `AI-LIFECYCLE-MANAGEMENT-PROCEDURE.md`.

---
## B.2 — Types of AI Systems Encountered in AIMS Scope

| AI System Type | Description | Typical AIMS Considerations |
|----------------|-------------|-----------------------------|
| Machine Learning (supervised) | Learns from labelled training data to make predictions | Bias in training data; performance drift; fairness evaluation |
| Machine Learning (unsupervised) | Finds patterns in unlabelled data | Interpretability; validation of clustering quality |
| Reinforcement Learning | Learns by trial and error with rewards | Safety constraints; unexpected emergent behaviour |
| Large Language Models (LLMs) | Generate text, code, and analysis from prompts | Hallucination risk; prompt injection; data leakage |
| Computer Vision | Interprets images and video | Bias across demographic groups; privacy (biometrics) |
| Recommender Systems | Suggest content, products, or actions | Filter bubbles; manipulation risk; transparency |
| Decision Support Systems | Assist human decision-makers | Over-reliance; automation bias; explainability |
| Autonomous Agents | Take sequences of actions without human intervention | Scope containment; oversight mechanisms; fallback |

---
44+
45+
## B.3 — AI Risk Concepts
46+
47+
### Risk vs. Traditional IT Risk
48+
AI risk is distinct from traditional IT risk in several important ways:
49+
50+
| Dimension | Traditional IT | AI-Specific |
51+
|-----------|--------------|-------------|
52+
| Failure mode | Deterministic — error or no error | Probabilistic — wrong with varying confidence |
53+
| Validation | Test all paths to verify behaviour | Cannot test all possible inputs |
54+
| Drift | Software doesn't change itself | AI models can degrade over time |
55+
| Bias | Not inherent | Can encode and amplify societal biases |
56+
| Explainability | Code logic is auditable | Neural network decisions may be opaque |
57+
| Adversarial vulnerability | Patch-based security | Adversarial examples; prompt injection |
58+
59+
### Key AI Risk Concepts for AIMS
60+
61+
**Data Risk** — Risks arising from the data used to train, validate, and operate AI systems. Includes data quality, data bias, data provenance, and data protection.
62+
63+
**Model Risk** — Risks arising from the AI model itself. Includes model performance, model drift, model bias, model opacity, and adversarial vulnerability.
64+
65+
**Deployment Risk** — Risks arising from how AI systems are deployed and integrated. Includes integration failures, scaling issues, and inadequate human oversight.
66+
67+
**Operational Risk** — Risks arising during ongoing operation. Includes monitoring gaps, incident response failures, and supply chain risks.
68+
69+
**Societal Risk** — Broader risks to society from AI. Includes discrimination, surveillance, manipulation, and concentration of power.
70+
71+
---
## B.4 — AI Objectives and Their Relationship to AIMS Objectives

ISO/IEC 42001:2023 Clause 6.2 requires the organisation to establish AI objectives. Annex B provides guidance on how these relate to responsible AI principles.

### Recommended AI Objective Categories

| Objective Category | Example Objectives | Relevant Annex A Controls |
|--------------------|--------------------|---------------------------|
| Fairness | Reduce demographic disparity in AI outcomes to < 5% | A.4.7 |
| Transparency | 100% of AI systems have published model cards | A.4.8, A.9.2 |
| Accountability | 100% of AI systems have named owners | A.2.3, A.4.10 |
| Safety | Zero high-severity AI incidents per quarter | A.4.4, A.6.2.13 |
| Privacy | Zero GDPR violations related to AI | A.4.9 |
| Performance | All AI systems operating within 5% of baseline | A.4.3, A.6.2.10 |
| Regulatory | Full EU AI Act compliance before Aug 2026 | A.10.2 |

See `AI-OBJECTIVES-REGISTER.md` for the live objectives register.
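The fairness objective above sets a quantitative target (demographic disparity < 5%). As an illustration of how such a target could be measured, the sketch below computes the demographic parity difference, i.e. the gap in positive-outcome rates between groups. The function name and the choice of metric are assumptions for illustration, not part of the standard.

```python
def demographic_parity_difference(outcomes, groups):
    """Gap between the highest and lowest positive-outcome rates across groups.

    outcomes: list of 0/1 decisions; groups: parallel list of group labels.
    Returns 0.0 for perfect parity. (Illustrative metric, not from the standard.)
    """
    counts = {}
    for outcome, group in zip(outcomes, groups):
        positives, total = counts.get(group, (0, 0))
        counts[group] = (positives + outcome, total + 1)
    rates = {g: pos / total for g, (pos, total) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Group A selects 3/5 (60%), group B selects 2/5 (40%): a 20% gap,
# well above a 5% objective.
outcomes = [1, 1, 1, 0, 0, 1, 1, 0, 0, 0]
groups = ["A"] * 5 + ["B"] * 5
print(f"disparity: {demographic_parity_difference(outcomes, groups):.2f}")  # disparity: 0.20
```

In practice this check would run per protected attribute on production decision logs, feeding the monitoring controls referenced in the table (A.4.7).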
---

## B.5 — Responsible AI Principles and Their Annex A Mapping

| Responsible AI Principle | Primary Annex A Domain | Key Controls |
|--------------------------|------------------------|--------------|
| Fairness / Non-discrimination | A.4 | A.4.7 |
| Transparency / Explainability | A.4, A.9 | A.4.8, A.9.2 |
| Accountability | A.2, A.4 | A.2.3, A.4.10 |
| Human oversight and control | A.3, A.6 | A.3.2, A.6.2.9 |
| Privacy | A.4 | A.4.9 |
| Safety | A.4, A.6 | A.4.4, A.6.2.13 |
| Security | A.4, A.6 | A.4.5 |
| Reliability / Robustness | A.4, A.6 | A.4.3, A.6.2.10 |
| Societal benefit | A.7 | A.7.2 |
| Environmental sustainability | A.7 | A.7.2 |

---
## B.6 — AI Lifecycle Phases

ISO/IEC 42001:2023 uses a consistent lifecycle model for AI systems. Understanding which phase an AI system is in determines which controls apply.

| Phase | Description | Key Controls | Key Documents |
|-------|-------------|--------------|---------------|
| Design | Define purpose, requirements, responsible AI design | A.6.1.2 | AI-SYSTEM-IMPACT-ASSESSMENT.md |
| Data | Collect, prepare, and govern training/operational data | A.6.2.2 | AI-RISK-REGISTER.md |
| Development | Build, train, and validate the AI model | A.6.2.4, A.6.2.5 | AI-DEPLOYMENT-CHECKLIST.md |
| Deployment | Release the AI system to production | A.6.2.7, A.6.3 | AI-DEPLOYMENT-CHECKLIST.md |
| Operation | Monitor, maintain, update | A.6.2.8, A.6.2.10 | AI-PERFORMANCE-MONITORING-PLAN.md |
| Change | Modify the system materially | A.6.2.11 | AI-CHANGE-CONTROL-PROCEDURE.md |
| Decommission | Retire the AI system | A.6.2.12 | AI-LIFECYCLE-MANAGEMENT-PROCEDURE.md |
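Because the phase a system is in determines which controls apply, the phase-to-controls mapping in the table lends itself to a simple lookup that inventory tooling can query. This is an illustrative sketch; the function name is hypothetical, and the control identifiers are taken directly from the table above.

```python
# Lifecycle phase -> key Annex A controls, as listed in the table above.
LIFECYCLE_CONTROLS = {
    "design": ["A.6.1.2"],
    "data": ["A.6.2.2"],
    "development": ["A.6.2.4", "A.6.2.5"],
    "deployment": ["A.6.2.7", "A.6.3"],
    "operation": ["A.6.2.8", "A.6.2.10"],
    "change": ["A.6.2.11"],
    "decommission": ["A.6.2.12"],
}

def controls_for_phase(phase: str) -> list:
    """Return the key Annex A controls for a lifecycle phase (case-insensitive)."""
    key = phase.strip().lower()
    if key not in LIFECYCLE_CONTROLS:
        raise ValueError(f"Unknown lifecycle phase: {phase!r}")
    return LIFECYCLE_CONTROLS[key]

print(controls_for_phase("Deployment"))  # ['A.6.2.7', 'A.6.3']
```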
---

## B.7 — AI System Classification for Risk-Based Controls

A risk-tiering approach helps apply proportionate controls. Recommended classification:

| Tier | Description | Examples | Controls Intensity |
|------|-------------|----------|--------------------|
| Tier 1 — Critical | High-risk AI; affects individuals' fundamental rights, safety, or significant decisions | Credit scoring, recruitment screening, medical AI | Full controls; highest oversight; quarterly monitoring |
| Tier 2 — High | Significant AI; material impact on individuals or operations | Customer service AI, fraud detection, HR analytics | Strong controls; regular monitoring; DPIA required |
| Tier 3 — Medium | Moderate AI; limited individual impact | Internal productivity tools, content moderation aids | Standard controls; annual monitoring |
| Tier 4 — Low | Minimal AI; no meaningful individual impact | Spam filters, autocomplete, search ranking (internal) | Basic controls; periodic review |

Note: EU AI Act risk classification (prohibited, high-risk, limited-risk, minimal-risk) must also be applied where an EU nexus exists. See `LEGAL-REGULATORY-REQUIREMENTS-REGISTER.md`.
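The tier definitions above can be expressed as an ordered rule that checks the most severe criterion first. The sketch below is a hypothetical helper, not part of the standard; the boolean attributes are simplifying assumptions standing in for a fuller impact assessment.

```python
def classify_ai_system(affects_fundamental_rights: bool,
                       material_individual_impact: bool,
                       limited_individual_impact: bool) -> str:
    """Assign a risk tier per the table above, most severe criterion first.

    Attribute names are illustrative; a real assessment would draw on the
    AI system impact assessment rather than three booleans.
    """
    if affects_fundamental_rights:
        return "Tier 1"   # Critical: fundamental rights, safety, significant decisions
    if material_individual_impact:
        return "Tier 2"   # High: material impact on individuals or operations
    if limited_individual_impact:
        return "Tier 3"   # Medium: limited individual impact
    return "Tier 4"       # Low: no meaningful individual impact

# A recruitment-screening model affects fundamental rights -> Tier 1
print(classify_ai_system(True, False, False))  # Tier 1
```

Ordering matters: a system meeting several criteria must land in the most stringent tier, which is why the checks cascade from Tier 1 downward.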
---

## B.8 — Key AI Terminology Quick Reference

| Term | Definition | AIMS Relevance |
|------|------------|----------------|
| Algorithm | Set of rules or instructions for solving a problem | Foundation of AI systems |
| Bias (AI) | Systematic errors in AI outputs due to flawed training data or model design | Fairness control A.4.7 |
| Concept drift | Change in the statistical relationship between input and output over time | Monitoring A.6.2.10 |
| Data drift | Change in the statistical distribution of input data over time | Monitoring A.6.2.10 |
| Explainability | Ability to explain an AI decision in understandable terms | Transparency A.4.8 |
| Feature | An input variable used by an AI model | Data governance A.6.2.2 |
| Hallucination | AI generating confident but incorrect outputs (especially LLMs) | Reliability A.4.3 |
| Human-in-the-loop | Human reviews and approves each AI decision | Human oversight A.3.2 |
| Model card | Documentation of an AI model's design, performance, and limitations | Documentation A.9.2 |
| Overfitting | Model performs well on training data but poorly on new data | Testing A.6.2.5 |
| Prompt injection | Malicious input designed to override AI system instructions | Security A.4.5 |
| Training data | Data used to train (build) an AI model | Data A.6.2.2 |
| Transfer learning | Using a pre-trained model as a starting point for a new task | Acquisition A.6.2.3 |
| Zero-day (AI) | Novel attack or failure mode not yet known or defended against | Security A.4.5 |
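Data drift, defined above as a change in the statistical distribution of input data, is commonly quantified with the Population Stability Index (PSI). The sketch below assumes inputs have already been binned into frequency counts; the 0.1 / 0.2 thresholds in the docstring are industry rules of thumb, not requirements of the standard.

```python
import math

def population_stability_index(expected, actual, eps=1e-6):
    """PSI between two binned distributions given as lists of per-bin counts.

    Rule of thumb: < 0.1 stable, 0.1-0.2 moderate shift, > 0.2 significant
    shift warranting investigation (thresholds are conventional, not normative).
    """
    expected_total, actual_total = sum(expected), sum(actual)
    psi = 0.0
    for e, a in zip(expected, actual):
        # Clamp proportions away from zero so the log term stays defined.
        e_pct = max(e / expected_total, eps)
        a_pct = max(a / actual_total, eps)
        psi += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return psi

baseline = [25, 25, 25, 25]   # training-time distribution across 4 bins
live = [40, 30, 20, 10]       # production distribution across the same bins
print(f"PSI = {population_stability_index(baseline, live):.3f}")
```

A monitoring plan (A.6.2.10) would compute this per feature on a schedule and raise an alert when the index crosses the agreed threshold.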
---

## Obtaining the Full Standard

To access the complete normative text of ISO/IEC 42001:2023, including Annex B in full, purchase a licensed copy from:

- ISO Store: [iso.org/store](https://www.iso.org/store.html)
- BSI: [bsigroup.com](https://www.bsigroup.com)
- ANSI: [ansi.org](https://www.ansi.org)

---

*ISO/IEC 42001:2023 AI Governance Toolkit | Annex B Reference Guide | See root README.md for full index*
