Commit 4c4bc31

Added Unintended Cascading Failures
1 parent cbe778a commit 4c4bc31


49 files changed (+17239, −128 lines)

dictionary.txt

Lines changed: 6 additions & 0 deletions

@@ -387,3 +387,9 @@ underscoring
 deepfake
 deepfakes
 grampian
+interpretability
+unsupervised
+misalign
+explainability
+hallucinations
+proactive
Lines changed: 21 additions & 0 deletions

---
title: Ecosystem Diversity
description: Encouraging the development of multiple, independent AI models instead of relying on a single dominant system.
featured:
  class: c
  element: '<action>Ecosystem Diversity</action>'
tags:
  - Ecosystem Diversity
  - AI Practice
practice:
  mitigates:
    - tag: Loss Of Diversity
      reason: "Diversified AI systems reduce systemic risks and encourage innovation."
      efficacy: High
---

<PracticeIntro details={frontMatter} />

- Encouraging the development of multiple, independent AI models instead of relying on a single dominant system.

- Requires regulatory incentives or decentralised AI development initiatives.
Lines changed: 37 additions & 0 deletions

---
title: Global AI Governance
description: International cooperation is necessary to prevent AI firms from evading national regulations by relocating to jurisdictions with lax oversight.
featured:
  class: c
  element: '<action>Global AI Governance</action>'
tags:
  - Global AI Governance
  - AI Practice
practice:
  mitigates:
    - tag: Synthetic Intelligence Rivalry
      reason: "Can provide international oversight, but effectiveness depends on cooperation among nations."
      efficacy: Medium
    - tag: Social Manipulation
      reason: "Encourages best practices and self-regulation, but relies on voluntary compliance without legal backing."
      efficacy: Medium
---

- Agreements between countries, similar to financial regulations, could establish shared standards for AI ethics, accountability, and human involvement in AI-controlled economies.

- Challenging to implement due to differing national interests, enforcement issues, and political resistance.

- Industry-wide codes of conduct to discourage manipulative AI.

- Incentivise designers to embed fairness and user consent into algorithmic systems.

- Professional bodies and industry coalitions can quickly adopt and publicise guidelines, though ensuring universal adherence remains a challenge. Firms have varying incentives, budgets, and ethical priorities, making universal buy-in elusive.

## Examples

- [Understanding artificial intelligence ethics and safety - Turing Institute](https://www.turing.ac.uk/sites/default/files/2019-06/understanding_artificial_intelligence_ethics_and_safety.pdf)

- [AI Playbook for the UK Government](https://www.gov.uk/government/publications/ai-playbook-for-the-uk-government/artificial-intelligence-playbook-for-the-uk-government-html#principles)

- [DOD Adopts Ethical Principles for Artificial Intelligence](https://www.defense.gov/News/Releases/Release/Article/2091996/dod-adopts-ethical-principles-for-artificial-intelligence/)

docs/ai/Practices/Human-In-The-Loop.md

Lines changed: 13 additions & 1 deletion

@@ -6,7 +6,7 @@ featured:
   element: '<action>Human In The Loop</action>'
 tags:
   - Human In The Loop
-  - Practice
+  - AI Practice
 practice:
   mitigates:
     - tag: Loss Of Human Control
@@ -21,3 +21,15 @@
 - **Efficacy:** Medium – Reduces risk by limiting autonomy on high-stakes tasks; however, humans may become complacent or fail to intervene effectively if over-trusting AI.
 - **Ease of Implementation:** Moderate – Policy, regulatory standards, and user training are needed to embed human oversight effectively.
+
+## Types Of Human In The Loop
+
+(edit this)
+
+There are three broad types of control humans can exercise:
+
+- Semi-autonomous operation, where the machine performs a task and then stops and waits for approval from the human operator before continuing. This control type is often referred to as “human in the loop.”
+- Supervised autonomous operation, where the machine, once activated, performs a task under the supervision of a human and will continue performing the task unless the human operator intervenes to halt its operation. This control type is often referred to as “human on the loop.”
+- Fully autonomous operation, where the machine, once activated, performs a task and the human operator does not have the ability to supervise its operation and intervene in the event of system failure. This control type is often referred to as “human out of the loop.”
+
+- from https://s3.us-east-1.amazonaws.com/files.cnas.org/hero/documents/CNAS_Autonomous-weapons-operational-risk.pdf
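The three control modes above can be sketched as an approval policy wrapped around a task-running agent. This is an illustrative sketch, not part of the commit; `ControlMode`, `run_task`, `approve`, and `halted` are hypothetical names standing in for a real operator interface.

```python
from enum import Enum, auto

class ControlMode(Enum):
    HUMAN_IN_THE_LOOP = auto()   # semi-autonomous: stop and wait for approval
    HUMAN_ON_THE_LOOP = auto()   # supervised: run, but operator may halt
    HUMAN_OUT_OF_LOOP = auto()   # fully autonomous: no supervision possible

def run_task(task, mode, approve, halted):
    """approve(task) and halted(task) stand in for the human operator."""
    if mode is ControlMode.HUMAN_IN_THE_LOOP:
        # Machine pauses and waits for explicit approval before continuing.
        if not approve(task):
            return "aborted"
    elif mode is ControlMode.HUMAN_ON_THE_LOOP:
        # Machine proceeds unless the supervising human intervenes.
        if halted(task):
            return "halted"
    # HUMAN_OUT_OF_LOOP: no check at all once activated.
    return f"executed:{task}"

print(run_task("deploy", ControlMode.HUMAN_IN_THE_LOOP,
               approve=lambda t: False, halted=lambda t: False))
# -> aborted
```

The design choice mirrors the taxonomy: the only difference between the three modes is where (and whether) a human decision gates execution.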
Lines changed: 20 additions & 0 deletions

---
title: Interpretability
description: Developing tools to analyse AI decision-making processes and detect emergent behaviours before they become risks.
featured:
  class: c
  element: '<action>Interpretability</action>'
tags:
  - Interpretability
  - AI Practice
practice:
  mitigates:
    - tag: Emergent Behaviour
      reason: "Tools that expose AI decision-making processes can surface emergent behaviours before they become risks."
---

<PracticeIntro details={frontMatter} />

- Helps understand AI behaviour but does not prevent emergent capabilities from appearing.

- Research in explainable AI is advancing, but understanding deep learning models remains complex.

docs/ai/Practices/Kill-Switch.md

Lines changed: 1 addition & 1 deletion

@@ -6,7 +6,7 @@ featured:
   element: '<action>Kill Switch Mechanism</action>'
 tags:
   - Kill Switch
-  - Practice
+  - AI Practice
 practice:
   mitigates:
     - tag: Loss Of Human Control
Lines changed: 23 additions & 0 deletions

---
title: Multi-Stakeholder Oversight
description: Ensuring that AI governance involves multiple institutions, including governments, researchers, and civil society, to prevent monopolisation.
featured:
  class: c
  element: '<action>Multi-Stakeholder Oversight</action>'
tags:
  - Multi-Stakeholder Oversight
  - AI Practice
practice:
  mitigates:
    - tag: Loss Of Diversity
      reason: "Ensuring that AI governance involves multiple institutions, including governments, researchers, and civil society, to prevent monopolisation."
      efficacy: Medium
---

<PracticeIntro details={frontMatter} />

- Ensuring that AI governance involves multiple institutions, including governments, researchers, and civil society, to prevent monopolisation.

- Helps distribute AI power more equitably but may struggle with enforcement.

- Requires cooperation between multiple sectors, which can be slow and politically complex.
Lines changed: 21 additions & 0 deletions

---
title: National AI Regulation
description: Governments can implement policies that ensure AI-driven firms remain accountable to human oversight.
featured:
  class: c
  element: '<action>National AI Regulation</action>'
tags:
  - National AI Regulation
  - AI Practice
practice:
  mitigates:
    - tag: Synthetic Intelligence Rivalry
      reason: "Government policies can strongly influence AI firms' behaviour if enforced effectively."
      efficacy: High
---

<PracticeIntro details={frontMatter} />

- Governments can implement policies that ensure AI-driven firms remain accountable to human oversight. This could include requiring AI systems to maintain transparency, adhere to ethical standards, and uphold employment obligations by ensuring that a minimum level of human involvement remains in corporate decision-making.

- Requires strong legal frameworks, enforcement mechanisms, and political will.
Lines changed: 29 additions & 0 deletions

---
title: Public Awareness
description: Equip citizens with media literacy skills to spot deepfakes and manipulation attempts.
featured:
  class: c
  element: '<action>Public Awareness</action>'
tags:
  - Public Awareness
  - AI Practice
practice:
  mitigates:
    - tag: Social Manipulation
      reason: "Empowered, media-savvy populations are significantly harder to manipulate. However, scaling efforts to entire populations is a substantial challenge given diverse educational, cultural, and socioeconomic barriers."
      efficacy: Medium
---

<PracticeIntro details={frontMatter} />

- Equip citizens with media literacy skills to spot deepfakes and manipulation attempts.

- Encourage public understanding of how personal data can be exploited by AI-driven systems.

- While public outreach is feasible, achieving wide coverage and sustained engagement can be resource-intensive. Overcoming entrenched biases, misinformation echo chambers, and public apathy is an uphill battle, particularly if there’s no supportive policy or consistent funding.

## Examples

- [News Literacy Project](https://newslit.org)

- [UNESCO Media and Information Literacy](https://www.unesco.org/en/media-information-literacy)

docs/ai/Practices/Replication-Control.md

Lines changed: 10 additions & 8 deletions

@@ -1,27 +1,29 @@
 ---
 title: Replication Control
-description: TBD.
+description: Replication control becomes relevant when an AI system can duplicate itself—or be duplicated—beyond the reach of any central authority.
 featured:
   class: c
   element: '<action>Replication Control</action>'
 tags:
   - Replication Control
-  - Practice
+  - AI Practice
 practice:
   mitigates:
     - tag: Loss Of Human Control
       reason: "An explicit interruption capability can avert catastrophic errors or runaway behaviours"
+    - tag: Emergent Behaviour
+      reason: "Preventing self-replicating AI or unsupervised proliferation of emergent behaviours by implementing strict replication oversight."
+      efficacy: High
 ---
 
+<PracticeIntro details={frontMatter} />
 
+- Replication control becomes relevant when an AI system can duplicate itself—or be duplicated—beyond the reach of any central authority (analogous to a computer virus—though with potentially far greater autonomy and adaptability).
 
+- An organization/person builds a very capable AI with some misaligned objectives. If they distribute its model or code openly, it effectively becomes “in the wild.”
 
+- Could controls be put in place to prevent this from happening? TODO: figure this out.
 
-### Replication Control
+- In open-source communities or decentralised systems, controlling replication requires broad consensus and technical enforcement measures.
 
-- Replication control becomes relevant when an AI system can duplicate itself—or be duplicated—beyond the reach of any central authority (analogous to a computer virus—though with potentially far greater autonomy and adaptability).
-- An organization/person builds a very capable AI with some misaligned objectives. If they distribute its model or code openly, it effectively becomes “in the wild.”
-- Could controls be put in place to prevent this from happening? TODO: figure this out.
-- **Efficacy:** Medium – Limits the spread of potentially rogue AI copies.
-- **Ease of Implementation:** Low – In open-source communities or decentralized systems, controlling replication requires broad consensus and technical enforcement measures.
