
Commit 2b5dc13: First draft

1 parent 291a930

File tree: 7 files changed (+19 / -24 lines)


docs/ai/Practices/Global-AI-Governance.md

Lines changed: 2 additions & 0 deletions

@@ -18,6 +18,8 @@ practice:
   - tag: Synthetic Intelligence With Malicious Intent
     reason: International agreements restricting AI weaponization and requiring human oversight for all military AI operations.
 ---
+
+
 <PracticeIntro details={frontMatter} />

 - Agreements between countries, similar to financial regulations, could establish shared standards for AI ethics, accountability, and human involvement in AI-controlled economies.

docs/ai/Practices/Human-In-The-Loop.md

Lines changed: 4 additions & 11 deletions

@@ -21,17 +21,10 @@ practice:
 - AI may suggest diagnoses or treatments, but a certified professional reviews and confirms before enacting them. In the above NHS Grampian example, the AI is augmenting human decision making with a third opinion, rather than replacing human judgement altogether (yet).
 - Some proposals mandate that human operators confirm critical actions (e.g., missile launches), preventing AI from unilaterally making life-or-death decisions. This might work in scenarios where response time isn't a factor.

-- **Efficacy:** Medium – Reduces risk by limiting autonomy on high-stakes tasks; however, humans may become complacent or fail to intervene effectively if over-trusting AI.
-- **Ease of Implementation:** Moderate – Policy, regulatory standards, and user training are needed to embed human oversight effectively.
-
-
 ## Types Of Human In The Loop

-(edit this)
-
-There are three broad types of control humans can exercise:
-• Semi-autonomous operation, where the machine performs a task and then stops and waits for approval from the human operator before continuing. This control type is often referred to as "human in the loop."
-• Supervised autonomous operation, where the machine, once activated, performs a task under the supervision of a human and will continue performing the task unless the human operator intervenes to halt its operation. This control type is often referred to as "human on the loop."
-• Fully autonomous operation, where the machine, once activated, performs a task and the human operator does not have the ability to supervise its operation and intervene in the event of system failure. This control type is often referred to as "human out of the loop."
+> - **Semi-autonomous operation**: the machine performs a task and then stops and waits for approval from the human operator before continuing. This control type is often referred to as "human in the loop."
+> - **Supervised autonomous operation**: the machine, once activated, performs a task under the supervision of a human and will continue performing the task unless the human operator intervenes to halt its operation. This control type is often referred to as "human on the loop."
+> - **Fully autonomous operation**: the machine, once activated, performs a task and the human operator does not have the ability to supervise its operation and intervene in the event of system failure. This control type is often referred to as "human out of the loop."

-- from https://s3.us-east-1.amazonaws.com/files.cnas.org/hero/documents/CNAS_Autonomous-weapons-operational-risk.pdf
+From: https://s3.us-east-1.amazonaws.com/files.cnas.org/hero/documents/CNAS_Autonomous-weapons-operational-risk.pdf
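The three control types added above differ only in where (and whether) a human check sits in the machine's task loop, which a short sketch makes concrete. This is a minimal illustration only, not from the CNAS report: `ControlMode`, `run_task`, `operator_approves`, and `operator_halts` are invented names standing in for whatever interface a real system would give its operator.

```python
from enum import Enum

class ControlMode(Enum):
    HUMAN_IN_THE_LOOP = "semi-autonomous"        # stop and wait for approval
    HUMAN_ON_THE_LOOP = "supervised autonomous"  # continue unless halted
    HUMAN_OUT_OF_THE_LOOP = "fully autonomous"   # no supervision possible

def run_task(steps, mode, operator_approves, operator_halts):
    """Run a sequence of task steps under the given control mode.

    `operator_approves(step)` and `operator_halts(step)` are hypothetical
    callbacks standing in for a real human-operator interface.
    """
    for step in steps:
        if mode is ControlMode.HUMAN_IN_THE_LOOP:
            # Semi-autonomous: the machine stops and waits for approval
            # before every step ("human in the loop").
            if not operator_approves(step):
                return "stopped: approval withheld"
        elif mode is ControlMode.HUMAN_ON_THE_LOOP:
            # Supervised: the machine carries on unless the operator
            # actively intervenes ("human on the loop").
            if operator_halts(step):
                return "stopped: operator intervened"
        # HUMAN_OUT_OF_THE_LOOP: no check at all -- once started, the
        # operator cannot supervise or intervene.
        step()
    return "completed"

# Example: an operator who rubber-stamps every request.
result = run_task(
    steps=[lambda: print("step executed")] * 3,
    mode=ControlMode.HUMAN_IN_THE_LOOP,
    operator_approves=lambda step: True,
    operator_halts=lambda step: False,
)
print(result)  # -> completed
```

The degenerate operator in the example illustrates the complacency failure mode: if the human approves everything without review, semi-autonomous operation behaves exactly like fully autonomous operation.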

docs/ai/Practices/Kill-Switch.md

Lines changed: 2 additions & 7 deletions

@@ -17,13 +17,8 @@ practice:

 <PracticeIntro details={frontMatter} />

-### Kill-Switch Mechanisms
-
-- **Examples:**
+## Examples
+
 - **Google DeepMind’s ‘Big Red Button’ concept** (2016), proposed as a method to interrupt a reinforcement learning AI without it learning to resist interruption.

 - **Hardware Interrupts in Robotics:** Physical or software-based emergency stops that immediately terminate AI operation.
-
-- **Efficacy:** High – An explicit interruption capability can avert catastrophic errors or runaway behaviours, but it's more likely that they will be employed once the error has started, in order to prevent further harm.
-- **Ease of Implementation:** Medium – Requires robust design and consistent testing to avoid workarounds by advanced AI.
-
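On the software side, the basic plumbing of a kill switch is a shared interrupt flag that the agent's loop must consult and that an external watcher can set. A minimal sketch in Python follows; the names `KILL_SWITCH` and `agent_loop` are illustrative, and this shows only the interrupt mechanism, not DeepMind's safe-interruptibility result (whose point is that a learning agent must not be incentivised to resist or disable the flag).

```python
import threading
import time

# The "big red button": any supervising process or thread can press it.
KILL_SWITCH = threading.Event()

def agent_loop():
    """Illustrative agent loop that checks the interrupt before each step."""
    while not KILL_SWITCH.is_set():
        # ... one unit of agent work would go here ...
        time.sleep(0.1)
    print("agent halted by kill switch")

agent = threading.Thread(target=agent_loop)
agent.start()

time.sleep(1.0)    # let the agent run briefly
KILL_SWITCH.set()  # press the button
agent.join(timeout=2.0)
```

A hardware emergency stop is conceptually the same check, except the "flag" is a physical power or actuation cut that the software cannot veto, which is why robotics practice favours it over purely software interrupts.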

docs/ai/Practices/National-AI-Regulation.md

Lines changed: 5 additions & 1 deletion

@@ -12,10 +12,14 @@ practice:
   - tag: Synthetic Intelligence Rivalry
     reason: "Government policies can strongly influence AI firms' behavior if enforced effectively."
     efficacy: High
+  - tag: Loss Of Diversity
+    reason: "Antitrust Regulations – Breaking up AI monopolies."
 ---

 <PracticeIntro details={frontMatter} />

 - Governments can implement policies that ensure AI-driven firms remain accountable to human oversight. This could include requiring AI systems to maintain transparency, adhere to ethical standards, and uphold employment obligations by ensuring that a minimum level of human involvement remains in corporate decision-making.

-- Requires strong legal frameworks, enforcement mechanisms, and political will.
+- Requires strong legal frameworks, enforcement mechanisms, and political will.
+
+- Antitrust Regulations – Breaking up AI monopolies.

docs/ai/Start.md

Lines changed: 3 additions & 3 deletions

@@ -1,5 +1,5 @@
 ---
-title: Artificial Intelligence Risk
+title: Artificial Intelligence Risk (Draft)
 description: Risk-First Track of articles on Artificial Intelligence Risk

 featured:

@@ -21,8 +21,8 @@ A sequence looking at societal-level risks due to Artificial Intelligence (AI).

 ## Threats

-<TagList filter="ai" tag="AI Threats" />
+<TagList tag="AI Threats" />

 ## Practices

-<TagList filter="ai" tag="AI Practice" />
+<TagList tag="AI Practice" />

docs/ai/Threats/Loss-Of-Diversity.md

Lines changed: 3 additions & 1 deletion

@@ -19,7 +19,7 @@ A lack of diversity could create system-wide vulnerabilities, where a single fla

 ## Description

-
+AI development is increasingly controlled by a small number of corporations and governments, leading to monopolistic control over critical systems.

 ## Sources

@@ -35,6 +35,8 @@ A lack of diversity could create system-wide vulnerabilities, where a single fla

 - **Lack of Innovation Due to Model Homogeneity:** When the same AI architectures are used across different sectors, it stifles alternative approaches that might better serve specialized needs. A uniform AI landscape risks optimizing for narrow commercial objectives rather than the diverse interests of different populations and industries.

+- **Mass Surveillance & Social Control:** Governments and corporations use AI to track and influence populations. [Example: Clearview AI](https://www.clearview.ai)
+
 ## Can Risk Management Address This Risk?

 Partially. Traditional risk management can identify and highlight the dangers of AI monoculture, but effective mitigation requires strong regulatory intervention and industry-wide commitment—both of which are difficult to enforce under current economic incentives.

docs/ai/Threats/Superintelligence-With-Malicious-Intent.md

Lines changed: 0 additions & 1 deletion

@@ -15,7 +15,6 @@ part_of: AI Threats

 <AIThreatIntro fm={frontMatter} />

-
 ## Risk Score: High

 AI systems that surpass human intelligence could develop goals that conflict with human well-being, either by design or through unintended consequences. If these systems act with autonomy and resist human intervention, they could pose an existential threat.
