docs/ai/Practices/Global-AI-Governance.md (2 additions, 0 deletions)

```diff
@@ -18,6 +18,8 @@ practice:
   - tag: Synthetic Intelligence With Malicious Intent
     reason: International agreements restricting AI weaponization and requiring human oversight for all military AI operations.
 ---
+
+
 <PracticeIntro details={frontMatter} />
 
 - Agreements between countries, similar to financial regulations, could establish shared standards for AI ethics, accountability, and human involvement in AI-controlled economies.
```
docs/ai/Practices/Human-In-The-Loop.md (4 additions, 11 deletions)

```diff
@@ -21,17 +21,10 @@ practice:
 - AI may suggest diagnoses or treatments, but a certified professional reviews and confirms before enacting them. In the above NHS Grampian example, the AI is augmenting human decision making with a third opinion, rather than replacing human judgement altogether (yet).
 - Some proposals mandate that human operators confirm critical actions (e.g., missile launches), preventing AI from unilaterally making life-or-death decisions. This might work in scenarios where response time isn't a factor.
 
-
-**Efficacy:** Medium – Reduces risk by limiting autonomy on high-stakes tasks; however, humans may become complacent or fail to intervene effectively if over-trusting AI.
-
-**Ease of Implementation:** Moderate – Policy, regulatory standards, and user training are needed to embed human oversight effectively.
-
-
 ## Types Of Human In The Loop
 
-(edit this)
-
 There are three broad types of control humans can exercise:
-• Semi-autonomous operation, where the machine performs a task and then stops and waits for approval from the human operator before continuing. This control type is often referred to as "human in the loop."
-• Supervised autonomous operation, where the machine, once activated, performs a task under the supervision of a human and will continue performing the task unless the human operator intervenes to halt its operation. This control type is often referred to as "human on the loop."
-• Fully autonomous operation, where the machine, once activated, performs a task and the human operator does not have the ability to supervise its operation and intervene in the event of system failure. This control type is often referred to as "human out of the loop."
+
+> - **Semi-autonomous operation**: the machine performs a task and then stops and waits for approval from the human operator before continuing. This control type is often referred to as "human in the loop."
+> - **Supervised autonomous operation**: the machine, once activated, performs a task under the supervision of a human and continues performing it unless the human operator intervenes to halt its operation. This control type is often referred to as "human on the loop."
+> - **Fully autonomous operation**: the machine, once activated, performs a task and the human operator does not have the ability to supervise its operation and intervene in the event of system failure. This control type is often referred to as "human out of the loop."
```
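The three control types in the list above differ only in when, if ever, the human operator can intervene. As a rough sketch of that distinction (my own illustration under assumed names — `ControlType`, `run_task`, `approve`, and `halt` are hypothetical, not code from these docs):

```python
# Hypothetical sketch: the three human-in-the-loop control types differ only
# in where the human's veto sits relative to the machine's task loop.
from enum import Enum

class ControlType(Enum):
    HUMAN_IN_THE_LOOP = "semi-autonomous"    # machine stops and waits for approval
    HUMAN_ON_THE_LOOP = "supervised"         # machine runs unless the human halts it
    HUMAN_OUT_OF_LOOP = "fully-autonomous"   # no supervision or intervention possible

def run_task(steps, mode, approve=None, halt=None):
    """Execute `steps` (callables) under the given control type.

    approve(step) -> bool : consulted BEFORE each step (human in the loop)
    halt(step)    -> bool : consulted DURING execution (human on the loop)
    """
    completed = []
    for step in steps:
        if mode is ControlType.HUMAN_IN_THE_LOOP:
            # Machine pauses; nothing happens without explicit human approval.
            if not approve(step):
                break
        elif mode is ControlType.HUMAN_ON_THE_LOOP:
            # Machine keeps going by default; human may interrupt at any step.
            if halt(step):
                break
        # HUMAN_OUT_OF_LOOP: no check at all — the operator cannot intervene.
        completed.append(step())
    return completed
```

The design point the taxonomy makes is visible in the branches: "in the loop" requires a positive human act for the task to proceed, "on the loop" requires a positive human act for it to stop, and "out of the loop" offers no act at all.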
docs/ai/Practices/Kill-Switch.md (2 additions, 7 deletions)

```diff
@@ -17,13 +17,8 @@ practice:
 
 <PracticeIntro details={frontMatter} />
 
-### Kill-Switch Mechanisms
-
-**Examples:**
+
+## Examples
 
 - **Google DeepMind's 'Big Red Button' concept** (2016), proposed as a method to interrupt a reinforcement learning AI without it learning to resist interruption.
 
 - **Hardware Interrupts in Robotics:** Physical or software-based emergency stops that immediately terminate AI operation.
-
-**Efficacy:** High – An explicit interruption capability can avert catastrophic errors or runaway behaviours, but it's more likely that they will be employed once the error has started, in order to prevent further harm.
-
-**Ease of Implementation:** Medium – Requires robust design and consistent testing to avoid workarounds by advanced AI.
```
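The software side of the kill-switch examples above can be sketched minimally as a shared stop flag that the operator can set at any time and that the agent's loop must check before every action. This is my own illustration (the names `KillSwitch` and `run_agent` are hypothetical), and it deliberately shows only the mechanism — DeepMind's 'Big Red Button' work tackles the harder problem of a learning agent acquiring an incentive to resist exactly this kind of interruption, which a flag alone does not address:

```python
# Hypothetical sketch of a software kill switch: an operator-settable stop
# flag checked before every step, so that triggering it halts further actions.
import threading

class KillSwitch:
    def __init__(self):
        self._stop = threading.Event()   # thread-safe, settable from anywhere

    def trigger(self):
        """The operator's 'big red button'."""
        self._stop.set()

    @property
    def triggered(self):
        return self._stop.is_set()

def run_agent(actions, kill_switch):
    """Perform actions in order, aborting as soon as the switch trips."""
    performed = []
    for act in actions:
        if kill_switch.triggered:   # checked before every step: no further
            break                   # harm once the switch has been triggered
        performed.append(act())
    return performed
```

Note that, consistent with the efficacy caveat removed in the diff above, this design stops *further* actions after the trigger; it cannot undo the step during which the problem was detected.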
docs/ai/Practices/National-AI-Regulation.md (5 additions, 1 deletion)

```diff
@@ -12,10 +12,14 @@ practice:
   - tag: Synthetic Intelligence Rivalry
     reason: "Government policies can strongly influence AI firms' behavior if enforced effectively."
     efficacy: High
+  - tag: Loss Of Diversity
+    reason: "Antitrust Regulations – Breaking up AI monopolies."
 ---
 
 <PracticeIntro details={frontMatter} />
 
 - Governments can implement policies that ensure AI-driven firms remain accountable to human oversight. This could include requiring AI systems to maintain transparency, adhere to ethical standards, and uphold employment obligations by ensuring that a minimum level of human involvement remains in corporate decision-making.
 
-- Requires strong legal frameworks, enforcement mechanisms, and political will.
+- Requires strong legal frameworks, enforcement mechanisms, and political will.
+
+- Antitrust Regulations – Breaking up AI monopolies.
```
docs/ai/Threats/Loss-Of-Diversity.md (3 additions, 1 deletion)

```diff
@@ -19,7 +19,7 @@ A lack of diversity could create system-wide vulnerabilities, where a single fla
 
 ## Description
 
-
+
 AI development is increasingly controlled by a small number of corporations and governments, leading to monopolistic control over critical systems.
 
 ## Sources
@@ -35,6 +35,8 @@ A lack of diversity could create system-wide vulnerabilities, where a single fla
 
 - **Lack of Innovation Due to Model Homogeneity:** When the same AI architectures are used across different sectors, it stifles alternative approaches that might better serve specialized needs. A uniform AI landscape risks optimizing for narrow commercial objectives rather than the diverse interests of different populations and industries.
 
+- **Mass Surveillance & Social Control:** Governments and corporations use AI to track and influence populations. [Example: ClearView](https://www.clearview.ai)
+
 ## Can Risk Management Address This Risk?
 
 Partially. Traditional risk management can identify and highlight the dangers of AI monoculture, but effective mitigation requires strong regulatory intervention and industry-wide commitment, both of which are difficult to enforce under current economic incentives.
```
docs/ai/Threats/Superintelligence-With-Malicious-Intent.md (0 additions, 1 deletion)

```diff
@@ -15,7 +15,6 @@ part_of: AI Threats
 
 <AIThreatIntro fm={frontMatter} />
 
-
 ## Risk Score: High
 
 AI systems that surpass human intelligence could develop goals that conflict with human well-being, either by design or through unintended consequences. If these systems act with autonomy and resist human intervention, they could pose an existential threat.
```