From 7e3df3e752814dec38bed8361c4d6af2c2933307 Mon Sep 17 00:00:00 2001 From: Rob Moffat Date: Sat, 8 Mar 2025 13:29:59 +0000 Subject: [PATCH] Improved some references --- docs/ai/Threats/Emergent-Behaviour.md | 4 ++-- docs/ai/Threats/Loss-Of-Diversity.md | 4 ++-- docs/ai/Threats/Superintelligence-With-Malicious-Intent.md | 2 +- docs/ai/Threats/Synthetic-Intelligence-Rivalry.md | 4 ++-- 4 files changed, 7 insertions(+), 7 deletions(-) diff --git a/docs/ai/Threats/Emergent-Behaviour.md b/docs/ai/Threats/Emergent-Behaviour.md index 94fb4840f..162792263 100644 --- a/docs/ai/Threats/Emergent-Behaviour.md +++ b/docs/ai/Threats/Emergent-Behaviour.md @@ -25,9 +25,9 @@ AI systems sometimes exhibit unexpected behaviours that were not directly progra ## Sources -**DeepMind’s AI Risk Studies (2023):** Research into emergent AI behaviours highlights that as models scale, they may develop unanticipated capabilities, sometimes without direct human prompting. +**DeepMind’s AI Risk Studies [2023](https://arxiv.org/pdf/2305.15324):** Research into emergent AI behaviours highlights that as models scale, they may develop unanticipated capabilities, sometimes without direct human prompting. -**MIT Technology Review (2022):** Reports on AI unpredictability in large language models, showing how these systems often display unexpected behaviours that were not explicitly trained into them. +**MIT Technology Review [2022](https://www.technologyreview.com/2022/11/22/1063618/trust-large-language-models-at-your-own-peril/):** Reports on AI unpredictability in large language models, showing how these systems often display unexpected behaviours that were not explicitly trained into them. 
## How This Is Already Happening diff --git a/docs/ai/Threats/Loss-Of-Diversity.md b/docs/ai/Threats/Loss-Of-Diversity.md index e4170d857..acb75709d 100644 --- a/docs/ai/Threats/Loss-Of-Diversity.md +++ b/docs/ai/Threats/Loss-Of-Diversity.md @@ -23,9 +23,9 @@ AI development is increasingly controlled by a small number of corporations and ## Sources -**Bostrom & Shulman, "Global AI Governance" (2021):** Discusses the risks of a single dominant AI system shaping global decision-making. If AI governance converges around a monolithic framework, any flaw or misalignment in the system could have catastrophic consequences at a planetary scale. +**Bostrom & Shulman, "Sharing The World With Digital Minds" [2021](https://nickbostrom.com/papers/digital-minds.pdf):** Discusses the risks of a single dominant AI system shaping global decision-making. If AI governance converges around a monolithic framework, any flaw or misalignment in the system could have catastrophic consequences at a planetary scale. -**OpenAI’s Research on AI Model Homogeneity (2022):** Highlights concerns that if AI models become too similar—due to the concentration of AI development within a few major entities—there could be systemic vulnerabilities, lack of innovation, and a failure to address diverse global needs. +**Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization? [2022](https://arxiv.org/pdf/2211.13972v1):** Highlights concerns that if AI models become too similar—due to the concentration of AI development within a few major entities—there could be systemic vulnerabilities, lack of innovation, and a failure to address diverse global needs. 
## How This Is Already Happening diff --git a/docs/ai/Threats/Superintelligence-With-Malicious-Intent.md b/docs/ai/Threats/Superintelligence-With-Malicious-Intent.md index cad35f1f8..197230c01 100644 --- a/docs/ai/Threats/Superintelligence-With-Malicious-Intent.md +++ b/docs/ai/Threats/Superintelligence-With-Malicious-Intent.md @@ -25,7 +25,7 @@ AI systems that surpass human intelligence could develop goals that conflict wit - **The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation** [Brundage et al., 2018](https://arxiv.org/abs/1802.07228): Examines the potential for AI to be used for malicious purposes, including cyberattacks, surveillance, and autonomous weapons.  Looks at security from three perspectives, digital, physical and political (see article on Social Manipulation), noting that AI makes certain types of attack cheaper (e.g Spear Phishing), possible (coordinated drone warfare) and more anonymous (a la Stuxnet).  (An excellent overview of this topic). -- **Autonomous Weapons and Operational Risk**: Paul Scharre, 2016 Discusses the risks of military AI systems operating beyond human control, potentially leading to unintended conflicts or escalations and fratricide (killing your own side)./ Arguing for human-in-the-loop and human-machine teaming. 
[https://s3.us-east-1.amazonaws.com/files.cnas.org/hero/documents/CNAS\_Autonomous-weapons-operational-risk.pdf](https://s3.us-east-1.amazonaws.com/files.cnas.org/hero/documents/CNAS_Autonomous-weapons-operational-risk.pdf) (Used heavily in this section) +- **Autonomous Weapons and Operational Risk**: [Paul Scharre, 2016](https://s3.us-east-1.amazonaws.com/files.cnas.org/hero/documents/CNAS_Autonomous-weapons-operational-risk.pdf) Discusses the risks of military AI systems operating beyond human control, potentially leading to unintended conflicts, escalations and fratricide (killing your own side). Argues for human-in-the-loop control and human-machine teaming. (Used heavily in this section) ## How This Is Already Happening diff --git a/docs/ai/Threats/Synthetic-Intelligence-Rivalry.md b/docs/ai/Threats/Synthetic-Intelligence-Rivalry.md index 3d933138e..87e2f136e 100644 --- a/docs/ai/Threats/Synthetic-Intelligence-Rivalry.md +++ b/docs/ai/Threats/Synthetic-Intelligence-Rivalry.md @@ -21,9 +21,9 @@ If AI entities did emerge as rivals, the consequences could range from economic ## Sources -**Drexler's Multi-polar AI Scenario:** AI systems may not form a single superintelligent entity but instead compete as multiple independent economic and political agents, influencing industries and national policies. This view likens AI entities to corporations without human employees—self-sustaining, goal-driven entities that operate within legal and economic frameworks but optimize for efficiency, influence, and control over resources. (Eric Drexler, "Reframing Superintelligence" (2019)) +**Drexler's Multi-polar AI Scenario:** AI systems may not form a single superintelligent entity but instead compete as multiple independent economic and political agents, influencing industries and national policies. 
This view likens AI entities to corporations without human employees—self-sustaining, goal-driven entities that operate within legal and economic frameworks but optimize for efficiency, influence, and control over resources. [Eric Drexler, "Reframing Superintelligence" (2019)](https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf) -**Harari's AI Evolution Hypothesis:** AI could evolve into a distinct force with its own goals, priorities, and possibly a separate "culture," making cooperation difficult and leading to human obsolescence. Some early examples of this trend include algorithm-driven hedge funds, automated corporations with minimal human employees, and self-optimizing AI systems that influence critical decision-making processes. (Yuval Noah Harari, "Homo Deus" (2016)) +**Harari's AI Evolution Hypothesis:** AI could evolve into a distinct force with its own goals, priorities, and possibly a separate "culture," making cooperation difficult and leading to human obsolescence. Some early examples of this trend include algorithm-driven hedge funds, automated corporations with minimal human employees, and self-optimizing AI systems that influence critical decision-making processes. [Yuval Noah Harari, "Homo Deus" (2016)](https://www.ynharari.com/book/homo-deus/) ## How This Is Already Happening