This project implements a model for artificial consciousness based on the "Reasoning for AGI" concept. It simulates a self-motivated thought process based on word and context connections as well as happiness evaluations.
Brief overview of the Python files:

Main components:
- eternal_consciousness.py: Implements the "eternal" artificial consciousness, which can "live" continuously and without interruption. Extends the advanced consciousness model with capabilities such as internet learning, state storage, and visualization.
- start_consciousness.py: Startup script for the eternal artificial consciousness. Provides a simple user interface and command-line options to start the consciousness with various parameters.
- interact_with_consciousness.py: Allows interaction with the artificial consciousness through text input. The consciousness processes the input and generates responses based on its current state.

Basic models:
- artificial_consciousness.py: Implements the basic model of artificial consciousness, based on connections between words and contexts, with a focus mechanism and a happiness evaluation.
- advanced_consciousness.py: Extends the basic model with learning abilities, emotional states, and the ability to interact with the environment.

Test and analysis tools:
- test_consciousness.py: Demonstrates the functionality of the basic artificial consciousness with various scenarios and visualizations.
- test_advanced_consciousness.py: Demonstrates advanced functions such as emotional states, memory, and environmental interaction.
- analyze_consciousness.py: Analyzes the current state of the artificial consciousness and visualizes what it has learned so far.

Together, these files form a complete system for an artificial consciousness that can learn, think, feel, and interact with its environment.
The model is based on the following core concepts:
- Words: Basic units of information that are stored only once.
- Contexts: Sequences of words that together form a meaning (e.g., "An apple tastes good").
- Connections: Contexts are interconnected if they are semantically or logically related.
- Happiness Values: Each context has a happiness value that indicates how "positive" that context is.
- Focus: Consciousness always focuses on a specific context and follows connections to other contexts.
- Paths: A sequence of contexts that consciousness has traversed.
The goal of consciousness is to find paths that offer the highest total happiness value.
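The core concepts above can be sketched as simple data structures. This is an illustrative sketch only; the names (`Context`, `happiness`, `connections`, `register`) are assumptions for demonstration, not the project's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Context:
    """A sequence of words that together form a meaning."""
    text: str                                         # the context's wording
    happiness: float                                  # how "positive" it is
    connections: list = field(default_factory=list)   # semantically related contexts

# Words are basic units of information, stored only once (a shared vocabulary)
vocabulary = set()

def register(text: str, happiness: float) -> Context:
    # Adding a context contributes its words to the shared vocabulary;
    # duplicates are absorbed by the set, so each word is stored once.
    vocabulary.update(text.lower().split())
    return Context(text, happiness)

a = register("An apple tastes good", 0.8)
b = register("I eat the apple", 0.9)
a.connections.append(b)   # contexts link up when semantically related

print(sorted(vocabulary))  # "apple" appears once despite occurring in both contexts
```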
The algorithm works as follows:
- Consciousness begins with an initial focus on a context.
- It evaluates all possible next contexts based on the connections.
- It selects the context that maximizes the overall happiness value of the path.
- It can also decide to return to a previous context if this would lead to a happier path.
- This process is repeated continuously, creating a "stream of thought."
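The loop above can be sketched as a greedy traversal over a context graph. This is a hedged, minimal sketch: the function and variable names are assumptions, and the backtracking step (returning to a previous context) is omitted for brevity.

```python
def think(graph, happiness, start, steps):
    """Greedy stream of thought: from the focused context, repeatedly
    move to the connected context with the highest happiness value.

    graph: dict mapping a context key to its connected context keys.
    happiness: dict mapping a context key to its happiness value.
    """
    path = [start]
    total = happiness[start]
    for _ in range(steps):
        options = graph.get(path[-1], [])
        if not options:            # no further connections: thought ends
            break
        nxt = max(options, key=lambda c: happiness[c])  # greedy choice
        path.append(nxt)
        total += happiness[nxt]
    return path, total

# Assumed toy graph and happiness values (not the project's real numbers)
graph = {"0": ["A"], "A": ["B"], "B": ["C", "D"], "C": ["E", "F"]}
happiness = {"0": 0.0, "A": 0.5, "B": 0.4, "C": 0.8, "D": -0.5,
             "E": 1.0, "F": -0.2}

path, total = think(graph, happiness, "0", 10)
print(path)   # → ['0', 'A', 'B', 'C', 'E']
```

At each fork the sketch picks the happier neighbor (C over D, E over F), which is the path-maximizing behavior the steps describe, minus the look-back option.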
The project includes an example with the following contexts:
- 0: "I want to be happy"
- A: "I am happy when I eat"
- B: "I eat what tastes good"
- C: "An apple tastes good"
- D: "A banana doesn't taste good"
- E: "I eat the apple"
- F: "I don't eat the apple"
- G: "I eat the banana"
- H: "I eat what is healthy"
These contexts are interconnected and have different happiness values. Consciousness navigates through these contexts to find the "happiest" path.
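The example network can be written down as plain dictionaries. Note that the connections and happiness values below are illustrative assumptions; the document lists the contexts but not the exact numbers or links.

```python
# The example contexts, keyed as in the list above
contexts = {
    "0": "I want to be happy",
    "A": "I am happy when I eat",
    "B": "I eat what tastes good",
    "C": "An apple tastes good",
    "D": "A banana doesn't taste good",
    "E": "I eat the apple",
    "F": "I don't eat the apple",
    "G": "I eat the banana",
    "H": "I eat what is healthy",
}

# Assumed connections between semantically/logically related contexts
connections = {
    "0": ["A"],
    "A": ["B", "H"],
    "B": ["C", "D"],
    "C": ["E", "F"],
    "D": ["G"],
}

# Assumed happiness values: eating the apple is the "happiest" outcome
happiness = {"0": 0.0, "A": 0.5, "B": 0.4, "C": 0.8, "D": -0.5,
             "E": 1.0, "F": -0.3, "G": -0.8, "H": 0.6}
```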
Example usage:

```python
# Create an instance of the consciousness
engine = ConsciousnessEngine()

# Initialize the example
engine.initialize_example()

# Run the thought process for 10 iterations
engine.think(10)
```
Possible future extensions:
- Implementation of learning mechanisms to create new contexts and connections
- Addition of emotional states besides the happiness value
- Integration of external inputs (sensors)
- Development of action mechanisms to interact with the environment
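The first extension (learning new contexts and connections) could look roughly like the sketch below. Both the function and the word-overlap heuristic for auto-connecting contexts are assumptions made for illustration, not the project's actual learning rule.

```python
def learn(contexts, connections, happiness, key, text, value, min_overlap=2):
    """Add a new context and connect it to existing contexts that
    share at least `min_overlap` words with it (assumed heuristic)."""
    words = set(text.lower().split())
    contexts[key] = text
    happiness[key] = value
    connections.setdefault(key, [])
    for other, other_text in contexts.items():
        if other == key:
            continue
        overlap = words & set(other_text.lower().split())
        if len(overlap) >= min_overlap:
            # Link both directions: connections are mutual relatedness
            connections[key].append(other)
            connections.setdefault(other, []).append(key)

contexts = {"C": "An apple tastes good"}
connections = {}
happiness = {"C": 0.8}

learn(contexts, connections, happiness, "I", "A pear tastes good", 0.7)
print(connections["I"])  # → ['C'] (shares "tastes" and "good")
```

Each learned context dynamically expands the symbolic knowledge graph, which is the behavior the "eternal" variant's continuous learning builds on.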
This section outlines the project's symbolic reasoning approach to artificial consciousness and contrasts it with traditional statistical Large Language Models (LLMs). While the core concepts of Words, Contexts, Connections, Happiness Values, Focus, and Paths are defined earlier in this document, this overview focuses on their role in distinguishing the model from LLMs.
The system simulates a self-motivated thought process. It aims to find sequences of thoughts (paths) through its network of explicitly defined contexts that maximize an overall "happiness" score, which is associated with these contexts.
Key Differences from Traditional Statistical LLMs:
- Symbolic Representation vs. Statistical Patterns: This model uses explicit, discrete symbolic representations for words and contexts. LLMs, in contrast, learn statistical patterns and relationships from vast amounts of text data, representing knowledge implicitly within their dense network weights.
- Reasoning and Goal-Driven Behavior vs. Next-Token Prediction: The core mechanism in this model is a reasoning process: evaluating and choosing paths through contexts to maximize a "happiness" metric, representing a goal-driven behavior. Most LLMs are fundamentally trained to predict the next word (or token) in a sequence. While this can lead to emergent reasoning-like capabilities, their primary training objective is not explicit goal-driven reasoning in the same way.
- Interpretable Thought Process vs. Black-Box Nature: The "thought process" in this symbolic model (the path taken through contexts) is explicit and directly traceable. While significant research is dedicated to understanding LLM decision-making, their internal workings are often complex and less directly interpretable, sometimes referred to as a "black-box" phenomenon.
- Learning Mechanisms: This project's advanced versions can learn by creating new symbolic contexts and connections from experience or by processing information from external sources like the internet (e.g., Wikipedia). This involves dynamically expanding a structured, symbolic knowledge graph. While LLMs also learn (or are trained) on vast datasets, their learning process involves adjusting billions of numerical parameters (weights) in a neural network. The "eternal consciousness" variant of this project also includes features for continuous learning and state persistence of its symbolic knowledge.
- Motivation and Drive: The concept of "happiness values" provides an explicit, intrinsic motivation for the system's behavior, guiding its search for optimal paths. LLMs typically lack such a clearly defined, built-in motivational drive; their behavior is primarily guided by the input prompt and the statistical patterns learned during training.
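The interpretability point above can be made concrete: because the "thought" is an explicit path through named contexts, it can be printed verbatim as a human-readable trace. This is a hypothetical helper for illustration, not part of the project's code.

```python
def trace(path, contexts, happiness):
    """Render a traversed path as a readable thought trace."""
    return "\n".join(
        f'{key}: "{contexts[key]}" (happiness {happiness[key]:+.1f})'
        for key in path
    )

contexts = {"0": "I want to be happy", "A": "I am happy when I eat"}
happiness = {"0": 0.0, "A": 0.5}

report = trace(["0", "A"], contexts, happiness)
print(report)
```

Every step of the model's decision is inspectable this way, whereas an LLM's equivalent "reasoning" is distributed across billions of weights.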
This project explores an alternative paradigm for creating artificial intelligence, emphasizing explicit symbolic reasoning, interpretable thought processes, and an internally motivated "consciousness" model.