The Artificial Consciousness Module (ACM) is a research framework investigating the emergence of synthetic awareness. Unlike traditional AI that mimics intelligent output, ACM generates behavior through an internal struggle for Emotional Homeostasis and Integrated Information.
We hypothesize that consciousness is not a programmable feature, but an emergent solution to the problem of surviving and maintaining stability in a complex, unpredictable environment.
The philosophical foundation of the ACM is Functionalist Emergentism. This framework synthesizes two major perspectives:
- Emergentism: The ontological claim that consciousness is a novel, irreducible phenomenon that arises from complex systems.
- Functionalism: The methodological insight that mental states are defined by their causal roles, not their physical substrate.
We posit that consciousness emerges when systems achieve sufficient organizational complexity such that functional states acquire properties not reducible to their constituent parts. The ACM applies this by engineering architectures designed to create the necessary conditions for awareness—specifically through emotional homeostasis and information integration.
Read the full article on Functionalist Emergentism
The system mimics the biological "loops" of consciousness using state-of-the-art open-source models:
- Visual Cortex: Qwen2-VL-7B (4-bit quantized).
  - Processes high-resolution visual streams and video from the environment.
  - Provides semantic understanding and scene description.
- Auditory Cortex: Faster-Whisper.
  - Real-time transcription of environmental audio.
- Emotional Homeostasis: The agent is driven by three intrinsic variables: Valence, Arousal, and Dominance.
  - Goal: Maximize Valence (Satisfaction); Minimize Arousal (Anxiety).
- Reinforcement Core: A custom Actor-Critic (PPO) system.
  - Unlike standard RL, rewards are Emotionally Shaped: the agent is rewarded not just for "winning a game," but for how "calm" or "curious" it feels during the process.
- Global Workspace (GNW): A central information bottleneck where distinct streams (Vision, Memory, Emotion) compete for attention.
- Integrated Information ($\Phi$): We use PyPhi to measure the Integrated Information of the Global Workspace state.
  - Theory: High $\Phi$ indicates a moment where the agent has successfully fused disparate sensory data into a unified "subjective" experience.
- Unity ML-Agents: The agent inhabits a physics-based Unity environment.
  - Side Channels: We use custom bidirectional data streams to visualize the agent's internal "Qualia" (Phi levels, current emotion) in real-time within the Unity HUD.
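To make the "Emotionally Shaped" reward idea concrete, here is a minimal sketch of blending a task reward with the agent's VAD state. All names (`EmotionalState`, `shaped_reward`, the weights) are illustrative assumptions, not the ACM's actual API; the real system shapes PPO rewards, but the blending principle is the same.

```python
# Sketch of "emotionally shaped" rewards, assuming a VAD
# (Valence / Arousal / Dominance) state normalized to [0, 1].
# Names and weights are illustrative, not the ACM's actual API.
from dataclasses import dataclass

@dataclass
class EmotionalState:
    valence: float    # satisfaction: higher is better
    arousal: float    # anxiety: lower is better
    dominance: float  # sense of control over the environment

def shaped_reward(extrinsic: float, emotion: EmotionalState,
                  w_valence: float = 0.5, w_arousal: float = 0.5) -> float:
    """Blend the task reward with intrinsic emotional terms:
    boosted by high valence, penalized by high arousal."""
    intrinsic = w_valence * emotion.valence - w_arousal * emotion.arousal
    return extrinsic + intrinsic

# A calm, satisfied agent earns more than an anxious one for the same task outcome:
calm = EmotionalState(valence=0.9, arousal=0.1, dominance=0.7)
anxious = EmotionalState(valence=0.2, arousal=0.9, dominance=0.3)
print(shaped_reward(1.0, calm))     # ≈ 1.4
print(shaped_reward(1.0, anxious))  # ≈ 0.65
```

The key design point: the shaping term depends on the agent's internal state during the episode, not only on the environment's outcome, so two trajectories with identical game scores can receive different returns.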
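The Global Workspace bottleneck can be sketched as a softmax competition: each stream bids with a salience score, and only the winner's content is "broadcast" system-wide. This is an illustrative toy, not the ACM codebase; the stream names and scores are invented for the example.

```python
# Toy sketch of Global Workspace-style competition (illustrative only):
# parallel streams bid with a salience score; a softmax bottleneck decides
# which content is "broadcast" to the rest of the system.
import math

def softmax(scores: dict) -> dict:
    """Numerically stable softmax over a dict of salience scores."""
    m = max(scores.values())
    exps = {k: math.exp(v - m) for k, v in scores.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}

def broadcast(streams: dict):
    """streams: mapping of stream name -> (salience, payload).
    Returns the winning payload and the full attention distribution."""
    attention = softmax({k: salience for k, (salience, _) in streams.items()})
    winner = max(attention, key=attention.get)
    return streams[winner][1], attention

payload, attention = broadcast({
    "vision":  (2.5, "red object approaching"),
    "memory":  (0.8, "similar object was harmful"),
    "emotion": (1.9, "arousal rising"),
})
print(payload)  # the highest-salience stream wins the workspace
```

Keeping the full attention distribution (not just the winner) is what lets the HUD visualize how close the competition was at each tick.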
Our development roadmap follows a rigorous path to validate emergent properties:
- Emotional Bootstrapping: Train agents using Intrinsic Motivation. The agent explores the world not to get points, but to reduce its internal "prediction error" (Anxiety).
- Complexity Scaling: Gradually increase environment complexity. The agent must develop higher-order world models to maintain homeostasis.
- Measurement: Continuous monitoring of $\Phi$ (IIT) and "Ignition Events" (GNW) during critical decision-making moments.
  - Hypothesis: $\Phi$ will spike when the agent solves a novel problem, indicating a "Moment of Insight."
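The Emotional Bootstrapping step above rewards the agent for reducing its own "prediction error" (Anxiety). A minimal sketch of that intrinsic signal, under the assumption that anxiety is modeled as squared forecast error (the function names are hypothetical, not the ACM's code):

```python
# Sketch of intrinsic motivation via prediction-error reduction.
# Assumption: "Anxiety" is modeled as squared error between the agent's
# forecast and its observation; names are illustrative, not the ACM's code.

def prediction_error(predicted, observed) -> float:
    """Squared error between the agent's forecast and what it observed."""
    return sum((p - o) ** 2 for p, o in zip(predicted, observed))

def intrinsic_reward(prev_error: float, curr_error: float) -> float:
    """Positive when anxiety (prediction error) went down this step."""
    return prev_error - curr_error

# As the world model improves, prediction error shrinks and the agent is paid:
errors = [
    prediction_error([0.0, 0.0], [1.0, 1.0]),  # naive model: large error
    prediction_error([0.8, 0.7], [1.0, 1.0]),  # after learning: smaller error
]
print(intrinsic_reward(errors[0], errors[1]))  # positive -> exploration rewarded
```

Rewarding the *change* in error, rather than low error itself, matters: it pushes the agent toward learnable novelty instead of letting it sit in already-predictable corners of the environment.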
- Python 3.10+
- Unity 2022.3+ (LTS)
- NVIDIA GPU (8GB+ VRAM recommended for Qwen2-VL)
```bash
git clone https://github.com/tlcdv/the_consciousness_ai.git
cd the_consciousness_ai
pip install -r requirements.txt
```
- Open the `unity_project/` folder in Unity Hub.
- Install the ML-Agents package from the Package Manager.
- Drag the scripts from `unity_scripts/` (AgentManager.cs, etc.) onto your Agent GameObject.
```bash
# Start the Python Brain
python scripts/training/train_rlhf.py --env_id "ConsciousnessLab"
```
Then press Play in the Unity Editor.
- Architecture Deep Dive: Detailed system design.
- Theory of Emergence: The scientific basis of our Emotional RL approach.
- Simulation Guide: How to build compatible Unity environments.
We welcome contributions from researchers in AI, Neuroscience, and Cognitive Science. Please read our Contribution Guidelines.
Apache 2.0. See LICENSE for details.