Neural Network: Technical Overview

Rufus Pearce edited this page Sep 27, 2025 · 1 revision


The system's neural network is a unique, single-layer, fully-connected network architecture that dynamically grows through a process of neurogenesis.

The network does not use traditional backpropagation; instead, learning relies on a pure Hebbian model (https://github.com/ViciousSquid/Dosidicus/wiki/Hebbian-learning)

...

1. Core Architecture

  • The network starts as a single-layer perceptron with 7 core, named neurons. These neurons represent the fundamental emotional and physical states of the squid:
    • Circular (Basic Needs): hunger, happiness, cleanliness, sleepiness
    • Square (Complex States): satisfaction, anxiety, curiosity

    Each neuron's activation value ranges from 0 to 100.

    Unlike a typical deep learning model, this network is not structured into distinct input, hidden, and output layers. Instead, all neurons exist on a single plane and are fully interconnected. Each connection between two neurons has a weight, initialized with a random value between -1 and 1, which represents the strength and nature (excitatory or inhibitory) of their relationship.
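The initial state described above can be sketched as follows. This is an illustrative sketch only, not the project's actual code; the function and variable names are assumptions, but the neuron names, 0-100 activation range, and random weights in [-1, 1] over every pair match the description.

```python
import itertools
import random

# The 7 core, named neurons (names from the wiki description).
CORE_NEURONS = [
    "hunger", "happiness", "cleanliness", "sleepiness",  # circular: basic needs
    "satisfaction", "anxiety", "curiosity",              # square: complex states
]

def init_network(seed=None):
    """Build the initial single-plane, fully connected network (sketch)."""
    rng = random.Random(seed)
    # Activation values live on a 0-100 scale; start mid-range (assumed).
    activations = {name: 50.0 for name in CORE_NEURONS}
    # One weight per unordered pair of neurons -> full interconnection.
    weights = {
        pair: rng.uniform(-1.0, 1.0)
        for pair in itertools.combinations(CORE_NEURONS, 2)
    }
    return activations, weights

activations, weights = init_network(seed=42)
# 7 fully connected neurons give 7 * 6 / 2 = 21 weighted connections.
```

Because there are no layers, "fully connected" here simply means one symmetric weight per neuron pair, which is why a dict keyed by sorted pairs is a natural representation.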

2. Learning Mechanism: Hebbian Learning

The network updates its connection weights using a Hebbian learning rule, which follows the principle "neurons that fire together, wire together."

  • Learning Cycle: The learning process is not continuous but occurs in discrete cycles, triggered by a timer (by default, every 30 seconds).
  • Activation: During a learning cycle, any neuron whose activation value exceeds a predefined threshold (e.g., > 50) is considered "active".
  • Weight Update: The system randomly selects a few pairs of currently active neurons. The weight between these pairs is then adjusted according to the Hebbian rule: the change in weight is proportional to the product of the two neurons' activation values multiplied by a learning rate. This strengthens the connection between neurons that are concurrently active.
  • Weight Decay: To ensure network stability and prevent weights from growing indefinitely, a small weight decay is applied over time, gradually weakening all connections.
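A single learning cycle, as described above, might look like the sketch below. The threshold, learning rate, decay rate, and number of sampled pairs are placeholder values, not the project's actual constants.

```python
import random

def hebbian_cycle(activations, weights, *, threshold=50.0,
                  learning_rate=0.01, decay=0.001, pairs_to_update=3,
                  rng=random):
    """One discrete Hebbian learning cycle (illustrative sketch)."""
    # 1. Neurons above the activation threshold count as "active".
    active = [n for n, a in activations.items() if a > threshold]

    # 2. Strengthen a few randomly chosen pairs of co-active neurons:
    #    the weight change is proportional to the product of their
    #    (normalised) activations times the learning rate.
    if len(active) >= 2:
        for _ in range(pairs_to_update):
            a, b = rng.sample(active, 2)
            pair = tuple(sorted((a, b)))
            delta = learning_rate * (activations[a] / 100.0) * (activations[b] / 100.0)
            weights[pair] = max(-1.0, min(1.0, weights.get(pair, 0.0) + delta))

    # 3. Apply a small decay to every connection so weights cannot
    #    grow without bound.
    for pair in weights:
        weights[pair] *= (1.0 - decay)
    return weights
```

In the real system this function would be driven by the 30-second timer; here it is just called directly, once per cycle.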

3. Dynamic Architecture: Neurogenesis

The network's most advanced feature is its ability to create new neurons, a process called neurogenesis. This allows the brain's architecture to grow and adapt based on the squid's experiences.

  • Triggers: Neurogenesis is initiated by one of three counters exceeding a set threshold:
    1. Novelty: Increases when the squid encounters new objects or experiences.
    2. Stress: Increases during stressful events.
    3. Reward: Increases when the squid experiences a positive outcome.
  • Creation Process: When a counter surpasses its threshold and a cooldown period has passed, a new neuron is created.
    • The neuron is named based on its trigger (e.g., novel_0, stress_0).
    • It is positioned visually on the network graph near other currently active neurons.
    • Crucially, it is immediately connected to the existing network with a set of default weights. For example, a new 'reward' neuron automatically forms a strong positive connection to 'satisfaction' and 'happiness'.
  • Dynamic Thresholds: The thresholds required to trigger neurogenesis are not static. They scale upwards as the network grows in size, preventing runaway neuron creation and promoting stability in a mature network.
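The trigger check and creation step above can be sketched as follows. The scaling factor, base thresholds, and the exact default weight values are assumptions for illustration; the naming scheme (novel_0, stress_0) and the reward-to-satisfaction/happiness linkage come from the description above.

```python
# Default connections for a newly created neuron, keyed by its trigger.
# The reward links reflect the text above; the others are assumed analogues.
DEFAULT_LINKS = {
    "reward": {"satisfaction": 0.8, "happiness": 0.8},
    "novelty": {"curiosity": 0.6},
    "stress": {"anxiety": 0.7},
}
NAME_PREFIX = {"novelty": "novel", "stress": "stress", "reward": "reward"}

def check_triggers(counters, base_thresholds, neuron_count, growth=0.05):
    """Return the first trigger whose counter beats its size-scaled threshold."""
    # Thresholds scale upward with network size to prevent runaway growth.
    scale = 1.0 + growth * neuron_count
    for trigger in ("novelty", "stress", "reward"):
        if counters.get(trigger, 0) >= base_thresholds[trigger] * scale:
            return trigger
    return None

def create_neuron(trigger, neurons, weights):
    """Name the new neuron after its trigger and wire in default weights."""
    prefix = NAME_PREFIX[trigger]
    index = sum(1 for n in neurons if n.startswith(prefix + "_"))
    name = f"{prefix}_{index}"          # e.g. novel_0, stress_0
    neurons.append(name)
    for target, w in DEFAULT_LINKS[trigger].items():
        weights[tuple(sorted((name, target)))] = w
    return name
```

The cooldown period mentioned above is omitted here for brevity; in practice the check would also compare the current time against the last creation time.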

4. Network Stability and Pruning

To manage the complexity of a dynamically growing network, the system employs pruning mechanisms to remove inefficient or irrelevant components. This feature is critical for long-term network health and is enabled by default.

  • Connection Pruning: The system can periodically remove connections whose absolute weight falls below a very low threshold, cleaning up insignificant links.
  • Neuron Pruning: When the network approaches its configured maximum neuron limit, it can trigger the pruning of entire neurons. This process targets newly created (non-core) neurons that have failed to form strong connections or remain largely inactive.
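Both pruning passes can be sketched like this. The thresholds (minimum connection weight, "strong connection" cutoff, neuron cap) are placeholder values, and the function names are illustrative rather than the project's actual API.

```python
def prune_connections(weights, min_weight=0.05):
    """Drop connections whose absolute weight falls below a small threshold."""
    return {pair: w for pair, w in weights.items() if abs(w) >= min_weight}

def prune_weak_neurons(neurons, weights, core, *, max_neurons=32,
                       strong_weight=0.3):
    """Near the neuron cap, remove non-core neurons lacking strong links."""
    if len(neurons) < max_neurons:
        return neurons, weights
    keep = set(core)  # core neurons are never pruned
    for n in neurons:
        if n in keep:
            continue
        # Keep a grown neuron only if it has at least one strong connection.
        if any(abs(w) >= strong_weight for pair, w in weights.items() if n in pair):
            keep.add(n)
    weights = {pair: w for pair, w in weights.items()
               if pair[0] in keep and pair[1] in keep}
    return [n for n in neurons if n in keep], weights
```

Connection pruning runs periodically regardless of size, while neuron pruning only fires as the network approaches its cap, mirroring the two-tier policy described above.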
