A synthetic hippocampus for AI systems. Safety-first hypergraph-based associative memory architecture designed to explore persistent, coherent, and inspectable memory for AI. Transparent reasoning through spreading activation.
Read the OG Concept / Blueprint • Quick Start • Discussions • Contribute
SCE is a brain-inspired memory layer designed to act as a "System 2" reasoning substrate for AI.
Unlike vector databases that retrieve isolated chunks based on similarity, SCE models information as a structured graph of relationships, mimicking how biological neural networks form and strengthen connections. It uses energy propagation ("spreading activation") to dynamically assemble context, allowing systems to "remember" and "reason" through network dynamics rather than similarity search.
- Contextual Coherence - Retrieves related concepts even if they don't share keywords
- Emergent Reasoning - Detects contradictions and generates hypotheses via graph topology
- Transparency - Every reasoning path is fully inspectable and auditable
- Safety-First Design - Built with AI alignment and security as core principles
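The spreading-activation idea above can be sketched in a few lines of TypeScript. This is a minimal illustration, not SCE's actual engine: the graph shape, decay constant, and threshold are all assumptions made for the example.

```typescript
// Minimal spreading-activation sketch over a toy directed graph.
// All names, weights, and constants are illustrative, not SCE's API.
type Graph = Map<string, { target: string; weight: number }[]>;

function spreadActivation(
  graph: Graph,
  seeds: Map<string, number>,
  decay = 0.5,      // fraction of energy passed along each hop
  threshold = 0.05, // propagation stops once energy falls below this
): Map<string, number> {
  const activation = new Map(seeds);
  let frontier = new Map(seeds);
  while (frontier.size > 0) {
    const next = new Map<string, number>();
    for (const [node, energy] of frontier) {
      for (const { target, weight } of graph.get(node) ?? []) {
        const passed = energy * weight * decay;
        if (passed < threshold) continue; // energy has faded out
        activation.set(target, (activation.get(target) ?? 0) + passed);
        next.set(target, passed);
      }
    }
    frontier = next;
  }
  return activation;
}

// "coffee" activates "caffeine" and, transitively, "alertness",
// even though "alertness" shares no keywords with the query.
const g: Graph = new Map([
  ["coffee", [{ target: "caffeine", weight: 0.9 }]],
  ["caffeine", [{ target: "alertness", weight: 0.8 }]],
]);
const result = spreadActivation(g, new Map([["coffee", 1.0]]));
```

The threshold is what distinguishes this from plain graph traversal: retrieval is bounded by how much energy survives the hops, so distant but strongly connected concepts can outrank near but weakly connected ones.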
Open old_demo/sce_demo.tsx or old_demo/sce_demo_with_brain_behavior.tsx in Claude Artifacts for an instant, interactive visualization.
Runs the interactive visualization in your browser.

```bash
npm install
npm run dev
```

Runs as a standalone desktop application with file system access (Experimental).

```bash
npm run tauri dev
```

The Native App relies on Tauri v2, which compiles a high-performance binary specific to your operating system.
| Platform | Output | Prerequisites |
|---|---|---|
| Windows | `.exe` / `.msi` | C++ Build Tools + Rust |
| macOS | `.app` / `.dmg` | Xcode (`xcode-select --install`) + Rust |
| Linux | Binary / `.AppImage` | webkit2gtk (e.g., `sudo apt install libwebkit2gtk-4.1-dev`) + Rust |
| Component | Technology |
|---|---|
| Frontend | React 19, TypeScript, Vite |
| Styling | Tailwind CSS 3, Glassmorphism UI |
| Visualization | Lucide Icons, Recharts, Custom Graph Renderer |
| Math Engine | Custom Hypergraph (TypeScript) |
| Desktop | Tauri 2.0 + Rust (Native ARM64/x64) |
| AI Integration | Google Gemini, Groq, Ollama (Local) |
| Resource | Description |
|---|---|
| OG Concept / Blueprint Paper | Complete theoretical foundation |
| API Reference | Integration guide & function docs |
| Quick Start Tutorial | 10-minute hands-on guide |
| Detailed Updates | Update logs |
| Architecture Notes | Research directions & considerations |
| Contributing Guide | How to contribute |
| Security Policy | Responsible disclosure |
Status: Active Research & Development
API Stability: Expect breaking changes
Development Philosophy: Deliberately kept minimal to encourage experimentation and exploration. See architecture notes for research directions and considerations.
- Hypergraph Engine & Spreading Activation
- Hebbian Learning & MMR Pruning
- Interactive Visualization (Web)
- Native Desktop Support (Tauri)
- Contradiction Detection & Resolution
- Multi-Focus Working Memory
- Temporal Scoring with Time-Decay
- Goal-Directed Activation (Gravity Wells)
- Archival Strategy for Stale Nodes
- Build baseline security features
- Basic app features for quick experimentation
- Advanced pruning strategies
- Community feedback integration
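Two of the roadmap items above, Hebbian learning and temporal scoring with time-decay, can be sketched as simple update rules. The constants (`learningRate`, `halfLifeMs`) and the `Edge` shape are illustrative assumptions, not SCE's actual data model.

```typescript
// Hedged sketch of two roadmap mechanics. Constants are illustrative.
interface Edge {
  weight: number;   // connection strength in [0, 1]
  lastUsed: number; // timestamp of last co-activation (ms)
}

// Hebbian update: strengthen an edge when both endpoints fire together.
// The (1 - weight) factor makes the weight saturate toward 1.
function hebbianUpdate(edge: Edge, learningRate = 0.1, now = Date.now()): Edge {
  return {
    weight: edge.weight + learningRate * (1 - edge.weight),
    lastUsed: now,
  };
}

// Temporal scoring: an unused edge's effective strength halves
// every halfLifeMs (exponential time-decay).
function temporalScore(edge: Edge, now: number, halfLifeMs = 86_400_000): number {
  const age = now - edge.lastUsed;
  return edge.weight * Math.pow(0.5, age / halfLifeMs);
}
```

Together these give the graph its "use it or lose it" dynamics: frequently co-activated edges approach full strength, while stale edges decay toward the archival threshold.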
| Platform | Status | How to Run |
|---|---|---|
| Web | Stable | `npm run dev` |
| Desktop | Experimental | `npm run tauri dev` |
Build Your Own: Use `npm run tauri build` to create a standalone desktop application for your operating system. Pre-built binaries are available in Releases.
Realism Note: This is a research prototype optimized for correctness and inspectability, not performance. Expect higher memory usage and latency with large graph sizes (>10k nodes). Optimizations are planned for future releases.
This started while building a digital twin project. I kept running into the same fundamental issues with existing memory systems:
- Contextual Fragmentation - Related information retrieved independently, losing cross-domain relationships
- Flat Relevance - Vector similarity captures surface semantics but ignores relational and structural importance
- Token Inefficiency - Retrieved chunks injected wholesale, regardless of marginal informational value
I did what I always do when I cannot climb a wall: I demolished it and rebuilt from first principles, pulling from existing research across the entire spectrum of neuroscience, graph theory, information theory, and cognitive architecture.
The result is a completely new type of architecture. Not an incremental improvement. A different foundation.
I'm deeply concerned about the race toward ASI and the security challenges we face today. I think this architecture has theoretical potential to tackle core problems in AI safety and alignment, problems that won't wait for traditional research timelines.
You can see it working, adapting, and evolving in the demo with a tiny test set. Now it needs to be tested at scale and in depth.
Built from first principles, synthesizing insights from:
- Neuroscience - Hippocampal memory consolidation, synaptic plasticity
- Cognitive Architecture - ACT-R, SOAR, spreading activation models
- Graph Theory - Hypergraph dynamics and topology
- Information Theory - Maximal Marginal Relevance, entropy-based pruning
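One of the information-theoretic ideas listed above, Maximal Marginal Relevance (MMR), selects items that are relevant to a query while penalizing redundancy with what is already selected. The sketch below is a generic MMR implementation under assumed toy scoring functions; SCE's actual pruning may differ.

```typescript
// Generic MMR selection sketch. `relevance` and `similarity` are
// caller-supplied toy functions; lambda trades relevance vs. diversity.
function mmrSelect(
  candidates: string[],
  relevance: (c: string) => number,
  similarity: (a: string, b: string) => number,
  k: number,
  lambda = 0.7,
): string[] {
  const selected: string[] = [];
  const pool = new Set(candidates);
  while (selected.length < k && pool.size > 0) {
    let best: string | null = null;
    let bestScore = -Infinity;
    for (const c of pool) {
      // Redundancy = highest similarity to anything already chosen.
      const redundancy = selected.length
        ? Math.max(...selected.map((s) => similarity(c, s)))
        : 0;
      const score = lambda * relevance(c) - (1 - lambda) * redundancy;
      if (score > bestScore) { bestScore = score; best = c; }
    }
    selected.push(best!);
    pool.delete(best!);
  }
  return selected;
}

// "a2" is nearly a duplicate of "a", so MMR prefers the less relevant
// but more diverse "b" for the second slot.
const rel = (c: string) => ({ a: 1.0, a2: 0.95, b: 0.6 }[c] ?? 0);
const sim = (x: string, y: string) =>
  (x === "a" && y === "a2") || (x === "a2" && y === "a") ? 0.9 : 0;
const picked = mmrSelect(["a", "a2", "b"], rel, sim, 2);
```

Applied to a memory graph, the same penalty term keeps pruning from retaining many near-identical nodes while discarding less activated but informationally distinct ones.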
Before we jump to test the theoretical limits, we need to find the physical/practical boundaries first. Break things. Optimize. Build. Push it until it fails.
- How well does it work?
- How deep can it go?
- What problems can it actually solve?
These questions can only be answered by the community testing, probing, and pushing the boundaries.
Author's Note: This architecture was developed by a single developer, not a team, nor a research lab. Because of this, I welcome every researcher and developer to explore the depths of this architecture. The potential is there, but it needs more minds to experiment with it.
Important: I'm intentionally keeping the core implementation minimal to avoid constraining exploration. The goal is to provide a working foundation that others can build upon, modify, and take in new directions. See my architecture notes for specific areas needing investigation.
- Adversarial testing and red-teaming
- Alignment research applications
- Risk analysis and mitigation strategies
- Security auditing
- Performance benchmarking and optimization
- Alternative activation functions
- Novel pruning strategies
- Graph compression techniques
- Integration examples and tutorials
- Custom node type implementations
See my architecture notes for specific research directions
Current AI systems are racing toward greater capability with fundamentally opaque memory. Vector databases, transformer memory, and RAG all hide their reasoning in black boxes.
SCE takes a different path:
- Memory as explicit structure, not embeddings
- Learning through natural dynamics, not gradient descent
- Reasoning visible in topology, not hidden in weights
- Safety through inspectability by design
This is not about making AI more capable; it's about making AI safer as it becomes more capable.
License: Apache 2.0 - See LICENSE
If you use SCE in your research, please cite:
```bibtex
@misc{sce_2025,
  title={The Synapse Context Engine (SCE): Safe AI Memory Architecture},
  author={Lasse Sainia},
  year={2025},
  url={https://github.com/sasus-dev/synapse-context-engine}
}
```

A first-principles approach to AI memory and safety
Built by one developer. Refined by a community.
Star this repo if you believe AI safety needs new approaches
Try the demo via Claude • Fork and experiment • Join the discussion
Support This Work
If SCE helps your research or project, consider supporting its development:
Sponsor via GitHub • Star the repo
Questions? Open a Discussion
Found a bug? Report it
Want to contribute? See Guidelines