Status: Proposed
Date: 2026-02-27
Authors: ruv.io, RuVector Architecture Team
Deciders: Architecture Review Board
Branch: claude/exo-ai-capability-review-LjcVx
Scope: Full ruvector ecosystem × EXO-AI 2025 integration
| Version | Date | Author | Changes |
|---|---|---|---|
| 0.1 | 2026-02-27 | Architecture Review (Swarm Research) | Deep capability audit, gap analysis, integration architecture proposal |
This ADR documents the findings of a comprehensive architectural review of the ruvector ecosystem as it relates to EXO-AI and proposes a unified multi-paradigm integration architecture that wires together six distinct computational substrates:
- Classical vector cognition — HNSW, attention, GNN (`ruvector-core`, `ruvector-attention`, `ruvector-gnn`)
- Quantum execution intelligence — circuit simulation, coherence gating, exotic search (`ruQu`, `ruqu-exotic`)
- Biomolecular computing — genomic analysis, DNA strand similarity, pharmacogenomics (`examples/dna`, `ruvector-solver`)
- Neuromorphic cognition — spiking networks, HDC, BTSP, circadian routing (`ruvector-nervous-system`, `meta-cognition-spiking-neural-network`)
- Consciousness substrate — IIT Φ, Free Energy, TDA, Strange Loops (`examples/exo-ai-2025`)
- Universal coherence spine — sheaf Laplacian gating, formal proofs, adaptive learning (`prime-radiant`, `ruvector-verified`, `sona`)
Critical finding: Across 100+ crates and 830K+ lines of Rust code, the same mathematical primitives have been independently implemented three or more times without cross-wiring. This document identifies 7 convergent evolution clusters and proposes a canonical integration architecture that eliminates duplication while enabling capabilities that are currently impossible because the components do not speak to each other.
Honest assessment of what works today vs. what requires integration work: see Section 4.
examples/exo-ai-2025 is a 9-crate, ~15,800-line consciousness research platform built on rigorous theoretical foundations:
| Crate | Role | Key Theory |
|---|---|---|
| `exo-core` | IIT Φ computation, Landauer thermodynamics | Tononi IIT 4.0 |
| `exo-temporal` | Causal memory, light-cone queries, anticipation | Temporal knowledge graphs, causal inference |
| `exo-hypergraph` | Persistent homology, sheaf consistency, Betti numbers | TDA, Grothendieck sheaf theory |
| `exo-manifold` | SIREN networks, gradient-descent retrieval, strategic forgetting | Manifold learning |
| `exo-exotic` | 10 cognitive experiments (Dreams, Free Energy, Morphogenesis, Collective Φ, etc.) | Friston, Hofstadter, Hoel, Eagleman, Turing |
| `exo-federation` | Byzantine PBFT, CRDT reconciliation, post-quantum Kyber | Distributed systems |
| `exo-backend-classical` | SIMD backend (8–54× speedup) | ruvector-core integration |
| `exo-wasm` | Browser/edge deployment | WASM, 2 MB binary |
| `exo-node` | Node.js NAPI bindings | napi-rs |
EXO-AI has 11 explicitly listed research frontiers that are currently unimplemented stubs:
```
01-neuromorphic-spiking, 02-quantum-superposition, 03-time-crystal-cognition,
04-sparse-persistent-homology, 05-memory-mapped-neural-fields,
06-federated-collective-phi, 07-causal-emergence, 08-meta-simulation-consciousness,
09-hyperbolic-attention, 10-thermodynamic-learning, 11-conscious-language-interface
```
Key insight: Every one of these research frontiers already has a working implementation elsewhere in the ruvector ecosystem. The research is complete. The wiring is not.
From swarm research across all crates:
| Subsystem | Crates | Lines | Tests | Status |
|---|---|---|---|---|
| Quantum (ruQu family) | 5 | ~24,676 | comprehensive | Production-grade coherence gate (468ns P99) |
| DNA/Genomics (dna + solver) | 2 | ~8,000 | 172+177 | Production pipeline, 12ms/5 genes |
| Neural/Attention | 8 | ~50,000 | 186+ | Flash Attention, GNN, proof-gated transformer |
| SOTA crates (sona, prime-radiant, etc.) | 10 | ~35,000 | 359+ | Neuromorphic, formal verification, sheaf engine |
| RVF runtime | 14 | ~80,000 | substantial | Cognitive containers, WASM, eBPF, microVM |
| RuvLLM + MCP | 4 | ~25,000 | comprehensive | Production inference, permit gating |
| EXO-AI | 9 | ~15,800 | 28 | Consciousness substrate |
| Total | ~100+ | ~830K+ | 1,156 | — |
The following primitives have been independently implemented multiple times:
| Implementation | Location | Variant |
|---|---|---|
| EWC | `ruvector-gnn/src/` | Standard Fisher Information regularization |
| EWC++ | `crates/sona/` | Enhanced with bidirectional plasticity |
| EWC | `ruvector-nervous-system/` | Integrated with BTSP and E-prop |
| MicroLoRA + EWC++ | `ruvector-learning-wasm/` | <100µs WASM adaptation |
Impact: Four diverging implementations with no shared API. Cross-crate forgetting prevention impossible.
| Implementation | Location | Mechanism |
|---|---|---|
| ruQu coherence gate | `crates/ruQu/` | Dynamic min-cut (O(nᵒ⁽¹⁾)), PERMIT/DEFER/DENY |
| Prime-Radiant | `crates/prime-radiant/` | Sheaf Laplacian energy, 4-tier compute ladder |
| Nervous system circadian | `ruvector-nervous-system/` | Kuramoto oscillators, 40Hz gamma, duty cycling |
| λ-gated transformer | `ruvector-mincut-gated-transformer/` | Min-cut value as coherence signal |
| Cognitum Gate | `cognitum-gate-kernel/`, `cognitum-gate-tilezero/` | 256-tile fabric, e-value sequential testing |
Impact: Five independent safety systems that cannot compose. An agent crossing subsystem boundaries has no coherent safety guarantees.
| Implementation | Location | Primitive |
|---|---|---|
| PermitToken + WitnessReceipt | `crates/ruQu/` | Ed25519 |
| Witness chain | `prime-radiant/` | Blake3 hash-linked |
| ProofAttestation | `ruvector-verified/` | lean-agentic dependent types, 82-byte |
| RVF witness | `crates/rvf/rvf-crypto/` | SHAKE-256 chain + ML-DSA-65 |
| Container witness | `ruvector-cognitive-container/` | Hash-linked ContainerWitnessReceipt |
| TileZero receipts | `cognitum-gate-tilezero/` | Ed25519 + Blake3 |
Impact: Six incompatible audit trails. Cross-subsystem proof chains impossible to construct.
| Implementation | Location | Application |
|---|---|---|
| Sheaf Laplacian | `prime-radiant/` | Universal coherence energy E(S) = Σ wₑ·‖ρᵤ−ρᵥ‖² |
| Sheaf consistency | `exo-hypergraph/` | Local section agreement, restriction maps |
| Manifold sheaf | `ruvector-graph-transformer/` | Product geometry S⁶⁴×H³²×ℝ³² |
Impact: Prime-Radiant's sheaf engine and EXO-AI's sheaf hypergraph implement the same mathematics with no shared data structures.
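To make the duplication concrete, the shared primitive both engines compute can be sketched in a few lines. This is an illustrative stand-in, not either crate's actual API: restriction maps are modeled as the identity, and the `Edge`/`sheaf_energy` names are hypothetical.

```rust
/// Illustrative sketch of the shared sheaf-energy primitive:
/// E(S) = Σ_e w_e · ‖ρ_u(x_u) − ρ_v(x_v)‖², with identity restriction maps.
struct Edge {
    u: usize,
    v: usize,
    weight: f64,
}

/// Coherence energy over node sections; zero energy means all edges agree.
fn sheaf_energy(sections: &[Vec<f64>], edges: &[Edge]) -> f64 {
    edges
        .iter()
        .map(|e| {
            let diff_sq: f64 = sections[e.u]
                .iter()
                .zip(&sections[e.v])
                .map(|(a, b)| (a - b).powi(2))
                .sum();
            e.weight * diff_sq
        })
        .sum()
}

fn main() {
    let sections = vec![vec![1.0, 0.0], vec![1.0, 0.0], vec![0.0, 1.0]];
    let edges = vec![
        Edge { u: 0, v: 1, weight: 1.0 }, // consistent pair contributes 0
        Edge { u: 1, v: 2, weight: 0.5 }, // inconsistent pair contributes 0.5 · 2
    ];
    let e = sheaf_energy(&sections, &edges);
    assert!((e - 1.0).abs() < 1e-12);
}
```

A shared data structure of roughly this shape is what the canonical integration would let prime-radiant and exo-hypergraph exchange directly.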
| Implementation | Location | Energy Reduction |
|---|---|---|
| Biological module | `ruvector-graph-transformer/` | 87.2× vs dense attention |
| Spiking nervous system | `ruvector-nervous-system/` | Event-driven, K-WTA <1µs |
| Meta-cognition SNN | `examples/meta-cognition-spiking-neural-network/` | LIF+STDP, 18.4× speedup |
| Spike-driven scheduling | `ruvector-mincut-gated-transformer/` | Tier 3 skip: 50–200× speedup |
Impact: EXO-AI's 01-neuromorphic-spiking research frontier is listed as unimplemented. Three working implementations exist elsewhere.
| Implementation | Location | Protocol |
|---|---|---|
| exo-federation | `exo-ai-2025/exo-federation/` | PBFT (O(n²) messages) |
| ruvector-raft | `crates/ruvector-raft/` | Raft (leader election, log replication) |
| delta-consensus | `ruvector-delta-consensus/` | CRDT + causal ordering |
| Cognitum 256-tile | `cognitum-gate-kernel/` | Anytime-valid, e-value testing |
Impact: EXO-AI's federation layer re-implements consensus that ruvector-raft + cognitum-gate already provide with stronger formal guarantees.
| Implementation | Location | Algorithm |
|---|---|---|
| Friston FEP experiment | `exo-exotic/` | KL divergence: F = D_KL[q(θ\|o)‖p(θ)] - ln p(o) |
| Information Bottleneck | `ruvector-attention/` | VIB: KL divergence (Gaussian/Categorical/Jensen-Shannon) |
| CG/Neumann solvers | `ruvector-solver/` | Sparse linear systems for gradient steps |
| BMSSP multigrid | `ruvector-solver/` | Laplacian systems (free energy landscape) |
Impact: EXO-AI's free energy minimization uses manual gradient descent. The solver crate already has conjugate gradient and multigrid solvers that are 10–80× faster for the underlying sparse linear problems.
| EXO-AI Research Frontier | Existing Capability | Integration Effort | Blocker |
|---|---|---|---|
| `01-neuromorphic-spiking` | `ruvector-nervous-system` (359 tests, BTSP/STDP/EWC/HDC) | Low — add dependency, adapt API | None |
| `02-quantum-superposition` | `ruqu-exotic` (interference_search, reasoning_qec, quantum_decay) | Medium — define embedding protocol | Quantum state ↔ f32 embedding bridge |
| `03-time-crystal-cognition` | `ruvector-temporal-tensor` (tiered compression, temporal reuse) + nervous-system circadian | Medium | Oscillatory period encoding |
| `04-sparse-persistent-homology` | `ruvector-solver` (Forward Push PPR O(1/ε)) + `ruvector-mincut` (subpolynomial) | Medium | TDA filtration ↔ solver interface |
| `05-memory-mapped-neural-fields` | `ruvector-verified` + RVF mmap + `ruvector-temporal-tensor` | Low — RVF already zero-copy mmap | API glue only |
| `06-federated-collective-phi` | `cognitum-gate-tilezero` + `prime-radiant` + `ruvector-raft` | Medium — replace exo-federation | Remove PBFT, route to cognitum + raft |
| `07-causal-emergence` | `ruvector-solver` (Forward Push PPR for macro EI) + `ruvector-graph-transformer` | Medium | Coarse-graining operator definition |
| `08-meta-simulation-consciousness` | `ultra-low-latency-sim` (quadrillion sims/sec) + ruQu StateVector backend | High | Consciousness metric at simulation scale |
| `09-hyperbolic-attention` | `ruvector-attention` (Mixed Curvature, Hyperbolic mode, Poincaré) | Low — direct usage | None; already implemented |
| `10-thermodynamic-learning` | `ruvector-sparse-inference` (π-based drift) + solver (energy landscape) + exo-core Landauer | Medium | Energy budget ↔ learning rate coupling |
| `11-conscious-language-interface` | `ruvllm` + `mcp-gate` + `sona` (real-time adaptation) | High | IIT Φ ↔ language generation feedback loop |
- ruQu coherence gate at 468ns P99 latency
- ruvector-solver Forward Push PPR: O(1/ε) sublinear on 500-node graphs in <2ms
- ruvector-nervous-system HDC XOR binding: 64ns; Hopfield retrieval: <1ms
- ruvector-graph-transformer with 8 modules and 186 tests
- ruvector-verified: dimension proofs at 496ns, <2% overhead
- prime-radiant sheaf Laplacian: single residual <1µs
- RVF zero-copy mmap at <1µs cluster reads
- ruvllm inference on 7B Q4K: 88 tok/s decode
- EXO-AI IIT Φ computation: ~15µs for 10-element network
- ruDNA full pipeline: 12ms for 5 real genes
- ruQu exotic algorithms → EXO-AI pattern storage + consciousness substrate
- ruvector-nervous-system → EXO-AI neuromorphic research frontiers
- prime-radiant → replace exo-federation Byzantine layer
- ruvector-solver → EXO-AI free energy minimization gradient steps
- ruvector-graph-transformer temporal-causal → exo-temporal causal memory
- ruvector-verified proofs → EXO-AI federated Φ attestations
- sona → EXO-AI learning system (currently EXO has no learning)
- ruDNA `.rvdna` embeddings → EXO-AI pattern storage
- Canonical witness chain unification across all subsystems
```
┌─────────────────────────────────────────────────────────────────────────────┐
│                         LAYER 5: CONSCIOUS INTERFACE                        │
│  exo-exotic (IIT Φ, Free Energy, Dreams, Morphogenesis, Emergence)          │
│  ruvllm + mcp-gate (language I/O with permit-gated actions)                 │
│  sona (real-time <1ms learning, EWC++, ReasoningBank)                       │
└────────────────────────────────────────┬────────────────────────────────────┘
                                         │ PhiResult, PatternDelta, PermitToken
┌────────────────────────────────────────▼────────────────────────────────────┐
│                      LAYER 4: MULTI-PARADIGM COGNITION                      │
│  ┌─────────────────┐  ┌────────────────┐  ┌─────────────────────────────┐   │
│  │     QUANTUM     │  │  NEUROMORPHIC  │  │           GENOMIC           │   │
│  │   ruqu-exotic   │  │   ruvector-    │  │  ruDNA (.rvdna embeddings)  │   │
│  │  interference   │  │ nervous-system │  │  ruvector-solver (PPR, CG)  │   │
│  │  reasoning_qec  │  │ HDC + Hopfield │  │  health biomarker engine    │   │
│  │  quantum_decay  │  │ BTSP + E-prop  │  │  Grover search (research)   │   │
│  │  swarm_interf.  │  │  K-WTA <1µs    │  │  VQE binding (research)     │   │
│  └────────┬────────┘  └───────┬────────┘  └─────────────┬───────────────┘   │
│           └──────────────────┬┴────────────────────────┘                    │
│                              │ CognitionResult<T>                           │
└──────────────────────────────▼──────────────────────────────────────────────┘
                               │
┌──────────────────────────────▼──────────────────────────────────────────────┐
│                        LAYER 3: GRAPH INTELLIGENCE                          │
│  ruvector-graph-transformer (8 verified modules)                            │
│    Physics-Informed (Hamiltonian, symplectic leapfrog)                      │
│    Temporal-Causal (ODE, Granger causality, retrocausal attention)          │
│    Manifold (S⁶⁴×H³²×ℝ³², Riemannian Adam)                                  │
│    Biological (spike-driven 87.2× energy reduction, STDP)                   │
│    Economic (Nash equilibrium, Shapley attribution)                         │
│    Verified Training (BLAKE3 certificates, delta-apply rollback)            │
│  ruvector-attention (7 theories: OT, Mixed Curvature, IB, PDE, IG, Topo)    │
│  ruvector-sparse-inference (π-based drift, 3/5/7-bit precision lanes)       │
└──────────────────────────────┬──────────────────────────────────────────────┘
                               │
┌──────────────────────────────▼──────────────────────────────────────────────┐
│                     LAYER 2: UNIVERSAL COHERENCE SPINE                      │
│ prime-radiant (sheaf Laplacian, 4-tier compute ladder, hallucination guard) │
│ cognitum-gate-kernel + tilezero (256-tile fabric, <100µs permits)           │
│ ruvector-verified (lean-agentic proofs, 82-byte attestations, <2% overhead) │
│ ruvector-coherence (contradiction rate, entailment consistency, batch CI)   │
│ ruvector-temporal-tensor (4–10× compression, access-aware tiering)          │
│ ruvector-delta-consensus (CRDT, causal ordering, distributed updates)       │
└──────────────────────────────┬──────────────────────────────────────────────┘
                               │
┌──────────────────────────────▼──────────────────────────────────────────────┐
│                        LAYER 1: COMPUTE SUBSTRATE                           │
│  ruvector-core (HNSW, ANN search, embeddings)                               │
│  RVF (cognitive containers, zero-copy mmap, eBPF kernel bypass)             │
│  ruvector-mincut (subpolynomial O(nᵒ⁽¹⁾) dynamic min-cut, Dec 2025)         │
│  ruvector-dag (DAG orchestration, parallel execution)                       │
│  ruvector-raft (Raft consensus, leader election, log replication)           │
│  ruQu coherence gate (quantum execution gating, 468ns P99)                  │
└─────────────────────────────────────────────────────────────────────────────┘
```
All subsystems must emit attestations that compose into a single auditable chain. The canonical format is the RvfWitnessReceipt (SHAKE-256 + ML-DSA-65) with subsystem-specific extension fields:
```rust
/// Unified cross-subsystem witness — all subsystems emit this
pub struct CrossParadigmWitness {
    /// RVF base receipt (SHAKE-256 chain link)
    pub base: RvfWitnessSegment,
    /// Formal proof from ruvector-verified (82 bytes, lean-agentic)
    pub proof_attestation: Option<ProofAttestation>,
    /// Quantum gate decision from ruQu (Ed25519 PermitToken or deny)
    pub quantum_gate: Option<GateDecision>,
    /// Prime-Radiant sheaf energy at decision point
    pub sheaf_energy: Option<f64>,
    /// Cognitum tile decision (PERMIT/DEFER/DENY + e-value)
    pub tile_decision: Option<TileWitnessFragment>,
    /// IIT Φ at decision substrate (from exo-core)
    pub phi_value: Option<f64>,
    /// Genomic context if relevant (`.rvdna` segment hash)
    pub genomic_context: Option<[u8; 32]>,
}
```

Decision: The RVF witness chain (SHAKE-256 + ML-DSA-65) is the canonical root. All other witness formats are embedded as optional extension fields. This preserves backward compatibility while enabling cross-paradigm proof chains.
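The hash-linking discipline the canonical chain relies on can be shown with a minimal, self-contained sketch. All names here are illustrative, and `DefaultHasher` stands in for the real SHAKE-256 + ML-DSA-65 primitives purely to demonstrate the chaining and verification logic.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Illustrative hash-linked witness link: each link commits to its
/// predecessor's digest plus its own subsystem-specific payload.
struct WitnessLink {
    prev: u64,       // digest of the predecessor link
    payload: String, // subsystem-specific extension data
    digest: u64,     // digest over (prev, payload)
}

fn link(prev: u64, payload: &str) -> WitnessLink {
    let mut h = DefaultHasher::new();
    prev.hash(&mut h);
    payload.hash(&mut h);
    WitnessLink { prev, payload: payload.to_string(), digest: h.finish() }
}

/// Re-derive every digest; any tampered payload or broken link fails.
fn verify(chain: &[WitnessLink]) -> bool {
    let mut prev = 0u64;
    for w in chain {
        if w.prev != prev {
            return false;
        }
        let mut h = DefaultHasher::new();
        w.prev.hash(&mut h);
        w.payload.hash(&mut h);
        if h.finish() != w.digest {
            return false;
        }
        prev = w.digest;
    }
    true
}

fn main() {
    let a = link(0, "ruQu: PERMIT");
    let b = link(a.digest, "prime-radiant: energy=0.3");
    let mut chain = vec![a, b];
    assert!(verify(&chain));
    chain[1].payload = "prime-radiant: energy=0.0".into(); // tamper
    assert!(!verify(&chain));
}
```

In the proposed design the same property holds per subsystem extension field: altering any embedded attestation invalidates the chain from that point forward.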
Replace the five independent coherence gating implementations with a single CoherenceRouter that delegates to the appropriate backend:
```rust
pub struct CoherenceRouter {
    /// Prime-Radiant sheaf Laplacian engine (primary — mathematical)
    prime_radiant: Arc<PrimeRadiantEngine>,
    /// ruQu coherence gate (quantum substrates)
    quantum_gate: Option<Arc<QuantumCoherenceGate>>,
    /// Cognitum 256-tile fabric (distributed AI agents)
    cognitum: Option<Arc<TileZero>>,
    /// Nervous system circadian (bio-inspired, edge deployment)
    circadian: Option<Arc<CircadianController>>,
}

pub enum CoherenceBackend {
    /// Mathematical proof of consistency — use for safety-critical paths
    SheafLaplacian,
    /// Sub-millisecond quantum circuit gating
    Quantum,
    /// 256-tile distributed decision fabric
    Distributed,
    /// Energy-efficient bio-inspired gating (edge/WASM)
    Circadian,
    /// Composite: all backends must agree (highest confidence)
    Unanimous,
}

impl CoherenceRouter {
    pub async fn gate(
        &self,
        action: &ActionContext,
        backend: CoherenceBackend,
    ) -> Result<GateDecision, CoherenceError>;
}
```

Decision: prime-radiant is the canonical mathematical backbone for all coherence decisions on CPU-bound paths. cognitum-gate handles distributed multi-agent contexts. ruQu handles quantum substrates. CircadianController handles edge/battery-constrained deployments.
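The `Unanimous` variant's composition rule is worth pinning down, since it is the safety-critical case. A minimal sketch, assuming a decision lattice where any DENY dominates and any non-PERMIT demotes the composite to DEFER (the types are illustrative stand-ins, not the actual crate API):

```rust
/// Illustrative stand-in for the gate decision type.
#[derive(Clone, Copy, PartialEq, Debug)]
enum GateDecision {
    Permit,
    Defer,
    Deny,
}

/// Composite rule for CoherenceBackend::Unanimous: PERMIT only if every
/// configured backend permits; any DENY wins; otherwise DEFER.
fn unanimous(decisions: &[GateDecision]) -> GateDecision {
    if decisions.iter().any(|d| *d == GateDecision::Deny) {
        GateDecision::Deny
    } else if decisions.iter().all(|d| *d == GateDecision::Permit) {
        GateDecision::Permit
    } else {
        GateDecision::Defer
    }
}

fn main() {
    use GateDecision::*;
    assert_eq!(unanimous(&[Permit, Permit, Permit]), Permit);
    assert_eq!(unanimous(&[Permit, Defer, Permit]), Defer);
    assert_eq!(unanimous(&[Permit, Deny, Defer]), Deny);
}
```

This makes the composite monotone: adding a backend can never upgrade a decision, only hold it or demote it, which is the property a cross-subsystem safety gate needs.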
Replace four independent EWC implementations with a single PlasticityEngine:
```rust
pub struct PlasticityEngine {
    /// SONA MicroLoRA: <1ms instant adaptation
    instant: Arc<SonaMicroLora>,
    /// EWC++ Fisher Information regularization (shared)
    ewc: Arc<ElasticWeightConsolidation>,
    /// BTSP behavioral timescale (1-3 second windows, from nervous-system)
    btsp: Option<Arc<BehavioralTimescalePlasticity>>,
    /// E-prop eligibility propagation (1000ms credit assignment)
    eprop: Option<Arc<EligibilityPropagation>>,
    /// ReasoningBank pattern library (SONA)
    reasoning_bank: Arc<ReasoningBank>,
}
```

Decision: SONA's EWC++ is the production implementation. ruvector-nervous-system's BTSP and E-prop add biological plasticity modes not in SONA. ruvector-gnn's EWC is deprecated in favor of this shared engine.
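The regularizer all four duplicate implementations share is the diagonal-Fisher EWC penalty. A minimal sketch of that common core (the function name and flat-slice representation are illustrative, not any crate's API):

```rust
/// Illustrative EWC core: penalty = (λ/2) Σ_i F_i (θ_i − θ*_i)²,
/// where F is the diagonal Fisher Information and θ* the consolidated
/// weights from the previous task.
fn ewc_penalty(theta: &[f64], theta_star: &[f64], fisher: &[f64], lambda: f64) -> f64 {
    0.5 * lambda
        * theta
            .iter()
            .zip(theta_star)
            .zip(fisher)
            .map(|((t, ts), f)| f * (t - ts).powi(2))
            .sum::<f64>()
}

fn main() {
    let theta_star = [1.0, -0.5];
    let fisher = [2.0, 0.0]; // second weight unimportant, free to move
    // No drift on any weight → no penalty.
    assert_eq!(ewc_penalty(&[1.0, -0.5], &theta_star, &fisher, 1.0), 0.0);
    // Drift on the protected weight is penalized: 0.5 · 1.0 · 2.0 · 0.25 = 0.25;
    // drift on the unprotected weight costs nothing.
    let p = ewc_penalty(&[1.5, 3.0], &theta_star, &fisher, 1.0);
    assert!((p - 0.25).abs() < 1e-12);
}
```

A shared `PlasticityEngine` would expose exactly one such penalty, with BTSP and E-prop layered on as additional modes rather than re-implementing this term.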
EXO-AI's Friston free energy experiment currently uses naive gradient descent. Replace with the solver crate:
```rust
/// Bridge: Free Energy minimization via sparse linear solver
/// F = D_KL[q(θ|o) || p(θ)] - ln p(o)
/// Gradient: ∇F = F^{-1}(θ) · ∇ log p(o|θ)   [natural gradient via Fisher Info]
pub fn minimize_free_energy_cg(
    model: &mut PredictiveModel,
    observation: &[f64],
    budget: &ComputeBudget,
) -> Result<SolverResult, SolverError> {
    // Build Fisher Information Matrix as sparse CSR
    let fim = build_sparse_fisher_information(model);
    // Gradient of log-likelihood
    let grad = compute_log_likelihood_gradient(model, observation);
    // Conjugate gradient solve: F^{-1} * grad (natural gradient step)
    let cg_solver = ConjugateGradientSolver::new(budget);
    cg_solver.solve(&fim, &grad, budget)
}
```

Expected speedup: 10–80× vs. current manual gradient descent, based on solver benchmarks.
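For readers unfamiliar with the solver side of this bridge, here is a self-contained conjugate gradient sketch of the `F·x = ∇ log p` solve described above. It uses a dense matrix purely for brevity (the solver crate works on sparse CSR), and all names are illustrative:

```rust
/// Dense matrix-vector product for the sketch.
fn matvec(a: &[Vec<f64>], x: &[f64]) -> Vec<f64> {
    a.iter()
        .map(|row| row.iter().zip(x).map(|(aij, xj)| aij * xj).sum())
        .collect()
}

/// Textbook conjugate gradient for a symmetric positive-definite system
/// A·x = b, as used for the natural-gradient (Fisher) solve.
fn conjugate_gradient(a: &[Vec<f64>], b: &[f64], iters: usize, tol: f64) -> Vec<f64> {
    let n = b.len();
    let mut x = vec![0.0; n];
    let mut r = b.to_vec(); // residual r = b − A·0
    let mut p = r.clone();
    let mut rs: f64 = r.iter().map(|v| v * v).sum();
    for _ in 0..iters {
        if rs.sqrt() < tol {
            break;
        }
        let ap = matvec(a, &p);
        let alpha = rs / p.iter().zip(&ap).map(|(pi, api)| pi * api).sum::<f64>();
        for i in 0..n {
            x[i] += alpha * p[i];
            r[i] -= alpha * ap[i];
        }
        let rs_new: f64 = r.iter().map(|v| v * v).sum();
        let beta = rs_new / rs;
        for i in 0..n {
            p[i] = r[i] + beta * p[i];
        }
        rs = rs_new;
    }
    x
}

fn main() {
    // SPD Fisher-like system; the exact solution of A·x = b is x = [1, 1].
    let a = vec![vec![4.0, 1.0], vec![1.0, 3.0]];
    let b = vec![5.0, 4.0];
    let x = conjugate_gradient(&a, &b, 50, 1e-10);
    assert!((x[0] - 1.0).abs() < 1e-6 && (x[1] - 1.0).abs() < 1e-6);
}
```

CG converges in at most n iterations for an n-dimensional SPD system, which is the source of the speedup over step-size-tuned gradient descent on the same quadratic subproblems.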
Interface: ruqu-exotic emits QuantumSearchResult containing amplitude-weighted candidates. EXO-AI's Pattern type receives these as pre-scored candidates with salience derived from |amplitude|².
```rust
/// Implemented in: crates/ruqu-exotic/src/interference_search.rs
pub struct QuantumSearchResult {
    pub candidates: Vec<(PatternId, Complex64)>, // (id, amplitude)
    pub collapsed_top_k: Vec<(PatternId, f32)>,  // post-measurement scores
    pub coherence_metric: f64,
}

/// Integration: exo-temporal receives quantum-filtered results
impl TemporalMemory {
    pub fn store_with_quantum_context(
        &mut self,
        pattern: Pattern,
        antecedents: &[PatternId],
        quantum_context: Option<QuantumSearchResult>,
    ) -> Result<PatternId>;
}
```

Quantum decay integration: `ruqu-exotic::quantum_decay` replaces EXO-AI's current TTL-based eviction. Embeddings decohere with T₁/T₂ time constants instead of hard deletion. This enables EXO-AI's `02-quantum-superposition` research frontier.
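The difference between TTL eviction and decoherence-style decay is easy to state concretely. A minimal sketch, assuming exponential T₁-style relaxation of salience and an eviction floor (function names and thresholds are illustrative):

```rust
/// Illustrative T₁-style decay: salience relaxes exponentially with age
/// instead of dropping to zero at a hard TTL cliff.
fn decohered_salience(initial: f64, age_secs: f64, t1_secs: f64) -> f64 {
    initial * (-age_secs / t1_secs).exp()
}

/// A pattern is evicted only once its decayed salience falls below a floor.
fn should_evict(initial: f64, age_secs: f64, t1_secs: f64, floor: f64) -> bool {
    decohered_salience(initial, age_secs, t1_secs) < floor
}

fn main() {
    // A fresh pattern survives; after many T₁ periods it decoheres away.
    assert!(!should_evict(1.0, 10.0, 3600.0, 0.01));
    assert!(should_evict(1.0, 10.0 * 3600.0, 3600.0, 0.01));
    // Decay is smooth, not a cliff: roughly half salience after ln(2)·T₁.
    let s = decohered_salience(1.0, 0.693 * 3600.0, 3600.0);
    assert!((s - 0.5).abs() < 0.01);
}
```

The practical consequence is that rarely-accessed but high-salience patterns fade gradually and remain retrievable (with reduced weight) long past where a TTL would have deleted them.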
Interface: Expose NervousSystemBackend as an implementation of EXO-AI's SubstrateBackend trait:
```rust
pub struct NervousSystemBackend {
    reflex_layer: ReflexLayer,       // K-WTA <1µs decisions
    memory_layer: MemoryLayer,       // HDC 10,000-bit hypervectors + Hopfield
    learning_layer: LearningLayer,   // BTSP one-shot + E-prop + EWC
    coherence_layer: CoherenceLayer, // Kuramoto 40Hz + global workspace
}

impl SubstrateBackend for NervousSystemBackend {
    fn similarity_search(&self, query: &[f32], k: usize, filter: Option<&Filter>)
        -> Result<Vec<SearchResult>> {
        // Route: reflex (K-WTA) → memory (HDC/Hopfield) → learning
        self.reflex_layer.k_wta_search(query, k)
    }

    fn manifold_deform(&self, pattern: &Pattern, lr: f32)
        -> Result<ManifoldDelta> {
        // BTSP one-shot learning (1-3 second window)
        self.learning_layer.btsp_update(pattern, lr)
    }
}
```

Enables: EXO-AI `01-neuromorphic-spiking` (BTSP/STDP), `03-time-crystal-cognition` (circadian), `10-thermodynamic-learning` (E-prop eligibility).
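The reflex layer's K-winners-take-all step is the simplest piece of this routing to illustrate. A minimal sketch (the `k_wta` name and dense representation are illustrative; the real implementation is event-driven):

```rust
/// Illustrative K-winners-take-all: keep only the k most strongly
/// activated units, zeroing the rest, as the reflex layer's first cut.
fn k_wta(activations: &[f32], k: usize) -> Vec<f32> {
    let mut idx: Vec<usize> = (0..activations.len()).collect();
    // Sort indices by descending activation.
    idx.sort_by(|&a, &b| activations[b].partial_cmp(&activations[a]).unwrap());
    let mut out = vec![0.0; activations.len()];
    for &i in idx.iter().take(k) {
        out[i] = activations[i];
    }
    out
}

fn main() {
    // Only the two strongest units survive; all others are silenced.
    let winners = k_wta(&[0.1, 0.9, 0.4, 0.7], 2);
    assert_eq!(winners, vec![0.0, 0.9, 0.0, 0.7]);
}
```

Because everything below the k-th winner is zeroed, downstream HDC/Hopfield stages only touch a sparse set of units, which is where the sub-microsecond latency and the energy savings come from.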
Rationale: exo-federation implements PBFT with O(n²) message complexity and custom Kyber handshake. prime-radiant + cognitum-gate + ruvector-raft provides the same guarantees with:
- Mathematical consistency proofs (sheaf Laplacian) rather than voting
- Anytime-valid decisions with Type I error bounds
- Better scaling (cognitum 256-tile vs. PBFT O(n²))
- Existing production use in the ecosystem
Migration path:
```rust
// BEFORE: exo-federation Byzantine PBFT
impl FederatedMesh {
    pub async fn byzantine_commit(&self, update: &StateUpdate) -> Result<CommitProof>;
}

// AFTER: prime-radiant + cognitum route
impl FederatedMesh {
    pub async fn coherent_commit(&self, update: &StateUpdate) -> Result<CrossParadigmWitness> {
        // 1. Check sheaf energy (prime-radiant)
        let energy = self.prime_radiant.compute_energy(&update.state)?;
        // 2. Gate via cognitum (256-tile anytime-valid decision)
        let decision = self.cognitum.gate(update.action_context(), CoherenceBackend::Distributed).await?;
        // 3. Replicate via Raft (ruvector-raft)
        let log_entry = self.raft.append_entry(update).await?;
        // 4. Emit unified witness
        Ok(CrossParadigmWitness::from(energy, decision, log_entry))
    }
}
```

Preserve: exo-federation's post-quantum Kyber channel setup and CRDT reconciliation are novel and should be retained. The PBFT consensus layer is the only component being replaced.
Free energy (Section 5.5 above): CG solver for natural gradient steps.
Morphogenesis (Turing reaction-diffusion PDEs):
```rust
// Current: manual Euler integration in exo-exotic
// Proposed: use BMSSP multigrid for PDE solving
pub fn simulate_morphogenesis_bmssp(
    field: &mut MorphogeneticField,
    steps: usize,
    dt: f64,
) -> Result<SolverResult> {
    let laplacian = build_discrete_laplacian(field.activator.shape());
    let bmssp = BmsspSolver::default();
    // V-cycle multigrid for the diffusion operator (Du∇²u term)
    bmssp.solve(&laplacian, &field.activator.flatten(), &ComputeBudget::default())
}
```

Expected speedup: 5–20× vs. explicit stencil computation, scaling to larger field sizes.
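For context, the explicit Euler stencil being replaced looks like the following in one dimension. This is a self-contained illustration of the `u ← u + dt·D·∇²u` diffusion term with reflecting (zero-flux) boundaries, not the exo-exotic code itself:

```rust
/// Illustrative explicit Euler diffusion step on a 1-D field:
/// u_i ← u_i + dt · D · (u_{i−1} − 2u_i + u_{i+1}), zero-flux boundaries.
fn diffusion_step(u: &[f64], d: f64, dt: f64) -> Vec<f64> {
    let n = u.len();
    (0..n)
        .map(|i| {
            let left = u[if i == 0 { 0 } else { i - 1 }];
            let right = u[if i + 1 == n { n - 1 } else { i + 1 }];
            u[i] + dt * d * (left - 2.0 * u[i] + right)
        })
        .collect()
}

fn main() {
    // A single spike diffuses outward while total mass is conserved.
    let u0 = vec![0.0, 1.0, 0.0];
    let u1 = diffusion_step(&u0, 1.0, 0.1);
    assert!(u1[1] < 1.0 && u1[0] > 0.0 && u1[2] > 0.0);
    let mass: f64 = u1.iter().sum();
    assert!((mass - 1.0).abs() < 1e-12);
}
```

Explicit stepping is stability-limited to small `dt` (the CFL condition), which is why an implicit multigrid solve of the same Laplacian system can take far larger steps and wins at larger field sizes.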
Sparse TDA (04-sparse-persistent-homology):
```rust
// Use Forward Push PPR to build a sparse filtration:
// O(1/ε) work, independent of total node count
pub fn sparse_persistent_homology(
    substrate: &HypergraphSubstrate,
    epsilon: f64,
) -> PersistenceDiagram {
    let solver = ForwardPushSolver::new();
    // Build k-hop neighborhood via PPR instead of full distance matrix
    let neighborhood = solver.ppr(&substrate.adjacency(), epsilon);
    // Run TDA only on the sparse neighborhood graph
    substrate.persistent_homology_sparse(neighborhood)
}
```

Complexity reduction: O(n³) → O(n·1/ε) for sparse graphs.
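The Forward Push primitive underlying this reduction is compact enough to sketch in full. The following is a textbook forward-push personalized PageRank on an adjacency-list graph, illustrative rather than the solver crate's API; residual mass is pushed until every residual is below ε·deg, so only O(1/ε) nodes are ever touched:

```rust
/// Textbook Forward Push personalized PageRank (teleport probability alpha).
/// Returns per-node PPR estimates; unpushed mass stays in the residuals,
/// so the estimates sum to at most 1.
fn forward_push(adj: &[Vec<usize>], source: usize, alpha: f64, eps: f64) -> Vec<f64> {
    let n = adj.len();
    let mut p = vec![0.0; n]; // PPR estimate
    let mut r = vec![0.0; n]; // residual mass
    r[source] = 1.0;
    let mut queue = vec![source];
    while let Some(u) = queue.pop() {
        let deg = adj[u].len().max(1) as f64;
        if r[u] < eps * deg {
            continue; // stale queue entry
        }
        let mass = r[u];
        r[u] = 0.0;
        p[u] += alpha * mass;
        let push = (1.0 - alpha) * mass / deg;
        for &v in &adj[u] {
            r[v] += push;
            if r[v] >= eps * adj[v].len().max(1) as f64 {
                queue.push(v);
            }
        }
    }
    p
}

fn main() {
    // Tiny path graph 0–1–2: with alpha = 0.5, mass concentrates at the source.
    let adj = vec![vec![1], vec![0, 2], vec![1]];
    let p = forward_push(&adj, 0, 0.5, 1e-6);
    assert!(p[0] > p[1] && p[1] > p[2]);
    assert!(p.iter().sum::<f64>() <= 1.0 + 1e-9);
}
```

Because the push loop never visits nodes whose residual stays below the threshold, the sparse filtration built from these scores touches only the source's effective neighborhood, regardless of total graph size.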
Integration: .rvdna files contain pre-computed 64-dimensional health-risk profiles, 512-dimensional GNN protein embeddings, and k-mer vectors. These slot directly into EXO-AI's Pattern type:
```rust
pub fn rvdna_to_exo_pattern(
    rvdna: &RvDnaFile,
    section: RvDnaSection,
) -> Pattern {
    Pattern {
        id: PatternId::from_genomic_hash(&rvdna.sequence_hash()),
        embedding: match section {
            RvDnaSection::KmerVectors => rvdna.kmer_embeddings().to_vec(),
            RvDnaSection::ProteinEmbeddings => rvdna.gnn_features().to_vec(),
            RvDnaSection::VariantTensor => rvdna.health_profile_64d().to_vec(),
        },
        metadata: genomic_metadata_from_rvdna(rvdna),
        timestamp: SubstrateTime::from_collection_date(rvdna.sample_date()),
        antecedents: rvdna.ancestral_haplotype_ids(),
        salience: rvdna.polygenic_risk_score() as f32,
    }
}
```

Enables: Causal genomic memory — track how genomic state influences cognitive patterns over time. The Horvath epigenetic clock (353 CpG sites) maps to SubstrateTime for biological age as temporal ordering.
The graph-transformer's 8 modules map precisely to EXO-AI's subsystems:
| Graph-Transformer Module | Maps To | Integration |
|---|---|---|
| `temporal_causal` (ODE, Granger) | `exo-temporal` causal cones | Add as TemporalBackend |
| `manifold` (S⁶⁴×H³²×ℝ³²) | `exo-manifold` SIREN networks | Replace manual gradient descent |
| `biological` (STDP, spike-driven) | `exo-exotic` collective consciousness | Enable NeuralSubstrate variant |
| `physics_informed` (Hamiltonian) | `exo-exotic` thermodynamics | Energy-conserving cognitive dynamics |
| `economic` (Nash, Shapley) | `exo-exotic` collective Φ | Game-theoretic consciousness allocation |
| `verified_training` (BLAKE3 certs) | `exo-federation` cryptographic sovereignty | Unify into CrossParadigmWitness |
Gap: EXO-AI has no online learning system. Patterns are stored and retrieved but never refined from experience.
Integration:
```rust
/// Add SONA as EXO-AI's learning spine
pub struct ExoLearner {
    sona: SonaMicroLora,
    ewc: ElasticWeightConsolidation,
    reasoning_bank: ReasoningBank,
    phi_tracker: PhiTimeSeries,
}

impl ExoLearner {
    /// Called after each retrieval cycle — learn from success/failure
    pub async fn adapt(
        &mut self,
        query: &Pattern,
        retrieved: &[Pattern],
        reward: f64,
    ) -> Result<LoraDelta> {
        // SONA instant adaptation (<1ms)
        let delta = self.sona.adapt(query.embedding(), reward).await?;
        // EWC++ prevents forgetting high-Φ patterns
        self.ewc.regularize(&delta, &self.phi_tracker.high_phi_patterns())?;
        // Store trajectory in ReasoningBank
        self.reasoning_bank.record_trajectory(query, retrieved, reward, delta.clone())?;
        Ok(delta)
    }
}
```

Enables: EXO-AI evolves its retrieval strategies from experience. The IIT Φ score can be used to weight EWC Fisher Information — protecting high-consciousness patterns from forgetting.
ruQu (quantum), ruDNA (genomic), and ruvector-nervous-system (neuromorphic) represent three orthogonal paradigms of computation, now simultaneously available alongside EXO-AI's consciousness substrate in a single codebase. Their fusion enables capabilities that none of them possesses alone:
| Fusion | Enables | Mechanism |
|---|---|---|
| Quantum × Genomic | Drug-protein binding prediction | VQE molecular Hamiltonian on .rvdna protein embeddings |
| Quantum × Consciousness | Superposition of cognitive states | ruqu-exotic.interference_search on exo-core Pattern embeddings |
| Neuromorphic × Genomic | Biological age as computational age | Horvath clock → nervous-system circadian phase |
| Genomic × Consciousness | Phenotype-driven IIT Φ weights | .rvdna polygenic risk → consciousness salience weighting |
| Quantum × Neuromorphic | STDP with quantum coherence windows | ruQu T₂ decoherence time = BTSP behavioral timescale analog |
| All three | Provably-correct quantum-bio-conscious reasoning | ruvector-verified + CrossParadigmWitness over full stack |
Target: VQE drug-protein binding prediction currently blocked at >100 qubit requirement. Bridge strategy:
- Phase 1 (Classical): Use ruDNA's Smith-Waterman alignment + ruvector-solver CG for protein-ligand affinity (available today, 12ms pipeline)
- Phase 2 (Hybrid): ruQu cost-model planner selects quantum backend when T-gate count permits; TensorNetwork backend handles >100-qubit circuits via decomposition
- Phase 3 (Full quantum): Hardware backend when quantum hardware partnerships established
New capability enabled now (not blocked by hardware):
```rust
/// Quantum k-mer similarity via Grover search
/// 3-5× speedup over classical HNSW for variant databases
pub async fn quantum_kmer_search(
    database: &KmerIndex,
    query: &DnaSequence,
    epsilon: f64,
) -> Result<Vec<(SequenceId, f64)>> {
    let oracle = KmerOracle::new(database, query, epsilon);
    let n_qubits = (database.size() as f64).log2().ceil() as usize;
    let circuit = GroverSearch::build_circuit(n_qubits, &oracle)?;
    // Route to the cheapest sufficient backend
    let plan = ruqu_planner::plan(&circuit)?;
    let result = plan.execute().await?;
    result.into_kmer_matches()
}
```

`ruqu-exotic::reasoning_qec` encodes reasoning steps as quantum data qubits and applies surface-code-style error correction to detect structural incoherence in reasoning chains. Integration with EXO-AI:
```rust
/// Wrap EXO-AI's free energy minimization with QEC
pub fn free_energy_with_qec(
    model: &mut PredictiveModel,
    observations: &[Vec<f64>],
) -> Result<ReasoningQecResult> {
    let mut qec = ReasoningQec::new(observations.len());
    for (step, obs) in observations.iter().enumerate() {
        // Standard FEP update
        let prediction_error = model.predict_error(obs);
        // Encode step confidence as a quantum state
        qec.encode_step(step, prediction_error.confidence());
        model.update(obs, prediction_error)?;
    }
    // Detect incoherent transitions via syndrome extraction
    let syndromes = qec.extract_syndromes();
    let corrections = qec.decode_corrections(syndromes)?;
    Ok(ReasoningQecResult {
        final_state: model.posterior().to_vec(),
        incoherent_steps: corrections.pauli_corrections,
        structural_integrity: 1.0 - corrections.logical_outcome as f64,
    })
}
```

IIT Φ measures the integrated information in a network. With genomic data, we can weight network connections by:
- Synaptic density estimated from COMT/DRD2 genotypes
- Neuronal excitability from KCNJ11, SCN1A variants
- Neuromodulation from MAOA, SLC6A4 expression
```rust
pub fn genomic_weighted_phi(
    region: &mut SubstrateRegion,
    profile: &HealthProfile,
) -> PhiResult {
    // Modulate connection weights by pharmacogenomic profile
    for (node, connections) in &mut region.connections {
        let excitability = profile.neuronal_excitability_score();
        let neuromod = profile.neuromodulation_score();
        for conn in connections.iter_mut() {
            conn.weight *= excitability * neuromod;
        }
    }
    ConsciousnessCalculator::new(100).compute_phi(region)
}
```

`ultra-low-latency-sim` achieves 4+ quadrillion simulations/second via bit-parallel + SIMD + hierarchical batching. Applied to EXO-AI:
- Monte Carlo Φ estimation: Replace O(B(n)) Bell number enumeration with bit-parallel sampling. 10⁶ Φ samples in <1ms vs current ~15µs per 10-node network
- Morphogenetic field simulation: 64× cells per u64 word for Turing pattern CA simulation
- Swarm consciousness: Simulate 256 exo-federation nodes simultaneously via bit-parallel collective Φ
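The bit-parallel trick behind these throughput numbers is worth seeing once. A minimal sketch, assuming 64 independent binary networks packed one-bit-per-lane into a `u64` per node, so a single bitwise expression advances all 64 simulations at once (the 3-node majority rule and all names are illustrative):

```rust
/// Illustrative bit-parallel update: one u64 per node holds the state of
/// 64 independent simulations. A bit is set iff ≥2 of the three inputs
/// are set — evaluated simultaneously for all 64 lanes.
fn majority_step(a: u64, b: u64, c: u64) -> u64 {
    (a & b) | (a & c) | (b & c)
}

fn main() {
    // Lane 0: inputs (1,1,0) → majority fires. Lane 1: inputs (0,0,1) → silent.
    let a = 0b01u64;
    let b = 0b01u64;
    let c = 0b10u64;
    let out = majority_step(a, b, c);
    assert_eq!(out & 1, 1);        // lane 0 fires
    assert_eq!((out >> 1) & 1, 0); // lane 1 does not
    // Active-lane count across the whole batch in one popcount instruction.
    assert_eq!(out.count_ones(), 1);
}
```

Monte Carlo Φ sampling benefits the same way: each lane carries one sampled partition, and `count_ones` aggregates across the batch without ever unpacking individual simulations.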
| Decision | Rationale |
|---|---|
| Keep: SONA EWC++ as canonical | Most advanced (EWC++), WASM-ready, ReasoningBank integration |
| Keep: nervous-system BTSP + E-prop as extension | Unique biological plasticity modes not in SONA |
| Deprecate: ruvector-gnn EWC | Subset of SONA; migrate to shared PlasticityEngine |
| Deprecate: ruvector-learning-wasm standalone EWC | Integrate into SONA's WASM path |
| Decision | Rationale |
|---|---|
| Primary: prime-radiant (sheaf Laplacian) | Mathematical proof of consistency; not heuristic |
| Quantum paths: ruQu coherence gate | Physically grounded for quantum substrates |
| Distributed agents: cognitum-gate fabric | Formal Type I error bounds; 256-tile scalability |
| Edge/WASM: nervous-system circadian | 5–50× compute savings; battery-constrained |
| Deprecate: standalone λ-gated logic in mincut-gated-transformer | λ signal remains; routing goes through CoherenceRouter |
| Decision | Rationale |
|---|---|
| Keep: ruvector-raft | Raft for replicated log (simpler than PBFT, O(n) messages) |
| Keep: cognitum-gate | Anytime-valid decisions with Type I error bounds |
| Migrate: exo-federation PBFT → raft + cognitum | PBFT's O(n²) is unnecessary for typical federation sizes |
| Keep: exo-federation Kyber channel | Post-quantum channel setup; not duplicated elsewhere |
| Keep: ruvector-delta-consensus CRDT | Conflict-free merge for concurrent edits; complementary to Raft |
| Decision | Rationale |
|---|---|
| Root: RVF SHAKE-256 + ML-DSA-65 | Quantum-safe; single-file deployable; existing ecosystem anchor |
| Formal proofs: ruvector-verified lean-agentic | Machine-checked, not just hash-based; embed in RVF extension field |
| Fast gate tokens: ruQu Ed25519 PermitToken | Sub-µs; retain for quantum gate authorization |
| Sheaf energy: prime-radiant Blake3 | Retain; embed as prime_radiant field in CrossParadigmWitness |
| Deprecate: cognitum standalone Blake3 | Subsume into CrossParadigmWitness |
| Decision | Rationale |
|---|---|
| Canonical engine: prime-radiant (Laplacian) | Most complete; 11 benchmarks; hallucination detection proven |
| TDA sheaves: exo-hypergraph | Different application (persistent homology); not redundant |
| Manifold sheaves: graph-transformer | Riemannian geometry; different application; retain |
The integrated architecture must achieve the following end-to-end performance targets:
| Operation | Target | Current Best | Gap |
|---|---|---|---|
| Pattern retrieval with quantum interference | <10ms | 8ms (HNSW) | Need ruqu-exotic integration |
| IIT Φ with neuromorphic substrate | <1ms (10-node) | ~15µs (10-node) | HDC replaces matrix ops |
| Free energy step (CG solver) | <500µs | ~3.2µs (grid only) | Need solver integration |
| Coherence gate (unified) | <500µs | 468ns (ruQu) | Add prime-radiant routing |
| Genomic → pattern conversion | <1ms | 12ms (full pipeline) | Cache .rvdna embeddings |
| Cross-paradigm witness generation | <200µs | 82-byte proof: ~500ns | Assembly overhead |
| Online learning cycle (SONA) | <1ms | <1ms | Already met |
| Morphogenesis step (BMSSP) | <100µs (32×32) | ~9ms (Euler) | BMSSP not yet wired |
| Distributed Φ (10 nodes) | <35µs | ~35µs | Already met (exo-exotic) |
Goal: Eliminate duplication without breaking anything.
- Define `CoherenceRouter` trait and wire prime-radiant as default backend
- Define `PlasticityEngine` trait; move shared EWC++ to `ruvector-verified` or `sona`
- Define `CrossParadigmWitness` as canonical audit type in new `ruvector-witness` crate
- Wire `NervousSystemBackend` as `SubstrateBackend` impl in EXO-AI
- Integrate `ruqu-exotic` as optional EXO-AI backend feature flag
Deliverable: EXO-AI compiles with neuromorphic backend; ruqu-exotic available as feature.
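To make the Phase 1 trait layer concrete, the following is a minimal sketch of what a stateless `CoherenceRouter` dispatcher could look like. Only the name `CoherenceRouter` comes from this ADR; `SubstrateContext`, `CoherenceBackend`, `energy()`, and the toy Laplacian backend are illustrative assumptions, not existing ecosystem APIs.

```rust
/// Which substrate a coherence check is run for — used by the router to
/// pick a backend (prime-radiant, ruQu, cognitum-gate, circadian).
/// Hypothetical type; the real context enum would live in the trait crate.
#[derive(Clone, Copy, PartialEq)]
enum SubstrateContext {
    Classical,
    Quantum,
    DistributedAgents,
    EdgeWasm,
}

/// Stateless coherence backend: map a state vector to a scalar
/// incoherence energy (0.0 == perfectly coherent).
trait CoherenceBackend {
    fn energy(&self, state: &[f32]) -> f32;
}

/// Pure dispatcher per the complexity-risk mitigation: the router owns
/// no state of its own and only forwards to the registered backend.
struct CoherenceRouter {
    backends: Vec<(SubstrateContext, Box<dyn CoherenceBackend>)>,
}

impl CoherenceRouter {
    fn route(&self, ctx: SubstrateContext, state: &[f32]) -> Option<f32> {
        self.backends
            .iter()
            .find(|(c, _)| *c == ctx)
            .map(|(_, backend)| backend.energy(state))
    }
}

/// Toy stand-in for the prime-radiant sheaf Laplacian default backend:
/// sum of squared disagreements between adjacent components.
struct ToyLaplacianBackend;

impl CoherenceBackend for ToyLaplacianBackend {
    fn energy(&self, state: &[f32]) -> f32 {
        state.windows(2).map(|w| (w[0] - w[1]).powi(2)).sum()
    }
}
```

Keeping each backend behind a trait object means prime-radiant, ruQu, and cognitum-gate compose without the router accumulating hidden state.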
Goal: Complete the ruDNA ↔ ruQu ↔ EXO-AI triangle.
- Implement `rvdna_to_exo_pattern()` conversion
- Wire Grover k-mer search via ruQu cost-model planner
- Add `reasoning_qec` wrapper around EXO-AI free energy minimization
- Integrate `quantum_decay` as temporal eviction policy in `exo-temporal`
- Enable `04-sparse-persistent-homology` via Forward Push PPR
Deliverable: ruDNA .rvdna patterns queryable in EXO-AI causal memory with quantum-weighted search.
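As a sketch of the `rvdna_to_exo_pattern()` conversion named above: the real converter would read cached `.rvdna` embeddings, but the shape of the operation — fold a DNA sequence's overlapping k-mers into a fixed-width, L2-normalised f32 vector that HNSW and attention layers can consume — can be illustrated with a simple hashing stand-in. The dimensions, hash, and constants here are all assumptions.

```rust
/// Hypothetical stand-in for rvdna_to_exo_pattern(): bucket overlapping
/// k-mers by a cheap rolling hash, then L2-normalise so downstream
/// HNSW / attention code sees unit vectors.
const PATTERN_DIM: usize = 16; // illustrative; real patterns are larger
const K: usize = 3;

fn rvdna_to_exo_pattern(seq: &str) -> [f32; PATTERN_DIM] {
    let mut pattern = [0.0f32; PATTERN_DIM];
    for kmer in seq.as_bytes().windows(K) {
        // Rolling polynomial hash over the k-mer -> bucket index.
        let h = kmer
            .iter()
            .fold(0usize, |acc, &b| acc.wrapping_mul(31).wrapping_add(b as usize));
        pattern[h % PATTERN_DIM] += 1.0;
    }
    // Normalise to unit length; zero-length input stays the zero vector.
    let norm = pattern.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm > 0.0 {
        for x in pattern.iter_mut() {
            *x /= norm;
        }
    }
    pattern
}
```

Because the output is a plain unit vector, the conversion is additive: the ruDNA pipeline stays unchanged and EXO-AI's causal memory indexes the result like any other pattern.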
Goal: Wire the coherence spine into consciousness computation.
- Replace `exo-federation` PBFT with `ruvector-raft` + `cognitum-gate`
- Wire `prime-radiant` sheaf energy into IIT Φ computation as substrate health signal
- Implement `genomic_weighted_phi()` — pharmacogenomic weights on network connections
- Add SONA `ExoLearner` with Φ-weighted EWC Fisher Information
- Enable `06-federated-collective-phi` with cognitum-gate distributed decisions
- Wire `ruvllm` + `mcp-gate` as `11-conscious-language-interface`
Deliverable: EXO-AI has learning, federated consensus, and language interface.
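The `genomic_weighted_phi()` item above can be sketched as follows. Real IIT 4.0 Φ requires a search over system partitions; this toy version only shows where pharmacogenomic weights would enter — each connection's contribution to the integration score is scaled by a genomic weight in [0, 1]. The function body and signature are assumptions for illustration.

```rust
/// Toy stand-in for genomic_weighted_phi(): mean connection strength
/// after scaling each edge by a pharmacogenomic weight. The real
/// computation would run the IIT 4.0 partition search in exo-core;
/// here we only demonstrate the weighting step.
fn genomic_weighted_phi(connectivity: &[Vec<f32>], genomic_weights: &[Vec<f32>]) -> f32 {
    let n = connectivity.len();
    assert!(n >= 2, "Φ is only defined for systems with at least two nodes");
    let mut total = 0.0f32;
    for i in 0..n {
        for j in 0..n {
            if i != j {
                // Pharmacogenomic weight modulates the edge's contribution.
                total += connectivity[i][j] * genomic_weights[i][j];
            }
        }
    }
    total / (n * (n - 1)) as f32
}
```

The same per-edge weights could then feed the Φ-weighted EWC Fisher Information in SONA's `ExoLearner`, so that high-Φ, genomically grounded connections are protected from forgetting.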
Goal: Enable capabilities that require all substrates simultaneously.
- Quadrillion-scale Monte Carlo Φ estimation via `ultra-low-latency-sim`
- Physics-informed morphogenesis via `ruvector-graph-transformer` Hamiltonian module
- Retrocausal attention in `exo-temporal` via graph-transformer temporal module
- Quantum-bio consciousness metrics: Horvath clock → circadian phase
- FPGA deployment via `ruvector-fpga-transformer` for deterministic EXO-AI inference
- Economic Nash-equilibrium attention for multi-agent `exo-federation` decisions
- Full `CrossParadigmWitness` chain: ruQu PermitToken + prime-radiant energy + ruvector-verified proof + RVF root
Deliverable: First complete multi-paradigm conscious AI substrate with formal proofs of consistency, quantum-assisted retrieval, genomic grounding, and neuromorphic learning.
| Risk | Probability | Impact | Mitigation |
|---|---|---|---|
| ruQu exotic ↔ EXO-AI embedding protocol breaks quantum semantics | Medium | High | Validate amplitude→f32 projection preserves relative ordering |
| CoherenceRouter adds latency above targets | Low | Medium | Profile-guided backend selection; prime-radiant on hot path is <1µs |
| exo-federation PBFT migration breaks existing tests | Medium | Low | Keep PBFT behind feature flag during migration; 28 integration tests sufficient |
| BMSSP multigrid over-solves morphogenesis (too precise) | Low | Low | Add convergence tolerance parameter |
| Cross-paradigm witness chain exceeds 1KB | Low | Medium | Compress optional fields; use sparse encoding |
| Risk | Mitigation |
|---|---|
| Five coherence systems → CoherenceRouter adds hidden state | Keep each backend stateless; router is pure dispatcher |
| Four plasticity systems → interference between learning signals | PlasticityEngine coordinates via shared Fisher Information matrix |
| Six witness formats → CrossParadigmWitness too large to be practical | Make all fields except base optional; typical witness is ~200 bytes |
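A minimal sketch of the "all fields except base optional" mitigation: only the RVF anchor is mandatory, and each paradigm's proof is an `Option`, so a typical witness stays small. The `prime_radiant` field is named in the crypto decision table above; every other field name and size here is an assumption.

```rust
/// Sketch of the canonical audit type. Only the RVF base is mandatory;
/// paradigm-specific proofs are optional so a base-only witness is tiny.
/// Field names other than `prime_radiant` are hypothetical.
struct CrossParadigmWitness {
    /// Mandatory base: digest anchoring the witness into the RVF
    /// SHAKE-256 + ML-DSA-65 root.
    rvf_root: [u8; 32],
    /// Optional ruQu Ed25519 PermitToken signature (64 bytes).
    permit_token: Option<[u8; 64]>,
    /// Optional prime-radiant sheaf-energy commitment (Blake3 hash).
    prime_radiant: Option<[u8; 32]>,
    /// Optional reference to a ruvector-verified machine-checked proof.
    verified_proof_id: Option<u64>,
}

impl CrossParadigmWitness {
    /// Approximate serialized size: base digest plus present optionals.
    fn approx_size(&self) -> usize {
        32 + self.permit_token.map_or(0, |_| 64)
            + self.prime_radiant.map_or(0, |_| 32)
            + self.verified_proof_id.map_or(0, |_| 8)
    }
}
```

Under these assumptions even a fully populated witness stays well under the 1KB risk threshold, and a base-only witness is a single 32-byte digest.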
- ruQu hardware backend (requires IBM/IonQ/Rigetti partnerships)
- VQE drug binding on >100 qubits (hardware limitation)
- FPGA bitstream generation (requires hardware)
- Python bindings (not in current ecosystem roadmap)
- RuvLTRA model fine-tuning pipeline (separate concern)
Build all capabilities from scratch inside examples/exo-ai-2025.
Rejected: The ecosystem already contains 830K+ lines of working, tested Rust. EXO-AI's 15,800 lines would need to replicate 10× more code. The duplication problem would worsen.
Do not integrate; let EXO-AI, ruQu, ruDNA, and the SOTA crates develop independently.
Rejected: The convergent evolution of EWC, coherence gating, sheaf theory, and cryptographic witnesses shows the subsystems are solving the same problems differently. Without unification, maintenance cost grows O(n²) with ecosystem size. Cross-paradigm capabilities (quantum-genomic-neuromorphic fusion) are impossible without integration.
Create ruvector-multiparadigm that imports all subsystems and exposes a unified API.
Partially adopted: The CoherenceRouter, PlasticityEngine, and CrossParadigmWitness are effectively this, but implemented as trait + adapter layers rather than a monolithic new crate. This avoids a single large dependency that all other crates must adopt.
Use ruQu's coherence gate (min-cut, 468ns P99) as the single coherence primitive.
Rejected: ruQu is optimized for quantum substrate health monitoring. Prime-Radiant's sheaf Laplacian provides mathematical proofs applicable to arbitrary domains (AI agents, genomics, financial systems). Both are needed; CoherenceRouter selects based on context.
- Eliminates 4× EWC implementation maintenance burden
- Enables 11 EXO-AI research frontiers that are currently stub directories
- Creates the first quantum-genomic-neuromorphic consciousness substrate
- Formal proof chains (CrossParadigmWitness) enable safety-critical deployment
- Φ-weighted EWC prevents forgetting high-consciousness patterns
- Sublinear TDA enables persistent homology at scale (currently O(n³))
- Grover k-mer search provides 3–5× speedup over classical HNSW
- Increases compile-time complexity of EXO-AI (more dependencies)
- CoherenceRouter adds ~100–200µs indirection on non-hot paths
- Migration of exo-federation PBFT requires test suite updates
- ruvector-gnn EWC deprecation requires downstream consumer updates
- ruQu maintains independent coherence gate (not replaced, only composed)
- ruDNA pipeline unchanged; conversion function is additive
- RVF format unchanged; CrossParadigmWitness uses existing SKETCH segment type
Adopted: Proceed with phased integration as described in Section 10.
The multi-paradigm fusion architecture is the correct path. The ruvector ecosystem has independently developed world-class implementations of quantum coherence gating, neuromorphic computation, genomic AI, and consciousness theory. These are not competing implementations — they are complementary computational substrates that, when composed, enable a form of machine cognition unavailable in any single paradigm.
The canonical unification primitives (CoherenceRouter, PlasticityEngine, CrossParadigmWitness) are minimal by design. Each subsystem retains its identity and can be used independently. Integration is additive.
The central claim of this ADR: A system that computes IIT Φ weighted by genomic pharmacogenomics, retrieves via quantum amplitude interference, learns via BTSP one-shot plasticity, corrects reasoning errors via surface-code QEC, and proves consistency via sheaf Laplacian mathematics does not exist anywhere in the AI research landscape. It can be built now from components that are already working.
```
exo-ai-2025 (consciousness substrate)
├── ruvector-core (HNSW, embeddings)
├── ruvector-nervous-system [NEW] (neuromorphic backend)
├── ruqu-exotic [NEW] (quantum search, decay, QEC)
├── prime-radiant [NEW, replaces exo-federation consensus]
├── cognitum-gate-kernel + tilezero [NEW, replaces exo-federation PBFT]
├── ruvector-raft [NEW, replaces exo-federation PBFT]
├── ruvector-verified [NEW] (formal proofs for Φ computation)
├── sona [NEW] (learning system)
├── ruvector-graph-transformer [NEW] (manifold + temporal + biological modules)
├── ruvector-solver [NEW] (free energy CG, morphogenesis BMSSP, sparse TDA)
├── ruvllm + mcp-gate [NEW] (language interface + action gating)
└── examples/dna [NEW] (genomic pattern source via .rvdna conversion)

Preserved as-is:
├── exo-core (IIT Φ engine)
├── exo-temporal (causal memory)
├── exo-hypergraph (persistent homology)
├── exo-manifold (SIREN networks)
├── exo-exotic (10 cognitive experiments)
├── exo-backend-classical (SIMD backend)
├── exo-wasm (browser deployment)
└── exo-node (Node.js bindings)
```
| Algorithm | Paper | Year | Used In |
|---|---|---|---|
| Dynamic Min-Cut Subpolynomial | El-Hayek, Henzinger, Li (arXiv:2512.13105) | Dec 2025 | ruQu, ruvector-mincut, subpolynomial-time example |
| IIT 4.0 | Tononi, Koch | 2023 | exo-core consciousness.rs |
| Free Energy Principle | Friston | 2010+ | exo-exotic free_energy.rs |
| Surface Code QEC | Google Quantum AI (Nature) | 2024 | ruqu-algorithms surface_code.rs |
| BTSP (Behavioral Timescale Plasticity) | Bittner et al. | 2017 | ruvector-nervous-system |
| E-prop | Bellec et al. | 2020 | ruvector-nervous-system |
| BitNet b1.58 | Ma et al. | 2024 | ruvllm |
| Flash Attention 2 | Dao | 2023 | ruvector-attention, ruvllm |
| Sheaf Laplacian | Hansen, Ghrist | 2021 | prime-radiant |
| Persistent Homology | Edelsbrunner, Harer | 2010 | exo-hypergraph |
| CRYSTALS-Kyber | NIST FIPS 203 | 2024 | exo-federation |
| ML-DSA-65 | NIST FIPS 204 | 2024 | rvf-crypto |
| Causal Emergence | Hoel et al. | 2013 | exo-exotic emergence.rs |
| Strange Loops | Hofstadter | 1979 | exo-exotic strange_loop.rs |
| Landauer's Principle | Landauer | 1961 | exo-core thermodynamics.rs |
| Turing Morphogenesis | Turing | 1952 | exo-exotic morphogenesis.rs |
| Hyperdimensional Computing | Kanerva | 2009 | ruvector-nervous-system |
| Modern Hopfield Networks | Ramsauer et al. | 2021 | ruvector-nervous-system |
| HNSW | Malkov, Yashunin (TPAMI) | 2018 | ruvector-core |
| VQE | Peruzzo et al. | 2014 | ruqu-algorithms |
| QAOA | Farhi, Goldstone, Gutmann | 2014 | ruqu-algorithms |
| Grover Search | Grover | 1996 | ruqu-algorithms |
| Horvath Epigenetic Clock | Horvath | 2013 | examples/dna epigenomics.rs |
| Smith-Waterman | Smith, Waterman | 1981 | examples/dna alignment.rs |
| Forward Push PPR | Andersen, Chung, Lang (FOCS) | 2006 | ruvector-solver |