NeuralClaw uses a four-layer reasoning architecture that automatically routes queries to the appropriate complexity level. Simple questions get instant answers; complex ones trigger multi-step planning with self-critique.
```
     Input
       │
       ▼
┌──────────────┐  Match?   ┌──────────────────┐
│  Fast Path   │── Yes ───▶│ Instant Response │
│ (reflexive)  │           └──────────────────┘
└──────┬───────┘
       │ No
       ▼
┌──────────────┐ Complex?  ┌──────────────────┐
│ Deliberative │── No ────▶│ LLM Single-Call  │
│  (standard)  │           └──────────────────┘
└──────┬───────┘
       │ Yes
       ▼
┌──────────────┐           ┌───────────────────┐
│  Reflective  │──────────▶│ Plan → Execute →  │
│ (multi-step) │           │ Critique → Revise │
└──────┬───────┘           └───────────────────┘
       │
       ▼ (background)
┌──────────────┐
│Meta-Cognitive│
│  (analysis)  │
└──────────────┘
```
File: cortex/reasoning/fast_path.py
Pattern-matched instant responses that don't need an LLM call. Handles greetings, goodbyes, help commands, version queries, etc.
```python
from neuralclaw.cortex.reasoning.fast_path import FastPathReasoner

fast = FastPathReasoner(bus, agent_name="NeuralClaw")
result = await fast.try_fast_path(signal, memory_context)
if result:
    # Matched! Return instantly without an LLM call
    print(result.content)
```

Examples that trigger the fast path:
- "Hello" → Greeting response
- "Thanks" → Acknowledgment
- "What can you do?" → Capability overview
- /help → Help text
File: cortex/reasoning/deliberate.py
Standard LLM reasoning with full context. Constructs a rich prompt with:
- The user's message
- Retrieved memory context (episodes + facts)
- Agent persona and calibration modifiers
- Conversation history (last 20 messages)
- Available tool definitions
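For illustration, those pieces might be assembled along these lines. This is a hypothetical sketch; the actual prompt template lives inside DeliberativeReasoner and is not shown in this document:

```python
# Hypothetical prompt assembly from the documented ingredients.
# Section ordering and wording are assumptions.
def build_prompt(message, memory_ctx, persona, history, tools):
    sections = [
        persona,                                       # persona + calibration modifiers
        "Relevant memory:\n" + "\n".join(memory_ctx),  # episodes + facts
        "Recent conversation:\n" + "\n".join(history[-20:]),
        "Available tools: " + ", ".join(t["name"] for t in tools),
        "User: " + message,
    ]
    return "\n\n".join(sections)
```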
```python
from neuralclaw.cortex.reasoning.deliberate import DeliberativeReasoner

deliberate = DeliberativeReasoner(bus, persona="You are NeuralClaw...")
deliberate.set_provider(provider_router)

envelope = await deliberate.reason(
    signal=signal,
    memory_ctx=memory_context,
    tools=skill_tools,
    conversation_history=history[-20:],
)
print(envelope.response)
print(envelope.confidence)
```

File: cortex/reasoning/reflective.py
For complex queries, NeuralClaw uses a reflective process:
1. DECOMPOSE → Break the problem into sub-tasks
2. EXECUTE → Run each sub-task through deliberative reasoning
3. SELF-CRITIQUE → Evaluate the quality of each result
4. REVISE → Fix any issues found during critique
5. SYNTHESIZE → Combine sub-results into a final answer
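The five steps can be sketched as a loop. The helpers below (`decompose`, `execute`, `critique`, `synthesize`) are toy stand-ins for calls into the deliberative layer, not the real ReflectiveReasoner internals:

```python
# Toy sketch of the reflective loop; each helper is a stand-in for
# a call into the deliberative layer.
async def reflect(query: str) -> str:
    subtasks = decompose(query)                      # 1. DECOMPOSE
    results = [await execute(t) for t in subtasks]   # 2. EXECUTE
    revised = []
    for task, result in zip(subtasks, results):
        issues = critique(task, result)              # 3. SELF-CRITIQUE
        if issues:
            result = await execute(f"{task} (fix: {issues})")  # 4. REVISE
        revised.append(result)
    return synthesize(revised)                       # 5. SYNTHESIZE

def decompose(query: str) -> list[str]:
    # toy decomposition: split conjoined questions on "and"
    return [p.strip() for p in query.split(" and ")]

async def execute(task: str) -> str:
    return f"answer({task})"  # stand-in for a deliberative LLM call

def critique(task: str, result: str) -> str:
    return ""  # empty string: no issues found in this toy version

def synthesize(results: list[str]) -> str:
    return " ".join(results)
```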
The reflective layer automatically activates when:
- The query contains multiple questions
- The topic requires multi-step reasoning
- The intent classification flags high complexity
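A heuristic in this spirit might look like the sketch below. The keyword list and thresholds are assumptions for illustration, not the actual should_reflect() logic:

```python
# Hypothetical complexity heuristic mirroring the documented triggers:
# multiple questions, multi-step topics, or a high-complexity intent flag.
def should_reflect(text: str, flagged_complex: bool = False) -> bool:
    multiple_questions = text.count("?") > 1
    multi_step_markers = any(
        w in text.lower() for w in ("compare", "step by step", "plan out")
    )
    return multiple_questions or multi_step_markers or flagged_complex
```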
```python
from neuralclaw.cortex.reasoning.reflective import ReflectiveReasoner

reflective = ReflectiveReasoner(bus, deliberate_reasoner)

# Check if reflection is needed
if reflective.should_reflect(signal, memory_ctx):
    envelope = await reflective.reflect(
        signal=signal,
        memory_ctx=memory_ctx,
        tools=tools,
        conversation_history=history,
    )
```

File: cortex/reasoning/meta.py
Runs in the background after each interaction. Analyzes NeuralClaw's own performance to detect:
- Success rates → How often is the agent helpful?
- Capability gaps → What topics does it struggle with?
- Performance trends → Is it getting better or worse?
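The bookkeeping behind those metrics could be modeled as below. This is a toy sketch; the real MetaCognitive internals and thresholds are not documented here:

```python
from collections import defaultdict

# Toy model of per-category success tracking; the gap threshold of 0.5
# is an assumption, not MetaCognitive's actual value.
class MetaStats:
    def __init__(self, gap_threshold: float = 0.5):
        self.records = defaultdict(list)  # category -> list of success flags
        self.gap_threshold = gap_threshold

    def record(self, category: str, success: bool) -> None:
        self.records[category].append(success)

    @property
    def overall_success_rate(self) -> float:
        flags = [s for cat in self.records.values() for s in cat]
        return sum(flags) / len(flags) if flags else 0.0

    @property
    def capability_gaps(self) -> list[str]:
        # categories whose success rate falls below the threshold
        return [
            cat for cat, flags in self.records.items()
            if sum(flags) / len(flags) < self.gap_threshold
        ]
```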
```python
from neuralclaw.cortex.reasoning.meta import MetaCognitive

meta = MetaCognitive(bus=bus)

# Record an interaction
meta.record_interaction(
    category="conversation",
    success=True,
    confidence=0.85,
)

# Run analysis when enough data is collected
if meta.should_analyze:
    report = await meta.analyze()
    print(f"Success rate: {report.overall_success_rate:.0%}")
    print(f"Capability gaps: {report.capability_gaps}")
```

The gateway (gateway.py) handles routing automatically:
- Try fast path → If matched, return instantly
- Check procedural memory → Look for matching workflow templates
- Check complexity → reflective.should_reflect() evaluates the signal
- Route accordingly:
  - Simple → Deliberative (single LLM call)
  - Complex → Reflective (multi-step with self-critique)
- Post-process:
  - Store in memory
  - Run metabolism/distiller
  - Update meta-cognitive stats
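Boiled down, the routing decision looks roughly like the sketch below. The procedural-memory check and post-processing hooks are omitted, and the argument lists are simplified relative to the real gateway.py:

```python
# Simplified sketch of the gateway's routing decision.
# Real calls also pass tools and conversation history.
async def route(signal, memory_ctx, fast, deliberate, reflective):
    result = await fast.try_fast_path(signal, memory_ctx)
    if result:  # fast path hit: return instantly, no LLM call
        return result
    if reflective.should_reflect(signal, memory_ctx):
        return await reflective.reflect(signal=signal, memory_ctx=memory_ctx)
    return await deliberate.reason(signal=signal, memory_ctx=memory_ctx)
```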
You don't need to configure any of this; it happens automatically.
The Evolution Cortex calibrator adjusts reasoning behavior based on learned preferences:
- Formality → Casual vs. professional tone
- Verbosity → Concise vs. detailed responses
- Emoji usage → Based on user interaction patterns
These modifiers are injected into the deliberative/reflective prompt automatically.
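As an illustration, modifier injection might work along these lines. The preference keys and instruction wording here are hypothetical, not the calibrator's actual schema:

```python
# Hypothetical sketch: fold learned preferences into the system prompt.
def apply_calibration(persona: str, prefs: dict) -> str:
    modifiers = []
    if prefs.get("formality") == "casual":
        modifiers.append("Use a casual, friendly tone.")
    if prefs.get("verbosity") == "concise":
        modifiers.append("Keep responses short and to the point.")
    if prefs.get("emoji"):
        modifiers.append("Emoji are welcome where natural.")
    return persona + ("\n" + "\n".join(modifiers) if modifiers else "")
```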