AI development tools (Claude Code, Cursor, Windsurf, Copilot) each build their own "contextual debt" — session history, rules files, memory banks — that is invisible to other tools. When engineers switch harnesses, the "Why" behind architectural decisions is lost.
Synapse transforms Cortex into a Universal Context Broker: any tool can write structured context to Harper, and any other tool can read it in its native format. Harper becomes the single source of truth for development context.
INGEST (Tool → Harper) EMIT (Harper → Tool)
CLAUDE.md ──────┐ ┌──▶ CLAUDE.md
.cursor/rules/ ─┤ ┌─────────────────┐ ├──▶ .cursor/rules/*.mdc
.windsurf/ ─┤──▶ │ SynapseIngest │ ├──▶ .windsurf/rules/*.md
copilot-inst. ─┤ │ parse → │ ├──▶ copilot-instructions.md
Manual / Slack ─┘ │ classify → │ │
│ embed → │ │ ┌──────────────┐
│ store │ └────│ SynapseEmit │
└──────┬──────────┘ │ query → │
│ │ group → │
▼ │ format │
┌────────────────────────┐ └──────┬───────┘
│ SynapseEntry Table │◀──────────────┘
│ (HNSW vector idx) │
│ │
│ Types: intent | │
│ constraint | │
│ artifact | history │
└───────────┬────────────┘
│
┌───────────▼────────────┐
│ SynapseSearch │
│ (semantic query) │
└───────────┬────────────┘
│ MCP JSON-RPC
┌────────────────┼────────────────┐
▼ ▼ ▼
Claude Desktop Cursor Any MCP Client
CLI: synapse sync ──▶ SynapseIngest
synapse emit ──▶ SynapseEmit
synapse search ──▶ SynapseSearch
Single table with type discriminator (same pattern as Memory's classification field). Enables cross-type vector search and simple MCP exposure.
```graphql
type SynapseEntry @table {
  id: ID @primaryKey
  projectId: String @indexed    # Scopes entries to a project
  type: String @indexed         # intent | constraint | artifact | history
  content: String               # Full context text
  source: String @indexed       # claude_code | cursor | windsurf | copilot | manual | slack
  sourceFormat: String          # markdown | mdc | json
  embedding: [Float] @indexed(type: "HNSW", distance: "cosine")
  summary: String               # LLM-generated one-liner
  status: String @indexed       # active | superseded | archived
  references: [String]          # Memory record IDs this traces back to
  tags: [String]                # Freeform labels
  entities: Any                 # { people, projects, technologies, topics }
  parentId: String @indexed     # Self-referential: constraint → intent it serves
  createdAt: Date @indexed
  updatedAt: Date @indexed
  metadata: Any                 # Tool-specific data (filePath, globs, etc.)
}
```

| Type | Purpose | Example |
|---|---|---|
| `intent` | The "Why" | "Chose PostgreSQL over DynamoDB for complex joins in reporting" |
| `constraint` | Musts / Must-Nots | "MUST NOT use any ORM — raw SQL only" |
| `artifact` | References | "Architecture diagram at docs/arch.png" |
| `history` | Failed paths | "Tried Redis Streams for event sourcing, abandoned due to durability" |
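To make the schema concrete, here is a hypothetical record as it might be stored after ingesting a `history` section from CLAUDE.md (all field values are illustrative, not taken from a real database):

```javascript
// Hypothetical SynapseEntry record (illustrative values only).
const exampleEntry = {
  id: 'entry-001',
  projectId: 'cortex',
  type: 'history',
  content: 'Tried Redis Streams for event sourcing, abandoned due to durability',
  source: 'claude_code',
  sourceFormat: 'markdown',
  summary: 'Redis Streams rejected for event sourcing',
  status: 'active',
  references: [],          // Memory record IDs, empty here
  tags: ['event-sourcing', 'redis'],
  parentId: null,          // no parent intent in this example
};
```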
All added to `resources.js` following existing patterns.
Same pattern as the existing MemoryTable. Uses a renamed import to avoid a naming conflict:

```javascript
const { SynapseEntry: SynapseEntryBase } = tables;
export class SynapseEntry extends SynapseEntryBase { ... }
```

SynapseSearch mirrors MemorySearch. Key differences:

- Requires `projectId` (mandatory scoping)
- Defaults to `status: 'active'` (excludes superseded/archived)
- Filters on `type` and `source`
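The scoping rules above can be sketched as a small condition builder (a sketch only; the helper name and condition shape are assumptions, not Harper's actual query API):

```javascript
// Sketch: build SynapseSearch filter conditions (hypothetical helper).
// projectId is mandatory; status defaults to 'active'; type/source are optional.
function buildSearchConditions({ projectId, type, source, status } = {}) {
  if (!projectId) throw new Error('projectId is required');
  const conditions = [
    { attribute: 'projectId', comparator: 'equals', value: projectId },
    { attribute: 'status', comparator: 'equals', value: status ?? 'active' },
  ];
  if (type) conditions.push({ attribute: 'type', comparator: 'equals', value: type });
  if (source) conditions.push({ attribute: 'source', comparator: 'equals', value: source });
  return conditions;
}
```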
Accepts `{ source, content, projectId, parentId?, references? }`:
- Validates input
- Calls source-specific parser to split content into entries
- For each entry: classify (Claude Haiku) + embed (all-MiniLM-L6-v2 via @xenova/transformers) in parallel
- Stores as SynapseEntry records
Parsers (one per tool):

- `parseClaudeCode(content)` — splits CLAUDE.md on `##` headings
- `parseCursor(content)` — extracts YAML frontmatter + markdown body from .mdc
- `parseWindsurf(content)` — splits .md rules on `##` headings
- `parseCopilot(content)` — passes through as single entry
- Default — passes through unchanged
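The heading-splitting parsers could look roughly like this (a sketch under the assumption that each `##` section becomes one entry; the real parser may handle preambles and nesting differently):

```javascript
// Sketch: split a CLAUDE.md body into one candidate entry per `##` section.
function parseClaudeCode(content) {
  return content
    .split(/^## /m)                         // break on top-of-line "## " markers
    .filter((section) => section.trim().length > 0)
    .map((section) => {
      const [heading, ...body] = section.split('\n');
      return { heading: heading.trim(), content: body.join('\n').trim() };
    });
}
```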
Accepts `{ target, projectId, types?, limit? }`:
- Queries active SynapseEntry records for the project
- Calls target-specific emitter to format output
Emitters (one per tool):

- `emitClaudeCode(entries)` — grouped markdown with `## Intents`, `## Constraints`, etc.
- `emitCursor(entries)` — array of `{ filename, content }` with YAML frontmatter per file
- `emitWindsurf(entries)` — array of `{ filename, content }` as plain .md files
- `emitCopilot(entries)` — same as Claude Code format
- `emitMarkdown(entries)` — generic markdown (default)
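The grouped-markdown emitter might be sketched as below (section titles and ordering are assumptions for illustration):

```javascript
// Sketch: group entries by type into sectioned markdown (hypothetical shape).
const SECTION_TITLES = {
  intent: 'Intents',
  constraint: 'Constraints',
  artifact: 'Artifacts',
  history: 'History',
};

function emitClaudeCode(entries) {
  const lines = [];
  for (const [type, title] of Object.entries(SECTION_TITLES)) {
    const group = entries.filter((e) => e.type === type);
    if (group.length === 0) continue;   // skip empty sections
    lines.push(`## ${title}`);
    for (const e of group) lines.push(`- ${e.content}`);
    lines.push('');
  }
  return lines.join('\n');
}
```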
New `classifySynapseEntry(text)` function using the same pattern as `classifyMessage()`:

- Model: Claude Haiku 3.5
- Prompt classifies into `intent | constraint | artifact | history`
- Extracts entities and tags
- Fallback type: `intent` (broadest category)
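The fallback path could be as simple as the following sketch (the function name and return shape are assumptions; it only illustrates that a failed LLM call still yields a storable classification):

```javascript
// Sketch: hypothetical fallback used when the Haiku classification call fails.
// 'intent' is the broadest category, so the entry is still stored and searchable.
function fallbackClassification(text) {
  return {
    type: 'intent',
    tags: [],
    entities: { people: [], projects: [], technologies: [], topics: [] },
    summary: text.slice(0, 80),   // crude one-liner in place of the LLM summary
  };
}
```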
Zero-dependency Node.js CLI using `process.argv` and `fetch`:

| Command | Action |
|---|---|
| `synapse sync` | Discovers CLAUDE.md, .cursor/rules/, .windsurf/rules/, etc. and POSTs each to /SynapseIngest |
| `synapse emit --target cursor` | POSTs to /SynapseEmit and writes files to disk |
| `synapse search <query>` | POSTs to /SynapseSearch and displays results |
| `synapse watch` | Watches context files via `fs.watch` and auto-syncs on change (2s debounce) |
| `synapse status` | Shows entry counts by type and source |

Env vars: `SYNAPSE_ENDPOINT`, `SYNAPSE_PROJECT`, `SYNAPSE_AUTH`
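Since the CLI is zero-dependency, argument handling has to be done by hand. A minimal sketch of how `process.argv` could be parsed (this is an assumption about the approach, not the actual bin/synapse.js):

```javascript
// Sketch: hand-rolled argv parsing for a zero-dependency CLI.
// First bare token is the command; "--flag value" pairs become flags.
function parseArgs(argv) {
  const [command, ...rest] = argv;
  const flags = {};
  const positional = [];
  for (let i = 0; i < rest.length; i++) {
    if (rest[i].startsWith('--')) {
      flags[rest[i].slice(2)] = rest[i + 1];  // consume the flag's value
      i++;
    } else {
      positional.push(rest[i]);
    }
  }
  return { command, flags, positional };
}
```

In the real CLI this would be called as `parseArgs(process.argv.slice(2))`.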
No additional work needed. The Harper MCP server auto-exposes the SynapseEntry table. Any MCP client can immediately:

- List/read entries via `resources/list` and `resources/read`
- Semantic search via `POST /SynapseSearch`
- Ingest context via `POST /SynapseIngest`
Soft reference via the `references: [String]` field (array of Memory IDs). Not a Harper `@relationship` — keeps schemas decoupled. A Synapse intent might reference the Slack messages (Memory records) that informed the decision.
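Because the link is soft, resolution happens application-side. A sketch of what that could look like, assuming a `memoriesById` lookup map (both names are hypothetical):

```javascript
// Sketch: resolve soft references to Memory records application-side.
// Missing IDs are silently dropped, which is the cost of avoiding @relationship.
function resolveReferences(entry, memoriesById) {
  return (entry.references ?? [])
    .map((id) => memoriesById.get(id))
    .filter(Boolean);
}
```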
| File | Action |
|---|---|
| `schema.graphql` | Add SynapseEntry type definition |
| `resources.js` | Add SynapseEntry table ext, SynapseSearch, SynapseIngest, SynapseEmit, classifySynapseEntry, parsers, emitters (~350 lines) |
| `bin/synapse.js` | Create CLI (new file) |
| `package.json` | Add "bin" field |
| `.env.example` | Add SYNAPSE_ENDPOINT, SYNAPSE_PROJECT, SYNAPSE_AUTH |
| `test/synapse-classify.test.js` | New — tests for classifySynapseEntry |
| `test/synapse-search.test.js` | New — tests for SynapseSearch |
| `test/synapse-ingest.test.js` | New — tests for SynapseIngest + parsers |
| `test/synapse-emit.test.js` | New — tests for SynapseEmit + emitters |
- Add SynapseEntry type to `schema.graphql`
- Add table destructure and SynapseEntry table extension to `resources.js`
- Verify with `npm run dev` — the `/SynapseEntry/` endpoint responds
- Add Synapse constants, `classifySynapseEntry()`, fallback function
- Add `SynapseSearch` resource class
- Write `test/synapse-classify.test.js` and `test/synapse-search.test.js`
- Add parsers object (all tool-specific parsers)
- Add `SynapseIngest` resource class
- Add emitters object (all tool-specific emitters)
- Add `SynapseEmit` resource class
- Write `test/synapse-ingest.test.js` and `test/synapse-emit.test.js`
- Create `bin/synapse.js` with sync, emit, search, watch, status commands
- Add `bin` field to `package.json`
- Add Synapse env vars to `.env.example`
- Update README.md with Synapse section
- Update docs/architecture.md with Synapse data flow
- Update CLAUDE.md with new files and conventions
After each phase:

- `npm test` — all existing + new tests pass
- `npm run dev` — endpoints respond at `localhost:9926`
- Manual: POST to `/SynapseIngest` with a CLAUDE.md, then `/SynapseSearch` to retrieve, then `/SynapseEmit` with `target: cursor` to format
- After Phase 4: run `synapse sync && synapse search "architecture" && synapse emit --target cursor` in the Cortex project itself