
Commit 67f237d
Rename SemMem to MemoryAccess, JSON output format, post-compaction insight hooks
- Rename SemMemApp class and all references to MemoryAccessApp
- Update all legacy naming (sem-mem, Sem-Mem, brainspace) to memory-access
- Convert all MCP tool output from plain text to structured JSON
- Add UserPromptSubmit hook for automatic post-compaction insight storage
- Rework PreCompact hook to use marker file + <pending-insights> block
- Update tests to assert on JSON output
- Add publishing workflow docs to CLAUDE.md
1 parent 5096add commit 67f237d
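The tests now assert on structured JSON rather than plain text. A minimal sketch of that style of assertion in shell — the response shape and field names here are invented for illustration, not the actual memory-access output schema:

```shell
# Hypothetical JSON tool response; "status" and "insight_id" are
# illustrative field names, not the real memory-access schema.
response='{"status": "stored", "insight_id": 7}'

# Assert on structure rather than prose, so message-wording changes
# don't break the test
if printf '%s' "$response" | grep -q '"status": "stored"'; then
  result=pass
else
  result=fail
fi
```

Matching on fields instead of full sentences is what makes JSON output easier to test than formatted text.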

10 files changed (+173 lines, −90 lines)

CLAUDE.md

Lines changed: 9 additions & 3 deletions
@@ -4,7 +4,9 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co
 
 ## What This Is
 
-A semantic memory MCP server that stores intent-based knowledge for AI agents. Text is decomposed into atomic insights, classified into semantic frames, embedded as vectors, and stored in SQLite with a subject-indexed knowledge graph.
+**memory-access** — a semantic memory MCP server that stores intent-based knowledge for AI agents. Text is decomposed into atomic insights, classified into semantic frames, embedded as vectors, and stored in SQLite with a subject-indexed knowledge graph.
+
+> **Naming:** The canonical name is `memory-access`. All old references to `sem-mem`, `semantic-memory`, `SemMem`, or `brainspace` are deprecated and should be updated to `memory-access` / `MemoryAccessApp`.
 
 ## Commands
 
@@ -37,7 +39,7 @@ uv run memory-access
 - **`normalizer.py`** — LLM decomposition/classification via Anthropic API (or Bedrock). Uses `DECOMPOSE_PROMPT` and `CLASSIFY_PROMPT`
 - **`embeddings.py`** — Embedding generation (OpenAI or Bedrock Titan), L2-normalized float32 vectors. `create_embedding_engine()` factory selects provider.
 - **`storage.py`** — `InsightStore` class: SQLite persistence, migration system, subject indexing, knowledge graph queries
-- **`server.py`** — `SemMemApp` orchestrator + MCP tool definitions (9 tools: `store_insight`, `search_insights`, `list_insights`, `update_insight`, `forget`, `search_by_subject`, `related_insights`, `add_subject_relation`, `get_subject_relations`)
+- **`server.py`** — `MemoryAccessApp` orchestrator + MCP tool definitions (9 tools: `store_insight`, `search_insights`, `list_insights`, `update_insight`, `forget`, `search_by_subject`, `related_insights`, `add_subject_relation`, `get_subject_relations`)
 
 ### Database
 
@@ -70,6 +72,10 @@ Migrations are Python functions in `storage.py` (named `_migrate_NNN_*`), tracke
 - `BEDROCK_EMBEDDING_MODEL` — Bedrock embedding model ID (default: `amazon.titan-embed-text-v2:0`)
 - `BEDROCK_LLM_MODEL` — Bedrock Claude model ID (default: `us.anthropic.claude-haiku-4-5-20251001-v1:0`)
 
+## Publishing
+
+The git workflow automatically publishes to PyPI. To release a new version, just push a commit to the `main` branch of the memory-access repo. The GitHub Actions release workflow bumps versions in both `pyproject.toml` and `.claude-plugin/plugin.json` and publishes.
+
 ## Plugin
 
-This repo is also a Claude Code plugin (`claude plugin install memory-access@emmahyde`). Plugin files live at the repo root: `.claude-plugin/`, `skills/`, `hooks/`. Includes a `using-semantic-memory` skill and a `PreCompact` hook.
+This repo is also a Claude Code plugin (`claude plugin install memory-access@emmahyde`). Plugin files live at the repo root: `.claude-plugin/`, `skills/`, `hooks/`. Includes a `using-semantic-memory` skill, a `PreCompact` hook for insight preservation, and a `UserPromptSubmit` hook for post-compaction insight storage.

commands/setup-memory-access.md

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ description: Interactive wizard to install and configure the memory-access MCP s
 allowed-tools: ["Bash", "Read", "Edit", "Write", "AskUserQuestion"]
 ---
 
-# Setup Sem-Mem
+# Setup Memory-Access
 
 Walk the user through installing and configuring the memory-access semantic memory system. Execute each step sequentially, reporting progress as you go.

docs/plans/2026-02-07-knowledge-bases.md

Lines changed: 4 additions & 4 deletions
@@ -438,21 +438,21 @@ git commit -m "feat: add Ingestor with crawl → split → normalize → embed
 
 ---
 
-### Task 5: SemMemApp KB Methods + MCP Tools
+### Task 5: MemoryAccessApp KB Methods + MCP Tools
 
 **Files:**
 - Modify: `src/memory_access/server.py`
 - Modify: `tests/test_server.py`
 
-**Step 1: Add KB methods to `SemMemApp`**
+**Step 1: Add KB methods to `MemoryAccessApp`**
 
 Add to the class:
 - `search_knowledge_base(query, kb_name="", limit=5) -> str` — embed query, search kb_chunks (optionally filtered by KB name via `get_kb_by_name`), format results
 - `list_knowledge_bases() -> str` — list all KBs with descriptions and chunk counts
 
 **Step 2: Update `create_app` to accept crawl service config**
 
-Add `crawl_service` parameter to `create_app()`. Store on `SemMemApp` for use by CLI (not by MCP tools directly — ingestion happens via CLI, not MCP).
+Add `crawl_service` parameter to `create_app()`. Store on `MemoryAccessApp` for use by CLI (not by MCP tools directly — ingestion happens via CLI, not MCP).
 
 **Step 3: Add MCP tools to `create_mcp_server`**
 
@@ -472,7 +472,7 @@ async def list_knowledge_bases() -> str:
 
 **Step 4: Add tests to `tests/test_server.py`**
 
-Test the new `SemMemApp` methods with mocked store/embeddings. Verify `search_knowledge_base` formats results correctly. Verify `list_knowledge_bases` returns formatted list.
+Test the new `MemoryAccessApp` methods with mocked store/embeddings. Verify `search_knowledge_base` formats results correctly. Verify `list_knowledge_bases` returns formatted list.
 
 **Step 5: Commit**

hooks/hooks.json

Lines changed: 13 additions & 1 deletion
@@ -1,5 +1,5 @@
 {
-  "description": "Sem-Mem hooks — preserves knowledge before context compaction",
+  "description": "memory-access hooks — preserves knowledge before and after context compaction",
   "hooks": {
     "PreCompact": [
       {
@@ -12,6 +12,18 @@
         }
       ]
     }
+    ],
+    "UserPromptSubmit": [
+      {
+        "matcher": "*",
+        "hooks": [
+          {
+            "type": "command",
+            "command": "bash ${CLAUDE_PLUGIN_ROOT}/hooks/scripts/check-pending-insights.sh",
+            "timeout": 5
+          }
+        ]
+      }
     ]
   }
 }
hooks/scripts/check-pending-insights.sh

Lines changed: 26 additions & 0 deletions
@@ -0,0 +1,26 @@
+#!/bin/bash
+# UserPromptSubmit hook: Check for pending insights from a pre-compaction cycle.
+#
+# If the marker file exists, a compaction recently occurred and the compaction
+# summary should contain a <pending-insights> block. Emit a system message
+# telling the agent to extract and store them, then clean up the marker.
+
+set -euo pipefail
+
+MARKER_FILE="$HOME/.claude/memory-access/insights-pending"
+
+if [ ! -f "$MARKER_FILE" ]; then
+  # No pending insights — no-op
+  echo '{"continue": true}'
+  exit 0
+fi
+
+# Remove marker immediately to prevent re-triggering on subsequent messages
+rm -f "$MARKER_FILE"
+
+cat <<'INSTRUCTION'
+{
+  "continue": true,
+  "systemMessage": "POST-COMPACTION INSIGHT STORAGE: A compaction just occurred and your context summary should contain a <pending-insights> block with insights to preserve.\n\nYour task: spawn a BACKGROUND agent to store these insights. Do this silently without disrupting the user's request.\n\n1. Parse the <pending-insights> block from your current context\n2. For each insight, call `store_insight` with the text, domain, and any git context fields\n3. For any structural relationships mentioned, call `add_subject_relation`\n4. If no <pending-insights> block is found in your context, skip silently\n\nUse: Task(subagent_type=\"general-purpose\", model=\"haiku\", run_in_background=true, prompt=\"...\")\n\nIMPORTANT: Do not let this interfere with the user's actual request. Handle it as a background task and proceed with whatever the user asked."
+}
+INSTRUCTION
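The marker-file handshake above can be exercised in isolation. This sketch reimplements the check against a temporary directory, so the real hook state is untouched (the paths and the systemMessage text here are stand-ins, not the script's actual output):

```shell
# Sketch of the marker-file handshake using a temp dir instead of
# "$HOME/.claude/memory-access"; message text is illustrative only.
dir=$(mktemp -d)
marker="$dir/insights-pending"

check() {
  if [ ! -f "$marker" ]; then
    echo '{"continue": true}'   # no marker: plain pass-through
    return 0
  fi
  rm -f "$marker"               # consume the marker so we fire exactly once
  echo '{"continue": true, "systemMessage": "store pending insights"}'
}

first=$(check)      # marker absent: no-op response
touch "$marker"     # simulate the PreCompact hook creating the marker
second=$(check)     # marker present: instruction emitted, marker removed
third=$(check)      # marker already consumed: back to the no-op response
rm -rf "$dir"
```

Deleting the marker before emitting the instruction is what guarantees the storage prompt fires on exactly one user message after compaction.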

hooks/scripts/pre-compact.sh

Lines changed: 14 additions & 19 deletions
@@ -1,32 +1,27 @@
 #!/bin/bash
-# PreCompact hook: Extract and store insights from the current conversation
-# before context compaction occurs. This preserves knowledge that would
-# otherwise be lost when the conversation window is compacted.
+# PreCompact hook: Preserve knowledge before context compaction.
 #
-# Strategy: Read hook input, output a system message instructing Claude
-# to store key insights via memory-access MCP tools before compaction.
+# Strategy:
+# 1. Create a marker file so a post-compaction hook knows insights are pending
+# 2. Instruct the LLM to include insights in a structured <pending-insights>
+#    block within the compaction summary — these survive compaction and get
+#    stored automatically on the next user message.
 
 set -euo pipefail
 
 # Read hook input from stdin
 input=$(cat)
 
-# Extract transcript path from hook input
-transcript_path=$(echo "$input" | jq -r '.transcript_path // empty')
+# Create marker file so post-compaction UserPromptSubmit hook can detect pending insights
+MARKER_DIR="$HOME/.claude/memory-access"
+MARKER_FILE="$MARKER_DIR/insights-pending"
+mkdir -p "$MARKER_DIR"
+touch "$MARKER_FILE"
 
-if [ -z "$transcript_path" ] || [ ! -f "$transcript_path" ]; then
-  # No transcript available — provide guidance without transcript content
-  cat <<'GUIDANCE'
-{
-  "systemMessage": "IMPORTANT — Pre-compaction knowledge preservation: Before this context is compacted, review the conversation for key insights, decisions, discoveries, and solutions. For each significant finding, call the `store_insight` MCP tool with appropriate domain tags and git context (repo, pr, author, project, task) if applicable. Focus on:\n1. Non-obvious technical decisions and their rationale\n2. Bug fixes and their root causes\n3. Architectural patterns discovered or established\n4. Solutions to problems that took multiple attempts\n5. Key facts about the codebase that were expensive to discover\n\nStore these BEFORE compaction occurs so they persist in semantic memory."
-}
-GUIDANCE
-  exit 0
-fi
-
-# Transcript exists — include a summary directive
+# System message instructs the LLM to embed insights in a structured block
+# within the compaction summary. The post-compaction hook will trigger storage.
 cat <<'GUIDANCE'
 {
-  "systemMessage": "IMPORTANT — Pre-compaction knowledge preservation: Before this context is compacted, review the conversation for key insights, decisions, discoveries, and solutions that should be preserved in semantic memory. For each significant finding, call the `store_insight` MCP tool with:\n- Descriptive text capturing the insight\n- Relevant domain tags (e.g., 'python,asyncio' or 'react,hooks')\n- Git context if applicable (repo, pr, author, project, task)\n- Source indicating this session\n\nPrioritize storing:\n1. Non-obvious technical decisions and WHY they were made\n2. Bug root causes and their fixes\n3. Architectural patterns discovered or established\n4. Solutions that took multiple attempts to find\n5. Key codebase facts that were expensive to discover\n6. Problem-resolution pairs (what broke and how it was fixed)\n\nAlso call `add_subject_relation` for any structural relationships discovered (e.g., repo contains project, person works_on project).\n\nDo this NOW before compaction loses this context."
+  "systemMessage": "IMPORTANT — Pre-compaction knowledge preservation.\n\nYou MUST include a <pending-insights> block in your compaction summary containing insights worth preserving. Format each insight as a line with text and domain:\n\n<pending-insights>\n- text: \"Description of the insight\" | domain: \"comma,separated,tags\"\n- text: \"Another insight\" | domain: \"relevant,domains\"\n</pending-insights>\n\nWhat to include:\n1. Non-obvious technical decisions and WHY they were made\n2. Bug root causes and their fixes (problem-resolution pairs)\n3. Architectural patterns discovered or established\n4. Solutions that took multiple attempts to find\n5. Key codebase facts that were expensive to discover\n6. Structural relationships (repo contains project, person works_on project)\n\nInclude git context (repo, pr, author, project) as additional fields if applicable:\n- text: \"...\" | domain: \"...\" | repo: \"org/repo\" | project: \"project-name\"\n\nThis block will be automatically processed after compaction to store insights in semantic memory. Do NOT skip this block — it is the ONLY way insights survive compaction."
 }
 GUIDANCE
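After compaction, the stored instruction asks an agent to pull individual insights back out of the summary. A minimal extraction sketch over the same `- text: ... | domain: ...` line format — the summary text and the two insights here are invented examples:

```shell
# Hypothetical compaction summary containing a <pending-insights> block
# in the format the PreCompact prompt requests.
summary='...rest of the work summary...
<pending-insights>
- text: "WAL mode fixed the SQLite lock errors" | domain: "sqlite,concurrency"
- text: "Hook output must be a JSON object on stdout" | domain: "claude-code,hooks"
</pending-insights>'

# Keep only the lines between the tags, then drop the tag lines themselves
insights=$(printf '%s\n' "$summary" \
  | sed -n '/<pending-insights>/,/<\/pending-insights>/p' \
  | sed '1d;$d')

# One store_insight call would be issued per matching line
count=$(printf '%s\n' "$insights" | grep -c '^- text:')
```

The line-per-insight format keeps extraction to a couple of sed/grep passes, with no JSON parsing needed inside the summary itself.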

skills/using-semantic-memory/SKILL.md

Lines changed: 2 additions & 2 deletions
@@ -3,9 +3,9 @@ name: using-semantic-memory
 description: This skill should be used when the user asks to "store a memory", "remember this", "save this insight", "search memories", "find related insights", "what do I know about", "connect these concepts", "add a relationship", "traverse the knowledge graph", or when working with the memory-access MCP tools. Also activates when storing learnings, debugging knowledge, or building on prior insights.
 ---
 
-# Using Sem-Mem
+# Using Memory-Access
 
-Sem-Mem is a persistent knowledge graph MCP server that stores insights as normalized semantic frames with embeddings and typed subject relations. Use it to build durable knowledge that survives context compaction and spans sessions.
+Memory-Access is a persistent knowledge graph MCP server that stores insights as normalized semantic frames with embeddings and typed subject relations. Use it to build durable knowledge that survives context compaction and spans sessions.
 
 ## When to Use
