Rename SemMem to MemoryAccess, JSON output format, post-compaction insight hooks
- Rename SemMemApp class and all references to MemoryAccessApp
- Update all legacy naming (sem-mem, Sem-Mem, brainspace) to memory-access
- Convert all MCP tool output from plain text to structured JSON
- Add UserPromptSubmit hook for automatic post-compaction insight storage
- Rework PreCompact hook to use marker file + <pending-insights> block
- Update tests to assert on JSON output
- Add publishing workflow docs to CLAUDE.md
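The plain-text-to-JSON conversion can be sketched as below. The envelope fields (`status`, `count`, `results`) are assumptions for illustration, not the actual memory-access tool schema.

```python
import json

def format_search_results(results: list[dict]) -> str:
    # Hypothetical envelope: the real field names live in the
    # memory-access tool implementations, not here.
    return json.dumps(
        {"status": "ok", "count": len(results), "results": results},
        ensure_ascii=False,
    )

hits = [{"text": "SQLite stores embeddings as vectors", "score": 0.91}]
payload = format_search_results(hits)
print(payload)
```

Structured output lets callers parse fields instead of scraping formatted strings, which is what the updated tests assert on.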
CLAUDE.md (+9 −3)
@@ -4,7 +4,9 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co
 ## What This Is

-A semantic memory MCP server that stores intent-based knowledge for AI agents. Text is decomposed into atomic insights, classified into semantic frames, embedded as vectors, and stored in SQLite with a subject-indexed knowledge graph.
+**memory-access** — a semantic memory MCP server that stores intent-based knowledge for AI agents. Text is decomposed into atomic insights, classified into semantic frames, embedded as vectors, and stored in SQLite with a subject-indexed knowledge graph.
+
+> **Naming:** The canonical name is `memory-access`. All old references to `sem-mem`, `semantic-memory`, `SemMem`, or `brainspace` are deprecated and should be updated to `memory-access` / `MemoryAccessApp`.
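The decompose, classify, embed, store pipeline described above can be sketched as follows. Every function body here is a placeholder (the real logic lives in `normalizer.py` and `storage.py`, backed by LLM and embedding calls), and the table schema is invented for illustration.

```python
import json
import sqlite3

def decompose(text: str) -> list[str]:
    # Placeholder for the LLM decomposition step (DECOMPOSE_PROMPT).
    return [s.strip() for s in text.split(".") if s.strip()]

def classify(insight: str) -> str:
    # Placeholder for the LLM classification step (CLASSIFY_PROMPT).
    return "statement"

def embed(insight: str) -> list[float]:
    # Placeholder for the embedding model call.
    return [float(len(insight))]

def ingest(conn: sqlite3.Connection, text: str) -> int:
    # Invented schema: the real store keeps frames, vectors, and a
    # subject-indexed graph across several tables.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS insights (text TEXT, frame TEXT, vec TEXT)"
    )
    rows = [(i, classify(i), json.dumps(embed(i))) for i in decompose(text)]
    conn.executemany("INSERT INTO insights VALUES (?, ?, ?)", rows)
    return len(rows)

conn = sqlite3.connect(":memory:")
n = ingest(conn, "Vectors live in SQLite. Frames index subjects.")
print(n)  # → 2
```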
## Commands
@@ -37,7 +39,7 @@ uv run memory-access
- **`normalizer.py`** — LLM decomposition/classification via Anthropic API (or Bedrock). Uses `DECOMPOSE_PROMPT` and `CLASSIFY_PROMPT`
@@ -70,6 +72,10 @@ Migrations are Python functions in `storage.py` (named `_migrate_NNN_*`), tracke
- `BEDROCK_EMBEDDING_MODEL` — Bedrock embedding model ID (default: `amazon.titan-embed-text-v2:0`)
- `BEDROCK_LLM_MODEL` — Bedrock Claude model ID (default: `us.anthropic.claude-haiku-4-5-20251001-v1:0`)
+## Publishing
+
+The git workflow automatically publishes to PyPI. To release a new version, push a commit to the `main` branch of the memory-access repo. The GitHub Actions release workflow bumps the version in both `pyproject.toml` and `.claude-plugin/plugin.json` and publishes.
 ## Plugin

-This repo is also a Claude Code plugin (`claude plugin install memory-access@emmahyde`). Plugin files live at the repo root: `.claude-plugin/`, `skills/`, `hooks/`. Includes a `using-semantic-memory` skill and a `PreCompact` hook.
+This repo is also a Claude Code plugin (`claude plugin install memory-access@emmahyde`). Plugin files live at the repo root: `.claude-plugin/`, `skills/`, `hooks/`. Includes a `using-semantic-memory` skill, a `PreCompact` hook for insight preservation, and a `UserPromptSubmit` hook for post-compaction insight storage.
Walk the user through installing and configuring the memory-access semantic memory system. Execute each step sequentially, reporting progress as you go.
- `search_knowledge_base(query, kb_name="", limit=5) -> str` — embed query, search kb_chunks (optionally filtered by KB name via `get_kb_by_name`), format results
- `list_knowledge_bases() -> str` — list all KBs with descriptions and chunk counts
**Step 2: Update `create_app` to accept crawl service config**
-Add `crawl_service` parameter to `create_app()`. Store on `SemMemApp` for use by CLI (not by MCP tools directly — ingestion happens via CLI, not MCP).
+Add `crawl_service` parameter to `create_app()`. Store on `MemoryAccessApp` for use by CLI (not by MCP tools directly — ingestion happens via CLI, not MCP).
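A minimal sketch of the change Step 2 describes, assuming a dataclass-style app object. The real `MemoryAccessApp` constructor and attribute names may differ; this only illustrates threading the dependency through the factory.

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class MemoryAccessApp:
    # Hypothetical shape: consumed by the CLI ingestion path, never by MCP tools.
    crawl_service: Optional[Any] = None

def create_app(crawl_service: Optional[Any] = None) -> MemoryAccessApp:
    # Store the service on the app so the CLI can reach it later.
    return MemoryAccessApp(crawl_service=crawl_service)

app = create_app(crawl_service="stub-crawler")
print(app.crawl_service)  # → stub-crawler
```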
-Test the new `SemMemApp` methods with mocked store/embeddings. Verify `search_knowledge_base` formats results correctly. Verify `list_knowledge_bases` returns formatted list.
+Test the new `MemoryAccessApp` methods with mocked store/embeddings. Verify `search_knowledge_base` formats results correctly. Verify `list_knowledge_bases` returns formatted list.
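One possible shape for those tests, using `unittest.mock`. The `store`/`embeddings` attribute names and the `search_chunks` signature are assumptions about the real class, as is the minimal stand-in implementation.

```python
from unittest.mock import MagicMock

class MemoryAccessApp:  # minimal stand-in; the real class is richer
    def __init__(self, store, embeddings):
        self.store = store
        self.embeddings = embeddings

    def search_knowledge_base(self, query, kb_name="", limit=5):
        vec = self.embeddings.embed(query)
        hits = self.store.search_chunks(vec, kb_name=kb_name, limit=limit)
        return "\n".join(f"{i + 1}. {h}" for i, h in enumerate(hits))

store, embeddings = MagicMock(), MagicMock()
embeddings.embed.return_value = [0.1, 0.2]
store.search_chunks.return_value = ["asyncio event loop chunk"]

app = MemoryAccessApp(store, embeddings)
out = app.search_knowledge_base("event loop", kb_name="python-docs")

# The mocks record their calls, so the test can assert the query was
# embedded and passed through with the right filters.
store.search_chunks.assert_called_once_with([0.1, 0.2], kb_name="python-docs", limit=5)
print(out)  # → 1. asyncio event loop chunk
```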
+# Remove marker immediately to prevent re-triggering on subsequent messages
+rm -f "$MARKER_FILE"
+
+cat <<'INSTRUCTION'
+{
+  "continue": true,
+  "systemMessage": "POST-COMPACTION INSIGHT STORAGE: A compaction just occurred and your context summary should contain a <pending-insights> block with insights to preserve.\n\nYour task: spawn a BACKGROUND agent to store these insights. Do this silently without disrupting the user's request.\n\n1. Parse the <pending-insights> block from your current context\n2. For each insight, call `store_insight` with the text, domain, and any git context fields\n3. For any structural relationships mentioned, call `add_subject_relation`\n4. If no <pending-insights> block is found in your context, skip silently\n\nUse: Task(subagent_type=\"general-purpose\", model=\"haiku\", run_in_background=true, prompt=\"...\")\n\nIMPORTANT: Do not let this interfere with the user's actual request. Handle it as a background task and proceed with whatever the user asked."
+# Create marker file so post-compaction UserPromptSubmit hook can detect pending insights
+MARKER_DIR="$HOME/.claude/memory-access"
+MARKER_FILE="$MARKER_DIR/insights-pending"
+mkdir -p "$MARKER_DIR"
+touch "$MARKER_FILE"
-if [ -z "$transcript_path" ] || [ ! -f "$transcript_path" ]; then
-  # No transcript available — provide guidance without transcript content
-  cat <<'GUIDANCE'
-{
-  "systemMessage": "IMPORTANT — Pre-compaction knowledge preservation: Before this context is compacted, review the conversation for key insights, decisions, discoveries, and solutions. For each significant finding, call the `store_insight` MCP tool with appropriate domain tags and git context (repo, pr, author, project, task) if applicable. Focus on:\n1. Non-obvious technical decisions and their rationale\n2. Bug fixes and their root causes\n3. Architectural patterns discovered or established\n4. Solutions to problems that took multiple attempts\n5. Key facts about the codebase that were expensive to discover\n\nStore these BEFORE compaction occurs so they persist in semantic memory."
-}
-GUIDANCE
-  exit 0
-fi
-
-# Transcript exists — include a summary directive
+# System message instructs the LLM to embed insights in a structured block
+# within the compaction summary. The post-compaction hook will trigger storage.
 cat <<'GUIDANCE'
 {
-  "systemMessage": "IMPORTANT — Pre-compaction knowledge preservation: Before this context is compacted, review the conversation for key insights, decisions, discoveries, and solutions that should be preserved in semantic memory. For each significant finding, call the `store_insight` MCP tool with:\n- Descriptive text capturing the insight\n- Relevant domain tags (e.g., 'python,asyncio' or 'react,hooks')\n- Git context if applicable (repo, pr, author, project, task)\n- Source indicating this session\n\nPrioritize storing:\n1. Non-obvious technical decisions and WHY they were made\n2. Bug root causes and their fixes\n3. Architectural patterns discovered or established\n4. Solutions that took multiple attempts to find\n5. Key codebase facts that were expensive to discover\n6. Problem-resolution pairs (what broke and how it was fixed)\n\nAlso call `add_subject_relation` for any structural relationships discovered (e.g., repo contains project, person works_on project).\n\nDo this NOW before compaction loses this context."
+  "systemMessage": "IMPORTANT — Pre-compaction knowledge preservation.\n\nYou MUST include a <pending-insights> block in your compaction summary containing insights worth preserving. Format each insight as a line with text and domain:\n\n<pending-insights>\n- text: \"Description of the insight\" | domain: \"comma,separated,tags\"\n- text: \"Another insight\" | domain: \"relevant,domains\"\n</pending-insights>\n\nWhat to include:\n1. Non-obvious technical decisions and WHY they were made\n2. Bug root causes and their fixes (problem-resolution pairs)\n3. Architectural patterns discovered or established\n4. Solutions that took multiple attempts to find\n5. Key codebase facts that were expensive to discover\n6. Structural relationships (repo contains project, person works_on project)\n\nInclude git context (repo, pr, author, project) as additional fields if applicable:\n- text: \"...\" | domain: \"...\" | repo: \"org/repo\" | project: \"project-name\"\n\nThis block will be automatically processed after compaction to store insights in semantic memory. Do NOT skip this block — it is the ONLY way insights survive compaction."
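The `<pending-insights>` format defined above can be parsed with a few lines of Python. This is an illustrative sketch, not the plugin's actual parser, and it assumes field values never contain the ` | ` separator.

```python
import re

def parse_pending_insights(summary: str) -> list[dict]:
    # Extract the block; the hook skips silently when none is present.
    match = re.search(r"<pending-insights>(.*?)</pending-insights>", summary, re.S)
    if not match:
        return []
    insights = []
    for line in match.group(1).strip().splitlines():
        fields = {}
        # Each line looks like: - text: "..." | domain: "..." | repo: "..."
        for part in line.lstrip("- ").split(" | "):
            key, _, value = part.partition(":")
            fields[key.strip()] = value.strip().strip('"')
        insights.append(fields)
    return insights

summary = '''<pending-insights>
- text: "Marker file gates the hook" | domain: "bash,hooks"
- text: "Tool output is JSON now" | domain: "mcp" | repo: "org/memory-access"
</pending-insights>'''
parsed = parse_pending_insights(summary)
print(parsed[0]["text"])  # → Marker file gates the hook
```

Each parsed dict maps directly onto a `store_insight` call (text, domain, plus optional git context fields).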
skills/using-semantic-memory/SKILL.md (+2 −2)
@@ -3,9 +3,9 @@ name: using-semantic-memory
 description: This skill should be used when the user asks to "store a memory", "remember this", "save this insight", "search memories", "find related insights", "what do I know about", "connect these concepts", "add a relationship", "traverse the knowledge graph", or when working with the memory-access MCP tools. Also activates when storing learnings, debugging knowledge, or building on prior insights.
 ---

-# Using Sem-Mem
+# Using Memory-Access

-Sem-Mem is a persistent knowledge graph MCP server that stores insights as normalized semantic frames with embeddings and typed subject relations. Use it to build durable knowledge that survives context compaction and spans sessions.
+Memory-Access is a persistent knowledge graph MCP server that stores insights as normalized semantic frames with embeddings and typed subject relations. Use it to build durable knowledge that survives context compaction and spans sessions.