
Three-Layer Memory Skill

Chinese documentation (中文说明)

Build a memory system for AI agents that is:

  • structured instead of ad hoc
  • searchable instead of bloated
  • layered instead of stuffing everything into one prompt

This repo packages a reusable skill for OpenClaw/Codex-style workspaces with:

  • MEMORY.md for curated long-term memory
  • memory/YYYY-MM-DD.md for raw daily context
  • three scheduled sync layers
  • qmd-first semantic retrieval
  • ripgrep fallback when qmd is unavailable
  • macOS launchd support for local index refresh

Why This Exists

Most agent memory setups fail in one of two ways:

  1. everything gets dumped into one giant memory file
  2. daily logs exist, but retrieval is so weak that nobody actually uses them

This skill separates memory by lifecycle:

  • Daily captures raw source material
  • Weekly compounds it into durable memory
  • Micro catches fresh context before it gets lost

The result is a system that remembers more without turning every session into a full-history replay.

At a Glance

```mermaid
flowchart TD
    A["Recent sessions / activity"] --> B["L3 Micro-Sync<br/>append one brief note"]
    A --> C["L1 Daily Context Sync<br/>write structured daily log"]
    B --> D["memory/YYYY-MM-DD.md"]
    C --> D
    D --> E["L2 Weekly Memory Compound<br/>distill durable patterns"]
    E --> F["MEMORY.md"]
    D --> G["qmd query / vsearch"]
    F --> G
    G --> H["Targeted recall instead of full-file reads"]
```

Architecture

Layer 1: Daily Context Sync

  • schedule: every day at 23:00
  • source: the day's session history
  • output: structured raw notes in memory/YYYY-MM-DD.md

Daily is the main capture layer. Its job is to avoid losing the day. Each daily log should also include a compact ## Tags section containing inline hashtag tags such as #system #investment #travel.
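For example, a day's log might look like this (an illustrative shape, not a mandated format; section names other than Tags are placeholders):

```markdown
# 2026-03-07

## Sessions
- Debugged the qmd refresh job; the launchd plist path was wrong.

## Decisions
- Keep the qmd index refresh separate from the three agent jobs.

## Tags
#system #travel
```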

Layer 2: Weekly Memory Compound

  • schedule: Sunday 22:00
  • source: the last 7 daily logs
  • output: updates to MEMORY.md

Weekly is the compounding layer. Its job is to turn raw notes into durable memory.

Layer 3: Hourly Micro-Sync

  • schedule: 10:00 / 13:00 / 16:00 / 19:00 / 22:00
  • source: meaningful activity from the last 3 hours
  • output: at most one brief note appended to today's daily log

Micro is the safety net. Its job is to stop fresh context from falling through the cracks. When useful, it can refresh the day's tags, but those tags should stay in inline hashtag form rather than heading-style labels.

Retrieval

Never bulk-read MEMORY.md or the whole memory/ tree by default.

The retrieval order is:

  1. qmd query
  2. targeted snippet fetch
  3. ripgrep fallback
  4. full-file reads only when targeted retrieval fails
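Steps 1 and 3 of the order above can be sketched in shell. This is an illustrative assumption, not the repo's actual scripts: the `qmd query` invocation is a guessed CLI shape, the demo workspace is throwaway, and step 2's targeted snippet fetch is omitted.

```shell
# Demo of the retrieval fallback chain against a throwaway workspace.
set -eu
demo=$(mktemp -d)
mkdir -p "$demo/memory"
printf 'refreshed qmd index at 10:00\n' > "$demo/memory/2026-03-07.md"
: > "$demo/MEMORY.md"

QUERY="qmd"
if command -v qmd >/dev/null 2>&1; then
  result=$(qmd query "$QUERY")                                   # 1. semantic retrieval first
elif command -v rg >/dev/null 2>&1; then
  result=$(rg -n "$QUERY" "$demo/MEMORY.md" "$demo/memory/")     # 3. ripgrep fallback
else
  result=$(grep -rn "$QUERY" "$demo/MEMORY.md" "$demo/memory/")  # last resort: plain grep
fi
printf '%s\n' "$result"
```

Only when none of these return a usable hit should an agent fall back to reading whole files.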

What You Get

  • SKILL.md
    • the skill entrypoint and workflow
  • agents/openai.yaml
    • metadata for OpenAI/Codex-style skill loaders
  • scripts/
    • search, snippet, refresh, and weekly finalize helpers
  • references/
    • architecture notes, qmd setup, AGENTS snippet, example jobs
  • templates/launchd/
    • macOS launchd template for scheduled qmd refresh

Quick Start

1. Run the installer

```shell
bash install.sh /path/to/your/workspace
```

The installer is non-destructive:

  • copies scripts into <workspace>/tools
  • copies references into <workspace>/.three-layer-memory/references
  • copies launchd templates into <workspace>/.three-layer-memory/templates/launchd
  • creates memory/ and logs/
  • creates MEMORY.md only if it does not already exist

It does not automatically overwrite your existing AGENTS.md or scheduler config.
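The create-only-if-missing rule can be sketched like this (a simplified illustration of the behavior described above, not the actual install.sh):

```shell
# Sketch of install.sh's non-destructive layout setup (illustrative only).
set -eu
ws="${1:-$(mktemp -d)}"   # workspace path; defaults to a temp dir for the demo
mkdir -p "$ws/tools" "$ws/memory" "$ws/logs"
# Never clobber an existing MEMORY.md.
if [ ! -f "$ws/MEMORY.md" ]; then
  printf '# MEMORY\n' > "$ws/MEMORY.md"
fi
echo "workspace ready: $ws"
```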

2. Install qmd (optional but recommended)

See references/setup-qmd.md.

If qmd is available, the scripts use semantic retrieval first. If not, they fall back to ripgrep.

3. Copy or merge the memory rules

Use:

  • references/agents-snippet.md

to merge the mandatory retrieval-first rule into your agent instructions.

4. Install the three jobs into your runtime scheduler

Use:

  • references/jobs.example.json

Important:

  • this file is a template
  • in many runtimes, it is not the live scheduler source by itself
  • you may need to import or merge it into your actual runtime jobs store

5. Install qmd refresh scheduling if needed

The recurring qmd refresh is configured separately from the three agent jobs.

For macOS, use:

  • templates/launchd/com.openclaw.qmd-refresh.plist.template
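A rendered plist might look roughly like this sketch (the Label matches the template name; the script path and the hourly StartInterval are assumptions — see the template in this repo for the real fields):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key>
  <string>com.openclaw.qmd-refresh</string>
  <key>ProgramArguments</key>
  <array>
    <string>/bin/bash</string>
    <string>/path/to/workspace/tools/qmd_refresh.sh</string>
  </array>
  <key>StartInterval</key>
  <integer>3600</integer>
</dict>
</plist>
```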

6. Manual installation (alternative to install.sh)

If you skip the installer, copy at least these scripts into your workspace:

  • scripts/memory_query.sh
  • scripts/memory_get.sh
  • scripts/search_memory.sh
  • scripts/refresh_memory_index.sh
  • scripts/qmd_refresh.sh
  • scripts/weekly_memory_finalize.sh

Recommended Validation Order

Run these in order:

  1. bash scripts/memory_query.sh "memory"
  2. bash scripts/memory_get.sh 2026-03-07.md:1 5
  3. bash scripts/refresh_memory_index.sh
  4. manually run Micro
  5. manually run Daily
  6. manually run Weekly last

This catches retrieval and indexing issues before anything starts rewriting MEMORY.md.

Good Fit

This skill is a good fit if you want:

  • a memory system for a real working agent
  • durable recall without stuffing everything into context
  • a local-first retrieval path
  • a workflow that can run on laptops and servers

Not the Goal

This repo is not trying to be:

  • a fully managed hosted memory product
  • a generic vector database wrapper
  • a plugin that deeply rewires your runtime

It is a skill-first, file-first architecture that stays inspectable.

Public Repo Notes

  • This repo contains no private runtime state, secrets, or workspace-specific logs.
  • qmd is optional.
  • The scripts are intentionally small and inspectable.
  • The three agent layers and the qmd refresh schedule are usually configured in different places.

Acknowledgements

This project was informed by ideas shared publicly by:

The implementation here is an adapted, engineering-focused packaging of those ideas for a reusable memory skill. It is not claimed as the original source of the underlying concept.

License

MIT. See LICENSE.