FFatTiger/lore


Lore — one memory system across all your agents

中文 README · Quick Start · Manual Setup · Connect Agents · CLI Options · Daily Use · Development

1. Screenshots

Recall Analytics · Recall Workbench · Memory Browser · Dream Diary

2. Design philosophy

Lore is a long-term memory system for AI agents. It gives an agent a durable memory graph, a fixed startup baseline, per-prompt recall, explicit read tracking, and cautious write tools.

Supported runtimes:

| Runtime | Integration | Notes |
| --- | --- | --- |
| Pi | pi-extension/ | Best fit. Pi leaves long-term memory to extensions and keeps its system prompt compact, so Lore can act as the primary memory layer with little prompt competition. |
| Claude Code | claudecode-plugin/ | MCP tools, session-start boot injection, per-prompt recall injection, and guidance rules. |
| Codex | codex-plugin/ | Local marketplace plugin, MCP config, and optional hooks for boot / recall injection. |
| OpenClaw | openclaw-plugin/ | Runtime plugin with boot, recall, and Lore tools. |
| Hermes | hermes-plugin/ | MemoryProvider plugin with Lore tools and recall support. |
| Generic MCP clients | /api/mcp | Streamable HTTP MCP endpoint for clients that can connect to remote tools. |

Most agent memory systems stop at retrieval. Lore focuses on the full memory lifecycle:

  • Boot baseline — every session starts with stable identity, workflow, user, and runtime memories.
  • Recall before reply — the agent receives a small <recall> block with relevant candidates before answering.
  • Read before trust — recalled candidates are cues; the agent opens the memory node before relying on it.
  • URI-first graph — memories live at durable URIs such as core://agent, preferences://user, and project://my_project.
  • Disclosure triggers — each memory carries a natural-language condition that explains when it should surface.
  • Policy-guided writes — priority budgets, read-before-modify checks, and boot-node protection keep the graph stable.
  • Dream maintenance — scheduled review can inspect recall quality, structure, and stale nodes with rollback history.

Lore is built for agents that need continuity across sessions, tools, and runtimes.
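The lifecycle above can be sketched as a toy in-memory graph. This is illustrative only — names like `MemoryGraph` and `lore`-style URIs are modeled on the concepts in this README, not Lore's actual API, and real recall uses embeddings rather than the keyword overlap shown here:

```python
from dataclasses import dataclass

@dataclass
class MemoryNode:
    uri: str      # durable address, e.g. "core://agent"
    body: str
    trigger: str  # natural-language disclosure condition
    reads: int = 0  # explicit read tracking

class MemoryGraph:
    """Toy sketch of a URI-first memory graph with disclosure triggers."""

    def __init__(self):
        self.nodes: dict[str, MemoryNode] = {}

    def write(self, uri: str, body: str, trigger: str = "") -> None:
        self.nodes[uri] = MemoryNode(uri, body, trigger)

    def recall(self, query: str) -> list[str]:
        # Toy relevance: keyword overlap with each node's disclosure trigger.
        # Recall returns cues (URIs), never memory bodies.
        return [n.uri for n in self.nodes.values()
                if any(w in n.trigger.lower() for w in query.lower().split())]

    def read(self, uri: str) -> str:
        node = self.nodes[uri]
        node.reads += 1  # "read before trust": opening a node is tracked
        return node.body

graph = MemoryGraph()
graph.write("preferences://user", "Prefers concise answers.",
            trigger="when drafting a reply for the user")
candidates = graph.recall("drafting a reply")  # cues only
body = graph.read(candidates[0])               # open the node before relying on it
```

The split between `recall` (cheap candidate cues) and `read` (explicit, tracked retrieval of the body) mirrors the "read before trust" rule above.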


3. Quick Start

1. Run the install script

curl -fsSL https://raw.githubusercontent.com/FFatTiger/lore/main/scripts/install.sh | bash

Chinese installer output:

curl -fsSL https://raw.githubusercontent.com/FFatTiger/lore/main/scripts/install.zh.sh | bash

This single command starts the Lore server (Docker Compose), connects all 5 agent runtimes, and creates ~/.lore/config.json. Re-run anytime to update. Docker Compose pull/start output is shown; other installer subcommands stay quiet.

| Flag | Description |
| --- | --- |
| --pre | Pre-release channel (pre-latest Docker tag) |
| --dev | Dev channel (dev-latest Docker tag) |
| --channels CH,... | Specific runtimes: claudecode, codex, pi, openclaw, hermes |
| --base-url URL | External server URL (skips local Docker) |
| --skip-docker | Don't start or manage Docker |
| --force | Force reinstall even if version unchanged |
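If a script of yours needs to find the installed server, it can read ~/.lore/config.json. A minimal sketch, with one loud caveat: the field names base_url and api_token below are illustrative assumptions, not a documented schema — check your actual file:

```python
import json
from pathlib import Path

# NOTE: the keys "base_url" and "api_token" are assumptions for illustration;
# inspect your real ~/.lore/config.json for the actual schema.
SAMPLE = '{"base_url": "http://127.0.0.1:18901", "api_token": ""}'

def load_lore_config(path: Path = Path.home() / ".lore" / "config.json"):
    """Return the parsed Lore config dict, or None if the installer hasn't run."""
    try:
        return json.loads(path.read_text())
    except FileNotFoundError:
        return None

cfg = json.loads(SAMPLE)  # stand-in for load_lore_config() in this sketch
server = cfg["base_url"]
```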

2. Complete first-run setup

After the server is running, open:

http://127.0.0.1:18901/setup

Complete the setup flow:

  1. Embedding setup — configure an OpenAI-compatible embedding endpoint.
    • Embedding Base URL, for example http://host.docker.internal:8090/v1
    • Embedding API Key
    • Embedding Model, for example text-embedding-3-small
  2. View LLM setup — configure the model used by view refinement and Dream.
    • View LLM Base URL
    • View LLM API Key
    • View LLM Model, for example deepseek-v4-flash
  3. Global boot memories — review or save defaults for:
    • core://agent
    • core://soul
    • preferences://user
  4. Channel agent memories — review or save defaults for runtime-specific memories:
    • core://agent/claudecode
    • core://agent/codex
    • core://agent/openclaw
    • core://agent/hermes
    • core://agent/pi

Skip saves the default value for any empty boot node and moves to the next step.

3. Configure optional runtime settings

Open /settings after setup for:

  • recall scoring weights and thresholds
  • View LLM for view refinement and Dream
  • Dream schedule
  • backup settings
  • write policy settings

Embedding is required for semantic recall and index rebuilds. View LLM is required during setup so Dream and view refinement are ready when you enable them.
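To see why the embedding endpoint matters: semantic recall compares the query's embedding vector against stored memory embeddings, typically by cosine similarity. A minimal illustration — Lore's actual scoring also applies the configurable weights and thresholds from /settings, so this is the idea, not the implementation:

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two embedding vectors, in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy 3-d "embeddings"; real models such as text-embedding-3-small
# return vectors with hundreds or thousands of dimensions.
query   = [0.9, 0.1, 0.0]
memory1 = [0.8, 0.2, 0.1]  # semantically close to the query
memory2 = [0.0, 0.1, 0.9]  # semantically distant

# memory1 ranks above memory2 as a recall candidate
ranked = sorted([("memory1", memory1), ("memory2", memory2)],
                key=lambda m: cosine_similarity(query, m[1]), reverse=True)
```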


4. Manual Setup

Docker Compose

Create a docker-compose.yml:

services:
  postgres:
    image: fffattiger/pgvector-zhparser:pg16
    restart: unless-stopped
    environment:
      TZ: ${TZ:-Asia/Shanghai}
      POSTGRES_DB: ${POSTGRES_DB:-lore}
      POSTGRES_USER: ${POSTGRES_USER:-lore}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-change-me}
    ports:
      - "${POSTGRES_PORT:-55439}:5432"
    volumes:
      - ${POSTGRES_DATA_DIR:-./data/postgres}:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER:-lore} -d ${POSTGRES_DB:-lore}"]
      interval: 10s
      timeout: 5s
      retries: 10

  web:
    image: fffattiger/lore:latest
    restart: unless-stopped
    pull_policy: always
    depends_on:
      postgres:
        condition: service_healthy
    environment:
      TZ: ${TZ:-Asia/Shanghai}
      DATABASE_URL: postgresql://${POSTGRES_USER:-lore}:${POSTGRES_PASSWORD:-change-me}@postgres:5432/${POSTGRES_DB:-lore}
      API_TOKEN: ${API_TOKEN:-}
    ports:
      - "${WEB_PORT:-18901}:18901"
    volumes:
      - ${SNAPSHOT_DATA_DIR:-./data/snapshots}:/app/snapshots
Then bring it up and verify:

docker compose up -d
curl http://127.0.0.1:18901/api/health
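The compose file reads its settings from environment variables. An example .env placed next to docker-compose.yml — every value here is a placeholder; at minimum set POSTGRES_PASSWORD to something of your own:

```env
# .env — illustrative placeholders, change before use
TZ=Asia/Shanghai
POSTGRES_DB=lore
POSTGRES_USER=lore
# set a real password; both services must agree on it
POSTGRES_PASSWORD=change-me
POSTGRES_PORT=55439
POSTGRES_DATA_DIR=./data/postgres
WEB_PORT=18901
SNAPSHOT_DATA_DIR=./data/snapshots
# leave empty to disable token auth
API_TOKEN=
```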

Source build

git clone https://github.com/FFatTiger/lore.git
cd lore
docker compose up -d --build

5. Connect agents

The Quick Start install script handles this automatically. To connect agents to an external server, use --base-url. After installing, restart each agent runtime.

CLI options

# Stable (default)
curl -fsSL https://raw.githubusercontent.com/FFatTiger/lore/main/scripts/install.sh | bash

# Chinese output
curl -fsSL https://raw.githubusercontent.com/FFatTiger/lore/main/scripts/install.zh.sh | bash

# Pre-release channel
curl -fsSL https://raw.githubusercontent.com/FFatTiger/lore/main/scripts/install.sh | bash -s -- --pre

# Dev channel
curl -fsSL https://raw.githubusercontent.com/FFatTiger/lore/main/scripts/install.sh | bash -s -- --dev

# External server (skip local Docker)
curl -fsSL https://raw.githubusercontent.com/FFatTiger/lore/main/scripts/install.sh | bash -s -- --base-url http://192.168.1.100:18901 --api-token my-token

# Specific channels only
curl -fsSL https://raw.githubusercontent.com/FFatTiger/lore/main/scripts/install.sh | bash -s -- --channels claudecode,codex

Full options:

| Flag | Description |
| --- | --- |
| --base-url URL | Lore server base URL (auto-starts Docker if omitted) |
| --api-token TOKEN | Lore API token |
| --channels CH,... | Comma-separated: claudecode, codex, pi, openclaw, hermes. Default: all 5 |
| --dev | Use dev channel (dev-latest Docker tag) |
| --pre | Use pre-release channel (pre-latest Docker tag) |
| --skip-docker | Don't start or update Docker containers |
| --force | Force reinstall even if version unchanged |

Re-run the install script anytime to update. If Docker was auto-started on first install, it will be updated automatically.

What each runtime gets

| Runtime | Integration |
| --- | --- |
| Claude Code | Marketplace plugin, MCP tools, SessionStart boot injection, per-prompt recall, guidance via CLAUDE.md @~/.claude/lore-guidance.md |
| Codex | Local marketplace plugin, MCP config, boot/recall hooks |
| Pi | Extension tools, startup boot + recall context |
| OpenClaw | Runtime plugin with boot, recall, and Lore tools |
| Hermes | MemoryProvider plugin, tools, recall support |
| Generic MCP | http://your-host:18901/api/mcp?client_type=mcp |

Claude Code note: Claude Code has a built-in auto-memory feature. The install script does not disable it — if you want Lore as your only memory system, set CLAUDE_CODE_DISABLE_AUTO_MEMORY=1 or "autoMemoryEnabled": false in ~/.claude/settings.json.

Codex note: Restart Codex after installing. Open /hooks and trust Lore hooks if prompted. If /plugins still shows Lore as installable, install it manually there; the script has already configured MCP and user-level hooks.
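For generic MCP clients, the endpoint can usually be registered as a remote HTTP server. A hedged sketch in the JSON shape many clients accept — the exact file location and schema vary by client (Claude Code reads .mcp.json, others differ), so consult your client's documentation:

```json
{
  "mcpServers": {
    "lore": {
      "type": "http",
      "url": "http://your-host:18901/api/mcp?client_type=mcp"
    }
  }
}
```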


6. Daily use

Once connected, the agent workflow is:

  1. load boot memories at session start
  2. receive <recall> candidates before user prompts
  3. open relevant nodes with lore_get_node
  4. create or update durable memories when something should survive the session
  5. use the Web UI to inspect recall quality, memory history, settings, backup, and Dream maintenance

Useful UI pages:

  • /memory — browse and edit the memory graph
  • /recall — inspect retrieval stages and scoring
  • /dream — run structural maintenance
  • /settings — configure runtime behavior

7. Development

cd web
cp .env.local.example .env.local
npm install
npm run dev

Requires Node.js 20+ and PostgreSQL with the vector extension.
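If you point the dev server at an existing PostgreSQL instance instead of the bundled image, enable the extension once in the target database (database name assumed to be lore):

```sql
-- run once against the Lore database, e.g. via psql -d lore
CREATE EXTENSION IF NOT EXISTS vector;
```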
