- Feat: add integrated self-improvement governance flow (`agent:bootstrap`, `command:new/reset`, governance tools, and `.learnings` file bootstrap).
- Feat: add `memoryReflection` session strategy with inheritance/derived injection, reflection persistence, and dedicated reflection-agent support.
- Fix: keep session-strategy compatibility by mapping legacy `sessionMemory.enabled` to `systemSessionMemory`/`none` and trimming reflection input toward the recent conversation tail.
- Fix: retry early transient upstream reflection failures once and broaden session recovery search paths to real OpenClaw agent session directories.
- Docs: update README / README_CN for session strategy, self-improvement, memoryReflection, mdMirror, and reflection fallback behavior.
- Tests: add targeted coverage for reflection retry classification and session recovery path resolution.
PRs: #43, #2
- Fix: strip OpenClaw `Conversation info`/`Sender` metadata noise before auto-capture matching and adaptive retrieval normalization, reducing false captures and noisy retrieval triggers.
- Fix: parse `autoRecallMinRepeated` from plugin config so repeated-memory suppression works when configured.
PR: #50
- Fix: `memory-pro import` now preserves provided IDs and is idempotent (skips if ID already exists).
Access Reinforcement for Time Decay
- Feat: Access reinforcement — frequently manually recalled memories decay more slowly (spaced-repetition style)
- New: `AccessTracker` with debounced metadata write-back (records `accessCount` / `lastAccessedAt`)
- New: Config options under `retrieval`: `reinforcementFactor` (default: 0.5) and `maxHalfLifeMultiplier` (default: 3)
- New: `MemoryStore.getById()` pure-read helper for efficient metadata lookup
PR: #37
Breaking changes: None. Backward compatible (set `reinforcementFactor: 0` to disable).
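The access-reinforcement idea above can be sketched as a half-life adjustment. This is a hypothetical formula, not the plugin's internal one; only the config names `reinforcementFactor` and `maxHalfLifeMultiplier` (and their defaults) come from the release notes.

```typescript
// Hypothetical sketch of spaced-repetition-style decay reinforcement:
// each recorded access stretches the memory's decay half-life
// logarithmically, capped so hot memories never become immortal.
function effectiveHalfLife(
  baseHalfLifeDays: number,
  accessCount: number,
  reinforcementFactor = 0.5,   // default from the notes; 0 disables
  maxHalfLifeMultiplier = 3,   // default from the notes
): number {
  const multiplier = Math.min(
    1 + reinforcementFactor * Math.log2(1 + accessCount),
    maxHalfLifeMultiplier,
  );
  return baseHalfLifeDays * multiplier;
}
```

With the defaults, one access extends a 30-day half-life to 45 days, and heavy access caps out at 90 days; setting `reinforcementFactor: 0` leaves the base half-life untouched, which is why the change is backward compatible.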
Storage Path Validation & Better Error Messages
- Fix: Validate `dbPath` at startup: resolve symlinks, auto-create missing directories, check write permissions (#26, #27)
- Fix: Write/connection failures now include `errno`, resolved path, and actionable fix suggestions instead of generic errors (#28)
- New: Exported `validateStoragePath()` utility for external tooling and diagnostics
Breaking changes: None. Backward compatible.
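A minimal sketch of the checks described above, using Node's standard `fs` APIs. The real exported `validateStoragePath()` may have a different signature and error format; the `checkStoragePath` name and the `chmod` suggestion are illustrative.

```typescript
import { accessSync, constants, mkdirSync, realpathSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Hypothetical equivalent of the startup dbPath validation:
// auto-create missing directories, resolve symlinks, verify write access,
// and fail with errno + resolved path + an actionable suggestion.
function checkStoragePath(dbPath: string): string {
  mkdirSync(dbPath, { recursive: true }); // auto-create missing directories
  const resolved = realpathSync(dbPath);  // resolve symlinks
  try {
    accessSync(resolved, constants.W_OK); // check write permission
  } catch (err) {
    const errno = (err as { errno?: number }).errno;
    throw new Error(
      `Storage path not writable: ${resolved} (errno: ${errno}). Try: chmod u+w ${resolved}`,
    );
  }
  return resolved;
}
```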
Long Context Chunking
- Feat: Added automatic chunking for documents exceeding embedding context limits
- Feat: Smart semantic-aware chunking at sentence boundaries with configurable overlap
- Feat: Chunking adapts to different embedding model context limits (Jina, OpenAI, Gemini, etc.)
- Feat: Parallel chunk embedding with averaged result for better semantic preservation
- Fix: Handles "Input length exceeds context length" errors gracefully
- Docs: Added comprehensive documentation in docs/long-context-chunking.md
Breaking changes: None. Backward compatible with existing configurations.
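Sentence-boundary chunking with overlap can be sketched as below. This is an assumption-laden toy: the naive sentence splitter, the `chunkBySentence` name, and character-based sizing are illustrative, and the real implementation also adapts the size limit to each embedding model's context window.

```typescript
// Hypothetical sketch of semantic-aware chunking: split on sentence
// boundaries, pack sentences up to maxChars, and carry a configurable
// number of trailing sentences into the next chunk as overlap.
function chunkBySentence(text: string, maxChars: number, overlapSentences = 1): string[] {
  // Naive sentence split on ., !, ? followed by whitespace.
  const sentences = text.split(/(?<=[.!?])\s+/).filter(Boolean);
  const chunks: string[] = [];
  let current: string[] = [];
  let size = 0;
  for (const s of sentences) {
    if (size + s.length > maxChars && current.length > 0) {
      chunks.push(current.join(" "));
      // Overlap: keep the last N sentences as the start of the next chunk.
      current = current.slice(-overlapSentences);
      size = current.reduce((n, x) => n + x.length, 0);
    }
    current.push(s);
    size += s.length;
  }
  if (current.length > 0) chunks.push(current.join(" "));
  return chunks;
}
```

Each chunk would then be embedded in parallel and the vectors averaged, per the notes.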
- Fix: reduce auto-capture noise by skipping memory-management prompts (delete/forget/cleanup memory entries).
- Improve: broaden English decision triggers so statements like "we decided / going forward we will use" are captured as decisions.
- UX: show memory IDs in `memory-pro list` and `memory-pro search` output, so users can delete entries without switching to JSON.
- UX: include IDs in agent tool outputs (`memory_recall`, `memory_list`) for easier debugging and `memory_forget` follow-ups.
- Fix: sync `openclaw.plugin.json` version with `package.json`, so the OpenClaw plugin info shows the correct version.
- Fix: adaptive-retrieval now strips OpenClaw-injected timestamp prefixes like `[Mon YYYY-MM-DD HH:MM ...] ...` to avoid skewing length-based heuristics.
- Improve: expanded SKIP/FORCE keyword patterns with Traditional Chinese variants.
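The prefix-stripping fix can be sketched as a single anchored replace. The exact prefix format the plugin matches is not specified beyond the example above, so the regex below is an assumption modeled on `[Mon YYYY-MM-DD HH:MM ...]`.

```typescript
// Hypothetical sketch: remove a leading OpenClaw-injected timestamp
// prefix such as "[Mon 2024-05-06 14:30 UTC] " so length-based
// heuristics see only the user's actual prompt.
function stripTimestampPrefix(prompt: string): string {
  return prompt.replace(
    /^\[[A-Z][a-z]{2} \d{4}-\d{2}-\d{2} \d{2}:\d{2}[^\]]*\]\s*/,
    "",
  );
}
```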
- Feat: expand memory capture triggers to support Traditional Chinese (繁體中文) in addition to Simplified Chinese, and improve category detection keywords.
- Docs: add troubleshooting note for LanceDB/Arrow returning `BigInt` numeric columns, and confirm the plugin coerces numeric fields via `Number(...)` for compatibility.
- Fix: coerce LanceDB/Arrow numeric columns that may arrive as `BigInt` (`timestamp`, `importance`, `_distance`, `_score`) into `Number(...)` to avoid runtime errors like "Cannot mix BigInt and other types" on LanceDB 0.26+.
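The coercion above amounts to a per-row pass over the known numeric fields. The field list comes from the release notes; the `coerceRow` helper name and the row shape are illustrative.

```typescript
// LanceDB/Arrow rows can surface numeric columns as BigInt on LanceDB
// 0.26+, and mixing BigInt with regular numbers throws at runtime.
// Sketch: coerce the known numeric fields via Number(...), leaving
// everything else untouched.
function coerceRow(row: Record<string, unknown>): Record<string, unknown> {
  const numericFields = ["timestamp", "importance", "_distance", "_score"];
  const out = { ...row };
  for (const field of numericFields) {
    if (typeof out[field] === "bigint") {
      out[field] = Number(out[field]);
    }
  }
  return out;
}
```

Note that `Number(...)` is lossless here only while values stay within `Number.MAX_SAFE_INTEGER`, which holds for millisecond timestamps and scores.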
- Fix: Force `encoding_format: "float"` for OpenAI-compatible embedding requests to avoid base64/float ambiguity and dimension mismatch issues with some providers/gateways.
- Feat: Add Voyage AI (`voyage`) as a supported rerank provider, using `top_k` and an `Authorization: Bearer` header.
- Refactor: Harden rerank response parser to accept both `results[]`/`data[]` payload shapes and `relevance_score`/`score` field names across all providers.
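The parser hardening can be sketched as a shape-tolerant normalizer. The `RerankHit` type and `parseRerank` name are illustrative; the accepted field names (`results`/`data`, `relevance_score`/`score`) come from the release notes.

```typescript
// Sketch of a tolerant rerank response parser: different providers
// return results[] vs data[] and relevance_score vs score, so accept
// all four combinations and normalize to one shape.
interface RerankHit {
  index: number;
  score: number;
}

function parseRerank(body: any): RerankHit[] {
  const items: any[] = body?.results ?? body?.data ?? [];
  return items.map((item, i) => ({
    // Fall back to the array position when the provider omits index.
    index: typeof item.index === "number" ? item.index : i,
    score: item.relevance_score ?? item.score ?? 0,
  }));
}
```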
- Fix: ghost memories stuck in autoRecall after deletion (#15). BM25-only results from a stale FTS index are now validated via `store.hasId()` before inclusion in fused results. Removed the BM25-only floor score of 0.5 that allowed deleted entries to survive `hardMinScore` filtering.
- Fix: HEARTBEAT pattern now matches anywhere in the prompt (not just at the start), preventing autoRecall from triggering on prefixed HEARTBEAT messages.
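The HEARTBEAT fix is essentially dropping a start anchor. The plugin's exact patterns are not shown in the notes, so these two regexes are illustrative before/after versions.

```typescript
// Illustrative before/after of the HEARTBEAT skip pattern:
// the old start-anchored form missed heartbeats behind injected prefixes.
const HEARTBEAT_OLD = /^HEARTBEAT\b/; // before: only at start of prompt
const HEARTBEAT_NEW = /\bHEARTBEAT\b/; // after: anywhere in the prompt
```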
- Add: `autoRecallMinLength` config option to set a custom minimum prompt length for autoRecall (default: 15 chars English, 6 CJK). Prompts shorter than this threshold are skipped.
- Add: `ping`, `pong`, `test`, `debug` added to skip patterns in adaptive retrieval.
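The dual threshold can be sketched as below. The defaults (15 English, 6 CJK) come from the notes; the CJK detection heuristic and the `passesMinLength` name are assumptions.

```typescript
// Sketch of the autoRecall minimum-length gate: CJK text packs more
// meaning per character, so it gets a lower threshold than English.
function passesMinLength(prompt: string, minEnglish = 15, minCjk = 6): boolean {
  const trimmed = prompt.trim();
  // Rough CJK detection: Han, Hiragana/Katakana, or Hangul codepoints.
  const hasCjk = /[\u4e00-\u9fff\u3040-\u30ff\uac00-\ud7af]/.test(trimmed);
  return trimmed.length >= (hasCjk ? minCjk : minEnglish);
}
```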
- Change: set `autoRecall` default to `false` to avoid the model echoing injected `<relevant-memories>` blocks.
- Fix: avoid blocking OpenClaw gateway startup on external network calls by running startup self-checks in the background with timeouts.
- Change: update default `retrieval.rerankModel` to `jina-reranker-v3` (still fully configurable).
- Add: JSONL distill extractor supports an optional agent allowlist via env var `OPENCLAW_JSONL_DISTILL_ALLOWED_AGENT_IDS` (default off / compatible).
- Fix: resolve `agentId` from hook context (`ctx?.agentId`) for `before_agent_start` and `agent_end`, restoring per-agent scope isolation in multi-agent setups.
- Fix: auto-recall injection now correctly skips cron prompts wrapped as `[cron:...] run ...` (reduces token usage for cron jobs).
- Fix: JSONL distill extractor filters more transcript/system noise (BOOT.md, HEARTBEAT, CLAUDE_CODE_DONE, queued blocks) to avoid polluting distillation batches.
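The cron-skip check can be sketched as one anchored regex matching the `[cron:...] run ...` wrapper. The plugin's actual pattern is not shown in the notes, so this is an assumption modeled on that example.

```typescript
// Illustrative detector for cron-wrapped prompts, which auto-recall
// injection skips to save tokens on scheduled jobs.
const CRON_PROMPT = /^\[cron:[^\]]*\]\s*run\b/;
```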
- Add: optional JSONL session distillation workflow (incremental cursor + batch format) via `scripts/jsonl_distill.py`.
- Docs: document the JSONL distiller setup in README (EN) and README_CN (ZH).
- Fix: `embedding.dimensions` is now parsed robustly (number / numeric string / env-var string), so it properly overrides hardcoded model dims (fixes Ollama `nomic-embed-text` dimension mismatch).
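The robust parsing described above can be sketched as a small normalizer. The `parseDimensions` name and the fallback behavior are assumptions; the accepted input shapes (number, numeric string, env-var string) come from the notes.

```typescript
// Sketch: accept embedding.dimensions as a number, a numeric string,
// or an env-var-sourced string, falling back to the model's default
// when the value is absent or unparseable.
function parseDimensions(value: unknown, modelDefault: number): number {
  if (typeof value === "number" && Number.isInteger(value) && value > 0) {
    return value;
  }
  if (typeof value === "string") {
    const n = Number.parseInt(value.trim(), 10);
    if (Number.isInteger(n) && n > 0) return n;
  }
  return modelDefault;
}
```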
- Fix: `memory-pro reembed` no longer crashes (missing `clampInt` helper).
- Fix: pass through `embedding.dimensions` to the OpenAI-compatible `/embeddings` request payload when explicitly configured.
- Chore: unify plugin version fields (`openclaw.plugin.json` now matches `package.json`).
- Fix: CLI command namespace updated to `memory-pro`.
- Initial npm release.