Problem
memory-lancedb-pro's LanceDB storage grows indefinitely with no built-in way to manage data lifecycle. Currently at 127MB after 20 days of use with 4 agents and autoCapture enabled. While manageable now, there's no mechanism to prevent unbounded growth over months of operation.
Missing capabilities:

- No retention policy — no way to auto-expire old, low-importance memories after N days
- No bulk prune — `memory_forget` works on individual memories (by ID or search query), but there's no way to batch-delete by age, importance threshold, or scope
- No `table.optimize()` exposure — LanceDB's native `table.optimize({ cleanupOlderThan })` is never called, so deleted data versions accumulate on disk (Lance MVCC). After many `memory_forget` calls, disk usage won't decrease without manual optimization.
Proposed solution
1. memory_prune tool

A new agent-facing tool for batch cleanup:

```js
memory_prune({
  olderThanDays: 90,   // Delete memories created before N days ago
  maxImportance: 0.3,  // Only prune low-importance memories
  scope: "global",     // Target scope
  dryRun: true         // Preview before executing (default: true)
})
```
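One way `memory_prune` could map these options onto LanceDB's `table.delete(predicate)` is by building a SQL filter string. A minimal sketch — the column names (`created_at`, `importance`, `scope`) and the timestamp-literal syntax are assumptions about the store schema, not the plugin's actual layout:

```typescript
// Hypothetical option shape, mirroring the proposed memory_prune tool.
interface PruneOptions {
  olderThanDays: number;   // delete memories created before N days ago
  maxImportance?: number;  // only prune memories below this importance
  scope?: string;          // restrict to one scope
}

// Build a SQL predicate suitable for passing to table.delete(predicate).
// Column names are assumed; adjust to the real store schema.
function buildPrunePredicate(opts: PruneOptions, now: Date = new Date()): string {
  const cutoff = new Date(now.getTime() - opts.olderThanDays * 86_400_000);
  const clauses = [`created_at < timestamp '${cutoff.toISOString()}'`];
  if (opts.maxImportance !== undefined) {
    clauses.push(`importance < ${opts.maxImportance}`);
  }
  if (opts.scope) {
    clauses.push(`scope = '${opts.scope}'`);
  }
  return clauses.join(" AND ");
}
```

With `dryRun: true`, the same predicate could feed a count query instead of the delete, so the agent previews how many rows would be removed.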
2. memory_optimize tool (or automatic)

Expose LanceDB's `table.optimize()` to reclaim disk space after deletions:

```js
memory_optimize({
  cleanupOlderThan: "7d"  // Clean up Lance versions older than 7 days
})
```

Alternatively, auto-run `table.optimize()` after a threshold number of deletes.
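If the string form is kept, the tool would need to translate "7d" into the `Date` cutoff that `table.optimize({ cleanupOlderThan })` expects. A possible parser — the `Nd`/`Nh` duration grammar is an assumption, not an existing API:

```typescript
// Parse a duration string like "7d" or "12h" into a cutoff Date in the past.
// Only days and hours are supported in this sketch.
function durationToCutoff(spec: string, now: Date = new Date()): Date {
  const m = /^(\d+)([hd])$/.exec(spec);
  if (!m) throw new Error(`unsupported duration: ${spec}`);
  const ms = Number(m[1]) * (m[2] === "d" ? 86_400_000 : 3_600_000);
  return new Date(now.getTime() - ms);
}

// Hypothetical wiring inside the tool handler:
//   await table.optimize({ cleanupOlderThan: durationToCutoff("7d") });
```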
3. Optional: config-based retention policy

```json
{
  "retention": {
    "enabled": true,
    "maxAgeDays": 180,
    "minImportance": 0.3,
    "optimizeAfterPrune": true
  }
}
```

Memories below `minImportance` and older than `maxAgeDays` would be automatically pruned on a configurable schedule (e.g. weekly).
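The eligibility check a scheduled job would apply is simple. A sketch, assuming each memory row carries a creation timestamp and an importance score (both field names are hypothetical):

```typescript
interface RetentionPolicy {
  maxAgeDays: number;     // prune only memories older than this
  minImportance: number;  // prune only memories below this importance
}

interface MemoryRow {
  createdAt: Date;
  importance: number;
}

// A memory expires only when it is BOTH old enough AND unimportant enough,
// so high-importance memories survive regardless of age.
function isExpired(row: MemoryRow, policy: RetentionPolicy, now: Date = new Date()): boolean {
  const ageDays = (now.getTime() - row.createdAt.getTime()) / 86_400_000;
  return ageDays > policy.maxAgeDays && row.importance < policy.minImportance;
}
```

In practice the same condition would be expressed as a single SQL predicate for a batch delete rather than evaluated row by row.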
Context
- The Weibull decay model already tracks memory aging via `access_count` and `last_accessed_at`, so the infrastructure for importance-based expiry partially exists
- The LanceDB JS SDK exposes `table.optimize({ cleanupOlderThan: Date })`, which handles compaction + cleanup in one call
- The plugin already imports `@lancedb/lancedb` and has full table access in `store.ts`
Environment
- memory-lancedb-pro v1.1.0-beta.10
- OpenClaw 2026.3.23-2
- LanceDB: `@lancedb/lancedb` (bundled)
- Jina embeddings + reranker
- 4 agents, autoCapture + smartExtraction enabled
- macOS arm64