Replies: 9 comments 4 replies
@dxcore35 I was just asking PAI about this very thing today. I was wondering if it had a graph/RAG system. If I understand correctly, this would do exactly that and more, right? It would also help with token usage over time, to my understanding?
I like that in this model the user data is stored outside of the PAI codebase and outside of one's .claude directory.
This is very interesting. I need to read it a couple more times to wrap my head around it. I'm currently using a custom LangChain Postgres implementation that I built mostly so I could offload the work to a local Ollama server, but I'm not convinced it's good for the long term.
Thanks for sharing, this is a fantastic idea. I've just added and released this in my Knowledge Pack (with acknowledgement): https://github.com/madeinoz67/madeinoz-knowledge-system
Interesting idea! I'd suggest the following adjustment: instead of fully relying on automated scripts running on cron, IMO there should be automated checks running on hooks that allow the daily maintenance script to be called through user interactions. It's common to not necessarily have (or want) something running 24/7, but that can easily be patched by keeping track of when the maintenance script last ran, and triggering it when needed (in a fully async/background manner).
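The hook-based trigger described above could be sketched as follows. This is a minimal illustration, not part of the proposal: the state-file path, the 24-hour interval, and the `daily_maintenance.sh` script name are all assumptions.

```python
import os
import subprocess
import time

STATE_FILE = os.path.expanduser("~/.pai/last_maintenance")  # hypothetical path
MAX_AGE_SECONDS = 24 * 60 * 60  # at most one maintenance run per day (assumed)

def is_stale(last_run: float, now: float, max_age: float = MAX_AGE_SECONDS) -> bool:
    """True if the maintenance script is due to run again."""
    return now - last_run >= max_age

def maybe_run_maintenance() -> bool:
    """Call this from a user-interaction hook: starts the daily
    maintenance script in the background if it hasn't run recently."""
    try:
        last_run = os.path.getmtime(STATE_FILE)
    except FileNotFoundError:
        last_run = 0.0
    if not is_stale(last_run, time.time()):
        return False  # ran recently, nothing to do
    # Touch the state file first so concurrent hooks don't double-trigger.
    os.makedirs(os.path.dirname(STATE_FILE), exist_ok=True)
    with open(STATE_FILE, "w"):
        pass
    # Fire-and-forget: the hook returns immediately, work happens async.
    subprocess.Popen(["bash", "daily_maintenance.sh"],  # hypothetical script
                     stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return True
```

The key property is that the hook itself never blocks: it only checks a timestamp and, at most once per day, spawns a detached background process.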
This is all great discussion here. I am currently working on this.
No problem mentioning OpenClaw, it's a great project.
I just want our own version that we trust and can control and understand.
I'm currently working on PAIWorker, the chat-enabled PAI Worker that will be able to take work from our work GitHub, start working on it, proactively do tasks, and, yes, do all this 24/7.
OpenClaw made it obvious that this is a requirement for any platform going forward.
PAI thus far has been focused around us, and interactive.
But we need the expanded scope too, of PAI Workers, that can work 24/7 and be proactive.
This was all anticipated here!
https://danielmiessler.com/blog/personal-ai-maturity-model
…On Tue, Feb 10, 2026 at 11:10 AM, virtualian wrote:
@jgmontoya, sorry to disagree, but I believe PAIs (will) need to be "always-on" to be truly agentic and self-improving. While incremental steps are useful, the 24/7 end state must be planned.
@danielmiessler's deliberate (or accidental?) decision to integrate PAI into CLI-based agents is brilliant, but this architecture has drawbacks. I have installed PAI on my MacBook; I also use a Mac mini. I want to use the same PAI on both, but I haven't (yet) worked out how to sync my PAI between the two, or have one I can use on both.
While mentioning OpenClaw might be contentious here, its access via messaging, web UI, and TUI, and thus its required always-on architectural model, solves my access/instance pain points.
I'm not ditching PAI because I like Daniel's ethos, vision, philosophy, and engineering discipline. Therefore, my mind is racing with potential solutions, but I haven't reached any conclusions. Sorry!
@dxcore35 This is great. I see you say "Based on":
I assume you are referring to Rohit Sharma's article "Building Truly Adaptive AI Agents with Long-Term Memory", which describes LangMem. I haven't looked at LangMem, or compared it to your proposal, but did you consider a LangMem-based solution for PAI? It has an MIT license.
@danielmiessler Some "nano-versions" of OpenClaw for reverse-inspiration: https://github.com/gavrielc/nanoclaw
Yes, these were really good. Thank you!
Claude Memory System
📖 For Humans: How This Actually Works
Imagine if you had a super-efficient secretary who followed you around 24/7.
Why is this better than normal AI memory?
Normal AI tries to re-read the entire transcript of your life every time you say "Hello". This is slow, expensive, and confusing.
This system works like your own brain:
🔄 The Full Memory Lifecycle
This isn't just a database; it's a living ecosystem with daily maintenance cycles.
There are two loops keeping the system healthy:
🧠 Core Philosophy
This system solves the three fundamental problems with standard AI memory:
📐 Architecture Overview
The Three-Layer Hierarchy
Layer 1: Resources (Raw Data)
Immutable source of truth.
Layer 2: Items (Atomic Facts)
Individual units of knowledge extracted from resources.
Layer 3: Categories (Evolving Summaries)
Human-readable summaries for fast retrieval.
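The three layers above could be modeled roughly as the sketch below. The field names are illustrative assumptions, not the project's actual schema; the point is the direction of the references: items carry provenance back to immutable resources, and categories aggregate items into evolving summaries.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)          # Layer 1: immutable source of truth
class Resource:
    resource_id: str
    raw_text: str                # never edited after ingestion

@dataclass                       # Layer 2: atomic facts extracted from resources
class Item:
    item_id: str
    resource_id: str             # provenance link back to Layer 1
    fact: str
    importance: int = 3          # 1-5, see classification framework
    stability: int = 3           # 1-5

@dataclass                       # Layer 3: evolving, human-readable summaries
class Category:
    name: str
    summary: str = ""
    item_ids: list[str] = field(default_factory=list)
```

Making Layer 1 frozen enforces "immutable source of truth" at the type level, while Layers 2 and 3 stay mutable so facts can be re-scored and summaries rewritten during maintenance.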
⚖️ Memory Classification Framework
This is the "engine" of forgetting. Every memory is classified on two dimensions:
Classification Scale (1-5)
IMPORTANCE - How important to user's identity/life:
STABILITY - How likely to change:
Decay Behavior
Examples:
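The rendered examples did not survive extraction, so as an illustrative sketch: the two 1-5 ratings might combine into a decayed retention score like the one below. The half-life formula here is purely an assumption, not the system's actual decay rule.

```python
def retention_score(importance: int, stability: int, age_days: float) -> float:
    """Combine the two 1-5 ratings into a decayed relevance score.

    High-importance, high-stability memories (e.g. the user's name)
    barely decay; low-importance, low-stability ones (e.g. what the
    user ate on Tuesday) fade quickly. Half-life weighting is assumed.
    """
    half_life_days = importance * stability * 7  # 7..175 days (assumed)
    decay = 0.5 ** (age_days / half_life_days)
    return (importance / 5.0) * decay

# After a month, a stable identity fact still outranks a transient detail:
assert retention_score(5, 5, 30) > retention_score(1, 1, 30)
```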
🔍 Smart Retrieval & Token Budget
We don't load everything. We use a tiered retrieval strategy to save context window.
Token Savings:
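The tiered strategy can be sketched as filling the context from the cheapest layer outward until the budget runs out. The three-tier ordering (category summaries, then items, then raw resources) follows the hierarchy above; the 4-characters-per-token estimate and the budget mechanics are assumptions.

```python
def estimate_tokens(text: str) -> int:
    """Rough heuristic: ~4 characters per token (assumed)."""
    return max(1, len(text) // 4)

def retrieve_within_budget(tiers: list[list[str]], budget: int) -> list[str]:
    """Fill the context from the cheapest tier to the most expensive,
    stopping when the token budget is exhausted instead of loading
    everything."""
    selected: list[str] = []
    remaining = budget
    for tier in tiers:  # e.g. [summaries, items, raw resources]
        for chunk in tier:
            cost = estimate_tokens(chunk)
            if cost > remaining:
                return selected  # budget exhausted: skip deeper tiers
            selected.append(chunk)
            remaining -= cost
    return selected
```

With a tight budget only the summary layer loads; a larger budget progressively pulls in items and, last, raw resources.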
📊 Performance & Benchmarks
📁 File Structure
Usage
Python API
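The API snippet here did not survive page extraction. As a self-contained toy stand-in, the surface probably resembles a memorize/retrieve pair like the one below; the function names echo the components listed later (memorize.py, retrieve.py), but the signatures are guesses, not the project's real API.

```python
# Toy in-memory stand-in for the real API; names and signatures are
# assumptions based on the component list (memorize.py, retrieve.py).
_items: list[str] = []

def memorize(text: str) -> None:
    """Store a fact (the real system extracts items, classifies,
    and updates category summaries here)."""
    _items.append(text)

def retrieve(query: str, limit: int = 5) -> list[str]:
    """Naive keyword match (the real system is tiered and
    budget-aware, per the retrieval section above)."""
    words = set(query.lower().split())
    return [t for t in _items if words & set(t.lower().split())][:limit]

memorize("user prefers dark roast coffee")
context = retrieve("coffee preferences")
```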
Storage Locations
- brain/system/prompt_registry.jsonl
- brain/memory_guidelines.md
- trash/archive/

⚡ Performance Optimization
The system is optimized for minimal LLM round-trips while maintaining quality.
Bottleneck Analysis
Problem: the original architecture made ~25 LLM calls per text ingestion.
Solution: consolidate them into a single-pass extraction.
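Consolidating ~25 calls into one pass means asking the model for a single structured response that extracts and classifies every fact at once. A sketch under stated assumptions: the prompt wording and JSON contract below are invented for illustration, and `llm_call` stands in for whatever provider client the system uses.

```python
import json

# Hypothetical single-pass prompt; the real prompt lives elsewhere.
EXTRACTION_PROMPT = """Extract atomic facts from the text below.
Return JSON: {"items": [{"fact": str, "importance": 1-5, "stability": 1-5}]}

Text:
{text}"""

def single_pass_extract(text: str, llm_call) -> list[dict]:
    """One LLM round-trip extracts, classifies, and scores every fact,
    replacing the original per-fact call pattern (~25 calls -> 1)."""
    raw = llm_call(EXTRACTION_PROMPT.replace("{text}", text))
    return json.loads(raw)["items"]

# Fake LLM for demonstration; a real call goes to your provider here.
def fake_llm(prompt: str) -> str:
    return json.dumps({"items": [
        {"fact": "user drinks dark roast", "importance": 2, "stability": 3},
    ]})

items = single_pass_extract("I always drink dark roast.", fake_llm)
assert items[0]["stability"] == 3
```

The trade-off is a larger, more carefully specified prompt in exchange for one round-trip; a strict JSON schema keeps the single response parseable.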
Optimizations Applied
Benchmark Results
Context Budget
🛠️ Key Components
- storage.py
- llm_utils.py
- memorize.py
- retrieve.py
- checkpoint.py
- graph_memory.py

🎯 Design Principles
📖 Based On
Architecture inspired by @rohit4verse's article on building agents that never forget.
Context-Engineered Human–AI Collaboration for Long-Horizon Tasks: A Case Study in Governance, Canonical Numerics, and Execution Control.
This version: OSF preprint, DOI https://doi.org/10.17605/OSF.IO/VMK7Y
License
MIT License