Closing the Learning Loop — A Digest Skill for Periodic Signal Review #946
Replies: 3 comments 3 replies
@jlacour-git Excellent, I'm working on this too. Did you see my questions over here?
@jlacour-git So I see you link to a gist, but it has a single file and doesn't look like a complete Skill. Looking at all of your gists, I see gist-digest-learnings-skill.md, but I don't see the files you laid out at the end of your link (https://gist.github.com/jlacour-git/bb9e8b6e88ce7e6afa20fd4251beca37). I want to try yours out, but I can't identify where to get the actual Skill files you've described. Am I blind or missing something obvious?
Hey @Drizzt321! You're not blind. The gist only had the SKILL.md — the workflow file was missing. Just added it, so both files are now in the gist: https://gist.github.com/jlacour-git/bb9e8b6e88ce7e6afa20fd4251beca37
Quick orientation: the key design choice is that nothing gets applied without explicit approval. The skill proposes, you decide.
@virtualian — saw your cross-reference from #908. Responding there!
Following up on our Memory System Audit (#884) — we shared what we found when we audited the learning/memory capture pipeline (28 writers, 2 readers, 9 gaps). One piece we've since built and battle-tested is a DigestLearnings skill that periodically reviews all accumulated learning signals and extracts actionable improvement proposals.
The Problem It Solves
PAI captures learning signals — failure analyses, low-rated interactions, algorithm reflections — into
MEMORY/LEARNING/. But without periodic review, those signals just accumulate. The AI doesn't spontaneously go back and ask "what patterns do these failures reveal?" You need a structured process.
How It Works
Three phases: Digest → Classify → Track.
Digest — Watermark-based incremental scan. Reads a JSONL log for the last processed timestamp, only reviews new files. Deduplicates (the capture system over-records — same incident often appears as FAILURE + ALGORITHM + SYSTEM signal). Maps each unique incident to existing rules or identifies genuine gaps.
Classify — Each proposal gets tagged: USER-SAFE (apply directly), SYSTEM-PATCH (needs LOCAL_PATCHES.md + upstream issue), or UPSTREAM-ONLY (file issue, no local change). Decisions collected via structured questions, not a flat report.
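The three-way classification could be modeled like this. A hypothetical sketch under assumed names (`Proposal`, `actions_for`); only the three tag values and their meanings come from the skill as described:

```python
from dataclasses import dataclass
from enum import Enum

class Classification(Enum):
    USER_SAFE = "USER-SAFE"          # apply directly to local rules
    SYSTEM_PATCH = "SYSTEM-PATCH"    # needs LOCAL_PATCHES.md entry + upstream issue
    UPSTREAM_ONLY = "UPSTREAM-ONLY"  # file upstream issue, no local change

@dataclass
class Proposal:
    incident: str
    summary: str
    classification: Classification

def actions_for(proposal: Proposal) -> list[str]:
    """Map a classified proposal to the steps the user is asked to approve."""
    if proposal.classification is Classification.USER_SAFE:
        return ["apply-local-rule"]
    if proposal.classification is Classification.SYSTEM_PATCH:
        return ["append-LOCAL_PATCHES.md", "file-upstream-issue"]
    return ["file-upstream-issue"]
```

In the skill these decisions surface as structured questions per proposal rather than a flat report, so the user approves or rejects each action individually.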
Track — Approved changes applied, watermark updated. Full audit trail in DIGEST-LOG.jsonl.
What We Learned After 7 Digests
The first few digests found genuine rule gaps — things like "check before creating files" and "preserve facts during tone edits." But after ~4 runs, the rule set stabilized. Since then, every digest has found 0 new rule proposals. The signals are almost entirely adherence failures of existing rules, not missing rules. This is actually the expected mature state.
The digest skill is now more of a health check than a gap-finder — it confirms the rule set is complete and quantifies the adherence problem.
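For concreteness, an audit entry in DIGEST-LOG.jsonl might look something like this. The field names are illustrative guesses, not the schema from the gist:

```
{"run": 7, "watermark": "2025-01-12T09:29:41Z", "files_scanned": 14, "unique_incidents": 6, "new_rule_proposals": 0, "adherence_failures": 6}
```

A run with zero new rule proposals but nonzero adherence failures is exactly the "mature state" described above.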
Full Skill Code
The complete workflow (scan logic, classification scheme, watermark management, and 7-digest results table) is available as a gist:
👉 Gist: DigestLearnings Skill
Connection to Prior Work
I'd be curious whether others have built similar periodic review mechanisms, or have found different approaches to closing the learning loop.