Eight documents that turn Claude Cowork into a genuine long-horizon partner — not a capable assistant that forgets everything between sessions.
🌐 Full documentation and assessment tool → livingframework.github.io
Most people use AI like a search engine — ask, receive, discard. The results are impressive for isolated tasks. For sustained collaboration over months, they are quietly catastrophic.
Every new Cowork session, Claude starts with no memory. Without structure, it reconstructs context from fragments, drifts from earlier decisions, and gradually shifts from a genuine collaborator into a capable but uninformed assistant. The collaboration feels fine. It is silently breaking.
These templates are the structural fix. Fill them in once, reference them forever. They give Claude what it cannot give itself: continuity, honesty, and a clear operating agreement — session after session.
Eight documents. Each solves one problem.
| Document | What It Solves |
|---|---|
| RUNNING-DOCUMENT.md | Memory — Claude's continuity across sessions |
| PARTNERSHIP-AGREEMENT.md | Relationship — what kind of collaboration this is |
| TRUTH-PROTOCOL.md | Honesty — preventing Claude from telling you what you want to hear |
| SESSION-START-PROTOCOL.md | Consistency — how to start every session properly |
| FAILURE-RECOVERY.md | Repair — what to do when things go wrong |
| CANONICAL-NUMBERS.md | Accuracy — single source of truth for all numbers |
| FOLDER-STRUCTURE.md | Order — how to organise your Cowork folder |
Claude reconstructs. This is the fundamental issue.
When you reference a number from three sessions ago, Claude doesn't look it up — it estimates. When you ask about a decision made last month, Claude doesn't remember — it infers from context. When you implicitly push back on its assessment, Claude tends to agree — because agreement is what it was trained to produce.
Over time: drift. Numbers shift slightly. Decisions get re-made. Truth softens. The collaboration that felt productive begins producing work that contradicts itself.
These templates prevent this by making everything explicit. One document per domain. One canonical version per decision. One agreed rule for handling disagreement.
Five minutes now. Saves hours later.
Step 1 — Copy the templates
Copy these two files to your Cowork folder and fill them in:
- RUNNING-DOCUMENT.md — your collaboration's memory
- PARTNERSHIP-AGREEMENT.md — your operating agreement with Claude
Step 2 — Start every session with one line
```text
Read RUNNING-DOCUMENT.md before we begin. Confirm you've loaded it
and tell me what's most important to hold from it.
```
Wait for confirmation. Then work.
Step 3 — Add the other documents as you need them
- Noticing Claude agrees too readily? Add TRUTH-PROTOCOL.md.
- Numbers getting inconsistent? Add CANONICAL-NUMBERS.md.
- Recovering from a mistake? Use FAILURE-RECOVERY.md.
You don't need all eight documents on day one. You need the Running Document and the Partnership Agreement. Add the rest when the problems they solve become real for you.
Claude has no persistent memory. The Running Document is the fix.
It holds: who you are, your active projects, rules Claude must follow, decisions you've made, corrections to log, open questions, and what happened last session. Claude reads it at the start of every session and picks up where you left off.
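A minimal skeleton of what those sections might look like. The section names below are illustrative, not the repo template's actual headings:

```markdown
# RUNNING-DOCUMENT.md

## Who I Am
<!-- Role, goals, context Claude needs every session -->

## Active Projects
- Project A: current status, next milestone

## Rules Claude Must Follow
- Read this file before doing any work

## Decisions
- 2025-01-15: chose option X over option Y (illustrative entry)

## Corrections
- Log anything Claude got wrong so it isn't repeated

## Open Questions
- Anything unresolved that future sessions should pick up

## Last Session
- What we did, where we stopped, what comes next
```

Dated entries matter: they let Claude distinguish the current decision from superseded ones.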
Without it: Claude guesses. Every session starts cold. With it: Claude is a genuine partner with real context.
Most people don't establish what kind of relationship they want with Claude. The result is an undefined dynamic that drifts toward pleasant but unchallenging.
The Partnership Agreement makes it explicit: who holds final authority, what Claude is responsible for, what you are responsible for, and the one non-negotiable principle — truth over ego, always.
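A sketch of how those four elements could be laid out; the actual template's structure may differ:

```markdown
# PARTNERSHIP-AGREEMENT.md

## Final Authority
<!-- Who decides when we disagree -->
The human decides. Claude advises, challenges, and executes.

## Claude's Responsibilities
- Read RUNNING-DOCUMENT.md before working
- Flag disagreement directly rather than softening it

## My Responsibilities
- Keep the governance documents current
- Log decisions and corrections after each session

## Non-Negotiable
Truth over ego, always.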
This is the most important document most people don't have.
AI systems are structurally incentivised to agree. The training process rewards responses that feel good. Responses that feel good tend to validate, soften criticism, and shift position when you push back — regardless of whether you're right.
The Truth Protocol establishes explicit rules that override this default. It names the warning signs, defines the reset prompt, and distinguishes earned validation from reflexive validation.
The first five minutes of a session determine the quality of everything that follows. This document gives you the exact prompts for every scenario: normal sessions, important work, starting new projects, returning after a long break.
The minimum viable start takes 60 seconds. It prevents the most common failure mode in long-horizon collaboration.
Collaboration doesn't fail dramatically. It fails through small, invisible drift that compounds until something snaps.
This document defines the repair sequence (STOP → DIAGNOSE → ROLLBACK → NOTE), a taxonomy of seven failure types based on empirical research into human-AI collaboration, and specific repair procedures for each type — including sycophancy drift, numerical errors, and cross-domain interference.
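In practice, each repair ends with a written record. An illustrative note (the field names are assumptions, not the template's exact format):

```markdown
## Repair Note: 2025-01-20
- STOP: paused work when a revenue figure contradicted CANONICAL-NUMBERS.md
- DIAGNOSE: numerical drift (Claude reconstructed the figure from memory)
- ROLLBACK: restored the last output that matched the canonical file
- NOTE: logged the correction in RUNNING-DOCUMENT.md so it persists
```

The NOTE step is what prevents the same failure from recurring in the next session.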
A system that breaks visibly and repairs cleanly is more trustworthy than one that pretends to be perfect.
Numbers are the first thing Claude reconstructs incorrectly. A figure mentioned in session one gets subtly wrong by session five.
This document is the single source of numeric truth: financial numbers, project metrics, dates, conversion rates, calculated values. Every number lives here. Claude references this file — it never reconstructs from memory.
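One possible layout, with invented example values for illustration only. The "As of" and "Source" columns are assumptions, but they make stale numbers easy to spot:

```markdown
| Number | Value | As of | Source |
|---|---|---|---|
| Monthly revenue | $12,400 | 2025-01-01 | Accounting export |
| Trial conversion rate | 3.2% | 2024-Q4 | Analytics dashboard |
| Launch date | 2025-03-01 | fixed | Project plan |
```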
Structure prevents version confusion. Version confusion causes file divergence. File divergence is one of the most common and most repairable failure modes in long-horizon collaboration — but only if you catch it.
This document gives you a folder structure that makes canonical versions obvious, separates inputs from outputs, and ensures archived work stays accessible for rollback.
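One layout consistent with those principles. The folder names here are illustrative; the template may prescribe different ones:

```text
cowork/
├── governance/   # the core documents, one canonical copy each
├── inputs/       # source material you give Claude
├── outputs/      # current working versions
└── archive/      # dated snapshots of superseded work, kept for rollback
```

The key property: at any moment, exactly one folder holds the canonical version of anything.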
These templates are not guesswork. They are derived from the LC-OS (Lean Collaboration Operating System) — a body of empirical research into long-horizon human-AI collaboration, including published papers on epistemic autonomy, drift taxonomy, and governance frameworks for human-AI partnerships.
The seven failure types in FAILURE-RECOVERY.md are documented patterns from real collaborations, categorised and named in the LC-OS research so they can be caught early and repaired before they compound.
The Truth Protocol rules came from studying what happens when sessions run without them: gradual erosion of honest feedback, positions that shift with pushback rather than evidence, praise that is reflexive rather than earned.
This repo is the practical Cowork implementation of that research. If you want to understand the theory and evidence behind these templates, start at LC-OS.
| Resource | What it contains |
|---|---|
| 🌐 Website | Full documentation, AI readiness assessment, quick-start guide |
| 📚 LC-OS Research | Four published papers, Mahdi Ledger, empirical foundations |
| 🛠️ LC-OS Project | Practitioner toolkit — templates, worked examples, field manual |
| ⚙️ Cowork Templates | Governance templates optimised for Claude Cowork |
Each resource is standalone. Together they form a complete governance stack — from theory to daily practice.
These templates are live — they improve from real use.
If you've been using Claude Cowork and discovered a failure mode not covered, a template structure that works better, or a prompt that produces consistently better results: open an issue or submit a pull request.
The goal is templates that work for non-technical users doing serious, sustained work with Claude Cowork — with no prior knowledge of AI systems required.
These templates are built for Claude Cowork (Claude desktop app). The core documents — Running Document, Partnership Agreement, Truth Protocol — work in any Claude interface. Cowork-specific features (scheduled tasks, companion update files) are labelled clearly.
Built from research into long-horizon human-AI collaboration. Designed for people doing serious, sustained work with Claude Cowork.