# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Build Commands

```bash
npm run build      # Clean and compile TypeScript
npm run typecheck  # Type check without emitting
npm run dev        # Run in OpenCode plugin dev mode
npm run test       # Run tests (node --import tsx --test tests/*.test.ts)
```

## Architecture

This is an OpenCode plugin that optimizes token usage by pruning obsolete tool outputs from the conversation context. The plugin is non-destructive: pruning state is kept in memory only, and the original session data remains intact.

### Core Components

**index.ts** - Plugin entry point. Registers:
- A global fetch wrapper that intercepts LLM requests and replaces pruned tool outputs with placeholder text
- An event handler for `session.status` idle events that triggers automatic pruning
- A `chat.params` hook to cache session model info
- A `context_pruning` tool for AI-initiated pruning
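
The fetch-wrapper idea can be sketched as follows. This is a minimal illustration, not the plugin's actual implementation: the `Message` shape, `PLACEHOLDER` text, and `replacePrunedOutputs` name are all assumptions.

```typescript
// Hypothetical sketch: before a request reaches the LLM, any tool output
// whose call ID is in the pruned set is swapped for a short placeholder.
type Message = { role: string; tool_call_id?: string; content: string };

const PLACEHOLDER = "[output pruned to save tokens]"; // illustrative text

function replacePrunedOutputs(messages: Message[], prunedIds: Set<string>): Message[] {
  return messages.map((m) =>
    // IDs are compared lowercase, matching the normalization noted below
    m.role === "tool" && m.tool_call_id && prunedIds.has(m.tool_call_id.toLowerCase())
      ? { ...m, content: PLACEHOLDER }
      : m,
  );
}
```

Because the replacement happens at request time, the stored session messages are never modified, which is what makes the pruning non-destructive.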

**lib/janitor.ts** - Orchestrates the two-phase pruning process:
1. Deduplication phase: fast, zero-cost detection of repeated tool calls (keeps the most recent)
2. AI analysis phase: uses an LLM to semantically identify obsolete outputs
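
The two phases can be sketched as one pipeline. This is a simplified model of the flow, not the real code: `ToolCall`, `prune`, and the injected `findObsolete` callback (standing in for the AI-analysis step) are assumptions.

```typescript
// Hypothetical sketch of the two-phase flow in lib/janitor.ts.
type ToolCall = { id: string; tool: string; params: Record<string, unknown> };

function prune(
  calls: ToolCall[],
  findObsolete: (remaining: ToolCall[]) => string[], // stands in for the LLM call
): string[] {
  // Phase 1: zero-cost deduplication; walking in reverse keeps the most
  // recent call for each signature and marks earlier repeats as duplicates.
  const seen = new Set<string>();
  const duplicates = new Set<string>();
  for (const call of [...calls].reverse()) {
    const sig = `${call.tool}:${JSON.stringify(call.params)}`;
    if (seen.has(sig)) duplicates.add(call.id);
    else seen.add(sig);
  }
  // Phase 2: ask the model which of the remaining outputs are obsolete.
  const remaining = calls.filter((c) => !duplicates.has(c.id));
  return [...duplicates, ...findObsolete(remaining)];
}
```

Running the cheap phase first shrinks the input the model has to reason about, so the expensive phase only sees calls that survived deduplication.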

**lib/deduplicator.ts** - Implements duplicate detection by creating normalized signatures from tool name + parameters
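
A normalized signature might look like this sketch; the exact normalization the deduplicator applies is an assumption, but sorting parameter keys is the standard way to make key order irrelevant.

```typescript
// Illustrative signature: lowercasing the tool name and sorting parameter
// keys means { b: 1, a: 2 } and { a: 2, b: 1 } produce the same signature.
function signature(tool: string, params: Record<string, unknown>): string {
  const sorted = Object.fromEntries(
    Object.entries(params).sort(([a], [b]) => a.localeCompare(b)),
  );
  return `${tool.toLowerCase()}:${JSON.stringify(sorted)}`;
}
```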

**lib/model-selector.ts** - Model selection cascade: config model → session model → fallback models (with provider priority order)
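
A minimal sketch of that cascade, assuming a first-available-candidate strategy; the `ModelRef` shape and the `isAvailable` predicate are illustrative, not the plugin's real API.

```typescript
// Hypothetical cascade: the first defined, available candidate wins.
type ModelRef = { providerID: string; modelID: string };

function selectModel(
  configModel: ModelRef | undefined,
  sessionModel: ModelRef | undefined,
  fallbacks: ModelRef[], // ordered by provider priority
  isAvailable: (m: ModelRef) => boolean,
): ModelRef | undefined {
  for (const candidate of [configModel, sessionModel, ...fallbacks]) {
    if (candidate && isAvailable(candidate)) return candidate;
  }
  return undefined;
}
```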

**lib/config.ts** - Config loading with precedence: defaults → global (`~/.config/opencode/dcp.jsonc`) → project (`.opencode/dcp.jsonc`)
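
The precedence chain amounts to a merge where later sources override earlier ones. This shallow-merge sketch is an assumption; the real loader also has to parse JSONC and may merge nested keys differently.

```typescript
// Illustrative precedence merge: project settings win over global
// settings, which win over the built-in defaults.
function mergeConfig<T extends object>(
  defaults: T,
  globalCfg: Partial<T>,
  projectCfg: Partial<T>,
): T {
  return { ...defaults, ...globalCfg, ...projectCfg };
}
```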

**lib/prompt.ts** - Builds the analysis prompt with a minimized message history for LLM evaluation

### Key Concepts

- **Tool call IDs**: Normalized to lowercase for consistent matching
- **Protected tools**: Never pruned (default: task, todowrite, todoread, context_pruning)
- **Batch tool expansion**: When a batch tool is pruned, its child tool calls are also pruned
- **Strategies**: `deduplication` (fast) and `ai-analysis` (thorough), configurable per trigger (`onIdle`, `onTool`)
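
Three of the concepts above (ID lowercasing, protected tools, and batch expansion) interact when building the final prune set. The sketch below shows one plausible combination; the function name and map shapes are assumptions, while the protected-tool defaults come from the list above.

```typescript
// Hypothetical sketch: skip protected tools, then expand batch tools
// so their child calls are pruned too. All IDs are lowercased first.
const PROTECTED = new Set(["task", "todowrite", "todoread", "context_pruning"]);

function expandPruneSet(
  ids: string[],
  toolNames: Map<string, string>,    // call ID -> tool name
  childrenOf: Map<string, string[]>, // batch call ID -> child call IDs
): Set<string> {
  const result = new Set<string>();
  for (const rawId of ids) {
    const id = rawId.toLowerCase();
    const name = toolNames.get(id);
    if (name && PROTECTED.has(name)) continue; // never prune protected tools
    result.add(id);
    for (const child of childrenOf.get(id) ?? []) {
      result.add(child.toLowerCase()); // batch expansion
    }
  }
  return result;
}
```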

### State Management

The plugin maintains in-memory state per session:
- `prunedIdsState`: Map of session ID → array of pruned tool call IDs
- `statsState`: Map of session ID → cumulative pruning statistics
- `toolParametersCache`: Cached tool parameters extracted from LLM request bodies
- `modelCache`: Cached provider/model info from the `chat.params` hook
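
The first two maps might be updated together along these lines. The `PruneStats` fields and the `recordPrune` helper are illustrative guesses at what "cumulative pruning statistics" contains, not the plugin's actual types.

```typescript
// Illustrative per-session state; field names mirror the list above,
// but the exact shapes are assumptions.
type PruneStats = { prunedCount: number; tokensSaved: number };

const prunedIdsState = new Map<string, string[]>();
const statsState = new Map<string, PruneStats>();

function recordPrune(sessionID: string, ids: string[], tokensSaved: number): void {
  // Append the new IDs and accumulate the running statistics.
  prunedIdsState.set(sessionID, [...(prunedIdsState.get(sessionID) ?? []), ...ids]);
  const prev = statsState.get(sessionID) ?? { prunedCount: 0, tokensSaved: 0 };
  statsState.set(sessionID, {
    prunedCount: prev.prunedCount + ids.length,
    tokensSaved: prev.tokensSaved + tokensSaved,
  });
}
```

Keeping this state in memory (rather than writing it back to the session) is what lets the plugin stay non-destructive: restarting OpenCode simply resets the prune state.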