## Context Pruning Tool
When `strategies.onTool` is enabled, DCP exposes a `context_pruning` tool to Opencode that the AI can call to trigger pruning on demand.
When `nudge_freq` is enabled, DCP injects a reminder every `nudge_freq` tool results, prompting the AI to consider pruning when appropriate.
## How It Works
DCP is non-destructive: pruning state is kept in memory only, so your session history is never modified. DCP replaces pruned outputs with a placeholder before sending requests to your LLM.
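One way to picture this non-destructive design is as a request-time view over the stored history. The sketch below is hypothetical (the message shapes and the `apply_pruning` helper are illustrative, not DCP's actual API):

```python
# Hypothetical sketch: pruning builds a request-time view of the history
# and never mutates the stored session messages.
PLACEHOLDER = "[tool output pruned by DCP]"

def apply_pruning(messages: list[dict], pruned_ids: set[str]) -> list[dict]:
    view = []
    for msg in messages:
        if msg.get("tool_call_id") in pruned_ids:
            # Copy the message so the original session data stays intact.
            msg = {**msg, "content": PLACEHOLDER}
        view.append(msg)
    return view
```

Because only the outgoing request is rewritten, disabling DCP (or un-pruning) simply means sending the untouched history again.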
## Impact on Prompt Caching
LLM providers like Anthropic and OpenAI cache prompts based on exact prefix matching. When DCP prunes a tool output, it changes the message content, which invalidates cached prefixes from that point forward.
**Trade-off:** You lose some cache read benefits but gain larger token savings from reduced context size. In most cases, the token savings outweigh the cache miss cost—especially in long sessions where context bloat becomes significant.
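A rough back-of-envelope model of that trade-off, assuming cached input tokens are billed at about one tenth of the full rate (an illustrative figure only; check your provider's pricing):

```python
def pruning_net_savings(pruned_tokens: int, suffix_tokens: int,
                        followup_requests: int,
                        cache_read_rate: float = 0.1) -> float:
    """Net cost change from one prune, in full-price token units."""
    # Without pruning, the pruned tokens would be re-read from cache on
    # every follow-up request.
    saved = pruned_tokens * cache_read_rate * followup_requests
    # The first request after pruning pays full price for everything after
    # the prune point instead of the cache-read price (a one-time miss).
    miss_penalty = suffix_tokens * (1.0 - cache_read_rate)
    return saved - miss_penalty
```

Under these assumptions, pruning 20k tokens with 5k tokens after the prune point breaks even after about three follow-up requests, and keeps paying off from there.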
## Configuration
| Option | Default | Description |
|---|---|---|
|`showModelErrorToasts`|`true`| Show notifications on model fallback |
|`strictModelSelection`|`false`| Only run AI analysis with the session or configured model (disables fallback models) |
|`pruning_summary`|`"detailed"`|`"off"`, `"minimal"`, or `"detailed"`|
|`nudge_freq`|`10`| Remind the AI to prune every N tool results (lower = more frequent, `0` = disabled) |
|`protectedTools`|`["task", "todowrite", "todoread", "context_pruning"]`| Tools that are never pruned |
|`strategies.onIdle`|`["deduplication", "ai-analysis"]`| Strategies for automatic pruning |
|`strategies.onTool`|`["deduplication", "ai-analysis"]`| Strategies when the AI calls `context_pruning` |
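A DCP config combining some of these options might look like the following (values are illustrative, not recommendations):

```jsonc
{
  // Prune automatically when the session goes idle, and let the AI
  // trigger pruning itself via the context_pruning tool.
  "strategies": {
    "onIdle": ["deduplication", "ai-analysis"],
    "onTool": ["deduplication", "ai-analysis"]
  },
  // Remind the AI to consider pruning every 10 tool results.
  "nudge_freq": 10,
  // Keep task-tracking output out of the pruner's reach.
  "protectedTools": ["task", "todowrite", "todoread", "context_pruning"],
  "pruning_summary": "detailed"
}
```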