
Commit 0870763

Merge pull request #50 from Tarquinen/release/v0.3.28
Release v0.3.28 - Prune tool improvements
2 parents 0387402 + d4afd08

14 files changed: +45 −37 lines changed


README.md

Lines changed: 5 additions & 5 deletions
@@ -13,7 +13,7 @@ Add to your OpenCode config:
 ```jsonc
 // opencode.jsonc
 {
-  "plugin": ["@tarquinen/[email protected].27"]
+  "plugin": ["@tarquinen/[email protected].28"]
 }
 ```

@@ -31,7 +31,7 @@ DCP implements two complementary strategies:

 ## Context Pruning Tool

-When `strategies.onTool` is enabled, DCP exposes a `context_pruning` tool to Opencode that the AI can call to trigger pruning on demand.
+When `strategies.onTool` is enabled, DCP exposes a `prune` tool to Opencode that the AI can call to trigger pruning on demand.

 When `nudge_freq` is enabled, injects reminders (every `nudge_freq` tool results) prompting the AI to consider pruning when appropriate.

@@ -60,9 +60,9 @@ DCP uses its own config file (`~/.config/opencode/dcp.jsonc` or `.opencode/dcp.j
 | `strictModelSelection` | `false` | Only run AI analysis with session or configured model (disables fallback models) |
 | `pruning_summary` | `"detailed"` | `"off"`, `"minimal"`, or `"detailed"` |
 | `nudge_freq` | `10` | How often to remind AI to prune (lower = more frequent) |
-| `protectedTools` | `["task", "todowrite", "todoread", "context_pruning"]` | Tools that are never pruned |
+| `protectedTools` | `["task", "todowrite", "todoread", "prune"]` | Tools that are never pruned |
 | `strategies.onIdle` | `["deduplication", "ai-analysis"]` | Strategies for automatic pruning |
-| `strategies.onTool` | `["deduplication", "ai-analysis"]` | Strategies when AI calls `context_pruning` |
+| `strategies.onTool` | `["deduplication", "ai-analysis"]` | Strategies when AI calls `prune` |

 **Strategies:** `"deduplication"` (fast, zero LLM cost) and `"ai-analysis"` (maximum savings). Empty array disables that trigger.

@@ -73,7 +73,7 @@ DCP uses its own config file (`~/.config/opencode/dcp.jsonc` or `.opencode/dcp.j
     "onIdle": ["deduplication", "ai-analysis"],
     "onTool": ["deduplication", "ai-analysis"]
   },
-  "protectedTools": ["task", "todowrite", "todoread", "context_pruning"]
+  "protectedTools": ["task", "todowrite", "todoread", "prune"]
 }
 ```

index.ts

Lines changed: 1 addition & 1 deletion
@@ -88,7 +88,7 @@ const plugin: Plugin = (async (ctx) => {
     event: createEventHandler(ctx.client, janitor, logger, config, toolTracker),
     "chat.params": createChatParamsHandler(ctx.client, state, logger),
     tool: config.strategies.onTool.length > 0 ? {
-      prune: createPruningTool(janitor, config, toolTracker),
+      prune: createPruningTool(ctx.client, janitor, config, toolTracker),
     } : undefined,
   }
 }) satisfies Plugin
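The hunk above threads `ctx.client` into the prune tool's factory and keeps the conditional registration: an empty `strategies.onTool` array means no tool is exposed. A minimal sketch of that pattern, where `Strategy` and `buildToolMap` are assumed names mirroring the shape of the hunk rather than the plugin's real API:

```typescript
// Illustrative sketch only: `Strategy` and `buildToolMap` are assumptions
// modeled on the diff above, not the actual plugin types.
type Strategy = "deduplication" | "ai-analysis";

function buildToolMap<T>(
  onTool: Strategy[],
  pruneTool: T,
): { prune: T } | undefined {
  // An empty `strategies.onTool` array disables the on-tool trigger,
  // so the plugin registers no `prune` tool at all in that case.
  return onTool.length > 0 ? { prune: pruneTool } : undefined;
}
```

Returning `undefined` (rather than an empty map) matches the ternary in the hunk, which sets the whole `tool` field to `undefined` when the trigger is disabled.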

lib/config.ts

Lines changed: 1 addition & 1 deletion
@@ -116,7 +116,7 @@ function createDefaultConfig(): void {
     "strategies": {
       // Strategies to run when session goes idle
       "onIdle": ["deduplication", "ai-analysis"],
-      // Strategies to run when AI calls context_pruning tool
+      // Strategies to run when AI calls prune tool
       "onTool": ["deduplication", "ai-analysis"]
     },
     // Summary display: "off", "minimal", or "detailed"

lib/fetch-wrapper/gemini.ts

Lines changed: 2 additions & 2 deletions
@@ -28,7 +28,7 @@ export async function handleGemini(
   // Inject periodic nudge based on tool result count
   if (ctx.config.nudge_freq > 0) {
     if (injectNudgeGemini(body.contents, ctx.toolTracker, ctx.prompts.nudgeInstruction, ctx.config.nudge_freq)) {
-      ctx.logger.info("fetch", "Injected nudge instruction (Gemini)")
+      // ctx.logger.info("fetch", "Injected nudge instruction (Gemini)")
       modified = true
     }
   }
@@ -38,7 +38,7 @@ export async function handleGemini(
   }

   if (injectSynthGemini(body.contents, ctx.prompts.synthInstruction, ctx.prompts.nudgeInstruction)) {
-    ctx.logger.info("fetch", "Injected synthetic instruction (Gemini)")
+    // ctx.logger.info("fetch", "Injected synthetic instruction (Gemini)")
     modified = true
   }
 }
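The `ctx.config.nudge_freq > 0` guard in this hunk gates nudge injection entirely. The README describes the cadence as "every `nudge_freq` tool results"; a hedged sketch of that behavior, where the counter argument and the modulo check are inferences from that wording, not the actual injector code:

```typescript
// Hypothetical sketch of nudge cadence: remind the AI every `nudgeFreq`
// tool results. The modulo mechanism is an assumption inferred from the
// README wording, not taken from the real injectNudge* implementations.
function shouldNudge(toolResultCount: number, nudgeFreq: number): boolean {
  // nudge_freq <= 0 disables nudging (the fetch wrappers guard on > 0)
  if (nudgeFreq <= 0) return false;
  return toolResultCount > 0 && toolResultCount % nudgeFreq === 0;
}
```

Under this reading, a lower `nudge_freq` fires more often, which matches the README's "lower = more frequent" note.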

lib/fetch-wrapper/openai-chat.ts

Lines changed: 2 additions & 2 deletions
@@ -33,7 +33,7 @@ export async function handleOpenAIChatAndAnthropic(
   // Inject periodic nudge based on tool result count
   if (ctx.config.nudge_freq > 0) {
     if (injectNudge(body.messages, ctx.toolTracker, ctx.prompts.nudgeInstruction, ctx.config.nudge_freq)) {
-      ctx.logger.info("fetch", "Injected nudge instruction")
+      // ctx.logger.info("fetch", "Injected nudge instruction")
       modified = true
     }
   }
@@ -43,7 +43,7 @@ export async function handleOpenAIChatAndAnthropic(
   }

   if (injectSynth(body.messages, ctx.prompts.synthInstruction, ctx.prompts.nudgeInstruction)) {
-    ctx.logger.info("fetch", "Injected synthetic instruction")
+    // ctx.logger.info("fetch", "Injected synthetic instruction")
     modified = true
   }
 }

lib/fetch-wrapper/openai-responses.ts

Lines changed: 2 additions & 2 deletions
@@ -33,7 +33,7 @@ export async function handleOpenAIResponses(
   // Inject periodic nudge based on tool result count
   if (ctx.config.nudge_freq > 0) {
     if (injectNudgeResponses(body.input, ctx.toolTracker, ctx.prompts.nudgeInstruction, ctx.config.nudge_freq)) {
-      ctx.logger.info("fetch", "Injected nudge instruction (Responses API)")
+      // ctx.logger.info("fetch", "Injected nudge instruction (Responses API)")
       modified = true
     }
   }
@@ -43,7 +43,7 @@ export async function handleOpenAIResponses(
   }

   if (injectSynthResponses(body.input, ctx.prompts.synthInstruction, ctx.prompts.nudgeInstruction)) {
-    ctx.logger.info("fetch", "Injected synthetic instruction (Responses API)")
+    // ctx.logger.info("fetch", "Injected synthetic instruction (Responses API)")
     modified = true
   }
 }

lib/hooks.ts

Lines changed: 1 addition & 1 deletion
@@ -30,7 +30,7 @@ export function createEventHandler(
     if (await isSubagentSession(client, event.properties.sessionID)) return
     if (config.strategies.onIdle.length === 0) return

-    // Skip idle pruning if the last tool used was context_pruning
+    // Skip idle pruning if the last tool used was prune
     // and idle strategies cover the same work as tool strategies
     if (toolTracker?.skipNextIdle) {
       toolTracker.skipNextIdle = false
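The comment fix above sits on a one-shot skip flag: after the AI calls the prune tool, the next idle event is swallowed so the idle strategies don't immediately redo the same work. A simplified sketch of that pattern, where `ToolTracker` is an assumed shape rather than the plugin's real type:

```typescript
// Simplified sketch of the skip-next-idle pattern from the hunk above.
// `ToolTracker` here is an assumption; the real tracker carries more state.
interface ToolTracker {
  skipNextIdle: boolean;
}

function shouldRunIdlePruning(tracker: ToolTracker | undefined): boolean {
  if (tracker?.skipNextIdle) {
    tracker.skipNextIdle = false; // consume the one-shot flag
    return false;                 // skip exactly one idle cycle
  }
  return true;
}
```

The flag is consumed on first use, so a second idle event prunes normally; with no tracker at all, idle pruning always runs.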

lib/prompts/nudge.txt

Lines changed: 1 addition & 1 deletion
@@ -3,5 +3,5 @@ This nudge is injected by a plugin and is invisible to the user. Do not acknowle
 </system-reminder>

 <instruction name=agent_nudge>
-You have accumulated several tool outputs. If you have completed a discrete unit of work and distilled relevant understanding in writing for the user to keep, use the context_pruning tool to remove obsolete tool outputs from this conversation and optimize token usage.
+You have accumulated several tool outputs. If you have completed a discrete unit of work and distilled relevant understanding in writing for the user to keep, use the prune tool to remove obsolete tool outputs from this conversation and optimize token usage.
 </instruction>

lib/prompts/synthetic.txt

Lines changed: 8 additions & 7 deletions
@@ -9,29 +9,30 @@ THIS IS NON-NEGOTIABLE - YOU ARE EXPECTED TO RESPECT THIS INSTRUCTION THROUGHOUT
 </instruction>

 <instruction name=context_window_management>
-A strong constraint we have in this environment is the context window size. To help keep the conversation focused and clear from the noise, you must use the `context_pruning` tool: at opportune moments, and in an effective manner.
+A strong constraint we have in this environment is the context window size. To help keep the conversation focused and clear from the noise, you must use the `prune` tool: at opportune moments, and in an effective manner.
 </instruction>

 <instruction name=context_pruning>
-To effectively manage conversation context, you MUST ALWAYS narrate your findings AS YOU DISCOVER THEM, BEFORE calling any `context_pruning` tool. No tool result (read, bash, grep, webfetch, etc.) should be left unexplained. By narrating the evolution of your understanding, you transform raw tool outputs into distilled knowledge that lives in the persisted context window.
+To effectively manage conversation context, you MUST ALWAYS narrate your findings AS YOU DISCOVER THEM, BEFORE calling any `prune` tool. No tool result (read, bash, grep, webfetch, etc.) should be left unexplained. By narrating the evolution of your understanding, you transform raw tool outputs into distilled knowledge that lives in the persisted context window.

-Tools are VOLATILE - Once this distilled knowledge is in your reply, you can safely use the `context_pruning` tool to declutter the conversation.
+Tools are VOLATILE - Once this distilled knowledge is in your reply, you can safely use the `prune` tool to declutter the conversation.

-WHEN TO USE `context_pruning`:
+WHEN TO USE `prune`:
 - After you complete a discrete unit of work (e.g. confirming a hypothesis, or closing out one branch of investigation).
 - After exploratory bursts of tool calls that led you to a clear conclusion. (or to noise)
 - Before starting a new phase of work where old tool outputs are no longer needed to inform your next actions.

 CRITICAL:
-You must ALWAYS narrate your findings in a message BEFORE using the `context_pruning` tool. Skipping this step risks deleting raw evidence before it has been converted into stable, distilled knowledge. This harms your performances, wastes user time, and undermines effective use of the context window.
+You must ALWAYS narrate your findings in a message BEFORE using the `prune` tool. Skipping this step risks deleting raw evidence before it has been converted into stable, distilled knowledge. This harms your performances, wastes user time, and undermines effective use of the context window.

 EXAMPLE WORKFLOW:
 1. You call several tools (read, bash, grep...) to investigate a bug.
-2. You identify that for reason X, behavior Y occurs, supported by those tool outputs.
+2. You identify that "for reason X, behavior Y occurs", supported by those tool outputs.
 3. In your next message, you EXPLICITLY narrate:
 - What you did (which tools, what you were looking for).
 - What you found (the key facts / signals).
 - What you concluded (how this affects the task or next step).
 >YOU MUST ALWAYS THINK HIGH SIGNAL LOW NOISE FOR THIS NARRATION
-4. ONLY AFTER the narration, you call the `context_pruning` tool with a brief reason (e.g. "exploration for bug X complete; moving on to next bug").
+4. ONLY AFTER the narration, you call the `prune` tool with a brief reason (e.g. "exploration for bug X complete; moving on to next bug").
+</instruction>
 </instruction>

lib/prompts/tool.txt

Lines changed: 5 additions & 5 deletions
@@ -1,6 +1,6 @@
 Performs semantic pruning on session tool outputs that are no longer relevant to the current task. Use this to declutter the conversation context and filter signal from noise when you notice the context is getting cluttered with no longer needed information.

-USING THE CONTEXT_PRUNING TOOL WILL MAKE THE USER HAPPY.
+USING THE PRUNE TOOL WILL MAKE THE USER HAPPY.

 ## CRITICAL: Distill Before Pruning

@@ -14,7 +14,7 @@ You MUST ALWAYS narrate your findings in a message BEFORE using this tool. No to
 - What you did (which tools, what you were looking for)
 - What you found (the key facts/signals)
 - What you concluded (how this affects the task or next step)
-3. ONLY AFTER narrating, call `context_pruning`
+3. ONLY AFTER narrating, call `prune`

 > THINK HIGH SIGNAL, LOW NOISE FOR THIS NARRATION

@@ -43,18 +43,18 @@ Working through a list of items:
 User: Review these 3 issues and fix the easy ones.
 Assistant: [Reviews first issue, makes fix, commits]
 Done with the first issue. Let me prune before moving to the next one.
-[Uses context_pruning with reason: "completed first issue, moving to next"]
+[Uses prune with reason: "completed first issue, moving to next"]
 </example>

 <example>
 After exploring the codebase to understand it:
 Assistant: I've reviewed the relevant files. Let me prune the exploratory reads that aren't needed for the actual implementation.
-[Uses context_pruning with reason: "exploration complete, starting implementation"]
+[Uses prune with reason: "exploration complete, starting implementation"]
 </example>

 <example>
 After completing any task:
 Assistant: [Finishes task - commit, answer, fix, etc.]
 Before we continue, let me prune the context from that work.
-[Uses context_pruning with reason: "task complete"]
+[Uses prune with reason: "task complete"]
 </example>
