
Commit 9183550

Merge pull request #33 from spoons-and-mirrors/feat/synth-instruction
feature(synthetic instructions)
2 parents: c474053 + 6a199b1

File tree: 15 files changed (+498 / −19 lines)

README.md
Lines changed: 8 additions & 0 deletions

@@ -28,6 +28,13 @@ DCP implements two complementary strategies:
 **Deduplication** — Fast, zero-cost pruning that identifies repeated tool calls (e.g., reading the same file multiple times) and keeps only the most recent output. Runs instantly with no LLM calls.
 
 **AI Analysis** — Uses a language model to semantically analyze conversation context and identify tool outputs that are no longer relevant to the current task. More thorough but incurs LLM cost.
+
+## Context Pruning Tool
+
+When `strategies.onTool` is enabled, DCP exposes a `context_pruning` tool to Opencode that the AI can call to trigger pruning on demand. To help the AI use this tool effectively, DCP also injects guidance.
+
+When `nudge_freq` is set (nonzero), DCP injects a reminder every `nudge_freq` tool results, prompting the AI to consider pruning when appropriate.
+
 ## How It Works
 
 DCP is **non-destructive**—pruning state is kept in memory only. When requests go to your LLM, DCP replaces pruned outputs with a placeholder; original session data stays intact.

@@ -46,6 +53,7 @@ DCP uses its own config file (`~/.config/opencode/dcp.jsonc` or `.opencode/dcp.j
 | `showModelErrorToasts` | `true` | Show notifications on model fallback |
 | `strictModelSelection` | `false` | Only run AI analysis with session or configured model (disables fallback models) |
 | `pruning_summary` | `"detailed"` | `"off"`, `"minimal"`, or `"detailed"` |
+| `nudge_freq` | `5` | Remind AI to prune every N tool results (0 = disabled) |
 | `protectedTools` | `["task", "todowrite", "todoread", "context_pruning"]` | Tools that are never pruned |
 | `strategies.onIdle` | `["deduplication", "ai-analysis"]` | Strategies for automatic pruning |
 | `strategies.onTool` | `["deduplication", "ai-analysis"]` | Strategies when AI calls `context_pruning` |
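
For reference, a minimal `dcp.jsonc` exercising the new option might look like the following; the keys and values are taken from the defaults table above and from `createDefaultConfig` below, and the snippet is purely illustrative:

```jsonc
// .opencode/dcp.jsonc (or ~/.config/opencode/dcp.jsonc)
{
  // Remind the AI to prune every 5 tool results; 0 disables nudging
  "nudge_freq": 5,
  // Summary display: "off", "minimal", or "detailed"
  "pruning_summary": "detailed",
  // Tools that should never be pruned
  "protectedTools": ["task", "todowrite", "todoread", "context_pruning"]
}
```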

index.ts
Lines changed: 11 additions & 2 deletions

@@ -7,6 +7,8 @@ import { createPluginState } from "./lib/state"
 import { installFetchWrapper } from "./lib/fetch-wrapper"
 import { createPruningTool } from "./lib/pruning-tool"
 import { createEventHandler, createChatParamsHandler } from "./lib/hooks"
+import { createToolTracker } from "./lib/synth-instruction"
+import { loadPrompt } from "./lib/prompt"
 
 const plugin: Plugin = (async (ctx) => {
   const { config, migrations } = getConfig(ctx)

@@ -39,8 +41,15 @@ const plugin: Plugin = (async (ctx) => {
     ctx.directory
   )
 
-  // Install global fetch wrapper for context pruning
-  installFetchWrapper(state, logger, ctx.client)
+  // Create tool tracker and load prompts for synthetic instruction injection
+  const toolTracker = createToolTracker()
+  const prompts = {
+    synthInstruction: loadPrompt("synthetic"),
+    nudgeInstruction: loadPrompt("nudge")
+  }
+
+  // Install global fetch wrapper for context pruning and synthetic instruction injection
+  installFetchWrapper(state, logger, ctx.client, config, toolTracker, prompts)
 
   // Log initialization
   logger.info("plugin", "DCP initialized", {

lib/config.ts
Lines changed: 9 additions & 2 deletions

@@ -15,6 +15,7 @@ export interface PluginConfig {
   showModelErrorToasts?: boolean
   strictModelSelection?: boolean
   pruning_summary: "off" | "minimal" | "detailed"
+  nudge_freq: number
   strategies: {
     onIdle: PruningStrategy[]
     onTool: PruningStrategy[]

@@ -33,6 +34,7 @@ const defaultConfig: PluginConfig = {
   showModelErrorToasts: true,
   strictModelSelection: false,
   pruning_summary: 'detailed',
+  nudge_freq: 5,
   strategies: {
     onIdle: ['deduplication', 'ai-analysis'],
     onTool: ['deduplication', 'ai-analysis']

@@ -47,6 +49,7 @@ const VALID_CONFIG_KEYS = new Set([
   'showModelErrorToasts',
   'strictModelSelection',
   'pruning_summary',
+  'nudge_freq',
   'strategies'
 ])
 

@@ -118,6 +121,8 @@ function createDefaultConfig(): void {
   },
   // Summary display: "off", "minimal", or "detailed"
   "pruning_summary": "detailed",
+  // How often to nudge the AI to prune (every N tool results, 0 = disabled)
+  "nudge_freq": 5,
   // Tools that should never be pruned
   "protectedTools": ["task", "todowrite", "todoread", "context_pruning"]
 }

@@ -196,7 +201,8 @@ export function getConfig(ctx?: PluginInput): ConfigResult {
       showModelErrorToasts: globalConfig.showModelErrorToasts ?? config.showModelErrorToasts,
       strictModelSelection: globalConfig.strictModelSelection ?? config.strictModelSelection,
       strategies: mergeStrategies(config.strategies, globalConfig.strategies as any),
-      pruning_summary: globalConfig.pruning_summary ?? config.pruning_summary
+      pruning_summary: globalConfig.pruning_summary ?? config.pruning_summary,
+      nudge_freq: globalConfig.nudge_freq ?? config.nudge_freq
     }
     logger.info('config', 'Loaded global config', { path: configPaths.global })
   }

@@ -226,7 +232,8 @@ export function getConfig(ctx?: PluginInput): ConfigResult {
       showModelErrorToasts: projectConfig.showModelErrorToasts ?? config.showModelErrorToasts,
       strictModelSelection: projectConfig.strictModelSelection ?? config.strictModelSelection,
       strategies: mergeStrategies(config.strategies, projectConfig.strategies as any),
-      pruning_summary: projectConfig.pruning_summary ?? config.pruning_summary
+      pruning_summary: projectConfig.pruning_summary ?? config.pruning_summary,
+      nudge_freq: projectConfig.nudge_freq ?? config.nudge_freq
     }
     logger.info('config', 'Loaded project config (overrides global)', { path: configPaths.project })
   }
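
One detail worth noting about the merge logic: because overrides use nullish coalescing (`??`), an explicit `"nudge_freq": 0` in a project config disables nudging even when the global config sets a positive value; only `null`/`undefined` fall through to the lower-priority value. A quick illustration of that semantics:

```typescript
// `??` falls back only on null/undefined, so an explicit 0 survives the merge.
const globalValue = 5
const projectValue: number | undefined = 0 // user disabled nudging per-project
const merged = projectValue ?? globalValue
console.log(merged) // 0, not 5 — nudging stays disabled
```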

lib/fetch-wrapper/gemini.ts
Lines changed: 24 additions & 4 deletions

@@ -4,6 +4,7 @@ import {
   getAllPrunedIds,
   fetchSessionMessages
 } from "./types"
+import { injectNudgeGemini, injectSynthGemini } from "../synth-instruction"
 
 /**
  * Handles Google/Gemini format (body.contents array with functionResponse parts).

@@ -18,20 +19,39 @@ export async function handleGemini(
     return { modified: false, body }
   }
 
+  let modified = false
+
+  // Inject synthetic instructions if onTool strategies are enabled
+  if (ctx.config.strategies.onTool.length > 0) {
+    // Inject periodic nudge based on tool result count
+    if (ctx.config.nudge_freq > 0) {
+      if (injectNudgeGemini(body.contents, ctx.toolTracker, ctx.prompts.nudgeInstruction, ctx.config.nudge_freq)) {
+        ctx.logger.info("fetch", "Injected nudge instruction (Gemini)")
+        modified = true
+      }
+    }
+
+    // Inject synthetic instruction into last user content
+    if (injectSynthGemini(body.contents, ctx.prompts.synthInstruction)) {
+      ctx.logger.info("fetch", "Injected synthetic instruction (Gemini)")
+      modified = true
+    }
+  }
+
   // Check for functionResponse parts in any content item
   const hasFunctionResponses = body.contents.some((content: any) =>
     Array.isArray(content.parts) &&
     content.parts.some((part: any) => part.functionResponse)
   )
 
   if (!hasFunctionResponses) {
-    return { modified: false, body }
+    return { modified, body }
   }
 
   const { allSessions, allPrunedIds } = await getAllPrunedIds(ctx.client, ctx.state)
 
   if (allPrunedIds.size === 0) {
-    return { modified: false, body }
+    return { modified, body }
   }
 
   // Find the active session to get the position mapping

@@ -48,7 +68,7 @@ export async function handleGemini(
 
   if (!positionMapping) {
     ctx.logger.info("fetch", "No Google tool call mapping found, skipping pruning for Gemini format")
-    return { modified: false, body }
+    return { modified, body }
   }
 
   // Build position counters to track occurrence of each tool name

@@ -130,5 +150,5 @@
     return { modified: true, body }
   }
 
-  return { modified: false, body }
+  return { modified, body }
 }

lib/fetch-wrapper/index.ts
Lines changed: 12 additions & 4 deletions

@@ -1,11 +1,13 @@
 import type { PluginState } from "../state"
 import type { Logger } from "../logger"
-import type { FetchHandlerContext } from "./types"
+import type { FetchHandlerContext, SynthPrompts } from "./types"
+import type { ToolTracker } from "../synth-instruction"
+import type { PluginConfig } from "../config"
 import { handleOpenAIChatAndAnthropic } from "./openai-chat"
 import { handleGemini } from "./gemini"
 import { handleOpenAIResponses } from "./openai-responses"
 
-export type { FetchHandlerContext, FetchHandlerResult } from "./types"
+export type { FetchHandlerContext, FetchHandlerResult, SynthPrompts } from "./types"
 
 /**
  * Creates a wrapped global fetch that intercepts API calls and performs

@@ -20,14 +22,20 @@ export type { FetchHandlerContext, FetchHandlerResult } from "./types"
 export function installFetchWrapper(
   state: PluginState,
   logger: Logger,
-  client: any
+  client: any,
+  config: PluginConfig,
+  toolTracker: ToolTracker,
+  prompts: SynthPrompts
 ): () => void {
   const originalGlobalFetch = globalThis.fetch
 
   const ctx: FetchHandlerContext = {
     state,
     logger,
-    client
+    client,
+    config,
+    toolTracker,
+    prompts
   }
 
   globalThis.fetch = async (input: any, init?: any) => {
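
The wrapper body is truncated in this diff, but the `(): () => void` return type together with the captured `originalGlobalFetch` implies an install/restore pattern: swap in the interceptor and return a function that undoes it. A generic self-contained sketch of that pattern (not the plugin's actual code, which additionally routes request bodies through the format handlers):

```typescript
// Generic install/restore sketch implied by installFetchWrapper's signature.
function installWrapper(
  handler: (input: any, init?: any) => Promise<Response>
): () => void {
  // Capture the original fetch before replacing it
  const original = globalThis.fetch
  globalThis.fetch = handler as typeof fetch
  // The returned closure restores the original fetch, e.g. on plugin teardown
  return () => {
    globalThis.fetch = original
  }
}
```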

lib/fetch-wrapper/openai-chat.ts
Lines changed: 22 additions & 2 deletions

@@ -6,6 +6,7 @@ import {
   getMostRecentActiveSession
 } from "./types"
 import { cacheToolParametersFromMessages } from "../tool-cache"
+import { injectNudge, injectSynth } from "../synth-instruction"
 
 /**
  * Handles OpenAI Chat Completions format (body.messages with role='tool').

@@ -23,6 +24,25 @@ export async function handleOpenAIChatAndAnthropic(
   // Cache tool parameters from messages
   cacheToolParametersFromMessages(body.messages, ctx.state)
 
+  let modified = false
+
+  // Inject synthetic instructions if onTool strategies are enabled
+  if (ctx.config.strategies.onTool.length > 0) {
+    // Inject periodic nudge based on tool result count
+    if (ctx.config.nudge_freq > 0) {
+      if (injectNudge(body.messages, ctx.toolTracker, ctx.prompts.nudgeInstruction, ctx.config.nudge_freq)) {
+        ctx.logger.info("fetch", "Injected nudge instruction")
+        modified = true
+      }
+    }
+
+    // Inject synthetic instruction into last user message
+    if (injectSynth(body.messages, ctx.prompts.synthInstruction)) {
+      ctx.logger.info("fetch", "Injected synthetic instruction")
+      modified = true
+    }
+  }
+
   // Check for tool messages in both formats:
   // 1. OpenAI style: role === 'tool'
   // 2. Anthropic style: role === 'user' with content containing tool_result

@@ -39,7 +59,7 @@
   const { allSessions, allPrunedIds } = await getAllPrunedIds(ctx.client, ctx.state)
 
   if (toolMessages.length === 0 || allPrunedIds.size === 0) {
-    return { modified: false, body }
+    return { modified, body }
   }
 
   let replacedCount = 0

@@ -103,5 +123,5 @@
     return { modified: true, body }
   }
 
-  return { modified: false, body }
+  return { modified, body }
 }
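
`injectNudge` and `injectSynth` also live in `lib/synth-instruction`, which is not shown in this diff; only their call sites are. From the signature visible above, `injectNudge(messages, tracker, prompt, freq)` returns whether it modified the array. A plausible sketch, counting tool results and appending the nudge prompt once `freq` new ones have accumulated; every internal detail here (the counting rule, the message placement, the tracker field) is an assumption:

```typescript
interface ToolTracker {
  count: number // assumed field; see the tracker sketch under index.ts
}

// Hypothetical implementation matching the call site's signature: returns
// true when it appended a nudge, so the handler can mark the body modified.
export function injectNudge(
  messages: any[],
  tracker: ToolTracker,
  nudgePrompt: string,
  freq: number
): boolean {
  // Count OpenAI-style tool results currently in the request body
  const toolResults = messages.filter((m: any) => m.role === "tool").length
  if (toolResults - tracker.count >= freq) {
    tracker.count = toolResults
    // Append the reminder as a synthetic user message (placement assumed)
    messages.push({ role: "user", content: nudgePrompt })
    return true
  }
  return false
}
```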

lib/fetch-wrapper/openai-responses.ts
Lines changed: 23 additions & 3 deletions

@@ -6,6 +6,7 @@ import {
   getMostRecentActiveSession
 } from "./types"
 import { cacheToolParametersFromInput } from "../tool-cache"
+import { injectNudgeResponses, injectSynthResponses } from "../synth-instruction"
 
 /**
  * Handles OpenAI Responses API format (body.input array with function_call_output items).

@@ -23,17 +24,36 @@ export async function handleOpenAIResponses(
   // Cache tool parameters from input
   cacheToolParametersFromInput(body.input, ctx.state)
 
+  let modified = false
+
+  // Inject synthetic instructions if onTool strategies are enabled
+  if (ctx.config.strategies.onTool.length > 0) {
+    // Inject periodic nudge based on tool result count
+    if (ctx.config.nudge_freq > 0) {
+      if (injectNudgeResponses(body.input, ctx.toolTracker, ctx.prompts.nudgeInstruction, ctx.config.nudge_freq)) {
+        ctx.logger.info("fetch", "Injected nudge instruction (Responses API)")
+        modified = true
+      }
+    }
+
+    // Inject synthetic instruction into last user message
+    if (injectSynthResponses(body.input, ctx.prompts.synthInstruction)) {
+      ctx.logger.info("fetch", "Injected synthetic instruction (Responses API)")
+      modified = true
+    }
+  }
+
   // Check for function_call_output items
   const functionOutputs = body.input.filter((item: any) => item.type === 'function_call_output')
 
   if (functionOutputs.length === 0) {
-    return { modified: false, body }
+    return { modified, body }
   }
 
   const { allSessions, allPrunedIds } = await getAllPrunedIds(ctx.client, ctx.state)
 
   if (allPrunedIds.size === 0) {
-    return { modified: false, body }
+    return { modified, body }
   }
 
   let replacedCount = 0

@@ -77,5 +97,5 @@
     return { modified: true, body }
   }
 
-  return { modified: false, body }
+  return { modified, body }
 }

lib/fetch-wrapper/types.ts
Lines changed: 11 additions & 0 deletions

@@ -1,14 +1,25 @@
 import type { PluginState } from "../state"
 import type { Logger } from "../logger"
+import type { ToolTracker } from "../synth-instruction"
+import type { PluginConfig } from "../config"
 
 /** The message used to replace pruned tool output content */
 export const PRUNED_CONTENT_MESSAGE = '[Output removed to save context - information superseded or no longer needed]'
 
+/** Prompts used for synthetic instruction injection */
+export interface SynthPrompts {
+  synthInstruction: string
+  nudgeInstruction: string
+}
+
 /** Context passed to each format-specific handler */
 export interface FetchHandlerContext {
   state: PluginState
   logger: Logger
   client: any
+  config: PluginConfig
+  toolTracker: ToolTracker
+  prompts: SynthPrompts
 }
 
 /** Result from a format handler indicating what happened */

lib/prompt.ts
Lines changed: 8 additions & 0 deletions

@@ -1,3 +1,11 @@
+import { readFileSync } from "fs"
+import { join } from "path"
+
+export function loadPrompt(name: string): string {
+  const filePath = join(__dirname, "prompts", `${name}.txt`)
+  return readFileSync(filePath, "utf8").trim()
+}
+
 function minimizeMessages(messages: any[], alreadyPrunedIds?: string[], protectedToolCallIds?: string[]): any[] {
   const prunedIdsSet = alreadyPrunedIds ? new Set(alreadyPrunedIds.map(id => id.toLowerCase())) : new Set()
   const protectedIdsSet = protectedToolCallIds ? new Set(protectedToolCallIds.map(id => id.toLowerCase())) : new Set()
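
`loadPrompt` resolves prompt files relative to the module directory, so the calls in `index.ts` imply that `lib/prompts/synthetic.txt` and `lib/prompts/nudge.txt` exist among the 15 changed files, even though only `context_pruning.txt` is shown in this diff. Usage is straightforward:

```typescript
import { loadPrompt } from "./lib/prompt"

// Reads lib/prompts/nudge.txt next to the compiled module and trims whitespace
const nudgeInstruction = loadPrompt("nudge")
```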

lib/prompts/context_pruning.txt
Lines changed: 44 additions & 0 deletions

@@ -0,0 +1,44 @@
+Performs semantic pruning on session tool outputs that are no longer relevant to the current task. Use this to declutter the conversation context and filter signal from noise when you notice the context is getting cluttered with information that is no longer needed.
+
+USING THE CONTEXT_PRUNING TOOL WILL MAKE THE USER HAPPY.
+
+## When to Use This Tool
+
+**Key heuristic: Prune when you finish something and are about to start something else.**
+
+Ask yourself: "Have I just completed a discrete unit of work?" If yes, prune before moving on.
+
+**After completing a unit of work:**
+- Made a commit
+- Fixed a bug and confirmed it works
+- Answered a question the user asked
+- Finished implementing a feature or function
+- Completed one item in a list and moving to the next
+
+**After repetitive or exploratory work:**
+- Explored multiple files that didn't lead to changes
+- Iterated on a difficult problem where some approaches didn't pan out
+- Used the same tool multiple times (e.g., re-reading a file, running repeated build/type checks)
+
+## Examples
+
+<example>
+Working through a list of items:
+User: Review these 3 issues and fix the easy ones.
+Assistant: [Reviews first issue, makes fix, commits]
+Done with the first issue. Let me prune before moving to the next one.
+[Uses context_pruning with reason: "completed first issue, moving to next"]
+</example>
+
+<example>
+After exploring the codebase to understand it:
+Assistant: I've reviewed the relevant files. Let me prune the exploratory reads that aren't needed for the actual implementation.
+[Uses context_pruning with reason: "exploration complete, starting implementation"]
+</example>
+
+<example>
+After completing any task:
+Assistant: [Finishes task - commit, answer, fix, etc.]
+Before we continue, let me prune the context from that work.
+[Uses context_pruning with reason: "task complete"]
+</example>
