Description
OpenAI thinking models (o1, o3, o3-mini, etc.) do not have their tool outputs properly pruned even though the UI indicates that pruning has occurred.
Expected Behavior
When context_pruning runs successfully and the UI shows that tool outputs were pruned, subsequent LLM requests should have those tool outputs replaced with the placeholder text: [Output removed to save context - information superseded or no longer needed]
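For concreteness, a minimal sketch of what the replacement should look like for an OpenAI-style tool message (the id and content values here are invented for illustration):

```ts
// Hypothetical before/after for a pruned tool result (OpenAI-style message shape;
// the tool_call_id and content values are made up for illustration).
const before = {
  role: 'tool',
  tool_call_id: 'call_abc123',
  content: '<full tool output, possibly thousands of tokens>',
}

const after = {
  ...before,
  content:
    '[Output removed to save context - information superseded or no longer needed]',
}
```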
Actual Behavior
The pruning appears to complete successfully (UI shows items were pruned, stats are updated), but the fetch wrapper does not actually replace the tool outputs for OpenAI thinking models. The original tool output content is still sent to the model.
Likely Cause
The fetch wrapper in index.ts currently handles two message formats:
- OpenAI style: `role === 'tool'` with a `tool_call_id`
- Anthropic style: `role === 'user'` with a `content` array containing `tool_result` blocks
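Roughly, those two shapes look like the following (a sketch based on the publicly documented OpenAI and Anthropic message formats; the ids and content are placeholders):

```ts
// OpenAI style: the tool result is its own message, keyed by tool_call_id.
const openaiToolMessage = {
  role: 'tool',
  tool_call_id: 'call_abc123',
  content: '...tool output...',
}

// Anthropic style: the tool result is a block inside a user message's content array.
const anthropicToolMessage = {
  role: 'user',
  content: [
    { type: 'tool_result', tool_use_id: 'toolu_abc123', content: '...tool output...' },
  ],
}
```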
OpenAI thinking models may use a different message structure that doesn't match either of these patterns. The AI SDK or OpenAI API may format tool results differently for reasoning models.
Investigation Needed
- Enable debug mode (`debug: true` in the config) and inspect the logged request bodies when using thinking models
- Compare the message structure between standard OpenAI models (gpt-4o) and thinking models (o1, o3)
- Check if tool results are nested differently or use different field names
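One way to narrow this down is to log any message that carries a tool result but matches neither known shape. A minimal sketch (the helper name is hypothetical and not part of the plugin; it assumes the `body.messages` array seen in the snippet below):

```ts
// Diagnostic sketch: given the request body's messages array, log any message that
// appears to carry a tool result but matches neither shape the wrapper recognizes.
function logUnmatchedToolMessages(messages: any[]): void {
  for (const m of messages) {
    const matchesOpenAiShape = m.role === 'tool' && typeof m.tool_call_id === 'string'
    const matchesAnthropicShape =
      m.role === 'user' &&
      Array.isArray(m.content) &&
      m.content.some((c: any) => c?.type === 'tool_result')
    const looksLikeToolResult =
      m.role === 'tool' || JSON.stringify(m).includes('tool_result')
    if (looksLikeToolResult && !matchesOpenAiShape && !matchesAnthropicShape) {
      console.error('[context-pruning] unmatched tool message:', JSON.stringify(m, null, 2))
    }
  }
}

// e.g. called from inside the fetch wrapper, right before the map() shown below:
// logUnmatchedToolMessages(body.messages)
```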
Relevant Code
index.ts lines 97-127 - the fetch wrapper that performs the replacement:
```ts
body.messages = body.messages.map((m: any) => {
  // OpenAI style: role === 'tool' with tool_call_id
  if (m.role === 'tool' && allPrunedIds.has(m.tool_call_id?.toLowerCase())) {
    // ...
  }
  // Anthropic style: role === 'user' with content array containing tool_result
  if (m.role === 'user' && Array.isArray(m.content)) {
    // ...
  }
  return m
})
```

Environment
- OpenCode plugin: opencode-dynamic-context-pruning
- Models affected: OpenAI o1, o3, o3-mini (and potentially other reasoning models)