refactor: remove assistant message injection support
Remove the ability to inject synthetic assistant messages for context-pruning
nudges. This simplifies the codebase to use only user message injection.
Reasoning: Injecting assistant messages is risky because it's unknown
how different providers handle edge cases like missing reasoning blocks
or malformed assistant content. User message injection is safer and
more predictable across all providers.
Removed:
- getLastAssistantMessage() helper
- createSyntheticAssistantMessage() function
- isReasoningModel state property
- chat.params hook for model detection
- wrapPrunableToolsAssistant() function
- All assistant prompt files (nudge and system)
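For reference, a minimal sketch of the retained user message injection path, written in TypeScript. The `ChatMessage` shape and the `NUDGE_MARKER`, `createSyntheticUserMessage`, and `withPruningNudge` names are illustrative assumptions, not this repository's actual API:

```ts
// Illustrative message shape; the real type comes from the host agent's API.
type ChatMessage = {
  role: "user" | "assistant" | "system";
  content: string;
};

// Marker so the injected nudge can be recognized (and dropped) on later turns.
const NUDGE_MARKER = "[context-pruning nudge]";

// Wrap the nudge text in a synthetic *user* message. Every provider accepts a
// plain user message, so no reasoning-block handling or per-model detection
// (e.g. an isReasoningModel flag) is required.
function createSyntheticUserMessage(nudgeText: string): ChatMessage {
  return { role: "user", content: `${NUDGE_MARKER}\n${nudgeText}` };
}

// Append the nudge to the real history just before sending the request.
function withPruningNudge(history: ChatMessage[], nudgeText: string): ChatMessage[] {
  return [...history, createSyntheticUserMessage(nudgeText)];
}
```

In this sketch, `withPruningNudge(messages, nudgeText)` returns a copy of the history with the nudge appended as the final user turn; a single code path covers all providers, which is the trade-off the reasoning above argues for.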
The following tools have been invoked and are available for pruning. This list does not mandate immediate action. Consider your current goals and the resources you need before discarding valuable tool inputs or outputs. Consolidate your prunes for efficiency; it is rarely worth pruning a single tiny tool output. Keep the context free of noise.
I have the following tool outputs available for pruning. I should consider my current goals and the resources I need before discarding valuable inputs or outputs. I should consolidate prunes for efficiency; it is rarely worth pruning a single tiny tool output.