Description
What specific problem does this solve?
When users have adjusted their temperature settings beyond the default (especially higher than 0.2), they experience frequent tool failures with errors like "Content appears to be truncated" and comments indicating omitted code (e.g., '// rest of code unchanged', '/* previous code */').
Who is affected: All users who customize temperature settings for more creative outputs
When this happens: During tool execution (write_to_file, apply_diff) with temperature > 0.2
Current behavior: Tool fails with an error, user must manually adjust temperature and retry the entire task
Expected behavior: System detects temperature-related failures and offers automatic recovery with temperature adjustment
Impact: Significant time wasted retrying tasks, frustration with corrupted outputs, loss of context
Additional context
Recent research and community observations show that higher temperatures (>0.2) significantly increase tool call failures in LLMs like Gemini 2.5 Pro. The randomness introduced by higher temperatures affects the model's ability to generate properly structured tool calls and complete file contents, leading to truncated outputs and malformed responses.
🛠️ Contributing & Technical Analysis
✅ I'm interested in providing technical analysis
✅ I understand implementation requires approval
Root Cause / Implementation Target
Higher temperature settings increase randomness in LLM outputs, causing:
- Malformed tool call syntax
- Truncated file contents with placeholder comments
- Incomplete structured outputs
The system currently handles API request failures with retry logic but lacks specific handling for tool execution failures caused by temperature settings.
Affected Components
Primary Files:
- `src/core/tools/writeToFileTool.ts` (lines 176-180): Contains the specific error detection for truncated content
- `src/core/assistant-message/presentAssistantMessage.ts` (lines 307-316): General error handling for tool execution
- `src/core/task/Task.ts` (lines 1940-1950): Records tool errors via the `recordToolError` method
- `webview-ui/src/components/chat/ChatView.tsx` (lines 258-265): Handles the API failure retry UI
Secondary Impact:
- `src/core/prompts/responses.ts`: Error message formatting
- `src/api/transform/model-params.ts`: Temperature parameter handling
- `webview-ui/src/components/settings/TemperatureControl.tsx`: Temperature UI component
- `src/core/webview/ClineProvider.ts`: State management for API configuration
Current Implementation Analysis
The system has robust retry mechanisms for API failures (`api_req_failed`), but tool errors are handled differently:
- Tool errors call `recordToolError` and display error messages
- No automatic retry mechanism exists for tool failures
- Temperature is accessed via `apiConfiguration.modelTemperature`
- The chat UI already supports retry buttons for API failures
Proposed Implementation
Step 1: Create Temperature-Aware Error Detection
- File: `src/core/tools/utils/temperatureErrorDetection.ts` (new)
- Detect tool errors that match temperature-related patterns
- Check whether the current temperature is > 0.2 and custom-set
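The Step 1 detector could look like the minimal sketch below. The truncation patterns and the 0.2 threshold are taken from this issue's description; the function name and exact regexes are illustrative, not the actual Roo Code implementation.

```typescript
// Hypothetical sketch of src/core/tools/utils/temperatureErrorDetection.ts.
// Patterns mirror the error text quoted in this issue.
const TRUNCATION_PATTERNS: RegExp[] = [
	/content appears to be truncated/i,
	/\/\/\s*rest of code unchanged/i,
	/\/\*\s*previous code\s*\*\//i,
]

const TEMPERATURE_THRESHOLD = 0.2

export function isTemperatureRelatedError(
	errorMessage: string,
	modelTemperature: number | undefined,
): boolean {
	// Only flag the error when the user has set a custom temperature above the threshold;
	// an unset temperature means the provider default is in use.
	if (modelTemperature === undefined || modelTemperature <= TEMPERATURE_THRESHOLD) {
		return false
	}
	return TRUNCATION_PATTERNS.some((pattern) => pattern.test(errorMessage))
}
```

Keeping the patterns in one module makes it easy to extend the list as new failure signatures are observed, without touching the tool files themselves.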
Step 2: Add New Ask Type for Temperature Errors
- File: `src/shared/ExtensionMessage.ts`
- Add a new `ClineAsk` type: `"temperature_tool_error"`
- Include error details and the current temperature
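The Step 2 change might take the following shape. The existing `ClineAsk` union in `src/shared/ExtensionMessage.ts` is not reproduced here (only `api_req_failed` is known from this issue; the other variants are elided), and the payload interface is an illustration of what "error details and current temperature" could carry.

```typescript
// Sketch only: the real union has more variants than shown.
export type ClineAsk =
	| "api_req_failed"
	// ...other existing variants elided...
	| "temperature_tool_error"

// Illustrative payload sent with the ask so the UI can explain the failure.
export interface TemperatureToolErrorDetails {
	toolName: string // e.g. "write_to_file" or "apply_diff"
	errorMessage: string
	currentTemperature: number
	suggestedTemperature: number // 0.2, per this proposal
}
```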
Step 3: Implement Error Handler in Tool Execution
- Files: `writeToFileTool.ts`, `applyDiffTool.ts`
- Before calling `recordToolError`, check whether the error is temperature-related
- If so, trigger the special ask flow instead of the standard error path
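A hedged sketch of the Step 3 integration point follows. `TaskLike` is a minimal stand-in for the real `Task` class, and the `ask`/`recordToolError` signatures are assumptions; the actual methods in the codebase may differ.

```typescript
// Minimal stand-in for the real Task; only the members this sketch needs.
interface TaskLike {
	modelTemperature?: number
	ask(type: string, payload: string): Promise<void>
	recordToolError(toolName: string): void
}

// Inline truncation heuristics, matching the error text quoted in this issue.
const TRUNCATION_HINTS = [/truncated/i, /rest of code unchanged/i, /previous code/i]

export async function handleToolFailure(
	task: TaskLike,
	toolName: string,
	errorMessage: string,
): Promise<void> {
	const temp = task.modelTemperature
	const looksTemperatureRelated =
		temp !== undefined && temp > 0.2 && TRUNCATION_HINTS.some((p) => p.test(errorMessage))

	if (looksTemperatureRelated) {
		// Route into the proposed "temperature_tool_error" ask flow so the UI
		// can offer retry-with-lower-temperature instead of a plain error.
		await task.ask(
			"temperature_tool_error",
			JSON.stringify({ toolName, errorMessage, currentTemperature: temp, suggestedTemperature: 0.2 }),
		)
		return
	}
	// Existing behavior: record the tool error normally.
	task.recordToolError(toolName)
}
```

Because the check happens before `recordToolError`, the standard error path is untouched for all non-temperature failures.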
Step 4: Add UI Handler for Temperature Errors
- File: `webview-ui/src/components/chat/ChatView.tsx`
- Add a case for `"temperature_tool_error"` in the ask handling
- Show an error explanation with temperature info
- Primary button: "Reduce Temperature to 0.2 & Retry"
- Secondary button: "Cancel"
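The Step 4 button wiring could be factored as below. The existing ask-to-button mapping in `ChatView.tsx` is not reproduced; only the two labels proposed in this issue are shown, and the function and field names are illustrative.

```typescript
interface AskButtons {
	primaryButtonText: string
	secondaryButtonText: string
}

// Hypothetical helper mapping an ask type to its button labels.
export function buttonsForAsk(ask: string): AskButtons | undefined {
	switch (ask) {
		case "temperature_tool_error":
			// Primary lowers the current profile's temperature and retries;
			// secondary cancels and falls back to normal error handling.
			return {
				primaryButtonText: "Reduce Temperature to 0.2 & Retry",
				secondaryButtonText: "Cancel",
			}
		default:
			// Other ask types keep their existing handling.
			return undefined
	}
}
```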
Step 5: Implement Retry Mechanism with Temperature Change
- File: `src/core/webview/webviewMessageHandler.ts`
- Handle the retry response by:
  - Updating the temperature to 0.2 for the current API profile via `setApiConfiguration`
  - Removing corrupted messages from the conversation
  - Re-sending the last user message
- Important: Temperature change only affects the current API profile, not global settings
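The three retry sub-steps above can be sketched as one handler. `ProviderLike` stands in for `ClineProvider`; `setApiConfiguration`, `resendUserMessage`, and the message shape are assumptions drawn from this proposal, not verified APIs from the codebase.

```typescript
interface ChatMessage {
	role: "user" | "assistant"
	content: string
}

interface ProviderLike {
	// Per this proposal, updates only the current API profile's configuration.
	setApiConfiguration(update: { modelTemperature: number }): Promise<void>
	resendUserMessage(message: ChatMessage): Promise<void>
}

export async function retryWithReducedTemperature(
	provider: ProviderLike,
	messages: ChatMessage[],
): Promise<ChatMessage[]> {
	// 1. Lower the temperature for the current API profile only.
	await provider.setApiConfiguration({ modelTemperature: 0.2 })

	// 2. Drop the corrupted assistant turn(s) after the last user message.
	const lastUserIndex = messages.map((m) => m.role).lastIndexOf("user")
	if (lastUserIndex === -1) return messages // nothing to retry
	const cleaned = messages.slice(0, lastUserIndex + 1)

	// 3. Re-send the last user message with the reduced temperature in effect.
	await provider.resendUserMessage(cleaned[lastUserIndex])
	return cleaned
}
```

Returning the cleaned history keeps the handler pure apart from the two provider calls, which simplifies the unit tests listed under Testing Requirements.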
Code Architecture Considerations
- Reuse existing retry patterns from API failure handling
- Maintain separation between tool error detection and UI
- Temperature change updates only the current API profile's configuration
- Use existing state management for API configuration updates
- Users can manually change temperature back in settings if desired
Testing Requirements
- Unit Tests:
- Temperature error detection logic
- Message cleanup for retry
- API profile-specific temperature update
- Integration Tests:
- Full retry flow with temperature reduction
- Verify temperature change is profile-specific
- Edge Cases:
- Temperature already at 0.2 or below
- Multiple concurrent tool failures
- User cancellation during retry
Performance Impact
- Minimal - only adds checks on tool errors
- No impact on successful operations
- Retry mechanism reuses existing infrastructure
Security Considerations
- No security implications
- Temperature changes are user-initiated
- Configuration changes follow existing patterns
Migration Strategy
Not applicable - new feature addition
Rollback Plan
Feature can be disabled via feature flag if issues arise
Dependencies and Breaking Changes
- No external dependencies
- No breaking changes
- Backward compatible with existing error handling
Implementation Complexity
- Estimated effort: Medium
- Risk level: Low
- Prerequisites: None
Acceptance Criteria
Given a user has set a custom temperature > 0.2 on their current API profile
When a tool execution fails with truncation/omission errors
Then the system displays an error message explaining the temperature issue
And shows a "Reduce Temperature to 0.2 & Retry" button
And when clicked, updates the current API profile's temperature to 0.2
And clears the corrupted response from context
And retries the last user message
And the temperature setting remains at 0.2 for the current API profile only
Given a user encounters a temperature-related tool error
When they click "Cancel" instead of retry
Then the error is handled normally without retry
And the task continues with the error state
And the temperature setting remains unchanged
Given a retry is initiated with temperature reduction
When the tool executes successfully
Then the task continues normally
And the current API profile's temperature remains at 0.2
And other API profiles retain their original temperature settings
And the user can manually adjust the temperature back in settings if desired