Small changes that make a big difference in your daily workflow:

- **Chat Input Focus**: The chat input now automatically focuses when you create a new chat via the + button, so you can start typing immediately
- **Gemini 2.5 Pro Thinking Budget Flexibility**: The minimum thinking budget is reduced from 1024 to 128 tokens, perfect for tasks that need quick, focused responses without extensive reasoning. You'll need to manually adjust the thinking budget to 128 in your settings to take advantage of this feature.
- **Multi-Folder Workspace Support**: Code indexing now works correctly across all folders in multi-folder workspaces
- **Checkpoint Timing**: Checkpoints now save before file changes are made, allowing easy undo of unwanted modifications
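As a rough illustration of the lowered Gemini thinking-budget floor (the helper name and settings shape below are assumptions for the sketch, not Roo Code's actual code), a user-supplied budget can be clamped to the new minimum like so:

```typescript
// Illustrative sketch only: clamp a Gemini 2.5 Pro thinking budget to the
// new 128-token minimum. Names are assumptions, not the extension's API.
const MIN_THINKING_BUDGET = 128; // lowered from 1024 in this release

function clampThinkingBudget(requested: number, maxBudget: number): number {
  // Keep the budget within [MIN_THINKING_BUDGET, maxBudget].
  return Math.min(Math.max(requested, MIN_THINKING_BUDGET), maxBudget);
}

console.log(clampThinkingBudget(64, 32768)); // 128 (raised to the minimum)
console.log(clampThinkingBudget(128, 32768)); // 128 (now allowed as-is)
```

Before this release, the same request for 128 tokens would have been raised to the old 1024-token floor.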
Critical fixes that improve stability and compatibility:

- **Token Calculation**: Fixed rounding errors that prevented certain models, such as GLM-4.5, from working properly by ensuring max output tokens are correctly rounded up
- **XML Parsing**: Fixed errors when using the apply_diff tool with complex XML files containing special characters by implementing proper CDATA sections
- **MCP Error Messages**: Fixed an issue where MCP server errors displayed raw translation keys instead of proper error messages
- **Memory Leak Fix**: Resolved a critical memory leak in long conversations that was causing excessive memory usage and grey screens. Virtual scrolling now limits viewport rendering and optimizes caching for stable performance regardless of conversation length.
- **Disabled MCP Servers**: Fixed an issue where disabled MCP servers were still starting processes and consuming resources. Disabled servers now truly stay disabled, with clear status indicators and immediate cleanup when MCP is globally disabled.
- **MCP Server Refresh**: Settings changes no longer trigger unnecessary MCP server refreshes
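A minimal sketch of the mechanics behind two of these fixes (the helper names below are illustrative assumptions, not the extension's actual code):

```typescript
// Token calculation: deriving a max-output-token budget with Math.ceil
// instead of rounding down keeps the result from undershooting a model's
// required minimum (illustrative helper, not Roo Code's real code).
function maxOutputTokens(contextWindow: number, share: number): number {
  return Math.ceil(contextWindow * share);
}

// XML parsing: wrapping payloads in a CDATA section lets characters like
// `<`, `>`, and `&` pass through unescaped. A literal "]]>" cannot appear
// inside a CDATA section, so it is split across two adjacent sections.
function wrapCdata(content: string): string {
  return `<![CDATA[${content.replace(/]]>/g, "]]]]><![CDATA[>")}]]>`;
}

console.log(maxOutputTokens(131072, 0.2)); // 26215, not truncated to 26214
console.log(wrapCdata("if (a < b && c > d) { }"));
```

Rounding down instead would silently shave a token off the budget, which is enough to break models with strict minimums.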
### Provider Updates

- **GPT-5 Model Support**: Added support for OpenAI's latest GPT-5 model family:
  - `gpt-5-2025-08-07`: The full GPT-5 model, now set as the default for the OpenAI Native provider
  - `gpt-5-mini-2025-08-07`: A smaller, faster variant for quick responses
  - `gpt-5-nano-2025-08-07`: The most compact version for resource-constrained scenarios
- **GPT-5 Verbosity Controls**: New settings to control the model's output detail level (low, medium, or high), allowing you to fine-tune response length and detail based on your needs
- **Fireworks AI Models**: Added the GLM-4.5 series (355B and 106B parameters with 128K context) and OpenAI gpt-oss models (20b for edge deployments, 120b for production use) to expand the model selection
- **Claude Opus 4.1 Support**: Added support for the new Claude Opus 4.1 model across the Anthropic, Claude Code, Bedrock, Vertex AI, and LiteLLM providers, with 8192 max tokens, reasoning budget support, and prompt caching
- **Z AI Provider**: Z AI (formerly Zhipu AI) is now available with GLM-4.5 series models, offering dual regional support for both international and mainland China users
- **Fireworks AI Provider**: New provider offering hosted versions of popular open-source models such as Kimi and Qwen
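To illustrate the verbosity idea (the type and option names below are assumptions for the sketch, not OpenAI's or Roo Code's actual request schema), a verbosity preference maps onto request options roughly like this:

```typescript
// Hypothetical sketch of threading a verbosity preference into request
// options; names are assumptions, not a real provider schema.
type Verbosity = "low" | "medium" | "high";

interface Gpt5RequestOptions {
  model: string;
  verbosity: Verbosity;
}

function buildRequestOptions(
  model: string,
  verbosity: Verbosity = "medium",
): Gpt5RequestOptions {
  return { model, verbosity };
}

console.log(buildRequestOptions("gpt-5-2025-08-07", "low"));
```

"low" favors terse answers, "high" favors fuller explanations, with "medium" as the default middle ground.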
### Misc. Improvements

- **Cloud Integration**: Reverted to using the npm package version of @roo-code/cloud for improved build stability and maintenance
- **Slash Command Interpolation**: Interpolation is now skipped for commands that don't exist
- **Linter Coverage**: Linting now covers locale README files
- **Cloud Service Events**: The cloud service migrated from a callback-based to an event-based architecture