This week brings two significant releases: v3.36.3 and v3.36.4. These updates focus on making Roo Code more transparent, reliable, and easier to troubleshoot, while adding new capabilities for browser automation and advanced AI models.
One of the most significant improvements in this release is making context management visible ([#9795](https://github.com/RooCodeInc/Roo-Code/pull/9795)). Previously, when Roo Code condensed or truncated your conversation history to fit within token limits, this happened silently in the background. This left you wondering why responses might seem to lose track of earlier context.
Now you get real-time feedback:
- **In-progress indicators** appear when context management begins
- **Condensation summaries** show how many tokens were saved (e.g., "Reduced from 45,000 to 12,000 tokens") with an expandable summary of what was condensed
- **Truncation notifications** tell you when older messages are removed, including which messages and why
This helps you understand when you're approaching context limits and make better decisions about when to start fresh conversations.
### Improved Error Messages
We've made two major improvements to error handling that will save you time when things go wrong.
**For OpenAI users** ([#9639](https://github.com/RooCodeInc/Roo-Code/pull/9639)), error messages are now significantly more detailed. Instead of seeing generic error text, you get the actual error details from OpenAI's API. This makes it easier to understand what went wrong, whether it's a billing issue, rate limit, or model availability problem.
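To give a rough sense of what surfacing API error details looks like, here is a minimal sketch of extracting the useful fields from an OpenAI-style error payload. The helper name is ours, and the field names follow OpenAI's documented error shape, not Roo Code's internals:

```python
import json

def describe_api_error(body: str) -> str:
    """Pull the useful fields out of an OpenAI-style error payload.

    Illustrative only; the shape {"error": {"message", "type", "code"}}
    follows OpenAI's documented errors, not Roo Code's actual code.
    """
    try:
        err = json.loads(body).get("error", {})
    except (ValueError, AttributeError):
        return body  # not JSON: fall back to the raw text
    parts = [err.get(key) for key in ("type", "code", "message")]
    detail = " | ".join(str(p) for p in parts if p)
    return detail or body

sample = '{"error": {"message": "Rate limit reached", "type": "rate_limit_error", "code": "rate_limit_exceeded"}}'
print(describe_api_error(sample))
# rate_limit_error | rate_limit_exceeded | Rate limit reached
```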
**For everyone** ([#9985](https://github.com/RooCodeInc/Roo-Code/pull/9985)), error messages now include an info icon you can click to see the complete error details in a modal dialog. This gives you clean, concise error messages by default, with the ability to dive into full technical details when you need them. There's also a copy button to easily share error details when reporting issues or seeking help.
## New Capabilities
### Browser Screenshots
The browser automation tool can now save screenshots ([#9963](https://github.com/RooCodeInc/Roo-Code/pull/9963)). This is useful for:
- **Documenting web UI states** during testing or development
- **Creating visual records** of browser automation workflows
- **Debugging visual issues** by capturing exactly what the browser sees at specific moments
- **Building tutorials** with step-by-step screenshots automatically generated during automation
When Roo uses the browser tool, it can now capture and save visual states to specific file paths.
### Extra-High Reasoning Effort
If you use OpenAI's gpt-5.1-codex-max model, you can now select "extra-high" reasoning effort ([#9900](https://github.com/RooCodeInc/Roo-Code/pull/9900)). This gives you more control over the model's thinking depth for complex coding tasks that benefit from extended reasoning time.
### Native Tools on OpenRouter
When using models through OpenRouter, Roo Code now automatically defaults to native tool calling when the model supports it ([#9878](https://github.com/RooCodeInc/Roo-Code/pull/9878)). Native tool calling is more reliable and efficient than simulated approaches, so this change means better performance and fewer errors. No manual configuration needed.
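OpenRouter's public `/models` listing advertises a `supported_parameters` array for each model, and `tools` in that array indicates native tool calling. As an illustrative sketch of the kind of check involved (not Roo Code's actual code):

```python
def supports_native_tools(model_info: dict) -> bool:
    # "supported_parameters" comes from OpenRouter's /models listing;
    # "tools" in it means the model accepts native tool/function calls.
    # Treat the field name as an assumption drawn from OpenRouter's docs.
    return "tools" in model_info.get("supported_parameters", [])

def pick_tool_strategy(model_info: dict) -> str:
    # Default to native tool calling when advertised, else fall back
    # to prompt-based (simulated) tool use.
    return "native" if supports_native_tools(model_info) else "prompt-based"
```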
## Performance and Reliability
### Faster Token Counting
We've eliminated unnecessary token counting API requests ([#9884](https://github.com/RooCodeInc/Roo-Code/pull/9884)) for Anthropic, Gemini, and MiniMax providers. Previously, Roo Code was making extra API calls just to count tokens, which added latency and cost. Now we use fast local token estimation (tiktoken heuristics) that's accurate enough for practical purposes while being essentially instantaneous and free.
The result: snappier responses and reduced API costs, especially during long conversations.
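As a rough illustration of local estimation, the commonly cited rule of thumb is about 4 characters per token for English text. Roo Code's actual tiktoken-based heuristics will differ; this sketch just shows why the local approach is effectively instantaneous:

```python
def estimate_tokens(text: str) -> int:
    """Cheap local token estimate: no network call, no latency, no cost.

    Uses the ~4-characters-per-token rule of thumb for English text.
    Illustrative only; Roo Code uses tiktoken-based heuristics.
    """
    return max(1, round(len(text) / 4))

print(estimate_tokens("Count the tokens in this sentence without an API call."))
```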
### MCP Tool Streaming
Fixed an issue where MCP tool streaming could fail prematurely ([#9993](https://github.com/RooCodeInc/Roo-Code/pull/9993)). If you're using MCP servers that stream tool results, you'll experience more reliable data handling with no premature cutoffs.
### TODO List Display Order
TODO lists now display in the correct execution order ([#9991](https://github.com/RooCodeInc/Roo-Code/pull/9991)). Previously, the order could get scrambled. Now your tasks always appear in the order they'll be executed.
## Provider Updates
### DeepSeek V3.2
DeepSeek has updated their flagship model with significant improvements:
- **50% price reduction**
- **Native tools support** for more reliable tool calling
- **8K output tokens** for longer responses
If you're using DeepSeek for cost-effective AI assistance, this update delivers better performance at a better price.
### xAI Models
The xAI model catalog has been updated with:
- **Corrected context windows** so you get the full capacity of each model
- **Image support** for grok-3 and grok-3-mini
- **Optimized tool preferences** for better tool-calling behavior
### Expanded Model Support
- **Bedrock** now supports additional models: Kimi, MiniMax, and Qwen
- **Baseten** added DeepSeek V3.2 support with optimized model definitions
## Architecture Improvements
### Decoupled Tool System
Tools are now decoupled from the system prompt ([#9784](https://github.com/RooCodeInc/Roo-Code/pull/9784)), creating a more modular architecture. While this is primarily an internal improvement, it sets the foundation for more flexible tool configuration and better debugging in future releases.
### Cleaner Telemetry
Telemetry has been refined to filter out 429 rate limit errors ([#9987](https://github.com/RooCodeInc/Roo-Code/pull/9987)), providing cleaner metrics for understanding actual issues versus expected rate limiting behavior.
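In essence, the filter drops expected rate-limit errors before they're reported. A minimal sketch, assuming a simple event shape of our own invention rather than the actual telemetry schema:

```python
EXPECTED_STATUSES = {429}  # rate limiting is expected, not a defect

def should_report(event: dict) -> bool:
    """Keep telemetry focused on genuine failures.

    The "status" field is an assumed event shape for illustration only.
    """
    return event.get("status") not in EXPECTED_STATUSES

events = [{"status": 500}, {"status": 429}, {"status": 503}]
reported = [e for e in events if should_report(e)]
# Only the 500 and 503 events survive; the 429 is filtered out.
```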
### Multiple Bug Fixes
Both releases include numerous stability improvements and bug fixes across providers, UI theming, API timeout handling, and configuration edge cases.
## Looking Forward
These releases focus on making Roo Code more transparent and reliable. By surfacing what's happening under the hood (from context management to detailed error information), we're helping you work more effectively and troubleshoot faster.
The combination of better transparency, new capabilities, and performance improvements makes v3.36.3 and v3.36.4 significant updates for all users.