Releases: RooCodeInc/Roo-Code
Release v3.36.5
[3.36.5] - 2025-12-11
- Add: GPT-5.2 model to openai-native provider (PR #10024 by @hannesrudolph)
- Add: Toggle for Enter key behavior in chat input allowing users to configure whether Enter sends or creates new line (#8555 by @lmtr0, PR #10002 by @hannesrudolph)
- Add: App version to telemetry exception captures and filter 402 errors (PR #9996 by @daniel-lxs)
- Fix: Handle empty Gemini responses and reasoning loops to prevent infinite retries (PR #10007 by @hannesrudolph)
- Fix: Add missing tool_result blocks to prevent API errors when tool results are expected (PR #10015 by @daniel-lxs)
- Fix: Filter orphaned tool_results when more results than tool_uses to prevent message validation errors (PR #10027 by @daniel-lxs; see the pairing sketch below)
- Fix: Add general API endpoints for Z.ai provider (#9879 by @richtong, PR #9894 by @roomote)
- Fix: Apply versioned settings on nightly builds (PR #9997 by @hannesrudolph)
- Remove: Glama provider (PR #9801 by @hannesrudolph)
- Remove: Deprecated list_code_definition_names tool (PR #10005 by @hannesrudolph)
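
A note on the tool_result fixes above (PRs #10015 and #10027): providers that use Anthropic-style messages expect every tool_use block to be answered by exactly one tool_result with a matching ID, and unmatched or extra results fail message validation. The sketch below is not Roo Code's implementation; it only illustrates the pairing rule with hypothetical, trimmed-down block types.

```typescript
// Hypothetical, minimal block shapes; real message types carry more fields.
type ToolUse = { type: "tool_use"; id: string; name: string; input: unknown };
type ToolResult = { type: "tool_result"; tool_use_id: string; content: string };
type Block = ToolUse | ToolResult | { type: "text"; text: string };

// Drop tool_result blocks whose tool_use_id has no matching tool_use, and
// report which tool_use ids still lack a result so a caller can stub them.
function reconcileToolBlocks(blocks: Block[]) {
  const useIds = new Set(
    blocks.filter((b): b is ToolUse => b.type === "tool_use").map((b) => b.id),
  );
  const kept = blocks.filter(
    (b) => b.type !== "tool_result" || useIds.has(b.tool_use_id),
  );
  const answered = new Set(
    kept
      .filter((b): b is ToolResult => b.type === "tool_result")
      .map((b) => b.tool_use_id),
  );
  const missing = [...useIds].filter((id) => !answered.has(id));
  return { kept, missing };
}
```

The `missing` list is what a caller could use to insert placeholder results before the next request, so an incomplete turn does not invalidate the whole history.
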
Release v3.36.4
[3.36.4] - 2025-12-10
- Add error details modal with on-demand display for improved error visibility when debugging issues (PR #9985 by @roomote)
- Fix: Prevent premature rawChunkTracker clearing for MCP tools, improving reliability of MCP tool streaming (PR #9993 by @daniel-lxs)
- Fix: Filter out 429 rate limit errors from API error telemetry for cleaner metrics (PR #9987 by @daniel-lxs; see the sketch below)
- Fix: Correct TODO list display order in chat view to show items in proper sequence (PR #9991 by @roomote)
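
On the 429 filtering above (PR #9987): rate-limit responses are an expected operating condition rather than a bug, so capturing them as exceptions mostly adds noise to error metrics. A minimal sketch of the pattern, with a hypothetical capture hook standing in for the real telemetry client:

```typescript
// Hypothetical error and hook shapes; the status check is the point.
type ApiError = { status?: number; message: string };

function shouldCaptureApiError(error: ApiError): boolean {
  // 429 (rate limited) is an expected, user-facing condition, not a defect,
  // so keep it out of exception telemetry.
  return error.status !== 429;
}

function reportApiError(error: ApiError, capture: (e: ApiError) => void): void {
  if (shouldCaptureApiError(error)) {
    capture(error);
  }
}
```

The 402 filtering noted in the v3.36.5 entries above extends the same idea to another expected status code.
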
Release v3.36.3
[3.36.3] - 2025-12-09
- Add support for minimal and medium reasoning effort levels in the Gemini provider (PR #9973 by @hannesrudolph)
- Unified context-management architecture with improved UX for better conversation context handling (PR #9795 by @hannesrudolph)
- Update DeepSeek models to V3.2 with new pricing (PR #9962 by @hannesrudolph)
- Add versioned settings support with minPluginVersion gating for Roo provider (PR #9934 by @hannesrudolph)
- Make Architect mode save plans to the /plans directory and gitignore it (PR #9944 by @brunobergher)
- Add streaming tool stats and token usage throttling for improved performance (PR #9926 by @hannesrudolph)
- Add search_replace native tool for single-replacement operations (PR #9918 by @hannesrudolph)
- Add timeout configuration to OpenAI Compatible Provider Client (PR #9898 by @dcbartlett)
- Add xhigh reasoning effort option for gpt-5.1-codex-max (PR #9900 by @andrewginns)
- Add tool preferences configuration for xAI models (PR #9923 by @hannesrudolph)
- Add ability to save screenshots from the browser tool (PR #9963 by @mrubens)
- Add DeepSeek V3-2 support for Baseten Provider (PR #9861 by @AlexKer)
- Add Kimi, MiniMax, and Qwen model configurations for Bedrock (PR #9905 by @app/roomote)
- Add announcement support CTA and social icons (PR #9945 by @hannesrudolph)
- Default to using native tools when supported on OpenRouter (PR #9878 by @mrubens)
- Update xAI models catalog with latest models (PR #9872 by @hannesrudolph)
- Tweaks to Baseten model definitions (PR #9866 by @mrubens)
- Stop making count_tokens requests to improve performance (PR #9884 by @mrubens)
- Refactor: Decouple tools from system prompt for better modularity (PR #9784 by @daniel-lxs)
- Refactor: Consolidate ThinkingBudget components and fix disable handling (PR #9930 by @hannesrudolph)
- Add API error telemetry to OpenRouter provider (PR #9953 by @daniel-lxs)
- Evals UI: Make eval runs deletable (PR #9909 by @mrubens)
- Web: Product pages (PR #9865 by @brunobergher)
- Fix: Respect explicit supportsReasoningEffort array values (PR #9970 by @hannesrudolph)
- Fix: Validate and fix tool_result IDs before API requests (PR #9952 by @daniel-lxs)
- Fix: Always show tool protocol selector for openai-compatible provider (PR #9966 by @hannesrudolph)
- Fix: Return undefined instead of 0 for disabled API timeout (PR #9960 by @hannesrudolph)
- Fix: Display actual API error message instead of generic text on retry (PR #9954 by @hannesrudolph)
- Fix: Add finish_reason processing to xai.ts provider (PR #9929 by @daniel-lxs)
- Fix: Exclude apply_diff from native tools when diffEnabled is false (PR #9920 by @app/roomote)
- Fix: Suppress 'ask promise was ignored' error in handleError (PR #9914 by @daniel-lxs)
- Fix: Process finish_reason to emit tool_call_end events (PR #9927 by @daniel-lxs; see the streaming sketch below)
- Fix: Use foreground color for context-management icons (PR #9912 by @hannesrudolph)
- Fix: Sanitize removed/invalid API providers to prevent infinite loop (PR #9869 by @hannesrudolph)
- Improve OpenAI error messages to be more useful (PR #9639 by @mrubens)
- Improve cloud job error logging for RCC provider errors (PR #9924 by @cte)
- Improve error logs for parseToolCall exceptions (PR #9857 by @cte)
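
Regarding the finish_reason handling above (PRs #9927 and #9929): in OpenAI-compatible streaming, tool-call arguments arrive as incremental deltas and the closing chunk reports finish_reason "tool_calls", which is the cue that the accumulated call is complete. The generator below only illustrates that contract with hypothetical chunk and event types; it is a sketch, not the provider code.

```typescript
// Hypothetical, trimmed-down chunk shape for an OpenAI-compatible stream.
type StreamChunk = {
  choices: Array<{
    delta?: {
      tool_calls?: Array<{
        index: number;
        id?: string;
        function?: { name?: string; arguments?: string };
      }>;
    };
    finish_reason?: "stop" | "tool_calls" | null;
  }>;
};

type ToolCallEnd = { type: "tool_call_end"; id: string; name: string; args: string };

// Accumulate argument deltas per tool-call index and emit a tool_call_end
// event once the stream reports finish_reason === "tool_calls".
function* processChunks(chunks: Iterable<StreamChunk>): Generator<ToolCallEnd> {
  const calls = new Map<number, { id: string; name: string; args: string }>();
  for (const chunk of chunks) {
    const choice = chunk.choices[0];
    if (!choice) continue;
    for (const tc of choice.delta?.tool_calls ?? []) {
      const entry = calls.get(tc.index) ?? { id: "", name: "", args: "" };
      if (tc.id) entry.id = tc.id;
      if (tc.function?.name) entry.name = tc.function.name;
      if (tc.function?.arguments) entry.args += tc.function.arguments;
      calls.set(tc.index, entry);
    }
    if (choice.finish_reason === "tool_calls") {
      for (const { id, name, args } of calls.values()) {
        yield { type: "tool_call_end", id, name, args };
      }
      calls.clear();
    }
  }
}
```
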
Release v3.36.2
[3.36.2] - 2025-12-04
- Restrict GPT-5 tool set to apply_patch for improved compatibility (PR #9853 by @hannesrudolph)
- Add dynamic settings support for Roo models, allowing model-specific configurations to be fetched from the API (PR #9852 by @hannesrudolph)
- Fix: Resolve Chutes provider model fetching issue (PR #9854 by @cte)
Release v3.36.1
[3.36.1] - 2025-12-04
- Add MessageManager layer for centralized history coordination, fixing message synchronization issues (PR #9842 by @hannesrudolph)
- Fix: Prevent cascading truncation loop by only truncating visible messages (PR #9844 by @hannesrudolph)
- Fix: Handle unknown/invalid native tool calls to prevent extension freeze (PR #9834 by @daniel-lxs)
- Always enable reasoning for models that require it (PR #9836 by @cte)
- ChatView: Smoother stick-to-bottom behavior during streaming (PR #8999 by @hannesrudolph)
- UX: Improved error messages and documentation links (PR #9777 by @brunobergher)
- Fix: Overly rounded styling on follow-up question suggestions (PR #9829 by @brunobergher)
- Add symlink support for slash commands in the .roo/commands folder (PR #9838 by @mrubens; see the sketch below)
- Ignore input to the execa terminal process for safer command execution (PR #9827 by @mrubens)
- Be safer about large file reads (PR #9843 by @jr)
- Add gpt-5.1-codex-max model to OpenAI provider (PR #9848 by @hannesrudolph)
- Evals UI: Add filtering, bulk delete, tool consolidation, and run notes (PR #9837 by @hannesrudolph)
- Evals UI: Add multi-model launch and UI improvements (PR #9845 by @hannesrudolph)
- Web: New pricing page (PR #9821 by @brunobergher)
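
For the symlink support above (PR #9838): Node's fs.promises.stat follows symbolic links while lstat does not, so scanning a commands directory through stat treats a symlinked command file like a regular one. A minimal sketch under that assumption; the .md filter and function name are illustrative, not Roo Code's actual loader.

```typescript
import * as fs from "node:fs/promises";
import * as path from "node:path";

// List markdown command files in a directory, following symlinks so that a
// link to a shared command file elsewhere on disk is picked up as well.
async function listCommandFiles(dir: string): Promise<string[]> {
  const entries = await fs.readdir(dir, { withFileTypes: true });
  const files: string[] = [];
  for (const entry of entries) {
    const fullPath = path.join(dir, entry.name);
    try {
      // stat() resolves symlinks; broken links throw, so skip those quietly.
      const stats = await fs.stat(fullPath);
      if (stats.isFile() && entry.name.endsWith(".md")) {
        files.push(fullPath);
      }
    } catch {
      continue;
    }
  }
  return files;
}
```
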
Release v3.36.0
[3.36.0] - 2025-12-04
- Fix: Restore context when rewinding after condense (#8295 by @hannesrudolph, PR #9665 by @hannesrudolph)
- Add reasoning_details support to Roo provider for enhanced model reasoning visibility (PR #9796 by @app/roomote)
- Default to native tools for all models in the Roo provider for improved performance (PR #9811 by @mrubens)
- Enable search_and_replace for Minimax models (PR #9780 by @mrubens)
- Fix: Resolve Vercel AI Gateway model fetching issues (PR #9791 by @cte)
- Fix: Apply conservative max tokens for Cerebras provider (PR #9804 by @sebastiand-cerebras)
- Fix: Remove omission detection logic to eliminate false positives (#9785 by @Michaelzag, PR #9787 by @app/roomote)
- Refactor: Remove deprecated insert_content tool (PR #9751 by @daniel-lxs)
- Chore: Hide parallel tool calls experiment and disable feature (PR #9798 by @hannesrudolph)
- Update next.js documentation site dependencies (PR #9799 by @jr)
- Fix: Correct download count display on homepage (PR #9807 by @mrubens)
Release v3.35.5
[3.35.5] - 2025-12-03
- Feat: Add provider routing selection for OpenRouter embeddings (#9144 by @SannidhyaSah, PR #9693 by @SannidhyaSah)
- Default Minimax M2 to native tool calling (PR #9778 by @mrubens)
- Sanitize the native tool calls to fix a bug with Gemini (PR #9769 by @mrubens)
- UX: Updates to CloudView (PR #9776 by @roomote)
Release v3.35.4
[3.35.4] - 2025-12-02
- Fix: Handle malformed native tool calls to prevent hanging (PR #9758 by @daniel-lxs; see the sketch below)
- Fix: Remove reasoning toggles for GLM-4.5 and GLM-4.6 on z.ai provider (PR #9752 by @roomote)
- Refactor: Remove line_count parameter from write_to_file tool (PR #9667 by @hannesrudolph)
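
On the malformed tool-call handling above (PR #9758): models sometimes emit tool-call arguments that are not valid JSON, and a caller that waits for well-formed input can leave the turn hanging. A hedged sketch of the general fail-fast pattern, using a hypothetical result shape rather than Roo Code's types:

```typescript
// Hypothetical result shape; the point is to fail fast with a structured
// error instead of stalling on unparseable arguments.
type ParsedCall =
  | { ok: true; name: string; args: Record<string, unknown> }
  | { ok: false; name: string; error: string };

function parseNativeToolCall(name: string, rawArguments: string): ParsedCall {
  try {
    const args = JSON.parse(rawArguments);
    if (typeof args !== "object" || args === null || Array.isArray(args)) {
      return { ok: false, name, error: "arguments must be a JSON object" };
    }
    return { ok: true, name, args: args as Record<string, unknown> };
  } catch (e) {
    return { ok: false, name, error: `invalid JSON arguments: ${String(e)}` };
  }
}
```

The error branch can then be surfaced back to the model as a tool failure so the conversation keeps moving instead of waiting indefinitely.
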
Release v3.35.3
Release v3.35.2
[3.35.2] - 2025-12-01
- Allow models to contain default temperature settings for provider-specific optimal defaults (PR #9734 by @mrubens; see the sketch below)
- Add tag-based native tool calling detection for Roo provider models (PR #9735 by @mrubens)
- Enable native tool support for all LiteLLM models by default (PR #9736 by @mrubens)
- Pass app version to provider for improved request tracking (PR #9730 by @cte)
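
For the default-temperature change above (PR #9734): the idea is that model metadata can carry a provider-recommended temperature that applies only when the user has not set one. A small sketch with hypothetical field names:

```typescript
// Hypothetical model metadata; field names are illustrative only.
interface ModelInfo {
  id: string;
  defaultTemperature?: number;
}

// The user's setting wins; otherwise fall back to the model's recommended
// default, and finally to a global fallback when neither is present.
function resolveTemperature(
  model: ModelInfo,
  userTemperature: number | undefined,
  globalDefault = 1.0,
): number {
  return userTemperature ?? model.defaultTemperature ?? globalDefault;
}
```
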