# Roo Code Changelog

## [3.25.23] - 2025-08-22

- feat: add custom base URL support for Requesty provider (thanks @requesty-JohnCosta27!)
- feat: add DeepSeek V3.1 model to Chutes AI provider (#7294 by @dmarkey, PR by @app/roomote)
- Temporarily revert "feat: enable loading Roo modes from multiple files in .roo/modes directory" to fix a bug with mode installation

## [3.25.22] - 2025-08-22

- Add prompt caching support for Kimi K2 on Groq (thanks @daniel-lxs and @benank!)
- Add documentation links for global custom instructions in the UI (thanks @app/roomote!)

## [3.25.21] - 2025-08-21

- Ensure subtask results are provided to GPT-5 in the OpenAI Responses API
- Promote the experimental AssistantMessageParser to the default parser
- Update DeepSeek models' context window to 128k (thanks @JuanPerezReal)
- Enable grounding features for Vertex AI (thanks @anguslees)
- Allow orchestrator to pass TODO lists to subtasks
- Improved MDM handling
- Handle nullish token values in ContextCondenseRow to prevent UI crash (thanks @s97712)
- Improved context window error handling for OpenAI and other providers
- Add "installed" filter to Roo Marketplace (thanks @semidark)
- Improve filesystem access checks (thanks @elianiva)
- Support for loading Roo modes from multiple YAML files in the `.roo/modes/` directory (thanks @farazoman)
- Add Featherless provider (thanks @DarinVerheijke)

## [3.25.20] - 2025-08-19

- Add announcement for Sonic model

## [3.25.19] - 2025-08-19

- Fix issue where new users couldn't select the Roo Code Cloud provider (thanks @daniel-lxs!)

## [3.25.18] - 2025-08-19

- Add new stealth Sonic model through the Roo Code Cloud provider
- Fix: respect enableReasoningEffort setting when determining reasoning usage (#7048 by @ikbencasdoei, PR by @app/roomote)
- Fix: prevent duplicate LM Studio models with case-insensitive deduplication (#6954 by @fbuechler, PR by @daniel-lxs)
- Feat: simplify ask_followup_question prompt documentation (thanks @daniel-lxs!)
- Feat: simple read_file tool for single-file-only models (thanks @daniel-lxs!)
- Fix: Add missing zaiApiKey and doubaoApiKey to SECRET_STATE_KEYS (#7082 by @app/roomote)
- Feat: Add new models and update configurations for vscode-lm (thanks @NaccOll!)

## [3.25.17] - 2025-08-17

- Fix: Resolve terminal reuse logic issues

## [3.25.16] - 2025-08-16

- Add support for OpenAI gpt-5-chat-latest model (#7057 by @PeterDaveHello, PR by @app/roomote)
- Fix: Use native Ollama API instead of OpenAI compatibility layer (#7070 by @LivioGama, PR by @daniel-lxs)
- Fix: Prevent XML entity decoding in diff tools (#7107 by @indiesewell, PR by @app/roomote)
- Fix: Add type check before calling .match() on diffItem.content (#6905 by @pwilkin, PR by @app/roomote)
- Refactor task execution system: improve call stack management (thanks @catrielmuller!)
- Fix: Enable save button for provider dropdown and checkbox changes (thanks @daniel-lxs!)
- Add an API for resuming tasks by ID (thanks @mrubens!)
- Emit event when a task ask requires interaction (thanks @cte!)
- Make enhance with task history default to true (thanks @liwilliam2021!)
- Fix: Use cline.cwd as primary source for workspace path in codebaseSearchTool (thanks @NaccOll!)
- Hotfix multiple folder workspace checkpoint (thanks @NaccOll!)

## [3.25.15] - 2025-08-14

- Fix: Remove 500-message limit to prevent scrollbar jumping in long conversations (#7052, #7063 by @daniel-lxs, PR by @app/roomote)
- Fix: Reset condensing state when switching tasks (#6919 by @f14XuanLv, PR by @f14XuanLv)
- Fix: Implement sitemap generation in TypeScript and remove XML file (#5231 by @abumalick, PR by @abumalick)
- Fix: allowedMaxRequests and allowedMaxCost values not showing in the settings UI (thanks @chrarnoldus!)

## [3.25.14] - 2025-08-13

- Fix: Only include verbosity parameter for models that support it (#7054 by @eastonmeth, PR by @app/roomote)
- Fix: AWS Bedrock 1M context - Move anthropic_beta to additionalModelRequestFields (thanks @daniel-lxs!)
- Fix: Make cancelling requests more responsive by reverting recent changes

## [3.25.13] - 2025-08-12

- Add Sonnet 1M context checkbox to Bedrock
- Fix: add --no-messages flag to ripgrep to suppress file access errors (#6756 by @R-omk, PR by @app/roomote)
- Add support for AGENT.md alongside AGENTS.md (#6912 by @Brendan-Z, PR by @app/roomote)
- Remove deprecated GPT-4.5 Preview model (thanks @PeterDaveHello!)

## [3.25.12] - 2025-08-12

- Update: Claude Sonnet 4 context window configurable to 1 million tokens in the Anthropic provider (thanks @daniel-lxs!)
- Add: Minimal reasoning support to OpenRouter (thanks @daniel-lxs!)
- Fix: Add configurable API request timeout for local providers (#6521 by @dabockster, PR by @app/roomote)
- Fix: Add --no-sandbox flag to browser launch options (#6632 by @QuinsZouls, PR by @QuinsZouls)
- Fix: Ensure JSON files respect .rooignore during indexing (#6690 by @evermoving, PR by @app/roomote)
- Add: New Chutes provider models (#6698 by @fstandhartinger, PR by @app/roomote)
- Add: OpenAI gpt-oss models to Amazon Bedrock dropdown (#6752 by @josh-clanton-powerschool, PR by @app/roomote)
- Fix: Correct tool repetition detector to not block first tool call when limit is 1 (#6834 by @NaccOll, PR by @app/roomote)
- Fix: Improve checkpoint service initialization handling (thanks @NaccOll!)
- Update: Improve zh-TW Traditional Chinese locale (thanks @PeterDaveHello!)
- Add: Task expand and collapse translations (thanks @app/roomote!)
- Update: Exclude GPT-5 models from the 20% context window output token cap (thanks @app/roomote!)
- Fix: Truncate long model names in model selector to prevent overflow (thanks @app/roomote!)
- Add: Requesty base URL support (thanks @requesty-JohnCosta27!)

## [3.25.11] - 2025-08-11

- Add: Native OpenAI provider support for Codex Mini model (#5386 by @KJ7LNW, PR by @daniel-lxs)
- Add: IO Intelligence Provider support (thanks @ertan2002!)
- Fix: MCP startup issues and remove refresh notifications (thanks @hannesrudolph!)
- Fix: Improvements to GPT-5 OpenAI provider configuration (thanks @hannesrudolph!)
- Fix: Clarify codebase_search path parameter as optional and improve tool descriptions (thanks @app/roomote!)
- Fix: Bedrock provider workaround for LiteLLM passthrough issues (thanks @jr!)
- Fix: Token usage and cost being underreported on cancelled requests (thanks @chrarnoldus!)

## [3.25.10] - 2025-08-07

- Add support for GPT-5 (thanks Cline and @app/roomote!)
- Fix: Use CDATA sections in XML examples to prevent parser errors (#4852 by @hannesrudolph, PR by @hannesrudolph)
- Fix: Add missing MCP error translation keys (thanks @app/roomote!)

## [3.25.9] - 2025-08-07

- Fix: Resolve rounding issue with max tokens (#6806 by @markp018, PR by @mrubens)
- Add support for GLM-4.5 and OpenAI gpt-oss models in Fireworks provider (#6753 by @alexfarlander, PR by @app/roomote)
- Improve UX by focusing chat input when clicking plus button in extension menu (thanks @app/roomote!)

## [3.25.8] - 2025-08-06

- Fix: Prevent disabled MCP servers from starting processes and show correct status (#6036 by @hannesrudolph, PR by @app/roomote)
- Fix: Handle current directory path "." correctly in codebase_search tool (#6514 by @hannesrudolph, PR by @app/roomote)
- Fix: Trim whitespace from OpenAI base URL to fix model detection (#6559 by @vauhochzett, PR by @app/roomote)
- Feat: Reduce Gemini 2.5 Pro minimum thinking budget to 128 (thanks @app/roomote!)
- Fix: Improve handling of net::ERR_ABORTED errors in URL fetching (#6632 by @QuinsZouls, PR by @app/roomote)
- Fix: Recover from error state when Qdrant becomes available (#6660 by @hannesrudolph, PR by @app/roomote)
- Fix: Resolve memory leak in ChatView virtual scrolling implementation (thanks @xyOz-dev!)
- Add: Swift files to fallback list (#5857 by @niteshbalusu11, #6555 by @sealad886, PR by @niteshbalusu11)
- Feat: Clamp default model max tokens to 20% of context window (thanks @mrubens!)

## [3.25.7] - 2025-08-05

- Add support for Claude Opus 4.1
- Add Fireworks AI provider (#6653 by @ershang-fireworks, PR by @ershang-fireworks)
- Add Z AI provider (thanks @jues!)
- Add Groq support for GPT-OSS
- Add Cerebras support for GPT-OSS
- Add code indexing support for multiple folders similar to task history (#6197 by @NaccOll, PR by @NaccOll)
- Make mode selection dropdowns responsive (#6423 by @AyazKaan, PR by @AyazKaan)
- Redesigned task header and task history (thanks @brunobergher!)
- Fix checkpoints timing and ensure checkpoints work properly (#4827 by @mrubens, PR by @NaccOll)
- Prevent empty mode names from being saved (#5766 by @kfxmvp, PR by @app/roomote)
- Fix MCP server creation when setting is disabled (#6607 by @characharm, PR by @app/roomote)
- Update highlight layer style and align to textarea (#6647 by @NaccOll, PR by @NaccOll)
- Fix UI for approving chained commands
- Use assistantMessageParser class instead of parseAssistantMessage (#5340 by @qdaxb, PR by @qdaxb)
- Conditionally include reminder section based on todo list config (thanks @NaccOll!)
- Task and TaskProvider event emitter cleanup with new events (thanks @cte!)

## [3.25.6] - 2025-08-01

- Set horizon-beta model max tokens to 32k for OpenRouter (requested by @hannesrudolph, PR by @app/roomote)
- Add support for syncing provider profiles from the cloud

## [3.25.5] - 2025-08-01

- Fix: Improve Claude Code ENOENT error handling with installation guidance (#5866 by @JamieJ1, PR by @app/roomote)
- Fix: LM Studio model context length (#5075 by @Angular-Angel, PR by @pwilkin)
- Fix: VB.NET indexing by implementing fallback chunking system (#6420 by @JensvanZutphen, PR by @daniel-lxs)
- Add auto-approved cost limits (thanks @hassoncs!)
- Add Cerebras as a provider (thanks @kevint-cerebras!)
- Add Qwen 3 Coder from Cerebras (thanks @kevint-cerebras!)
- Fix: Handle Qdrant deletion errors gracefully to prevent indexing interruption (thanks @daniel-lxs!)
- Fix: Restore message sending when clicking save button (thanks @daniel-lxs!)
- Fix: Linter not applied to locales/\*/README.md (thanks @liwilliam2021!)
- Handle more variations of chaining and subshell command validation
- More tolerant search/replace match
- Clean up the auto-approve UI (thanks @mrubens!)
- Skip interpolation for non-existent slash commands (thanks @app/roomote!)

## [3.25.4] - 2025-07-30

- feat: add SambaNova provider integration (#6077 by @snova-jorgep, PR by @snova-jorgep)