# Roo Code Changelog

## [3.25.23] - 2025-08-22

- feat: add custom base URL support for Requesty provider (thanks @requesty-JohnCosta27!)
- feat: add DeepSeek V3.1 model to Chutes AI provider (#7294 by @dmarkey, PR by @app/roomote)
- Revert "feat: enable loading Roo modes from multiple files in .roo/modes directory" temporarily to fix a bug with mode installation

## [3.25.22] - 2025-08-22

- Add prompt caching support for Kimi K2 on Groq (thanks @daniel-lxs and @benank!)
- Add documentation links for global custom instructions in the UI (thanks @app/roomote!)

## [3.25.21] - 2025-08-21

- Ensure subtask results are provided to GPT-5 in the OpenAI Responses API
- Promote the experimental AssistantMessageParser to the default parser
- Update DeepSeek models' context window to 128k (thanks @JuanPerezReal)
- Enable grounding features for Vertex AI (thanks @anguslees)
- Allow the orchestrator to pass TODO lists to subtasks
- Improve MDM handling
- Handle nullish token values in ContextCondenseRow to prevent a UI crash (thanks @s97712)
- Improve context window error handling for OpenAI and other providers
- Add an "installed" filter to the Roo Marketplace (thanks @semidark)
- Improve filesystem access checks (thanks @elianiva)
- Support loading Roo modes from multiple YAML files in the `.roo/modes/` directory (thanks @farazoman)
- Add Featherless provider (thanks @DarinVerheijke)

## [3.25.20] - 2025-08-19

- Add announcement for Sonic model

## [3.25.19] - 2025-08-19

- Fix issue where new users couldn't select the Roo Code Cloud provider (thanks @daniel-lxs!)

## [3.25.18] - 2025-08-19

- Add new stealth Sonic model through the Roo Code Cloud provider
- Fix: respect enableReasoningEffort setting when determining reasoning usage (#7048 by @ikbencasdoei, PR by @app/roomote)
- Fix: prevent duplicate LM Studio models with case-insensitive deduplication (#6954 by @fbuechler, PR by @daniel-lxs)
- Feat: simplify ask_followup_question prompt documentation (thanks @daniel-lxs!)
- Feat: add a simple read_file tool for single-file-only models (thanks @daniel-lxs!)
- Fix: add missing zaiApiKey and doubaoApiKey to SECRET_STATE_KEYS (#7082 by @app/roomote)
- Feat: add new models and update configurations for vscode-lm (thanks @NaccOll!)

## [3.25.17] - 2025-08-17

- Fix: Resolve terminal reuse logic issues

## [3.25.16] - 2025-08-16

- Add support for OpenAI gpt-5-chat-latest model (#7057 by @PeterDaveHello, PR by @app/roomote)
- Fix: Use native Ollama API instead of OpenAI compatibility layer (#7070 by @LivioGama, PR by @daniel-lxs)
- Fix: Prevent XML entity decoding in diff tools (#7107 by @indiesewell, PR by @app/roomote)
- Fix: Add type check before calling .match() on diffItem.content (#6905 by @pwilkin, PR by @app/roomote)
- Refactor task execution system: improve call stack management (thanks @catrielmuller!)
- Fix: Enable save button for provider dropdown and checkbox changes (thanks @daniel-lxs!)
- Add an API for resuming tasks by ID (thanks @mrubens!)
- Emit an event when a task's ask requires interaction (thanks @cte!)
- Make "enhance with task history" default to true (thanks @liwilliam2021!)
- Fix: Use cline.cwd as primary source for workspace path in codebaseSearchTool (thanks @NaccOll!)
- Hotfix for checkpoints in multi-folder workspaces (thanks @NaccOll!)

## [3.25.15] - 2025-08-14

- Fix: Remove 500-message limit to prevent scrollbar jumping in long conversations (#7052, #7063 by @daniel-lxs, PR by @app/roomote)
- Fix: Reset condensing state when switching tasks (#6919 by @f14XuanLv, PR by @f14XuanLv)
- Fix: Implement sitemap generation in TypeScript and remove XML file (#5231 by @abumalick, PR by @abumalick)
- Fix: allowedMaxRequests and allowedMaxCost values not showing in the settings UI (thanks @chrarnoldus!)

## [3.25.14] - 2025-08-13

- Fix: Only include verbosity parameter for models that support it (#7054 by @eastonmeth, PR by @app/roomote)
- Fix: AWS Bedrock 1M context - Move anthropic_beta to additionalModelRequestFields (thanks @daniel-lxs!)
- Fix: Make cancelling requests more responsive by reverting recent changes

## [3.25.13] - 2025-08-12

- Add Sonnet 1M context checkbox to Bedrock
- Fix: add --no-messages flag to ripgrep to suppress file access errors (#6756 by @R-omk, PR by @app/roomote)
- Add support for AGENT.md alongside AGENTS.md (#6912 by @Brendan-Z, PR by @app/roomote)
- Remove deprecated GPT-4.5 Preview model (thanks @PeterDaveHello!)

## [3.25.12] - 2025-08-12

- Update: Claude Sonnet 4 context window is now configurable to 1 million tokens in the Anthropic provider (thanks @daniel-lxs!)
- Add: Minimal reasoning support to OpenRouter (thanks @daniel-lxs!)
- Fix: Add configurable API request timeout for local providers (#6521 by @dabockster, PR by @app/roomote)
- Fix: Add --no-sandbox flag to browser launch options (#6632 by @QuinsZouls, PR by @QuinsZouls)
- Fix: Ensure JSON files respect .rooignore during indexing (#6690 by @evermoving, PR by @app/roomote)
- Add: New Chutes provider models (#6698 by @fstandhartinger, PR by @app/roomote)
- Add: OpenAI gpt-oss models to Amazon Bedrock dropdown (#6752 by @josh-clanton-powerschool, PR by @app/roomote)
- Fix: Correct tool repetition detector to not block first tool call when limit is 1 (#6834 by @NaccOll, PR by @app/roomote)
- Fix: Improve checkpoint service initialization handling (thanks @NaccOll!)
- Update: Improve zh-TW Traditional Chinese locale (thanks @PeterDaveHello!)
- Add: Task expand and collapse translations (thanks @app/roomote!)
- Update: Exclude GPT-5 models from 20% context window output token cap (thanks @app/roomote!)
- Fix: Truncate long model names in model selector to prevent overflow (thanks @app/roomote!)
- Add: Requesty base URL support (thanks @requesty-JohnCosta27!)

## [3.25.11] - 2025-08-11

- Add: Native OpenAI provider support for Codex Mini model (#5386 by @KJ7LNW, PR by @daniel-lxs)
- Add: IO Intelligence Provider support (thanks @ertan2002!)
- Fix: MCP startup issues and remove refresh notifications (thanks @hannesrudolph!)
- Fix: Improvements to GPT-5 OpenAI provider configuration (thanks @hannesrudolph!)
- Fix: Clarify codebase_search path parameter as optional and improve tool descriptions (thanks @app/roomote!)
- Fix: Bedrock provider workaround for LiteLLM passthrough issues (thanks @jr!)
- Fix: Token usage and cost being underreported on cancelled requests (thanks @chrarnoldus!)

## [3.25.10] - 2025-08-07

- Add support for GPT-5 (thanks Cline and @app/roomote!)