---
description: Roo Code 3.30.0 adds OpenRouter embeddings, reasoning handling improvements, and stability/UI fixes.
keywords:
  - roo code 3.30.0
  - new features
  - bug fixes
image: /img/v3.30.0/v3.30.0.png
---

# Roo Code 3.30.0 Release Notes (2025-11-03)

This release introduces OpenRouter embeddings, improves reasoning handling, and delivers stability and UI fixes.

<img src="/img/v3.30.0/v3.30.0.png" alt="Roo Code v3.30.0 Release" width="600" />

## OpenRouter Embeddings

We've added OpenRouter as an embedding provider for codebase indexing in Roo Code (thanks dmarkey!) ([#8973](https://github.com/RooCodeInc/Roo-Code/pull/8973)).

OpenRouter currently supports 7 embedding models, including the top‑ranking Qwen3 Embedding.

> **📚 Documentation**: See [Codebase Indexing](/features/codebase-indexing) and [OpenRouter Provider](/providers/openrouter).

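For illustration, here is a minimal sketch of how a client might build an OpenAI-compatible embeddings request against OpenRouter. The endpoint path, model slug, and field names below are assumptions based on the OpenAI embeddings API shape, not Roo Code's actual implementation:

```typescript
// Assumptions for illustration: the endpoint path and the Qwen3 Embedding
// model slug shown here are examples, not taken from Roo Code's source.
interface EmbeddingRequest {
  url: string;
  headers: Record<string, string>;
  body: { model: string; input: string[] };
}

function buildEmbeddingRequest(
  apiKey: string,
  model: string,
  inputs: string[],
): EmbeddingRequest {
  return {
    url: "https://openrouter.ai/api/v1/embeddings",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: { model, input: inputs },
  };
}

// Dispatch with fetch(), e.g.:
// const req = buildEmbeddingRequest(key, "qwen/qwen3-embedding-8b", codeChunks);
// await fetch(req.url, { method: "POST", headers: req.headers, body: JSON.stringify(req.body) });
```
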
## QOL Improvements

* Terminal settings cleanup with Inline as the default terminal and clearer options; shell integration is now disabled by default to reduce environment conflicts ([#8342](https://github.com/RooCodeInc/Roo-Code/pull/8342))

## Bug Fixes

* Prevent message loss during queue-drain race conditions to preserve message order and keep chats reliable ([#8955](https://github.com/RooCodeInc/Roo-Code/pull/8955))
* Requesty OAuth: auto-create a stable "Requesty" profile with a default model so sign-in completes reliably (thanks Thibault00!) ([#8699](https://github.com/RooCodeInc/Roo-Code/pull/8699))
* Cancelling during streaming no longer causes flicker; you can resume in place, input stays enabled, and the spinner stops deterministically ([#8986](https://github.com/RooCodeInc/Roo-Code/pull/8986))
* Remove newline-only reasoning blocks from OpenAI-compatible responses for cleaner output and logs ([#8990](https://github.com/RooCodeInc/Roo-Code/pull/8990))
* "Disable Terminal Shell Integration" now links to the correct documentation section ([#8997](https://github.com/RooCodeInc/Roo-Code/pull/8997))

## Misc Improvements

* Add a `preserveReasoning` flag to optionally include reasoning in API history so later turns can leverage prior reasoning; off by default and model‑gated ([#8934](https://github.com/RooCodeInc/Roo-Code/pull/8934))

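The idea behind a preserve-reasoning flag can be sketched as follows. The type and function names here are illustrative, not Roo Code's actual internals: reasoning blocks are normally stripped from prior turns before the conversation is resent, but can optionally be kept so later turns see earlier reasoning.

```typescript
// Illustrative sketch (hypothetical names, not Roo Code's real types).
type Block = { type: "text" | "reasoning"; content: string };
type Turn = { role: "user" | "assistant"; blocks: Block[] };

// By default, drop reasoning blocks before resending history to the API;
// when preserveReasoning is true, keep them so later turns can build on them.
function prepareHistory(turns: Turn[], preserveReasoning = false): Turn[] {
  if (preserveReasoning) return turns;
  return turns.map((t) => ({
    ...t,
    blocks: t.blocks.filter((b) => b.type !== "reasoning"),
  }));
}
```
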
## Provider Updates

* Chutes: dynamic/router provider so new models appear automatically; safer error logging and temperature applied only when supported ([#8980](https://github.com/RooCodeInc/Roo-Code/pull/8980))
* OpenAI‑compatible providers: handle `<think>` reasoning tags in streaming for consistent reasoning chunk handling ([#8989](https://github.com/RooCodeInc/Roo-Code/pull/8989))
* GLM 4.6: capture reasoning content in the base OpenAI‑compatible provider during streaming ([#8976](https://github.com/RooCodeInc/Roo-Code/pull/8976))
* Fireworks: add GLM‑4.6 to the model dropdown for stronger coding performance and longer context (thanks mmealman!) ([#8754](https://github.com/RooCodeInc/Roo-Code/pull/8754))
* Fireworks: add MiniMax M2 with 204.8K context and 4K output tokens; correct pricing metadata (thanks dmarkey!) ([#8962](https://github.com/RooCodeInc/Roo-Code/pull/8962))