
Commit c1fd911

feat: add GPT-5-Codex support and keyboard shortcut for auto-approve in v3.28.6 release notes (#361)
1 parent a81cc0d commit c1fd911

File tree: 7 files changed (+135, −7 lines)

docs/features/auto-approving-actions.mdx

Lines changed: 18 additions & 0 deletions
@@ -43,6 +43,24 @@ Auto-approve settings speed up your workflow by eliminating repetitive confirmat
3. Use the All/None chips to bulk-select or clear permissions, or select individual tiles; you can keep Enabled On with "None" selected
4. (Optional) Click the gear icon to open Settings for deeper per-permission controls

### Keyboard Shortcut

**Default shortcut:** `Cmd+Alt+A` (macOS) / `Ctrl+Alt+A` (Windows/Linux)

Quickly toggle auto-approve on and off without using the mouse. This shortcut toggles the global "Enabled" state while preserving your permission selections.

**To customize the shortcut:**

1. Open the VS Code Command Palette (`Cmd+Shift+P` / `Ctrl+Shift+P`)
2. Search for "Preferences: Open Keyboard Shortcuts"
3. Search for the command name (varies by language):
   - English: "Toggle Auto-Approve"
   - Other languages: look for the localized equivalent
4. Click the pencil icon next to the command
5. Press your desired key combination
6. Press Enter to save

**Note:** The command name appears in your VS Code interface language. If you're using a non-English locale, the command will be translated accordingly.
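If you prefer editing `keybindings.json` directly (Command Palette, then "Preferences: Open Keyboard Shortcuts (JSON)"), an entry like the following does the same rebinding. The command ID below is a placeholder, not the real identifier: copy the actual ID by right-clicking "Toggle Auto-Approve" in the Keyboard Shortcuts UI and choosing "Copy Command ID".

```json
[
  {
    // Placeholder: replace with the command ID copied from the Keyboard Shortcuts UI
    "key": "ctrl+alt+b",
    "command": "<toggle-auto-approve-command-id>"
  }
]
```

VS Code's `keybindings.json` accepts comments (it is parsed as JSONC), and entries here override the default `Cmd/Ctrl+Alt+A` binding.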
---

## Auto-Approve Dropdown

docs/providers/ollama.md

Lines changed: 47 additions & 5 deletions
@@ -53,8 +53,14 @@ Roo Code supports running models locally using Ollama. This provides privacy, of
   ollama pull qwen2.5-coder:32b
   ```

-3. **Configure the Model:** Configure your models context window in Ollama and save a copy. Roo automatically reads the model's reported context window from Ollama and passes it as `num_ctx`; no Roo-side context size setting is required for the Ollama provider.
+3. **Configure the Model:** Configure your model's context window in Ollama and save a copy.

:::info Default Context Behavior
**Roo Code automatically defers to the Modelfile's `num_ctx` setting by default.** When you use a model with Ollama, Roo Code reads the model's configured context window and uses it automatically. You don't need to configure context size in Roo Code settings; it respects what's defined in your Ollama model.
:::

**Option A: Interactive Configuration**

Load the model (we will use `qwen2.5-coder:32b` as an example):

```bash
@@ -73,6 +79,37 @@ Roo Code supports running models locally using Ollama. This provides privacy, of
/save your_model_name
```

**Option B: Using a Modelfile (Recommended)**

Create a `Modelfile` with your desired configuration:

```dockerfile
# Example Modelfile for reduced context
FROM qwen2.5-coder:32b

# Set context window to 32K tokens (reduced from default)
PARAMETER num_ctx 32768

# Optional: Adjust temperature for more consistent output
PARAMETER temperature 0.7

# Optional: Set repeat penalty
PARAMETER repeat_penalty 1.1
```

Then create your custom model:

```bash
ollama create qwen-32k -f Modelfile
```

:::tip Override Context Window
If you need to override the model's default context window:
- **Permanently:** Save a new model version with your desired `num_ctx` using either method above
- **Roo Code behavior:** Roo automatically uses whatever `num_ctx` is configured in your Ollama model
- **Memory considerations:** Reducing `num_ctx` helps prevent out-of-memory errors on limited hardware
:::
4. **Configure Roo Code:**
   * Open the Roo Code sidebar (<KangarooIcon /> icon).
   * Click the settings gear icon (<Codicon name="gear" />).
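Before wiring up Roo Code, it can help to confirm the new model actually carries the intended context window. `ollama show --modelfile <model>` prints the model's effective Modelfile; the sketch below parses that kind of output for `num_ctx` (the sample text stands in for real command output, which you could capture with `subprocess`):

```python
import re

# Sample of what `ollama show --modelfile qwen-32k` might print; in practice,
# capture it with subprocess.run(["ollama", "show", "--modelfile", "qwen-32k"], ...).
modelfile_text = """
FROM qwen2.5-coder:32b
PARAMETER num_ctx 32768
PARAMETER temperature 0.7
"""

def get_num_ctx(text: str):
    """Return the configured num_ctx, or None if the Modelfile doesn't set one."""
    match = re.search(r"^PARAMETER\s+num_ctx\s+(\d+)", text, re.MULTILINE)
    return int(match.group(1)) if match else None

print(get_num_ctx(modelfile_text))  # 32768 for the sample above
```

If this returns `None`, the model is still using Ollama's built-in default context window rather than the one you intended to pin.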
@@ -120,17 +157,22 @@ If no model instance is running, Ollama spins one up on demand. During that cold
/set parameter num_ctx 32768
/save <your_model_name>
```
-- Option B — Modelfile:
-```text
+- Option B — Modelfile (recommended for reproducibility):
+```dockerfile
FROM <base-model>
PARAMETER num_ctx 32768
+# Adjust based on your available memory:
+# 16384 for ~8GB VRAM
+# 32768 for ~16GB VRAM
+# 65536 for ~24GB+ VRAM
```
-Then re-create the model:
+Then create the model:
```bash
ollama create <your_model_name> -f Modelfile
```

3. **Ensure the model's context window is pinned**
-Save your Ollama model with an appropriate `num_ctx` (e.g., via `/set` + `/save`, or a Modelfile). Roo reads this automatically and passes it as `num_ctx`; there is no Roo-side context size setting for the Ollama provider.
+Save your Ollama model with an appropriate `num_ctx` (via `/set` + `/save`, or preferably a Modelfile). **Roo Code automatically detects and uses the model's configured `num_ctx`**; there is no manual context size setting in Roo Code for the Ollama provider.

4. **Use smaller variants**
If GPU memory is limited, use a smaller quant (e.g., q4 instead of q5) or a smaller parameter size (e.g., 7B/13B instead of 32B).
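The VRAM guidance in the Modelfile comments above can be sanity-checked with a back-of-the-envelope KV-cache estimate. This is a rough sketch, not Ollama's actual allocator: the layer and head counts below are illustrative for a 32B-class model with grouped-query attention, so substitute the values from your model card.

```python
# Rough KV-cache sizing: each token stores one key and one value vector
# per transformer layer. Architecture numbers here are illustrative;
# check your model card for the real ones.

def kv_cache_bytes(num_ctx, n_layers=64, n_kv_heads=8, head_dim=128, bytes_per_elem=2):
    """Approximate KV-cache size in bytes for a given context window."""
    # 2x for keys and values; fp16/bf16 uses 2 bytes per element.
    per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_elem
    return per_token * num_ctx

for ctx in (16384, 32768, 65536):
    gib = kv_cache_bytes(ctx) / 1024**3
    print(f"num_ctx={ctx}: ~{gib:.0f} GiB of KV cache on top of model weights")
```

Doubling `num_ctx` doubles KV-cache memory, which is why halving the context window is often enough to clear an out-of-memory error.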

docs/providers/openai.md

Lines changed: 13 additions & 1 deletion
@@ -43,6 +43,17 @@ The GPT-5 models are OpenAI's most advanced, offering superior coding capabiliti
* **`gpt-5-mini-2025-08-07`** - Faster, cost-efficient for well-defined tasks
* **`gpt-5-nano-2025-08-07`** - Fastest, most cost-efficient option

### GPT-5-Codex
OpenAI's specialized coding model with advanced capabilities:

**Key Features:**
* **400K Token Context Window** - Process entire codebases and lengthy documentation
* **Image Support** - Analyze screenshots, diagrams, and visual documentation
* **Prompt Caching** - Reduced costs for repeated context through automatic caching
* **Adaptive Reasoning** - Dynamically adjusts reasoning depth based on task complexity

**Ideal for:** Large-scale code analysis, multimodal tasks requiring visual understanding, complex refactoring projects, and extensive codebase operations.

### GPT-4.1 Family
Advanced multimodal models with balanced capabilities:

@@ -136,5 +147,6 @@ GPT-5 models maintain conversation context efficiently through response IDs, red
## Tips and Notes

-* **Pricing:** Refer to the [OpenAI Pricing](https://openai.com/pricing) page for details on model costs.
+* **Pricing:** Refer to the [OpenAI Pricing](https://openai.com/pricing) page for current model costs and discounts, including prompt caching.
* **Azure OpenAI Service:** If you'd like to use the Azure OpenAI service, please see our section on [OpenAI-compatible](/providers/openai-compatible) providers.
+* **Context Optimization:** For GPT-5-Codex, leverage prompt caching by maintaining consistent context across requests to reduce costs significantly.
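Prompt caching typically keys on an identical leading prefix of the request, so the practical rule is: put the large static context first, keep it byte-for-byte stable across calls, and append only what changes. A minimal sketch of that pattern (plain dicts rather than the official SDK; the message contents are placeholders):

```python
import json

# Large, unchanging context goes first and must stay byte-identical
# across requests so the provider can reuse its cached prefix.
STATIC_PREFIX = [
    {"role": "system", "content": "You are a careful code reviewer."},
    {"role": "user", "content": "<large, unchanging codebase context>"},
]

def build_messages(question):
    # Append only the new question; never reorder or edit the prefix.
    return STATIC_PREFIX + [{"role": "user", "content": question}]

first = build_messages("Explain what foo() does.")
second = build_messages("Now refactor bar().")

# Both requests serialize to the same leading prefix, which is the
# property prompt caching relies on.
assert json.dumps(first[:2]) == json.dumps(second[:2])
```

Rewriting or reordering anything inside the shared prefix, even whitespace, changes its serialization and forfeits the cache hit.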

docs/update-notes/index.md

Lines changed: 1 addition & 0 deletions
@@ -20,6 +20,7 @@ image: /img/social-share.jpg
### Version 3.28

* [3.28](/update-notes/v3.28) (Combined)
* [3.28.6](/update-notes/v3.28.6) (2025-09-23)
* [3.28.5](/update-notes/v3.28.5) (2025-09-20)
* [3.28.4](/update-notes/v3.28.4) (2025-09-19)
* [3.28.3](/update-notes/v3.28.3) (2025-09-16)

docs/update-notes/v3.28.6.mdx

Lines changed: 38 additions & 0 deletions
@@ -0,0 +1,38 @@
---
description: GPT-5-Codex arrives in OpenAI Native alongside localization tooling and UI refinements.
keywords:
  - roo code 3.28.6
  - gpt-5-codex
  - localization
  - bug fixes
  - release notes
image: /img/social-share.jpg
---

# Roo Code 3.28.6 Release Notes (2025-09-23)

This release adds GPT-5-Codex to OpenAI Native, sharpens localization coverage, and smooths UI workflows across languages.

## GPT-5-Codex lands in OpenAI Native

- **Work with repository-scale context**: Keep multi-file specs and long reviews in a single thread thanks to a 400K token window.
- **Reuse prompts faster and include visuals**: Prompt caching and image support help you iterate on UI fixes without re-uploading context.
- **Let the model adapt its effort**: GPT-5-Codex automatically balances quick responses for simple questions with deeper reasoning on complex builds.

This gives teams a higher-capacity OpenAI option without extra configuration ([#8260](https://github.com/RooCodeInc/Roo-Code/pull/8260)).

> **📚 Documentation**: See the [OpenAI Provider Guide](/providers/openai) for capabilities and setup guidance.

## QOL Improvements

* **Keyboard shortcut for auto-approve**: Toggle approvals with Cmd/Ctrl+Alt+A from anywhere in the editor, keeping focus on the code review flow (via [#8214](https://github.com/RooCodeInc/Roo-Code/pull/8214))
* **Cleaner code blocks**: Removed the snippet language picker and word-wrap toggle so wrapped code is easier to read and copy across locales (via [#8208](https://github.com/RooCodeInc/Roo-Code/pull/8208))
* **More readable reasoning blocks**: Added spacing before section headers inside reasoning transcripts to make long explanations easier to scan (via [#7868](https://github.com/RooCodeInc/Roo-Code/pull/7868))
* **Translation checks cover package settings**: The missing-translation finder now validates `package.nls` files for 17 locales to catch untranslated VS Code strings earlier (via [#8255](https://github.com/RooCodeInc/Roo-Code/pull/8255))

## Bug Fixes

* **Bare-metal evals stay signed in**: Roo provider tokens refresh automatically, and the local evals app binds to port 3446 for predictable scripts (via [#8224](https://github.com/RooCodeInc/Roo-Code/pull/8224))
* **Checkpoint text stays on one line**: Prevented multi-line wrapping in languages such as Chinese, Korean, Japanese, and Russian so the checkpoint UI stays compact (via [#8207](https://github.com/RooCodeInc/Roo-Code/pull/8207); reported in [#8206](https://github.com/RooCodeInc/Roo-Code/issues/8206))
* **Ollama respects Modelfile num_ctx**: Roo now defers to your Modelfile's context window to avoid GPU OOMs while still allowing explicit overrides when needed (via [#7798](https://github.com/RooCodeInc/Roo-Code/pull/7798); reported in [#7797](https://github.com/RooCodeInc/Roo-Code/issues/7797))

docs/update-notes/v3.28.mdx

Lines changed: 17 additions & 1 deletion
@@ -52,14 +52,27 @@ Task Sync enables monitoring your local development environment from any device.
> **Documentation**: See [Task Sync](/roo-code-cloud/task-sync), [Roomote Control Guide](/roo-code-cloud/roomote-control), and [Billing & Subscriptions](/roo-code-cloud/billing-subscriptions).

## GPT-5-Codex lands in OpenAI Native

- **Work with repository-scale context**: Keep multi-file specs and long reviews in a single thread thanks to a 400K token window.
- **Reuse prompts faster and include visuals**: Prompt caching and image support help you iterate on UI fixes without re-uploading context.
- **Let the model adapt its effort**: GPT-5-Codex automatically balances quick responses for simple questions with deeper reasoning on complex builds.

This gives teams a higher-capacity OpenAI option without extra configuration ([#8260](https://github.com/RooCodeInc/Roo-Code/pull/8260)).

> **Documentation**: See the [OpenAI Provider Guide](/providers/openai) for capabilities and setup guidance.

## QOL Improvements

* **Auto-approve keyboard shortcut**: Toggle approvals with Cmd/Ctrl+Alt+A from anywhere in the editor so you can stay in the flow while reviewing changes (via [#8214](https://github.com/RooCodeInc/Roo-Code/pull/8214))
* **Click-to-Edit Chat Messages**: Click directly on any message text to edit it, with ESC to cancel and improved padding consistency ([#7790](https://github.com/RooCodeInc/Roo-Code/pull/7790))
* **Enhanced Reasoning Display**: The AI's thinking process now shows a persistent timer and displays reasoning content in clean italic text ([#7752](https://github.com/RooCodeInc/Roo-Code/pull/7752))
* **Easier-to-scan reasoning transcripts**: Added clear line breaks before reasoning headers inside the UI so long thoughts are easier to skim (via [#7868](https://github.com/RooCodeInc/Roo-Code/pull/7868))
* **Manual Auth URL Input**: Users in containerized environments can now paste authentication redirect URLs manually when automatic redirection fails ([#7805](https://github.com/RooCodeInc/Roo-Code/pull/7805))
* **Active Mode Centering**: The mode selector dropdown now automatically centers the active mode when opened ([#7883](https://github.com/RooCodeInc/Roo-Code/pull/7883))
* **Preserve First Message**: The first message containing slash commands or initial context is now preserved during conversation condensing instead of being replaced with a summary ([#7910](https://github.com/RooCodeInc/Roo-Code/pull/7910))
* **Checkpoint Initialization Notifications**: You'll now receive clear notifications when checkpoint initialization fails, particularly with nested Git repositories ([#7766](https://github.com/RooCodeInc/Roo-Code/pull/7766))
* **Translation coverage auditing**: The translation checker now validates package.nls locales by default to catch missing strings before release (via [#8255](https://github.com/RooCodeInc/Roo-Code/pull/8255))

* Smaller and more subtle auto-approve UI (thanks brunobergher!) ([#7894](https://github.com/RooCodeInc/Roo-Code/pull/7894))
* Disable Roomote Control on logout for better security ([#7976](https://github.com/RooCodeInc/Roo-Code/pull/7976))
@@ -74,10 +87,13 @@ Task Sync enables monitoring your local development environment from any device.
* **Redesigned Message Feed**: Enjoy a cleaner, more readable chat interface with improved visual hierarchy that helps you focus on what matters ([#7985](https://github.com/RooCodeInc/Roo-Code/pull/7985))
* **Responsive Auto-Approve**: The auto-approve dropdown now adapts to different window sizes with smart 1-2 column layouts, and tooltips show all enabled actions without truncation ([#8032](https://github.com/RooCodeInc/Roo-Code/pull/8032))
* **Network Resilience**: Telemetry data now automatically retries on network failures, ensuring analytics and diagnostics aren't lost during connectivity issues ([#7597](https://github.com/RooCodeInc/Roo-Code/pull/7597))
-* **Code blocks wrap by default**: Code blocks now wrap text by default, improving readability when viewing long commands and code snippets ([#8194](https://github.com/RooCodeInc/Roo-Code/pull/8194))
+* **Code blocks wrap by default**: Code blocks now wrap text by default, and the snippet toolbar no longer includes language or wrap toggles, keeping snippets readable across locales (via [#8194](https://github.com/RooCodeInc/Roo-Code/pull/8194); [#8208](https://github.com/RooCodeInc/Roo-Code/pull/8208))

## Bug Fixes

* **Roo provider stays signed in**: Roo provider tokens refresh automatically and the local evals app binds to port 3446 for predictable scripts (via [#8224](https://github.com/RooCodeInc/Roo-Code/pull/8224))
* **Checkpoint text stays on one line**: Prevented multi-line wrapping in languages such as Chinese, Korean, Japanese, and Russian so the checkpoint UI stays compact (via [#8207](https://github.com/RooCodeInc/Roo-Code/pull/8207); reported in [#8206](https://github.com/RooCodeInc/Roo-Code/issues/8206))
* **Ollama respects Modelfile num_ctx**: Roo now defers to your Modelfile's context window to avoid GPU OOMs while still allowing explicit overrides when needed (via [#7798](https://github.com/RooCodeInc/Roo-Code/pull/7798); reported in [#7797](https://github.com/RooCodeInc/Roo-Code/issues/7797))
* **Groq Context Window**: Fixed incorrect display of cached tokens in context window ([#7839](https://github.com/RooCodeInc/Roo-Code/pull/7839))
* **Chat Message Operations**: Resolved duplication issues when editing messages and "Couldn't find timestamp" errors when deleting ([#7793](https://github.com/RooCodeInc/Roo-Code/pull/7793))
* **UI Overlap**: Fixed CodeBlock button z-index to prevent overlap with popovers and configuration panels (thanks A0nameless0man!) ([#7783](https://github.com/RooCodeInc/Roo-Code/pull/7783))

sidebars.ts

Lines changed: 1 addition & 0 deletions
@@ -223,6 +223,7 @@ const sidebars: SidebarsConfig = {
label: '3.28',
items: [
  { type: 'doc', id: 'update-notes/v3.28', label: '3.28 Combined' },
  { type: 'doc', id: 'update-notes/v3.28.6', label: '3.28.6' },
  { type: 'doc', id: 'update-notes/v3.28.5', label: '3.28.5' },
  { type: 'doc', id: 'update-notes/v3.28.4', label: '3.28.4' },
  { type: 'doc', id: 'update-notes/v3.28.3', label: '3.28.3' },
