Quickly toggle auto-approve on/off without using the mouse. This shortcut toggles the global "Enabled" state while preserving your permission selections.

**To customize the shortcut:**

1. Open VS Code Command Palette (`Cmd+Shift+P` / `Ctrl+Shift+P`)
2. Search for "Preferences: Open Keyboard Shortcuts"
3. Search for the command name (varies by language):
   - English: "Toggle Auto-Approve"
   - Other languages: Look for the localized equivalent
4. Click the pencil icon next to the command
5. Press your desired key combination
6. Press Enter to save

**Note:** The command name appears in your VS Code interface language. If you're using a non-English locale, the command will be translated accordingly.
docs/providers/ollama.md (47 additions, 5 deletions)

@@ -53,8 +53,14 @@ Roo Code supports running models locally using Ollama. This provides privacy, of…

```bash
ollama pull qwen2.5-coder:32b
```

3. **Configure the Model:** Configure your model's context window in Ollama and save a copy.

:::info Default Context Behavior
**Roo Code automatically defers to the Modelfile's `num_ctx` setting by default.** When you use a model with Ollama, Roo Code reads the model's configured context window and uses it automatically. You don't need to configure context size in Roo Code settings - it respects what's defined in your Ollama model.
:::

**Option A: Interactive Configuration**

Load the model (we will use `qwen2.5-coder:32b` as an example):

```bash
# @@ -73,6 +79,37 @@ (earlier lines of this block are unchanged and not shown in the diff)
/save your_model_name
```
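
The diff only shows the tail of that block. As a hedged sketch, assuming the elided lines follow the usual Ollama interactive flow, the full sequence looks roughly like this (the `num_ctx` value of 32768 and the saved name are placeholders; adjust them for your hardware):

```bash
ollama run qwen2.5-coder:32b
# At the interactive prompt, pin the context window, save the result under a
# new name, and exit:
#   >>> /set parameter num_ctx 32768
#   >>> /save your_model_name
#   >>> /bye
```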

**Option B: Using a Modelfile (Recommended)**

Create a `Modelfile` with your desired configuration:

```dockerfile
# Example Modelfile for reduced context
FROM qwen2.5-coder:32b

# Set context window to 32K tokens (reduced from default)
PARAMETER num_ctx 32768

# Optional: Adjust temperature for more consistent output
PARAMETER temperature 0.7

# Optional: Set repeat penalty
PARAMETER repeat_penalty 1.1
```

Then create your custom model:

```bash
ollama create qwen-32k -f Modelfile
```
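
To confirm what the saved model will report to Roo Code, you can inspect it with the Ollama CLI; a minimal check, assuming the `qwen-32k` name created above (output layout varies by Ollama version):

```bash
# Print the model's Modelfile, including the PARAMETER num_ctx line
ollama show --modelfile qwen-32k

# Or print the model summary, which lists its parameters and context length
ollama show qwen-32k
```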

:::tip Override Context Window
If you need to override the model's default context window:
- **Permanently:** Save a new model version with your desired `num_ctx` using either method above
- **Roo Code behavior:** Roo automatically uses whatever `num_ctx` is configured in your Ollama model
:::

3. **Ensure the model's context window is pinned**

   Save your Ollama model with an appropriate `num_ctx` (via `/set` + `/save`, or preferably a Modelfile). **Roo Code automatically detects and uses the model's configured `num_ctx`** - there is no manual context size setting in Roo Code for the Ollama provider.

4. **Use smaller variants**

   If GPU memory is limited, use a smaller quant (e.g., q4 instead of q5) or a smaller parameter size (e.g., 7B/13B instead of 32B).
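
   A hedged sketch of switching to a smaller variant; exact tag names vary per model, so check the model's page in the Ollama library:

   ```bash
   # Pull a smaller parameter size instead of the 32B model
   ollama pull qwen2.5-coder:7b

   # Many models also publish quantization-specific tags (for example, q4 builds);
   # use the exact tag listed on the model's Ollama library page
   ```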

Advanced multimodal models with balanced capabilities:

@@ -136,5 +147,6 @@ GPT-5 models maintain conversation context efficiently through response IDs, red…

## Tips and Notes

* **Pricing:** Refer to the [OpenAI Pricing](https://openai.com/pricing) page for current model costs and discounts, including prompt caching.
* **Azure OpenAI Service:** If you'd like to use the Azure OpenAI service, please see our section on [OpenAI-compatible](/providers/openai-compatible) providers.
* **Context Optimization:** For GPT-5-Codex, leverage prompt caching by maintaining consistent context across requests to reduce costs significantly.

description: GPT-5-Codex arrives in OpenAI Native alongside localization tooling and UI refinements.
keywords:
  - roo code 3.28.6
  - gpt-5-codex
  - localization
  - bug fixes
  - release notes
image: /img/social-share.jpg
---

# Roo Code 3.28.6 Release Notes (2025-09-23)

This release adds GPT-5-Codex to OpenAI Native, sharpens localization coverage, and smooths UI workflows across languages.

## GPT-5-Codex lands in OpenAI Native

- **Work with repository-scale context**: Keep multi-file specs and long reviews in a single thread thanks to a 400k token window.
- **Reuse prompts faster and include visuals**: Prompt caching and image support help you iterate on UI fixes without re-uploading context.
- **Let the model adapt its effort**: GPT-5-Codex automatically balances quick responses for simple questions with deeper reasoning on complex builds.

This gives teams a higher-capacity OpenAI option without extra configuration ([#8260](https://github.com/RooCodeInc/Roo-Code/pull/8260)).

> **📚 Documentation**: See [OpenAI Provider Guide](/providers/openai) for capabilities and setup guidance.

## QOL Improvements

* **Keyboard shortcut for auto-approve**: Toggle approvals with Cmd/Ctrl+Alt+A from anywhere in the editor, keeping focus on the code review flow (via [#8214](https://github.com/RooCodeInc/Roo-Code/pull/8214))
* **Cleaner code blocks**: Removed the snippet language picker and word-wrap toggle so wrapped code is easier to read and copy across locales (via [#8208](https://github.com/RooCodeInc/Roo-Code/pull/8208))
* **More readable reasoning blocks**: Added spacing before section headers inside reasoning transcripts to make long explanations easier to scan (via [#7868](https://github.com/RooCodeInc/Roo-Code/pull/7868))
* **Translation checks cover package settings**: The missing translation finder now validates package.nls files for 17 locales to catch untranslated VS Code strings earlier (via [#8255](https://github.com/RooCodeInc/Roo-Code/pull/8255))

## Bug Fixes

* **Bare-metal evals stay signed in**: Roo provider tokens refresh automatically and the local evals app binds to port 3446 for predictable scripts (via [#8224](https://github.com/RooCodeInc/Roo-Code/pull/8224))
* **Checkpoint text stays on one line**: Prevented multi-line wrapping in languages such as Chinese, Korean, Japanese, and Russian so the checkpoint UI stays compact (via [#8207](https://github.com/RooCodeInc/Roo-Code/pull/8207); reported in [#8206](https://github.com/RooCodeInc/Roo-Code/issues/8206))
* **Ollama respects Modelfile num_ctx**: Roo now defers to your Modelfile’s context window to avoid GPU OOMs while still allowing explicit overrides when needed (via [#7798](https://github.com/RooCodeInc/Roo-Code/pull/7798); reported in [#7797](https://github.com/RooCodeInc/Roo-Code/issues/7797))

docs/update-notes/v3.28.mdx (17 additions, 1 deletion)

@@ -52,14 +52,27 @@ Task Sync enables monitoring your local development environment from any device.

> **Documentation**: See [Task Sync](/roo-code-cloud/task-sync), [Roomote Control Guide](/roo-code-cloud/roomote-control), and [Billing & Subscriptions](/roo-code-cloud/billing-subscriptions).

## GPT-5-Codex lands in OpenAI Native

- **Work with repository-scale context**: Keep multi-file specs and long reviews in a single thread thanks to a 400k token window.
- **Reuse prompts faster and include visuals**: Prompt caching and image support help you iterate on UI fixes without re-uploading context.
- **Let the model adapt its effort**: GPT-5-Codex automatically balances quick responses for simple questions with deeper reasoning on complex builds.

This gives teams a higher-capacity OpenAI option without extra configuration ([#8260](https://github.com/RooCodeInc/Roo-Code/pull/8260)).

> **Documentation**: See [OpenAI Provider Guide](/providers/openai) for capabilities and setup guidance.

## QOL Improvements

* **Auto-approve keyboard shortcut**: Toggle approvals with Cmd/Ctrl+Alt+A from anywhere in the editor so you can stay in the flow while reviewing changes (via [#8214](https://github.com/RooCodeInc/Roo-Code/pull/8214))
* **Click-to-Edit Chat Messages**: Click directly on any message text to edit it, with ESC to cancel and improved padding consistency ([#7790](https://github.com/RooCodeInc/Roo-Code/pull/7790))
* **Enhanced Reasoning Display**: The AI's thinking process now shows a persistent timer and displays reasoning content in clean italic text ([#7752](https://github.com/RooCodeInc/Roo-Code/pull/7752))
* **Easier-to-scan reasoning transcripts**: Added clear line breaks before reasoning headers inside the UI so long thoughts are easier to skim (via [#7868](https://github.com/RooCodeInc/Roo-Code/pull/7868))
* **Manual Auth URL Input**: Users in containerized environments can now paste authentication redirect URLs manually when automatic redirection fails ([#7805](https://github.com/RooCodeInc/Roo-Code/pull/7805))
* **Active Mode Centering**: The mode selector dropdown now automatically centers the active mode when opened ([#7883](https://github.com/RooCodeInc/Roo-Code/pull/7883))
* **Preserve First Message**: The first message containing slash commands or initial context is now preserved during conversation condensing instead of being replaced with a summary ([#7910](https://github.com/RooCodeInc/Roo-Code/pull/7910))
* **Checkpoint Initialization Notifications**: You'll now receive clear notifications when checkpoint initialization fails, particularly with nested Git repositories ([#7766](https://github.com/RooCodeInc/Roo-Code/pull/7766))
* **Translation coverage auditing**: The translation checker now validates package.nls locales by default to catch missing strings before release (via [#8255](https://github.com/RooCodeInc/Roo-Code/pull/8255))
* Smaller and more subtle auto-approve UI (thanks brunobergher!) ([#7894](https://github.com/RooCodeInc/Roo-Code/pull/7894))
* Disable Roomote Control on logout for better security ([#7976](https://github.com/RooCodeInc/Roo-Code/pull/7976))

@@ -74,10 +87,13 @@ Task Sync enables monitoring your local development environment from any device.

* **Redesigned Message Feed**: Enjoy a cleaner, more readable chat interface with improved visual hierarchy that helps you focus on what matters ([#7985](https://github.com/RooCodeInc/Roo-Code/pull/7985))
* **Responsive Auto-Approve**: The auto-approve dropdown now adapts to different window sizes with smart 1-2 column layouts, and tooltips show all enabled actions without truncation ([#8032](https://github.com/RooCodeInc/Roo-Code/pull/8032))
* **Network Resilience**: Telemetry data now automatically retries on network failures, ensuring analytics and diagnostics aren't lost during connectivity issues ([#7597](https://github.com/RooCodeInc/Roo-Code/pull/7597))
* **Code blocks wrap by default**: Code blocks now wrap text by default, and the snippet toolbar no longer includes language or wrap toggles, keeping snippets readable across locales (via [#8194](https://github.com/RooCodeInc/Roo-Code/pull/8194); [#8208](https://github.com/RooCodeInc/Roo-Code/pull/8208))

## Bug Fixes

* **Roo provider stays signed in**: Roo provider tokens refresh automatically and the local evals app binds to port 3446 for predictable scripts (via [#8224](https://github.com/RooCodeInc/Roo-Code/pull/8224))
* **Checkpoint text stays on one line**: Prevented multi-line wrapping in languages such as Chinese, Korean, Japanese, and Russian so the checkpoint UI stays compact (via [#8207](https://github.com/RooCodeInc/Roo-Code/pull/8207); reported in [#8206](https://github.com/RooCodeInc/Roo-Code/issues/8206))
* **Ollama respects Modelfile num_ctx**: Roo now defers to your Modelfile’s context window to avoid GPU OOMs while still allowing explicit overrides when needed (via [#7798](https://github.com/RooCodeInc/Roo-Code/pull/7798); reported in [#7797](https://github.com/RooCodeInc/Roo-Code/issues/7797))
* **Groq Context Window**: Fixed incorrect display of cached tokens in context window ([#7839](https://github.com/RooCodeInc/Roo-Code/pull/7839))
* **Chat Message Operations**: Resolved duplication issues when editing messages and "Couldn't find timestamp" errors when deleting ([#7793](https://github.com/RooCodeInc/Roo-Code/pull/7793))
* **UI Overlap**: Fixed CodeBlock button z-index to prevent overlap with popovers and configuration panels (thanks A0nameless0man!) ([#7783](https://github.com/RooCodeInc/Roo-Code/pull/7783))