
Commit bb42fb6

Merge branch 'main' into feature/add_sse_mcp

Committed by aheizi
2 parents: 549b06f + 73f4350

File tree

20 files changed: +748 -96 lines


.changeset/wild-dragons-leave.md

Lines changed: 5 additions & 0 deletions

@@ -0,0 +1,5 @@
+---
+"roo-cline": patch
+---
+
+Add o3-mini support to openai compatible

CHANGELOG.md

Lines changed: 37 additions & 26 deletions

@@ -1,6 +1,17 @@
 # Roo Code Changelog
 
-## [3.8.0]
+## [3.8.1] - 2025-03-07
+
+- Show the reserved output tokens in the context window visualization
+- Improve the UI of the configuration profile dropdown (thanks @DeXtroTip!)
+- Fix bug where custom temperature could not be unchecked (thanks @System233!)
+- Fix bug where decimal prices could not be entered for OpenAI-compatible providers (thanks @System233!)
+- Fix bug with enhance prompt on Sonnet 3.7 with a high thinking budget (thanks @moqimoqidea!)
+- Fix bug with the context window management for thinking models (thanks @ReadyPlayerEmma!)
+- Fix bug where checkpoints were no longer enabled by default
+- Add extension and VSCode versions to telemetry
+
+## [3.8.0] - 2025-03-07
 
 - Add opt-in telemetry to help us improve Roo Code faster (thanks Cline!)
 - Fix terminal overload / gray screen of death, and other terminal issues
@@ -19,7 +30,7 @@
 - Improve styling of the task headers (thanks @monotykamary!)
 - Improve context mention path handling on Windows (thanks @samhvw8!)
 
-## [3.7.12]
+## [3.7.12] - 2025-03-03
 
 - Expand max tokens of thinking models to 128k, and max thinking budget to over 100k (thanks @monotykamary!)
 - Fix issue where keyboard mode switcher wasn't updating API profile (thanks @aheizi!)
@@ -31,19 +42,19 @@
 - Update the warning text for the VS LM API
 - Correctly populate the default OpenRouter model on the welcome screen
 
-## [3.7.11]
+## [3.7.11] - 2025-03-02
 
 - Don't honor custom max tokens for non thinking models
 - Include custom modes in mode switching keyboard shortcut
 - Support read-only modes that can run commands
 
-## [3.7.10]
+## [3.7.10] - 2025-03-01
 
 - Add Gemini models on Vertex AI (thanks @ashktn!)
 - Keyboard shortcuts to switch modes (thanks @aheizi!)
 - Add support for Mermaid diagrams (thanks Cline!)
 
-## [3.7.9]
+## [3.7.9] - 2025-03-01
 
 - Delete task confirmation enhancements
 - Smarter context window management
@@ -53,76 +64,76 @@
 - UI fix to dropdown hover colors (thanks @SamirSaji!)
 - Add support for Claude Sonnet 3.7 thinking via Vertex AI (thanks @lupuletic!)
 
-## [3.7.8]
+## [3.7.8] - 2025-02-27
 
 - Add Vertex AI prompt caching support for Claude models (thanks @aitoroses and @lupuletic!)
 - Add gpt-4.5-preview
 - Add an advanced feature to customize the system prompt
 
-## [3.7.7]
+## [3.7.7] - 2025-02-27
 
 - Graduate checkpoints out of beta
 - Fix enhance prompt button when using Thinking Sonnet
 - Add tooltips to make what buttons do more obvious
 
-## [3.7.6]
+## [3.7.6] - 2025-02-26
 
 - Handle really long text better in the in the ChatRow similar to TaskHeader (thanks @joemanley201!)
 - Support multiple files in drag-and-drop
 - Truncate search_file output to avoid crashing the extension
 - Better OpenRouter error handling (no more "Provider Error")
 - Add slider to control max output tokens for thinking models
 
-## [3.7.5]
+## [3.7.5] - 2025-02-26
 
 - Fix context window truncation math (see [#1173](https://github.com/RooVetGit/Roo-Code/issues/1173))
 - Fix various issues with the model picker (thanks @System233!)
 - Fix model input / output cost parsing (thanks @System233!)
 - Add drag-and-drop for files
 - Enable the "Thinking Budget" slider for Claude 3.7 Sonnet on OpenRouter
 
-## [3.7.4]
+## [3.7.4] - 2025-02-25
 
 - Fix a bug that prevented the "Thinking" setting from properly updating when switching profiles.
 
-## [3.7.3]
+## [3.7.3] - 2025-02-25
 
 - Support for ["Thinking"](https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking) Sonnet 3.7 when using the Anthropic provider.
 
-## [3.7.2]
+## [3.7.2] - 2025-02-24
 
 - Fix computer use and prompt caching for OpenRouter's `anthropic/claude-3.7-sonnet:beta` (thanks @cte!)
 - Fix sliding window calculations for Sonnet 3.7 that were causing a context window overflow (thanks @cte!)
 - Encourage diff editing more strongly in the system prompt (thanks @hannesrudolph!)
 
-## [3.7.1]
+## [3.7.1] - 2025-02-24
 
 - Add AWS Bedrock support for Sonnet 3.7 and update some defaults to Sonnet 3.7 instead of 3.5
 
-## [3.7.0]
+## [3.7.0] - 2025-02-24
 
 - Introducing Roo Code 3.7, with support for the new Claude Sonnet 3.7. Because who cares about skipping version numbers anymore? Thanks @lupuletic and @cte for the PRs!
 
-## [3.3.26]
+## [3.3.26] - 2025-02-27
 
 - Adjust the default prompt for Debug mode to focus more on diagnosis and to require user confirmation before moving on to implementation
 
-## [3.3.25]
+## [3.3.25] - 2025-02-21
 
 - Add a "Debug" mode that specializes in debugging tricky problems (thanks [Ted Werbel](https://x.com/tedx_ai/status/1891514191179309457) and [Carlos E. Perez](https://x.com/IntuitMachine/status/1891516362486337739)!)
 - Add an experimental "Power Steering" option to significantly improve adherence to role definitions and custom instructions
 
-## [3.3.24]
+## [3.3.24] - 2025-02-20
 
 - Fixed a bug with region selection preventing AWS Bedrock profiles from being saved (thanks @oprstchn!)
 - Updated the price of gpt-4o (thanks @marvijo-code!)
 
-## [3.3.23]
+## [3.3.23] - 2025-02-20
 
 - Handle errors more gracefully when reading custom instructions from files (thanks @joemanley201!)
 - Bug fix to hitting "Done" on settings page with unsaved changes (thanks @System233!)
 
-## [3.3.22]
+## [3.3.22] - 2025-02-20
 
 - Improve the Provider Settings configuration with clear Save buttons and warnings about unsaved changes (thanks @System233!)
 - Correctly parse `<think>` reasoning tags from Ollama models (thanks @System233!)
@@ -132,7 +143,7 @@
 - Fix a bug where the .roomodes file was not automatically created when adding custom modes from the Prompts tab
 - Allow setting a wildcard (`*`) to auto-approve all command execution (use with caution!)
 
-## [3.3.21]
+## [3.3.21] - 2025-02-17
 
 - Fix input box revert issue and configuration loss during profile switch (thanks @System233!)
 - Fix default preferred language for zh-cn and zh-tw (thanks @System233!)
@@ -141,23 +152,23 @@
 - Fix system prompt to make sure Roo knows about all available modes
 - Enable streaming mode for OpenAI o1
 
-## [3.3.20]
+## [3.3.20] - 2025-02-14
 
 - Support project-specific custom modes in a .roomodes file
 - Add more Mistral models (thanks @d-oit and @bramburn!)
 - By popular request, make it so Ask mode can't write to Markdown files and is purely for chatting with
 - Add a setting to control the number of open editor tabs to tell the model about (665 is probably too many!)
 - Fix race condition bug with entering API key on the welcome screen
 
-## [3.3.19]
+## [3.3.19] - 2025-02-12
 
 - Fix a bug where aborting in the middle of file writes would not revert the write
 - Honor the VS Code theme for dialog backgrounds
 - Make it possible to clear out the default custom instructions for built-in modes
 - Add a help button that links to our new documentation site (which we would love help from the community to improve!)
 - Switch checkpoints logic to use a shadow git repository to work around issues with hot reloads and polluting existing repositories (thanks Cline for the inspiration!)
 
-## [3.3.18]
+## [3.3.18] - 2025-02-11
 
 - Add a per-API-configuration model temperature setting (thanks @joemanley201!)
 - Add retries for fetching usage stats from OpenRouter (thanks @jcbdev!)
@@ -168,18 +179,18 @@
 - Fix logic error where automatic retries were waiting twice as long as intended
 - Rework the checkpoints code to avoid conflicts with file locks on Windows (sorry for the hassle!)
 
-## [3.3.17]
+## [3.3.17] - 2025-02-09
 
 - Fix the restore checkpoint popover
 - Unset git config that was previously set incorrectly by the checkpoints feature
 
-## [3.3.16]
+## [3.3.16] - 2025-02-09
 
 - Support Volcano Ark platform through the OpenAI-compatible provider
 - Fix jumpiness while entering API config by updating on blur instead of input
 - Add tooltips on checkpoint actions and fix an issue where checkpoints were overwriting existing git name/email settings - thanks for the feedback!
 
-## [3.3.15]
+## [3.3.15] - 2025-02-08
 
 - Improvements to MCP initialization and server restarts (thanks @MuriloFP and @hannesrudolph!)
 - Add a copy button to the recent tasks (thanks @hannesrudolph!)

package-lock.json

Lines changed: 2 additions & 2 deletions
Some generated files are not rendered by default.

package.json

Lines changed: 1 addition & 1 deletion

@@ -3,7 +3,7 @@
     "displayName": "Roo Code (prev. Roo Cline)",
     "description": "A whole dev team of AI agents in your editor.",
     "publisher": "RooVeterinaryInc",
-    "version": "3.8.0",
+    "version": "3.8.1",
     "icon": "assets/icons/rocket.png",
     "galleryBanner": {
         "color": "#617A91",

src/api/providers/anthropic.ts

Lines changed: 3 additions & 3 deletions

@@ -214,12 +214,12 @@ export class AnthropicHandler extends BaseProvider implements SingleCompletionHandler
     }
 
     async completePrompt(prompt: string) {
-        let { id: modelId, maxTokens, thinking, temperature } = this.getModel()
+        let { id: modelId, temperature } = this.getModel()
 
         const message = await this.client.messages.create({
             model: modelId,
-            max_tokens: maxTokens ?? ANTHROPIC_DEFAULT_MAX_TOKENS,
-            thinking,
+            max_tokens: ANTHROPIC_DEFAULT_MAX_TOKENS,
+            thinking: undefined,
             temperature,
             messages: [{ role: "user", content: prompt }],
             stream: false,
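
Context for this change: `completePrompt` is the single-shot, non-streaming helper (used by features like prompt enhancement). Anthropic requires `max_tokens` to exceed the extended-thinking budget, so forwarding a large thinking budget here could force an oversized non-streaming request — which lines up with the 3.8.1 entry about enhance prompt failing on Sonnet 3.7 with a high thinking budget. A hedged, standalone sketch of the resulting call shape (the model ID and token budget are illustrative, not taken from the commit):

```typescript
// Illustrative sketch, not commit code: a plain one-shot completion with a
// fixed output budget and no extended thinking.
import Anthropic from "@anthropic-ai/sdk"

const client = new Anthropic() // reads ANTHROPIC_API_KEY from the environment

async function completeOnce(prompt: string): Promise<string> {
    const message = await client.messages.create({
        model: "claude-3-7-sonnet-20250219", // illustrative model ID
        max_tokens: 8192, // fixed budget for a single-shot completion
        // no `thinking` block: plain completion, regardless of the chat profile
        messages: [{ role: "user", content: prompt }],
    })
    const block = message.content[0]
    return block?.type === "text" ? block.text : ""
}
```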

src/api/providers/openai.ts

Lines changed: 68 additions & 0 deletions

@@ -66,6 +66,11 @@ export class OpenAiHandler extends BaseProvider implements SingleCompletionHandler
         const deepseekReasoner = modelId.includes("deepseek-reasoner")
         const ark = modelUrl.includes(".volces.com")
 
+        if (modelId.startsWith("o3-mini")) {
+            yield* this.handleO3FamilyMessage(modelId, systemPrompt, messages)
+            return
+        }
+
         if (this.options.openAiStreamingEnabled ?? true) {
             const systemMessage: OpenAI.Chat.ChatCompletionSystemMessageParam = {
                 role: "system",
@@ -169,6 +174,69 @@ export class OpenAiHandler extends BaseProvider implements SingleCompletionHandler
             throw error
         }
     }
+
+    private async *handleO3FamilyMessage(
+        modelId: string,
+        systemPrompt: string,
+        messages: Anthropic.Messages.MessageParam[],
+    ): ApiStream {
+        if (this.options.openAiStreamingEnabled ?? true) {
+            const stream = await this.client.chat.completions.create({
+                model: "o3-mini",
+                messages: [
+                    {
+                        role: "developer",
+                        content: `Formatting re-enabled\n${systemPrompt}`,
+                    },
+                    ...convertToOpenAiMessages(messages),
+                ],
+                stream: true,
+                stream_options: { include_usage: true },
+                reasoning_effort: this.getModel().info.reasoningEffort,
+            })
+
+            yield* this.handleStreamResponse(stream)
+        } else {
+            const requestOptions: OpenAI.Chat.Completions.ChatCompletionCreateParamsNonStreaming = {
+                model: modelId,
+                messages: [
+                    {
+                        role: "developer",
+                        content: `Formatting re-enabled\n${systemPrompt}`,
+                    },
+                    ...convertToOpenAiMessages(messages),
+                ],
+            }
+
+            const response = await this.client.chat.completions.create(requestOptions)
+
+            yield {
+                type: "text",
+                text: response.choices[0]?.message.content || "",
+            }
+            yield this.processUsageMetrics(response.usage)
+        }
+    }
+
+    private async *handleStreamResponse(stream: AsyncIterable<OpenAI.Chat.Completions.ChatCompletionChunk>): ApiStream {
+        for await (const chunk of stream) {
+            const delta = chunk.choices[0]?.delta
+            if (delta?.content) {
+                yield {
+                    type: "text",
+                    text: delta.content,
+                }
+            }
+
+            if (chunk.usage) {
+                yield {
+                    type: "usage",
+                    inputTokens: chunk.usage.prompt_tokens || 0,
+                    outputTokens: chunk.usage.completion_tokens || 0,
+                }
+            }
+        }
+    }
 }
 
 export async function getOpenAiModels(baseUrl?: string, apiKey?: string)
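
For context, OpenAI's o-series reasoning models are designed to take instructions through the `developer` role rather than `system`, accept a `reasoning_effort` parameter, and suppress markdown output unless the developer message starts with "Formatting re-enabled" — which is exactly the request shape the new handler builds. A minimal standalone sketch of the same call (the prompt text and effort value are placeholders, and the `developer` role and `reasoning_effort` need a recent `openai` SDK):

```typescript
// Standalone sketch of the o3-mini request the handler above constructs.
import OpenAI from "openai"

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY })

async function main() {
    const stream = await client.chat.completions.create({
        model: "o3-mini",
        messages: [
            // o-series models take instructions via "developer" rather than
            // "system"; "Formatting re-enabled" restores markdown in replies.
            { role: "developer", content: "Formatting re-enabled\nYou are a helpful assistant." },
            { role: "user", content: "Summarize the SOLID principles." }, // placeholder prompt
        ],
        stream: true,
        stream_options: { include_usage: true }, // final chunk carries token usage
        reasoning_effort: "medium", // "low" | "medium" | "high"
    })

    for await (const chunk of stream) {
        process.stdout.write(chunk.choices[0]?.delta?.content ?? "")
        if (chunk.usage) {
            console.error(`\n[tokens] in=${chunk.usage.prompt_tokens} out=${chunk.usage.completion_tokens}`)
        }
    }
}

main()
```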

src/core/Cline.ts

Lines changed: 5 additions & 2 deletions

@@ -158,7 +158,7 @@ export class Cline {
             apiConfiguration,
             customInstructions,
             enableDiff,
-            enableCheckpoints = false,
+            enableCheckpoints = true,
             checkpointStorage = "task",
             fuzzyMatchThreshold,
             task,
@@ -1124,9 +1124,12 @@ export class Cline {
 
         const totalTokens = tokensIn + tokensOut + cacheWrites + cacheReads
 
+        // Default max tokens value for thinking models when no specific value is set
+        const DEFAULT_THINKING_MODEL_MAX_TOKENS = 16_384
+
         const modelInfo = this.api.getModel().info
         const maxTokens = modelInfo.thinking
-            ? this.apiConfiguration.modelMaxTokens || modelInfo.maxTokens
+            ? this.apiConfiguration.modelMaxTokens || DEFAULT_THINKING_MODEL_MAX_TOKENS
             : modelInfo.maxTokens
         const contextWindow = modelInfo.contextWindow
         const trimmedMessages = await truncateConversationIfNeeded({
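
This addresses the 3.8.1 entry "Fix bug with the context window management for thinking models": since 3.7.12 expanded thinking models to 128k max tokens, falling back to `modelInfo.maxTokens` reserved most of the context window for output. A hedged sketch of the arithmetic under the assumption that truncation reserves the full output budget (names here are illustrative, not commit code):

```typescript
// Hypothetical illustration of why the 16_384 fallback matters for a thinking
// model advertising 128k output tokens on a 200k context window.
interface ModelInfo {
    thinking?: boolean
    maxTokens: number
    contextWindow: number
}

const DEFAULT_THINKING_MODEL_MAX_TOKENS = 16_384

function reservedOutputTokens(info: ModelInfo, userMaxTokens?: number): number {
    return info.thinking
        ? userMaxTokens || DEFAULT_THINKING_MODEL_MAX_TOKENS // explicit override, else a sane default
        : info.maxTokens
}

const sonnetThinking: ModelInfo = { thinking: true, maxTokens: 128_000, contextWindow: 200_000 }

// Before: 200_000 - 128_000 = 72_000 tokens left for the conversation.
// After:  200_000 - 16_384  = 183_616 tokens left.
console.log(reservedOutputTokens(sonnetThinking)) // 16384
```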

src/core/webview/ClineProvider.ts

Lines changed: 1 addition & 1 deletion

@@ -2389,7 +2389,7 @@ export class ClineProvider implements vscode.WebviewViewProvider {
             allowedCommands: stateValues.allowedCommands,
             soundEnabled: stateValues.soundEnabled ?? false,
             diffEnabled: stateValues.diffEnabled ?? true,
-            enableCheckpoints: stateValues.enableCheckpoints ?? false,
+            enableCheckpoints: stateValues.enableCheckpoints ?? true,
             checkpointStorage: stateValues.checkpointStorage ?? "task",
             soundVolume: stateValues.soundVolume,
             browserViewportSize: stateValues.browserViewportSize ?? "900x600",

src/shared/api.ts

Lines changed: 1 addition & 1 deletion

@@ -70,7 +70,7 @@ export interface ApiHandlerOptions {
     requestyApiKey?: string
     requestyModelId?: string
     requestyModelInfo?: ModelInfo
-    modelTemperature?: number
+    modelTemperature?: number | null
     modelMaxTokens?: number
     modelMaxThinkingTokens?: number
 }
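
The widened type ties into the 3.8.1 fix for unchecking custom temperature: `undefined` can mean "never configured" while `null` records an explicit unset, and `0` stays a valid override. A hedged sketch of the resolution logic this enables (the function and default value are illustrative, not from the commit):

```typescript
// Sketch of the three-state distinction `number | null | undefined` allows:
//   undefined -> no custom temperature was ever configured
//   null      -> the user unchecked the custom-temperature option
//   number    -> an explicit override (including 0)
function resolveTemperature(modelTemperature: number | null | undefined, providerDefault: number): number {
    // `??` treats both null and undefined as "use the default", while callers
    // can still persist null to record an explicit "unset".
    return modelTemperature ?? providerDefault
}

console.log(resolveTemperature(undefined, 0.7)) // 0.7
console.log(resolveTemperature(null, 0.7)) // 0.7
console.log(resolveTemperature(0, 0.7)) // 0 (a real 0 is preserved, unlike with ||)
```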
