Commit bf7a637

Merge branch 'RooCodeInc:main' into i/update-gemini-and-vertex-models

2 parents: e053aad + f9e85a5

File tree: 147 files changed (+3407 −574 lines)


.roo/commands/release.md

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ argument-hint: patch | minor | major
 ---

 1. Identify the SHA corresponding to the most recent release using GitHub CLI: `gh release view --json tagName,targetCommitish,publishedAt`
-2. Analyze changes since the last release using: `gh pr list --state merged --json number,title,author,url,mergedAt,closingIssuesReferences --limit 1000 -q '[.[] | select(.mergedAt > "TIMESTAMP") | {number, title, author: .author.login, url, mergedAt, issues: .closingIssuesReferences}] | sort_by(.number)'`
+2. Analyze changes since the last release using: `gh pr list --state merged --base main --json number,title,author,url,mergedAt,closingIssuesReferences --limit 1000 -q '[.[] | select(.mergedAt > "TIMESTAMP") | {number, title, author: .author.login, url, mergedAt, issues: .closingIssuesReferences}] | sort_by(.number)'`
 3. For each PR with linked issues, fetch the issue details to get the issue reporter: `gh issue view ISSUE_NUMBER --json number,author -q '{number, reporter: .author.login}'`
 4. Summarize the changes. If the user did not specify, ask them whether this should be a major, minor, or patch release.
 5. Create a changeset in .changeset/v[version].md instead of directly modifying package.json. The format is:
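The `--base main` flag added here restricts the listing to PRs merged into main. The jq filter in that command can be sketched in TypeScript to make its select/sort behavior concrete; the `MergedPr` shape and `prsSinceRelease` helper below are illustrative, not part of the repo:

```typescript
// Minimal shape of the fields requested via --json (illustrative subset).
interface MergedPr {
	number: number
	title: string
	author: { login: string }
	url: string
	mergedAt: string // ISO 8601 timestamp
}

// Mirrors the jq pipeline: select(.mergedAt > "TIMESTAMP") ... | sort_by(.number).
// ISO 8601 timestamps compare correctly as strings, same as jq's string comparison.
function prsSinceRelease(prs: MergedPr[], lastReleaseAt: string): MergedPr[] {
	return prs.filter((pr) => pr.mergedAt > lastReleaseAt).sort((a, b) => a.number - b.number)
}
```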

.roo/roomotes.yml

Lines changed: 0 additions & 8 deletions
@@ -20,14 +20,6 @@ github_events:
     - event: pull_request.opened
       action:
         name: github.pr.review
-    - event: pull_request.opened
-      action:
-        name: general.task
-        prompt: |
-          1. Run the script `node scripts/find-missing-translations.js` and carefully review its output for any missing translations.
-          2. If the script reports missing translations, switch into `translate` mode and add them in all supported languages.
-          3. If you've added new translations, commit and push them to the existing PR.
-          4. If you get a permission error trying to push to the PR just give up (i.e don't create a new PR instead).
     - event: pull_request_review_comment.created
       action:
         name: github.pr.comment.respond

CHANGELOG.md

Lines changed: 10 additions & 0 deletions
@@ -1,5 +1,15 @@
 # Roo Code Changelog

+## [3.25.11] - 2025-08-11
+
+- Add: Native OpenAI provider support for Codex Mini model (#5386 by @KJ7LNW, PR by @daniel-lxs)
+- Add: IO Intelligence Provider support (thanks @ertan2002!)
+- Fix: MCP startup issues and remove refresh notifications (thanks @hannesrudolph!)
+- Fix: Improvements to GPT-5 OpenAI provider configuration (thanks @hannesrudolph!)
+- Fix: Clarify codebase_search path parameter as optional and improve tool descriptions (thanks @app/roomote!)
+- Fix: Bedrock provider workaround for LiteLLM passthrough issues (thanks @jr!)
+- Fix: Token usage and cost being underreported on cancelled requests (thanks @chrarnoldus!)
+
 ## [3.25.10] - 2025-08-07

 - Add support for GPT-5 (thanks Cline and @app/roomote!)

packages/types/npm/package.json

Lines changed: 1 addition & 1 deletion
@@ -1,6 +1,6 @@
 {
 	"name": "@roo-code/types",
-	"version": "1.44.0",
+	"version": "1.45.0",
 	"description": "TypeScript type definitions for Roo Code.",
 	"publishConfig": {
 		"access": "public",

packages/types/src/global-settings.ts

Lines changed: 8 additions & 0 deletions
@@ -29,6 +29,13 @@ export const DEFAULT_WRITE_DELAY_MS = 1000
  */
 export const DEFAULT_TERMINAL_OUTPUT_CHARACTER_LIMIT = 50_000

+/**
+ * Default timeout for background usage collection in milliseconds.
+ * This timeout prevents the background task from running indefinitely
+ * when collecting usage data from streaming API responses.
+ */
+export const DEFAULT_USAGE_COLLECTION_TIMEOUT_MS = 30_000
+
 /**
  * GlobalSettings
  */
@@ -194,6 +201,7 @@ export const SECRET_STATE_KEYS = [
 	"huggingFaceApiKey",
 	"sambaNovaApiKey",
 	"fireworksApiKey",
+	"ioIntelligenceApiKey",
 ] as const satisfies readonly (keyof ProviderSettings)[]
 export type SecretState = Pick<ProviderSettings, (typeof SECRET_STATE_KEYS)[number]>
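A constant like `DEFAULT_USAGE_COLLECTION_TIMEOUT_MS` is typically used to bound a background task via `Promise.race`. The sketch below is a hypothetical illustration of that pattern, assuming a fallback value on timeout; `withTimeout` is not the extension's actual implementation:

```typescript
// Default timeout mirroring the constant introduced in this diff.
const DEFAULT_USAGE_COLLECTION_TIMEOUT_MS = 30_000

// Race a task against a timer; resolve with `fallback` if the timer wins.
// This keeps a slow or hung collection task from blocking indefinitely.
async function withTimeout<T>(task: Promise<T>, ms: number, fallback: T): Promise<T> {
	const timer = new Promise<T>((resolve) => setTimeout(() => resolve(fallback), ms))
	return Promise.race([task, timer])
}
```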

packages/types/src/provider-settings.ts

Lines changed: 10 additions & 0 deletions
@@ -43,6 +43,7 @@ export const providerNames = [
 	"sambanova",
 	"zai",
 	"fireworks",
+	"io-intelligence",
 ] as const

 export const providerNamesSchema = z.enum(providerNames)
@@ -224,6 +225,7 @@ const unboundSchema = baseProviderSettingsSchema.extend({
 })

 const requestySchema = baseProviderSettingsSchema.extend({
+	requestyBaseUrl: z.string().optional(),
 	requestyApiKey: z.string().optional(),
 	requestyModelId: z.string().optional(),
 })
@@ -276,6 +278,11 @@ const fireworksSchema = apiModelIdProviderModelSchema.extend({
 	fireworksApiKey: z.string().optional(),
 })

+const ioIntelligenceSchema = apiModelIdProviderModelSchema.extend({
+	ioIntelligenceModelId: z.string().optional(),
+	ioIntelligenceApiKey: z.string().optional(),
+})
+
 const defaultSchema = z.object({
 	apiProvider: z.undefined(),
 })
@@ -311,6 +318,7 @@ export const providerSettingsSchemaDiscriminated = z.discriminatedUnion("apiProv
 	sambaNovaSchema.merge(z.object({ apiProvider: z.literal("sambanova") })),
 	zaiSchema.merge(z.object({ apiProvider: z.literal("zai") })),
 	fireworksSchema.merge(z.object({ apiProvider: z.literal("fireworks") })),
+	ioIntelligenceSchema.merge(z.object({ apiProvider: z.literal("io-intelligence") })),
 	defaultSchema,
 ])
@@ -346,6 +354,7 @@ export const providerSettingsSchema = z.object({
 	...sambaNovaSchema.shape,
 	...zaiSchema.shape,
 	...fireworksSchema.shape,
+	...ioIntelligenceSchema.shape,
 	...codebaseIndexProviderSchema.shape,
 })
@@ -371,6 +380,7 @@ export const MODEL_ID_KEYS: Partial<keyof ProviderSettings>[] = [
 	"requestyModelId",
 	"litellmModelId",
 	"huggingFaceModelId",
+	"ioIntelligenceModelId",
 ]

 export const getModelId = (settings: ProviderSettings): string | undefined => {
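The zod `discriminatedUnion` above keys provider-specific settings off the `apiProvider` literal. The same idea can be shown in plain TypeScript: the discriminant narrows which fields are visible on each branch. The two-member union and `apiKeyFor` helper below are illustrative, not the full schema:

```typescript
// Illustrative two-member slice of the settings union; field names follow the diff.
type ProviderConfig =
	| { apiProvider: "fireworks"; fireworksApiKey?: string }
	| { apiProvider: "io-intelligence"; ioIntelligenceApiKey?: string; ioIntelligenceModelId?: string }

// Switching on the discriminant narrows the type, so only the matching
// provider's keys are accessible in each branch.
function apiKeyFor(config: ProviderConfig): string | undefined {
	switch (config.apiProvider) {
		case "fireworks":
			return config.fireworksApiKey
		case "io-intelligence":
			return config.ioIntelligenceApiKey
	}
}
```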

packages/types/src/providers/chutes.ts

Lines changed: 20 additions & 0 deletions
@@ -23,10 +23,12 @@ export type ChutesModelId =
 	| "Qwen/Qwen3-30B-A3B"
 	| "Qwen/Qwen3-14B"
 	| "Qwen/Qwen3-8B"
+	| "Qwen/Qwen3-Coder-480B-A35B-Instruct-FP8"
 	| "microsoft/MAI-DS-R1-FP8"
 	| "tngtech/DeepSeek-R1T-Chimera"
 	| "zai-org/GLM-4.5-Air"
 	| "zai-org/GLM-4.5-FP8"
+	| "moonshotai/Kimi-K2-Instruct-75k"

 export const chutesDefaultModelId: ChutesModelId = "deepseek-ai/DeepSeek-R1-0528"

@@ -258,4 +260,22 @@ export const chutesModels = {
 		description:
 			"GLM-4.5-FP8 model with 128k token context window, optimized for agent-based applications with MoE architecture.",
 	},
+	"Qwen/Qwen3-Coder-480B-A35B-Instruct-FP8": {
+		maxTokens: 32768,
+		contextWindow: 262144,
+		supportsImages: false,
+		supportsPromptCache: false,
+		inputPrice: 0,
+		outputPrice: 0,
+		description: "Qwen3 Coder 480B A35B Instruct FP8 model, optimized for coding tasks.",
+	},
+	"moonshotai/Kimi-K2-Instruct-75k": {
+		maxTokens: 32768,
+		contextWindow: 75000,
+		supportsImages: false,
+		supportsPromptCache: false,
+		inputPrice: 0.1481,
+		outputPrice: 0.5926,
+		description: "Moonshot AI Kimi K2 Instruct model with 75k context window.",
+	},
 } as const satisfies Record<string, ModelInfo>
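Assuming, as the magnitudes suggest, that `inputPrice`/`outputPrice` are USD per million tokens, a request's cost can be estimated from a model entry as below. `computeCostUSD` is an illustrative helper, not an export of this package:

```typescript
// Estimate request cost from per-million-token prices (assumption: USD/1M tokens).
function computeCostUSD(inputTokens: number, outputTokens: number, inputPrice: number, outputPrice: number): number {
	return (inputTokens / 1_000_000) * inputPrice + (outputTokens / 1_000_000) * outputPrice
}

// Example with the Kimi-K2-Instruct-75k prices from the entry above.
const kimiCost = computeCostUSD(1_000_000, 0, 0.1481, 0.5926)
```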

packages/types/src/providers/index.ts

Lines changed: 1 addition & 0 deletions
@@ -8,6 +8,7 @@ export * from "./gemini.js"
 export * from "./glama.js"
 export * from "./groq.js"
 export * from "./huggingface.js"
+export * from "./io-intelligence.js"
 export * from "./lite-llm.js"
 export * from "./lm-studio.js"
 export * from "./mistral.js"
packages/types/src/providers/io-intelligence.ts

Lines changed: 44 additions & 0 deletions

@@ -0,0 +1,44 @@
+import type { ModelInfo } from "../model.js"
+
+export type IOIntelligenceModelId =
+	| "deepseek-ai/DeepSeek-R1-0528"
+	| "meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8"
+	| "Intel/Qwen3-Coder-480B-A35B-Instruct-int4-mixed-ar"
+	| "openai/gpt-oss-120b"
+
+export const ioIntelligenceDefaultModelId: IOIntelligenceModelId = "meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8"
+
+export const ioIntelligenceDefaultBaseUrl = "https://api.intelligence.io.solutions/api/v1"
+
+export const IO_INTELLIGENCE_CACHE_DURATION = 1000 * 60 * 60 // 1 hour
+
+export const ioIntelligenceModels = {
+	"deepseek-ai/DeepSeek-R1-0528": {
+		maxTokens: 8192,
+		contextWindow: 128000,
+		supportsImages: false,
+		supportsPromptCache: false,
+		description: "DeepSeek R1 reasoning model",
+	},
+	"meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8": {
+		maxTokens: 8192,
+		contextWindow: 430000,
+		supportsImages: true,
+		supportsPromptCache: false,
+		description: "Llama 4 Maverick 17B model",
+	},
+	"Intel/Qwen3-Coder-480B-A35B-Instruct-int4-mixed-ar": {
+		maxTokens: 8192,
+		contextWindow: 106000,
+		supportsImages: false,
+		supportsPromptCache: false,
+		description: "Qwen3 Coder 480B specialized for coding",
+	},
+	"openai/gpt-oss-120b": {
+		maxTokens: 8192,
+		contextWindow: 131072,
+		supportsImages: false,
+		supportsPromptCache: false,
+		description: "OpenAI GPT-OSS 120B model",
+	},
+} as const satisfies Record<string, ModelInfo>
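A consumer of this table typically resolves a user-configured model ID and falls back to the default when the ID is missing or unknown. The sketch below assumes that pattern; `resolveModelId` and the trimmed `contextWindows` map are illustrative, not part of the file:

```typescript
// Default from the new file above.
const ioIntelligenceDefaultModelId = "meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8"

// Trimmed stand-in for ioIntelligenceModels, keyed by model ID.
const contextWindows: Record<string, number> = {
	"deepseek-ai/DeepSeek-R1-0528": 128000,
	"meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8": 430000,
}

// Use the configured ID only if it exists in the table; otherwise fall back.
function resolveModelId(requested: string | undefined): string {
	return requested && requested in contextWindows ? requested : ioIntelligenceDefaultModelId
}
```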

packages/types/src/providers/openai.ts

Lines changed: 11 additions & 0 deletions
@@ -220,6 +220,17 @@ export const openAiNativeModels = {
 		outputPrice: 0.6,
 		cacheReadsPrice: 0.075,
 	},
+	"codex-mini-latest": {
+		maxTokens: 16_384,
+		contextWindow: 200_000,
+		supportsImages: false,
+		supportsPromptCache: false,
+		inputPrice: 1.5,
+		outputPrice: 6,
+		cacheReadsPrice: 0,
+		description:
+			"Codex Mini: Cloud-based software engineering agent powered by codex-1, a version of o3 optimized for coding tasks. Trained with reinforcement learning to generate human-style code, adhere to instructions, and iteratively run tests.",
+	},
 } as const satisfies Record<string, ModelInfo>

 export const openAiModelInfoSaneDefaults: ModelInfo = {
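Model entries like this one cap completions at `maxTokens` (16_384 for codex-mini-latest). A caller would normally clamp any requested budget to that limit; `clampMaxTokens` below is a hypothetical helper, not part of the package:

```typescript
// Clamp a requested completion budget to [1, modelMax].
function clampMaxTokens(requested: number, modelMax: number): number {
	return Math.min(Math.max(requested, 1), modelMax)
}

// Example: a 50k-token request against codex-mini-latest's 16_384 limit.
const budget = clampMaxTokens(50_000, 16_384)
```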

0 commit comments