
Commit dc57552

Authored by roomote[bot], roomote-agent, hannesrudolph, and daniel-lxs
feat: add GPT-5 model support (RooCodeInc#6819)
* feat: add GPT-5 model support
  - Added GPT-5 models (gpt-5-2025-08-07, gpt-5-mini-2025-08-07, gpt-5-nano-2025-08-07)
  - Added nectarine-alpha-new-reasoning-effort-2025-07-25 experimental model
  - Set gpt-5-2025-08-07 as default OpenAI Native model
  - Implemented GPT-5-specific handling with streaming and reasoning effort support
* fix: remove hardcoded temperature from GPT-5 handler
  - Updated handleGPT5Message to use configurable temperature
  - Now uses this.options.modelTemperature ?? OPENAI_NATIVE_DEFAULT_TEMPERATURE
  - Maintains consistency with other model handlers
* feat: add reasoning effort support for all OpenAI models
* fix: update test to expect new default model gpt-5-2025-08-07
* feat: increase GPT-5 models context window to 400,000
  - Updated context window from 256,000 to 400,000 for gpt-5-2025-08-07
  - Updated context window from 256,000 to 400,000 for gpt-5-mini-2025-08-07
  - Updated context window from 256,000 to 400,000 for gpt-5-nano-2025-08-07
  - Updated context window from 256,000 to 400,000 for nectarine-alpha-new-reasoning-effort-2025-07-25
  As requested by @daniel-lxs in PR RooCodeInc#6819
* revert: remove GPT-5 models, keep only nectarine experimental model
  - Removed gpt-5-2025-08-07, gpt-5-mini-2025-08-07, gpt-5-nano-2025-08-07
  - Kept nectarine-alpha-new-reasoning-effort-2025-07-25 experimental model
  - Reverted default model back to gpt-4o
  - Updated tests and changeset accordingly
* feat: add GPT-5 models with updated context windows
  - Added gpt-5-2025-08-07, gpt-5-mini-2025-08-07, gpt-5-nano-2025-08-07 models
  - All GPT-5 models configured with 400,000 context window
  - Updated nectarine model context window to 256,000
  - All models configured with reasoning effort support
  - Set gpt-5-2025-08-07 as default OpenAI Native model
  - Added GPT-5 model handling in openai-native.ts
  - Updated tests to reflect new default model
* fix: restore reasoning effort support for o1 series models
  - Added supportsReasoningEffort: true to o1, o1-preview, and o1-mini models
  - This restores the ability to use reasoning effort parameters with these models
  - The existing code in openai-native.ts already handles reasoning effort correctly
* Revert "fix: restore reasoning effort support for o1 series models"
  This reverts commit 7251237.
* fix: restore reasoning effort support for o3 and o4 models
  - Added supportsReasoningEffort: true to o3, o3-high, o3-low models
  - Added supportsReasoningEffort: true to o4-mini, o4-mini-high, o4-mini-low models
  - Added supportsReasoningEffort: true to o3-mini, o3-mini-high, o3-mini-low models
  - These models have both supportsReasoningEffort and reasoningEffort properties
* Revert "fix: restore reasoning effort support for o3 and o4 models"
  This reverts commit a75a2b8.
* fix: restore reasoning effort support for o3 and o4 models
  - Added supportsReasoningEffort: true to o3, o3-high, o3-low models
  - Added supportsReasoningEffort: true to o4-mini, o4-mini-high, o4-mini-low models
  - Added supportsReasoningEffort: true to o3-mini, o3-mini-high, o3-mini-low models
* fix: adjust reasoning effort support for o3/o4 models
  - Keep supportsReasoningEffort only for base o3, o4-mini, and o3-mini models
  - Remove supportsReasoningEffort from -high and -low variants
  - Position supportsReasoningEffort right before reasoningEffort property
* fix: remove nectarine experimental model
  - Removed nectarine-alpha-new-reasoning-effort-2025-07-25 from openai.ts
  - Removed nectarine handling from openai-native.ts (renamed to handleGpt5Message)
  - Removed associated changeset file
  - Keep GPT-5 models with developer role handling
* feat: implement full GPT-5 support with verbosity and minimal reasoning
  - Add all three GPT-5 models with accurate pricing ($1.25/$10.00 for gpt-5, $0.25/$2.00 for mini, $0.05/$0.40 for nano)
  - Implement verbosity control (low/medium/high) that passes through to API
  - Add minimal reasoning effort support for fastest response times
  - GPT-5 models use developer role instead of system role
  - Set gpt-5-2025-08-07 as default OpenAI Native model
  - Add Responses API infrastructure for future migration
  - Update tests to verify all GPT-5 features
  - All 27 tests passing
  Note: UI controls for verbosity still need to be added in a follow-up PR
* feat: add verbosity setting for GPT-5 models
  - Add VerbosityLevel type definition to model types
  - Add verbosity field to ProviderSettings schema
  - Create Verbosity UI component for settings
  - Add verbosity labels to all localization files
  - Integrate verbosity handling in model parameters transformation
  - Update OpenAI native handler to support verbosity for GPT-5
  - Add comprehensive tests for verbosity setting
  - Update existing GPT-5 tests to use verbosity from settings
* Delete .roorules

---------

Co-authored-by: Roo Code <[email protected]>
Co-authored-by: hannesrudolph <[email protected]>
Co-authored-by: Daniel Riccio <[email protected]>
Co-authored-by: Daniel <[email protected]>
1 parent 72668fe commit dc57552

File tree

27 files changed (+741, -12 lines)

packages/types/src/model.ts

Lines changed: 10 additions & 0 deletions
@@ -10,6 +10,16 @@ export const reasoningEffortsSchema = z.enum(reasoningEfforts)
 
 export type ReasoningEffort = z.infer<typeof reasoningEffortsSchema>
 
+/**
+ * Verbosity
+ */
+
+export const verbosityLevels = ["low", "medium", "high"] as const
+
+export const verbosityLevelsSchema = z.enum(verbosityLevels)
+
+export type VerbosityLevel = z.infer<typeof verbosityLevelsSchema>
+
 /**
  * ModelParameter
  */
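For readers without the zod context, the same union can be expressed and guarded in plain TypeScript. This is a sketch only; isVerbosityLevel is a hypothetical helper, and the real module relies on z.enum / verbosityLevelsSchema for validation:

```typescript
// Plain-TypeScript sketch of the verbosity union added above.
// The real module derives the type via zod's z.enum; this version
// uses a const tuple plus a hypothetical type guard instead.
const verbosityLevels = ["low", "medium", "high"] as const

type VerbosityLevel = (typeof verbosityLevels)[number]

// Hypothetical runtime guard, analogous to verbosityLevelsSchema.safeParse.
function isVerbosityLevel(value: unknown): value is VerbosityLevel {
	return typeof value === "string" && (verbosityLevels as readonly string[]).includes(value)
}

console.log(isVerbosityLevel("medium")) // true
console.log(isVerbosityLevel("verbose")) // false
```

The const tuple keeps a single source of truth: the type is derived from the runtime array, so adding a level in one place updates both.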

packages/types/src/provider-settings.ts

Lines changed: 4 additions & 1 deletion
@@ -1,6 +1,6 @@
 import { z } from "zod"
 
-import { reasoningEffortsSchema, modelInfoSchema } from "./model.js"
+import { reasoningEffortsSchema, verbosityLevelsSchema, modelInfoSchema } from "./model.js"
 import { codebaseIndexProviderSchema } from "./codebase-index.js"
 
 /**
@@ -79,6 +79,9 @@ const baseProviderSettingsSchema = z.object({
 	reasoningEffort: reasoningEffortsSchema.optional(),
 	modelMaxTokens: z.number().optional(),
 	modelMaxThinkingTokens: z.number().optional(),
+
+	// Model verbosity.
+	verbosity: verbosityLevelsSchema.optional(),
 })
 
 // Several of the providers share common model config properties.
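Because the field is optional in the settings schema, downstream code needs a fallback. A hypothetical sketch of that lookup (resolveVerbosity and DEFAULT_VERBOSITY are illustrative names, not identifiers from this diff):

```typescript
// Hypothetical sketch: resolving the optional verbosity setting to a default.
// The real schema marks the field verbosityLevelsSchema.optional().
interface ProviderSettingsSketch {
	verbosity?: "low" | "medium" | "high"
}

// Illustrative constant; the PR's tests treat "medium" as the GPT-5 default.
const DEFAULT_VERBOSITY = "medium" as const

function resolveVerbosity(settings: ProviderSettingsSketch): "low" | "medium" | "high" {
	return settings.verbosity ?? DEFAULT_VERBOSITY
}

console.log(resolveVerbosity({})) // medium
console.log(resolveVerbosity({ verbosity: "low" })) // low
```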

packages/types/src/providers/openai.ts

Lines changed: 34 additions & 1 deletion
@@ -3,9 +3,42 @@ import type { ModelInfo } from "../model.js"
 // https://openai.com/api/pricing/
 export type OpenAiNativeModelId = keyof typeof openAiNativeModels
 
-export const openAiNativeDefaultModelId: OpenAiNativeModelId = "gpt-4.1"
+export const openAiNativeDefaultModelId: OpenAiNativeModelId = "gpt-5-2025-08-07"
 
 export const openAiNativeModels = {
+	"gpt-5-2025-08-07": {
+		maxTokens: 128000,
+		contextWindow: 400000,
+		supportsImages: true,
+		supportsPromptCache: true,
+		supportsReasoningEffort: true,
+		inputPrice: 1.25,
+		outputPrice: 10.0,
+		cacheReadsPrice: 0.13,
+		description: "GPT-5: The best model for coding and agentic tasks across domains",
+	},
+	"gpt-5-mini-2025-08-07": {
+		maxTokens: 128000,
+		contextWindow: 400000,
+		supportsImages: true,
+		supportsPromptCache: true,
+		supportsReasoningEffort: true,
+		inputPrice: 0.25,
+		outputPrice: 2.0,
+		cacheReadsPrice: 0.03,
+		description: "GPT-5 Mini: A faster, more cost-efficient version of GPT-5 for well-defined tasks",
+	},
+	"gpt-5-nano-2025-08-07": {
+		maxTokens: 128000,
+		contextWindow: 400000,
+		supportsImages: true,
+		supportsPromptCache: true,
+		supportsReasoningEffort: true,
+		inputPrice: 0.05,
+		outputPrice: 0.4,
+		cacheReadsPrice: 0.01,
+		description: "GPT-5 Nano: Fastest, most cost-efficient version of GPT-5",
+	},
 	"gpt-4.1": {
 		maxTokens: 32_768,
 		contextWindow: 1_047_576,
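The pricing fields above allow a quick cost sanity check. A minimal sketch, assuming the convention that inputPrice, outputPrice, and cacheReadsPrice are USD per million tokens; estimateCostUsd is an illustrative helper, not part of the PR:

```typescript
// Illustrative cost estimator over the pricing shape used by these entries.
// Assumption: all three prices are USD per 1,000,000 tokens.
interface Pricing {
	inputPrice: number
	outputPrice: number
	cacheReadsPrice: number
}

function estimateCostUsd(p: Pricing, inputTokens: number, cachedTokens: number, outputTokens: number): number {
	// Cached prompt tokens are billed at the cheaper cache-read rate.
	const uncached = inputTokens - cachedTokens
	return (uncached * p.inputPrice + cachedTokens * p.cacheReadsPrice + outputTokens * p.outputPrice) / 1_000_000
}

// Prices taken from the gpt-5-2025-08-07 entry above.
const gpt5: Pricing = { inputPrice: 1.25, outputPrice: 10.0, cacheReadsPrice: 0.13 }

// 100k prompt tokens (40k served from cache) plus 5k completion tokens:
console.log(estimateCostUsd(gpt5, 100_000, 40_000, 5_000).toFixed(4)) // 0.1302
```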

src/api/providers/__tests__/openai-native.spec.ts

Lines changed: 155 additions & 1 deletion
@@ -455,8 +455,162 @@ describe("OpenAiNativeHandler", () => {
 				openAiNativeApiKey: "test-api-key",
 			})
 			const modelInfo = handlerWithoutModel.getModel()
-			expect(modelInfo.id).toBe("gpt-4.1") // Default model
+			expect(modelInfo.id).toBe("gpt-5-2025-08-07") // Default model
 			expect(modelInfo.info).toBeDefined()
 		})
 	})
+
+	describe("GPT-5 models", () => {
+		it("should handle GPT-5 model with developer role", async () => {
+			handler = new OpenAiNativeHandler({
+				...mockOptions,
+				apiModelId: "gpt-5-2025-08-07",
+			})
+
+			const stream = handler.createMessage(systemPrompt, messages)
+			const chunks: any[] = []
+			for await (const chunk of stream) {
+				chunks.push(chunk)
+			}
+
+			// Verify developer role is used for GPT-5 with default parameters
+			expect(mockCreate).toHaveBeenCalledWith(
+				expect.objectContaining({
+					model: "gpt-5-2025-08-07",
+					messages: [{ role: "developer", content: expect.stringContaining(systemPrompt) }],
+					stream: true,
+					stream_options: { include_usage: true },
+					reasoning_effort: "minimal", // Default for GPT-5
+					verbosity: "medium", // Default verbosity
+				}),
+			)
+		})
+
+		it("should handle GPT-5-mini model", async () => {
+			handler = new OpenAiNativeHandler({
+				...mockOptions,
+				apiModelId: "gpt-5-mini-2025-08-07",
+			})
+
+			const stream = handler.createMessage(systemPrompt, messages)
+			const chunks: any[] = []
+			for await (const chunk of stream) {
+				chunks.push(chunk)
+			}
+
+			expect(mockCreate).toHaveBeenCalledWith(
+				expect.objectContaining({
+					model: "gpt-5-mini-2025-08-07",
+					messages: [{ role: "developer", content: expect.stringContaining(systemPrompt) }],
+					stream: true,
+					stream_options: { include_usage: true },
+					reasoning_effort: "minimal", // Default for GPT-5
+					verbosity: "medium", // Default verbosity
+				}),
+			)
+		})
+
+		it("should handle GPT-5-nano model", async () => {
+			handler = new OpenAiNativeHandler({
+				...mockOptions,
+				apiModelId: "gpt-5-nano-2025-08-07",
+			})
+
+			const stream = handler.createMessage(systemPrompt, messages)
+			const chunks: any[] = []
+			for await (const chunk of stream) {
+				chunks.push(chunk)
+			}
+
+			expect(mockCreate).toHaveBeenCalledWith(
+				expect.objectContaining({
+					model: "gpt-5-nano-2025-08-07",
+					messages: [{ role: "developer", content: expect.stringContaining(systemPrompt) }],
+					stream: true,
+					stream_options: { include_usage: true },
+					reasoning_effort: "minimal", // Default for GPT-5
+					verbosity: "medium", // Default verbosity
+				}),
+			)
+		})
+
+		it("should support verbosity control for GPT-5", async () => {
+			handler = new OpenAiNativeHandler({
+				...mockOptions,
+				apiModelId: "gpt-5-2025-08-07",
+				verbosity: "low", // Set verbosity through options
+			})
+
+			// Create a message to verify verbosity is passed
+			const stream = handler.createMessage(systemPrompt, messages)
+			const chunks: any[] = []
+			for await (const chunk of stream) {
+				chunks.push(chunk)
+			}
+
+			// Verify that verbosity is passed in the request
+			expect(mockCreate).toHaveBeenCalledWith(
+				expect.objectContaining({
+					model: "gpt-5-2025-08-07",
+					messages: expect.any(Array),
+					stream: true,
+					stream_options: { include_usage: true },
+					verbosity: "low",
+				}),
+			)
+		})
+
+		it("should support minimal reasoning effort for GPT-5", async () => {
+			handler = new OpenAiNativeHandler({
+				...mockOptions,
+				apiModelId: "gpt-5-2025-08-07",
+				reasoningEffort: "low",
+			})
+
+			const stream = handler.createMessage(systemPrompt, messages)
+			const chunks: any[] = []
+			for await (const chunk of stream) {
+				chunks.push(chunk)
+			}
+
+			// With low reasoning effort, the model should pass it through
+			expect(mockCreate).toHaveBeenCalledWith(
+				expect.objectContaining({
+					model: "gpt-5-2025-08-07",
+					messages: expect.any(Array),
+					stream: true,
+					stream_options: { include_usage: true },
+					reasoning_effort: "low",
+					verbosity: "medium", // Default verbosity
+				}),
+			)
+		})
+
+		it("should support both verbosity and reasoning effort together for GPT-5", async () => {
+			handler = new OpenAiNativeHandler({
+				...mockOptions,
+				apiModelId: "gpt-5-2025-08-07",
+				verbosity: "high", // Set verbosity through options
+				reasoningEffort: "low", // Set reasoning effort
+			})
+
+			const stream = handler.createMessage(systemPrompt, messages)
+			const chunks: any[] = []
+			for await (const chunk of stream) {
+				chunks.push(chunk)
+			}
+
+			// Verify both parameters are passed
+			expect(mockCreate).toHaveBeenCalledWith(
+				expect.objectContaining({
+					model: "gpt-5-2025-08-07",
+					messages: expect.any(Array),
+					stream: true,
+					stream_options: { include_usage: true },
+					reasoning_effort: "low",
+					verbosity: "high",
+				}),
+			)
+		})
+	})
 })
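The expectations in these tests pin down the GPT-5 request shape: a developer message in place of a system message, reasoning_effort defaulting to "minimal", and verbosity defaulting to "medium". A minimal sketch of a builder producing that shape (buildGpt5Params is hypothetical; the real logic lives in the handler in openai-native.ts):

```typescript
// Hypothetical builder mirroring the request shape asserted in the tests.
type ReasoningEffort = "minimal" | "low" | "medium" | "high"
type Verbosity = "low" | "medium" | "high"

interface Gpt5Options {
	reasoningEffort?: ReasoningEffort
	verbosity?: Verbosity
}

function buildGpt5Params(model: string, systemPrompt: string, opts: Gpt5Options = {}) {
	return {
		model,
		// GPT-5 uses the "developer" role where older models used "system".
		messages: [{ role: "developer" as const, content: systemPrompt }],
		stream: true,
		stream_options: { include_usage: true },
		// Defaults matching the test expectations above.
		reasoning_effort: opts.reasoningEffort ?? "minimal",
		verbosity: opts.verbosity ?? "medium",
	}
}

const params = buildGpt5Params("gpt-5-2025-08-07", "You are a helpful assistant.")
console.log(params.messages[0].role, params.reasoning_effort, params.verbosity) // developer minimal medium
```

Explicit options override the defaults, which is exactly what the verbosity and reasoning-effort tests exercise.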
