14 commits
c55a71c
feat(models): add per-model timeout disable to avoid global override …
hannesrudolph Oct 10, 2025
dcc8791
feat(openai-models): add gpt-5-pro-2025-10-06 with timeout disabled a…
hannesrudolph Oct 10, 2025
373fbc9
Revert "feat(models): add per-model timeout disable to avoid global o…
hannesrudolph Oct 10, 2025
957b8d9
revert: per-model disableTimeout implementation; remove flag from gpt…
hannesrudolph Oct 10, 2025
6bb0bc0
feat(openai-native): background mode + auto-resume and poll fallback
hannesrudolph Oct 12, 2025
41dadd5
chore: remove TEMP_OPENAI_BACKGROUND_TASK_DOCS.DM and ignore temp docs
hannesrudolph Oct 12, 2025
9c2a830
feat(openai-models): update maxTokens for gpt-5-pro-2025-10-06 from 2…
hannesrudolph Oct 16, 2025
d93aeef
feat(chat): enhance background status handling and UI updates for ter…
hannesrudolph Oct 16, 2025
4d40225
fix: Address PR review feedback - fix stale resume IDs, update model …
hannesrudolph Oct 16, 2025
ac17911
fix(webview): define chevron icon via codicon and add missing isExpan…
hannesrudolph Oct 24, 2025
760a233
fix(openai): update reasoning effort default to high and improve mode…
hannesrudolph Oct 24, 2025
85ddaeb
webview-ui: use standard API Request icons for background mode; keep …
hannesrudolph Oct 24, 2025
f8be63e
fix(openai-native): add logging for background resume and polling; cl…
hannesrudolph Oct 24, 2025
3a0add7
fix(types/openai): correct GPT-5 Pro description typos/grammar; perf(…
hannesrudolph Oct 25, 2025
3 changes: 3 additions & 0 deletions packages/types/src/model.ts
@@ -63,6 +63,9 @@ export const modelInfoSchema = z.object({
	supportsReasoningBudget: z.boolean().optional(),
	// Capability flag to indicate whether the model supports temperature parameter
	supportsTemperature: z.boolean().optional(),
	// When true, this model must be invoked using Responses background mode.
	// Providers should auto-enable background:true, stream:true, and store:true.
	backgroundMode: z.boolean().optional(),
	requiredReasoningBudget: z.boolean().optional(),
	supportsReasoningEffort: z.boolean().optional(),
	supportedParameters: z.array(modelParametersSchema).optional(),
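The schema comment above says providers should auto-enable `background:true`, `stream:true`, and `store:true` when `backgroundMode` is set. A minimal sketch of that rule, assuming simplified stand-in types — `ResponsesRequest` and `buildRequest` are illustrative names, not the actual provider implementation:

```typescript
// Hypothetical sketch: honoring `backgroundMode` when building a Responses
// API request. Types are simplified stand-ins for illustration only.
interface ModelInfoSketch {
	backgroundMode?: boolean
}

interface ResponsesRequest {
	model: string
	background?: boolean
	stream?: boolean
	store?: boolean
}

function buildRequest(modelId: string, info: ModelInfoSketch): ResponsesRequest {
	const req: ResponsesRequest = { model: modelId }
	if (info.backgroundMode) {
		// Per the schema comment: background mode implies streaming and storage.
		req.background = true
		req.stream = true
		req.store = true
	}
	return req
}
```

Keeping the flag on `ModelInfo` (rather than per-request) means any provider that consumes the model metadata applies the same invariant.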
9 changes: 9 additions & 0 deletions packages/types/src/provider-settings.ts
@@ -297,6 +297,15 @@ const openAiNativeSchema = apiModelIdProviderModelSchema.extend({
	// OpenAI Responses API service tier for openai-native provider only.
	// UI should only expose this when the selected model supports flex/priority.
	openAiNativeServiceTier: serviceTierSchema.optional(),
	// Enable OpenAI Responses background mode when using Responses API.
	// Opt-in; defaults to false when omitted.
	openAiNativeBackgroundMode: z.boolean().optional(),
	// Background auto-resume/poll settings (no UI; plumbed via options)
	openAiNativeBackgroundAutoResume: z.boolean().optional(),
	openAiNativeBackgroundResumeMaxRetries: z.number().int().min(0).optional(),
	openAiNativeBackgroundResumeBaseDelayMs: z.number().int().min(0).optional(),
	openAiNativeBackgroundPollIntervalMs: z.number().int().min(0).optional(),
	openAiNativeBackgroundPollMaxMinutes: z.number().int().min(1).optional(),
})

const mistralSchema = apiModelIdProviderModelSchema.extend({
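The retry settings added above suggest an exponential backoff schedule for resume attempts. A minimal sketch of how they could translate into per-attempt delays — the defaults and the `resumeDelays` helper are assumptions for illustration, not the actual provider behavior:

```typescript
// Hypothetical sketch: deriving a backoff schedule from the resume settings.
// Only the setting keys match the schema; defaults are assumed, not sourced.
interface BackgroundResumeSettings {
	openAiNativeBackgroundResumeMaxRetries?: number
	openAiNativeBackgroundResumeBaseDelayMs?: number
}

// Delay (ms) before each resume attempt: base, 2x base, 4x base, ...
function resumeDelays(settings: BackgroundResumeSettings): number[] {
	const retries = settings.openAiNativeBackgroundResumeMaxRetries ?? 3 // assumed default
	const base = settings.openAiNativeBackgroundResumeBaseDelayMs ?? 1000 // assumed default
	return Array.from({ length: retries }, (_, attempt) => base * 2 ** attempt)
}
```

Since the schema marks these as "no UI; plumbed via options", validating them with `.int().min(0)` at the schema boundary lets the runtime code trust the values without re-checking.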
15 changes: 15 additions & 0 deletions packages/types/src/providers/openai.ts
@@ -37,6 +37,21 @@ export const openAiNativeModels = {
			{ name: "priority", contextWindow: 400000, inputPrice: 2.5, outputPrice: 20.0, cacheReadsPrice: 0.25 },
		],
	},
	"gpt-5-pro-2025-10-06": {
		maxTokens: 128000,
		contextWindow: 400000,
		supportsImages: true,
		supportsPromptCache: false,
		supportsReasoningEffort: false, // Set to false to prevent the UI from displaying the reasoning effort selector

Review comment: Reasoning effort config is contradictory here: supportsReasoningEffort is false but reasoningEffort is set to "high". With this combination the UI hides the selector while the backend still injects a reasoning parameter (see src/shared/api.ts and src/api/providers/openai-native.ts). This can be confusing for users and maintainers. Consider either enabling supportsReasoningEffort to reflect configurability or removing the default reasoningEffort and documenting that the model runs with provider defaults.

		reasoningEffort: "high", // Pro model uses high reasoning effort by default and must be specified
		inputPrice: 15.0,
		outputPrice: 120.0,
		description:
			"GPT-5 Pro: A slow, reasoning-focused model for complex problems. Uses background mode with resilient streaming — requests may take some time and will automatically reconnect if they time out.",

Review comment (style nit): The description uses an em dash (—). Project text typically avoids em dashes for consistency with UI strings. Consider replacing with a spaced hyphen form to match style elsewhere, e.g.:

"GPT-5 Pro: A slow, reasoning-focused model for complex problems. Uses background mode with resilient streaming - requests may take some time and will automatically reconnect if they time out."

		supportsVerbosity: true,
		supportsTemperature: false,
		backgroundMode: true,
	},
	"gpt-5-mini-2025-08-07": {
		maxTokens: 128000,
		contextWindow: 400000,
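The first review comment describes a split: the UI gates its selector on `supportsReasoningEffort`, while the request builder reads `reasoningEffort` independently, so a hidden selector does not imply no effort parameter is sent. A minimal sketch of that interaction, under the assumption of illustrative helper names (`showEffortSelector`, `effortSentToApi` are not the actual Roo Code functions):

```typescript
// Hypothetical sketch of the UI/backend split the review comment describes.
// Helper names are illustrative; only the two model fields come from the diff.
interface ReasoningModelInfo {
	supportsReasoningEffort?: boolean
	reasoningEffort?: "low" | "medium" | "high"
}

// The UI only shows the selector when the model declares configurability.
function showEffortSelector(info: ReasoningModelInfo): boolean {
	return info.supportsReasoningEffort === true
}

// The request builder injects the model default even when the selector is hidden.
function effortSentToApi(info: ReasoningModelInfo): string | undefined {
	return info.reasoningEffort
}
```

For gpt-5-pro-2025-10-06 (`supportsReasoningEffort: false`, `reasoningEffort: "high"`), the selector stays hidden while "high" is still sent — exactly the combination the review flags as confusing.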
32 changes: 32 additions & 0 deletions src/api/providers/__tests__/openai-native-usage.spec.ts
@@ -344,6 +344,38 @@ describe("OpenAiNativeHandler - normalizeUsage", () => {
})
})

	it("should produce identical usage chunk when background mode is enabled", () => {
		const usage = {
			input_tokens: 120,
			output_tokens: 60,
			cache_creation_input_tokens: 10,
			cache_read_input_tokens: 30,
		}

		const baselineHandler = new OpenAiNativeHandler({
			openAiNativeApiKey: "test-key",
			apiModelId: "gpt-5-pro-2025-10-06",
		})
		const backgroundHandler = new OpenAiNativeHandler({
			openAiNativeApiKey: "test-key",
			apiModelId: "gpt-5-pro-2025-10-06",
			openAiNativeBackgroundMode: true,
		})

		const baselineUsage = (baselineHandler as any).normalizeUsage(usage, baselineHandler.getModel())
		const backgroundUsage = (backgroundHandler as any).normalizeUsage(usage, backgroundHandler.getModel())

		expect(baselineUsage).toMatchObject({
			type: "usage",
			inputTokens: 120,
			outputTokens: 60,
			cacheWriteTokens: 10,
			cacheReadTokens: 30,
			totalCost: expect.any(Number),
		})
		expect(backgroundUsage).toEqual(baselineUsage)
	})

	describe("cost calculation", () => {
		it("should pass total input tokens to calculateApiCostOpenAI", () => {
			const usage = {