Merged (17 commits)
.changeset/gpt5-support.md (11 additions, 0 deletions)
@@ -0,0 +1,11 @@
---
"@roo-code/types": minor
"roo-cline": minor
---

Add GPT-5 model support

- Added GPT-5 models (gpt-5-2025-08-07, gpt-5-mini-2025-08-07, gpt-5-nano-2025-08-07) to OpenAI Native provider
- Added nectarine-alpha-new-reasoning-effort-2025-07-25 experimental model
- Set gpt-5-2025-08-07 as the new default OpenAI Native model
- Implemented GPT-5 specific handling with streaming and reasoning effort support
Comment (Contributor Author): The changeset mentions 'reasoning effort support' but I don't see the flag set for GPT-5 models like it is for o3/o4 models. Should we either add the flag or remove this from the description?
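If the flag is meant to be set, the model entry could carry it the way the o3/o4 entries reportedly do. A hypothetical sketch; the field name `supportsReasoningEffort` is assumed from the o3/o4 pattern mentioned in the comment and does not appear anywhere in this diff:

```typescript
// Hypothetical sketch: a GPT-5 entry carrying a reasoning-effort flag.
// `supportsReasoningEffort` is an assumed field name, not taken from this PR.
const gpt5WithReasoningEffort = {
	maxTokens: 128000,
	contextWindow: 256000,
	supportsImages: true,
	supportsPromptCache: true,
	supportsReasoningEffort: true, // assumed flag name
	inputPrice: 1.25,
	outputPrice: 10.0,
	cacheReadsPrice: 0.125,
}
```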

packages/types/src/providers/openai.ts (37 additions, 1 deletion)
@@ -3,9 +3,45 @@ import type { ModelInfo } from "../model.js"
// https://openai.com/api/pricing/
export type OpenAiNativeModelId = keyof typeof openAiNativeModels

-export const openAiNativeDefaultModelId: OpenAiNativeModelId = "gpt-4.1"
+export const openAiNativeDefaultModelId: OpenAiNativeModelId = "gpt-5-2025-08-07"

export const openAiNativeModels = {
"gpt-5-2025-08-07": {
maxTokens: 128000,
contextWindow: 256000,
supportsImages: true,
supportsPromptCache: true,
inputPrice: 1.25,
outputPrice: 10.0,
cacheReadsPrice: 0.125,
},
"gpt-5-mini-2025-08-07": {
maxTokens: 128000,
contextWindow: 256000,
supportsImages: true,
supportsPromptCache: true,
inputPrice: 0.25,
outputPrice: 2.0,
cacheReadsPrice: 0.025,
},
"gpt-5-nano-2025-08-07": {
maxTokens: 128000,
contextWindow: 256000,
supportsImages: true,
supportsPromptCache: true,
inputPrice: 0.05,
outputPrice: 0.4,
cacheReadsPrice: 0.005,
},
"nectarine-alpha-new-reasoning-effort-2025-07-25": {
maxTokens: 128000,
contextWindow: 256000,
supportsImages: true,
supportsPromptCache: true,
inputPrice: 0,
outputPrice: 0,
cacheReadsPrice: 0,
},
"gpt-4.1": {
maxTokens: 32_768,
contextWindow: 1_047_576,
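As a quick sanity check on the pricing fields above (assuming, as on OpenAI's pricing page, that they are USD per million tokens and that `cacheReadsPrice` applies to the cached portion of the input), a hypothetical cost helper:

```typescript
// Hypothetical helper, not part of this PR: estimate request cost in USD
// from the pricing fields above, assumed to be USD per 1M tokens.
interface ModelPricing {
	inputPrice: number
	outputPrice: number
	cacheReadsPrice: number
}

function estimateCostUsd(
	pricing: ModelPricing,
	inputTokens: number,
	outputTokens: number,
	cachedInputTokens = 0,
): number {
	// Cached input tokens are billed at the (cheaper) cache-read rate.
	const freshInput = inputTokens - cachedInputTokens
	return (
		(freshInput * pricing.inputPrice +
			cachedInputTokens * pricing.cacheReadsPrice +
			outputTokens * pricing.outputPrice) /
		1_000_000
	)
}

// gpt-5-2025-08-07 pricing from the table above.
const gpt5Pricing = { inputPrice: 1.25, outputPrice: 10.0, cacheReadsPrice: 0.125 }
console.log(estimateCostUsd(gpt5Pricing, 100_000, 10_000)) // 0.225
```

With the numbers above, a fully cache-hit prompt costs a tenth of a fresh one on input, which is why `cacheReadsPrice` is tracked separately.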
src/api/providers/openai-native.ts (22 additions, 0 deletions)
@@ -53,6 +53,8 @@ export class OpenAiNativeHandler extends BaseProvider implements SingleCompletio
yield* this.handleReasonerMessage(model, id, systemPrompt, messages)
} else if (model.id.startsWith("o1")) {
yield* this.handleO1FamilyMessage(model, systemPrompt, messages)
} else if (this.isGPT5Model(model.id)) {
yield* this.handleGPT5Message(model, systemPrompt, messages)
} else {
yield* this.handleDefaultModelMessage(model, systemPrompt, messages)
}
@@ -123,6 +125,26 @@ export class OpenAiNativeHandler extends BaseProvider implements SingleCompletio
yield* this.handleStreamResponse(stream, model)
}

private async *handleGPT5Message(
model: OpenAiNativeModel,
systemPrompt: string,
messages: Anthropic.Messages.MessageParam[],
): ApiStream {
const stream = await this.client.chat.completions.create({
model: model.id,
temperature: 1,
Comment (Contributor): In handleGPT5Message, temperature is hardcoded to 1. If intended, add a comment; otherwise consider using a configurable temperature.

Suggested change:
-	temperature: 1,
+	temperature: 1, // Intentionally hardcoded for GPT-5 models

Comment (Collaborator): @roomote-agent this PR is merged, but can you make a new PR against main that hardcodes the temperature to 1 for gpt 5 models?

Comment (Contributor Author): The temperature is hardcoded to 1 here, but other models use the configured temperature. Is this intentional? GPT-5 might benefit from respecting user temperature settings.
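If the intent were to respect user settings, the fallback could look like the minimal sketch below. This is hypothetical: the option name `modelTemperature` and the options shape are assumed, not taken from this PR.

```typescript
// Hypothetical sketch, not what the PR does: prefer a user-configured
// temperature and fall back to 1 for GPT-5 models.
// `modelTemperature` is an assumed option name.
function resolveGpt5Temperature(options: { modelTemperature?: number }): number {
	// `??` (not `||`) so an explicit 0 is respected.
	return options.modelTemperature ?? 1
}
```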

messages: [{ role: "developer", content: systemPrompt }, ...convertToOpenAiMessages(messages)],
stream: true,
stream_options: { include_usage: true },
})

yield* this.handleStreamResponse(stream, model)
}

private isGPT5Model(modelId: string): boolean {
return modelId.includes("gpt-5") || modelId.includes("gpt5") || modelId.includes("nectarine")
}

Comment (Contributor Author): This detection logic is quite broad - any model containing 'nectarine' would be treated as GPT-5. Could we be more explicit?
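One way to tighten this, sketched against the exact model IDs added in this PR (an allowlist rather than substring matching; hypothetical, not what the PR implements):

```typescript
// Hypothetical stricter check: match the exact model IDs added in this PR
// instead of substrings, so an unrelated future ID containing "nectarine"
// is not silently routed through the GPT-5 path.
const GPT5_MODEL_IDS = new Set([
	"gpt-5-2025-08-07",
	"gpt-5-mini-2025-08-07",
	"gpt-5-nano-2025-08-07",
	"nectarine-alpha-new-reasoning-effort-2025-07-25",
])

function isGpt5ModelStrict(modelId: string): boolean {
	return GPT5_MODEL_IDS.has(modelId)
}
```

The trade-off is that every new GPT-5 snapshot must be added to the set, whereas substring matching picks up new dated snapshots automatically.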

private async *handleStreamResponse(
stream: AsyncIterable<OpenAI.Chat.Completions.ChatCompletionChunk>,
model: OpenAiNativeModel,