feat: add GPT-5 model support #6819
New changeset file:

```md
---
"@roo-code/types": minor
"roo-cline": minor
---

Add GPT-5 model support

- Added GPT-5 models (gpt-5-2025-08-07, gpt-5-mini-2025-08-07, gpt-5-nano-2025-08-07) to OpenAI Native provider
- Added nectarine-alpha-new-reasoning-effort-2025-07-25 experimental model
- Set gpt-5-2025-08-07 as the new default OpenAI Native model
- Implemented GPT-5 specific handling with streaming and reasoning effort support
```
```diff
@@ -53,6 +53,8 @@ export class OpenAiNativeHandler extends BaseProvider implements SingleCompletio
 			yield* this.handleReasonerMessage(model, id, systemPrompt, messages)
 		} else if (model.id.startsWith("o1")) {
 			yield* this.handleO1FamilyMessage(model, systemPrompt, messages)
+		} else if (this.isGPT5Model(model.id)) {
+			yield* this.handleGPT5Message(model, systemPrompt, messages)
 		} else {
 			yield* this.handleDefaultModelMessage(model, systemPrompt, messages)
 		}
@@ -123,6 +125,26 @@ export class OpenAiNativeHandler extends BaseProvider implements SingleCompletio
 		yield* this.handleStreamResponse(stream, model)
 	}

+	private async *handleGPT5Message(
+		model: OpenAiNativeModel,
+		systemPrompt: string,
+		messages: Anthropic.Messages.MessageParam[],
+	): ApiStream {
+		const stream = await this.client.chat.completions.create({
+			model: model.id,
+			temperature: 1,
```
Suggested change:

```diff
-			temperature: 1,
+			temperature: 1, // Intentionally hardcoded for GPT-5 models
```
@roomote-agent this PR is merged, but can you make a new PR against main that hardcodes the temperature to 1 for gpt 5 models?
The temperature is hardcoded to 1 here, while other models use the user's configured temperature. Is this intentional? GPT-5 might benefit from respecting user temperature settings.
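A minimal sketch of what respecting the user setting could look like, assuming a nullable option value; the helper name and option are hypothetical, not the PR's actual code:

```typescript
// Hypothetical helper: prefer the user's configured temperature when set,
// otherwise fall back to the GPT-5 default of 1.
const GPT5_DEFAULT_TEMPERATURE = 1

function resolveTemperature(userTemperature: number | undefined): number {
	// `??` (not `||`) so an explicit user value of 0 is still honored.
	return userTemperature ?? GPT5_DEFAULT_TEMPERATURE
}
```

Using `??` rather than `||` matters here, since a temperature of 0 is a legitimate user choice.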
This detection logic is quite broad - any model containing 'nectarine' would be treated as GPT-5. Could we be more explicit?
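One way to tighten the check, sketched as an explicit allowlist of the model IDs named in the changeset; this is a hypothetical replacement for the PR's substring-based `isGPT5Model`, not its actual implementation:

```typescript
// Hypothetical stricter detection: match exact model IDs instead of any
// string containing "nectarine" or a loose prefix.
const GPT5_MODEL_IDS = new Set([
	"gpt-5-2025-08-07",
	"gpt-5-mini-2025-08-07",
	"gpt-5-nano-2025-08-07",
	"nectarine-alpha-new-reasoning-effort-2025-07-25",
])

function isGPT5Model(modelId: string): boolean {
	return GPT5_MODEL_IDS.has(modelId)
}
```

The trade-off is that new GPT-5 snapshots would need to be added to the list explicitly, whereas the substring check picks them up for free.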
The changeset mentions 'reasoning effort support' but I don't see the flag set for GPT-5 models like it is for o3/o4 models. Should we either add the flag or remove this from the description?
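If the flag were added, a hedged sketch of threading a reasoning-effort setting into the request body could look like the following; the builder function is hypothetical and only illustrates conditionally including the parameter:

```typescript
// Hypothetical request-body builder: include a reasoning-effort field only
// when the caller supplies one, alongside the hardcoded GPT-5 temperature.
type ReasoningEffort = "low" | "medium" | "high"

function buildGpt5RequestBody(
	modelId: string,
	reasoningEffort?: ReasoningEffort,
): Record<string, unknown> {
	const body: Record<string, unknown> = {
		model: modelId,
		temperature: 1, // hardcoded for GPT-5, per the review discussion above
	}
	if (reasoningEffort) {
		body.reasoning_effort = reasoningEffort
	}
	return body
}
```

Omitting the field entirely when unset (rather than sending `undefined`) keeps the serialized request clean.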