feat: add support for OpenAI gpt-5-chat-latest model #7058
Conversation
- Added gpt-5-chat-latest model configuration to openAiNativeModels
- Updated OpenAiNativeHandler to recognize gpt-5-chat-latest as a Responses API model
- Added comprehensive tests for the new model
- Model is optimized for conversational AI and non-reasoning tasks

Fixes #7057
src/api/providers/openai-native.ts (outdated)

@@ -1139,7 +1139,8 @@ export class OpenAiNativeHandler extends BaseProvider implements SingleCompletio
 	private isResponsesApiModel(modelId: string): boolean {
 		// Both GPT-5 and Codex Mini use the v1/responses endpoint
-		return modelId.startsWith("gpt-5") || modelId === "codex-mini-latest"
+		// gpt-5-chat-latest also uses the Responses API
It can also use the completions API
Reviewing my own code is like debugging in production - technically possible but morally questionable.
src/api/providers/openai-native.ts (outdated)

@@ -1139,7 +1139,8 @@ export class OpenAiNativeHandler extends BaseProvider implements SingleCompletio
 	private isResponsesApiModel(modelId: string): boolean {
 		// Both GPT-5 and Codex Mini use the v1/responses endpoint
-		return modelId.startsWith("gpt-5") || modelId === "codex-mini-latest"
+		// gpt-5-chat-latest also uses the Responses API
+		return modelId.startsWith("gpt-5") || modelId === "codex-mini-latest" || modelId === "gpt-5-chat-latest"
Is this intentional? The condition modelId === "gpt-5-chat-latest" will never be evaluated because modelId.startsWith("gpt-5") already returns true for "gpt-5-chat-latest". The third condition is redundant and can be removed:

Suggested change:
-	return modelId.startsWith("gpt-5") || modelId === "codex-mini-latest" || modelId === "gpt-5-chat-latest"
+	return modelId.startsWith("gpt-5") || modelId === "codex-mini-latest"
			expect(requestBody.previous_response_id).toBe("previous-response-456")
		})
	})
})
Could we add error handling tests similar to other GPT-5 models? For example:
- API error responses (400, 401, 429, etc.)
- Network failures
- Invalid response formats
This would ensure the gpt-5-chat-latest model handles errors consistently with other models.
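As a rough illustration (not part of the PR), such cases could look something like the sketch below. It assumes the same setup as the surrounding spec: a handler configured for "gpt-5-chat-latest" and a mockFetch stub for global fetch; the asserted behaviors are generic and the error payloads are made up for the example.

```typescript
// Sketch only – `handler` and `mockFetch` are assumed from the surrounding test setup.
// Exact error messages are illustrative, not taken from the provider implementation.
describe("Error Handling", () => {
	it("propagates HTTP errors (e.g. 429) from the Responses API", async () => {
		mockFetch.mockResolvedValueOnce({
			ok: false,
			status: 429,
			text: async () => JSON.stringify({ error: { message: "Rate limit exceeded" } }),
		})

		const stream = handler.createMessage("system prompt", [{ role: "user", content: "Hello" }])
		await expect(
			(async () => {
				for await (const _chunk of stream) {
					// drain the stream so the error surfaces
				}
			})(),
		).rejects.toThrow()
	})

	it("propagates network failures", async () => {
		mockFetch.mockRejectedValueOnce(new Error("network error"))

		const stream = handler.createMessage("system prompt", [{ role: "user", content: "Hello" }])
		await expect(
			(async () => {
				for await (const _chunk of stream) {
					// drain
				}
			})(),
		).rejects.toThrow()
	})
})
```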
			const requestBody = JSON.parse(mockFetch.mock.calls[0][1].body)
			expect(requestBody.previous_response_id).toBe("previous-response-456")
		})
	})
Should we add a test to verify that completePrompt() throws an error for gpt-5-chat-latest? Since this model uses the Responses API, it doesn't support non-streaming completion. Adding this test would prevent regressions:

Suggested change:
-})
+	describe("Unsupported Operations", () => {
+		it("should throw error for completePrompt since gpt-5-chat-latest uses Responses API", async () => {
+			await expect(handler.completePrompt("Test prompt")).rejects.toThrow(
+				"completePrompt is not supported for gpt-5-chat-latest. Use createMessage (Responses API) instead."
+			)
+		})
+	})
+})
src/api/providers/__tests__/openai-native-gpt5-chat.spec.ts (new file)

@@ -0,0 +1,147 @@
import { describe, it, expect, vi, beforeEach } from "vitest"
Consider moving these tests into the existing "GPT-5 models" describe block in openai-native.spec.ts for better organization. Having all GPT-5 model tests in one place would make it easier to maintain and ensure consistency across similar models.
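For illustration only, the merged layout could look roughly like this; the block names are assumptions based on the comment, not the actual contents of openai-native.spec.ts:

```typescript
// Hypothetical structure – shows the intent of the suggestion, not real test bodies.
describe("GPT-5 models", () => {
	// ...existing gpt-5 / gpt-5-mini tests...

	describe("gpt-5-chat-latest", () => {
		it("streams responses via the Responses API", async () => {
			// tests moved here from openai-native-gpt5-chat.spec.ts
		})
	})
})
```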
- Remove redundant gpt-5-chat-latest check in isResponsesApiModel since startsWith('gpt-5') already covers it
- Remove unnecessary dedicated test file for gpt-5-chat-latest
LGTM
* main: (70 commits)
  fix: use native Ollama API instead of OpenAI compatibility layer (RooCodeInc#7137)
  feat: add support for OpenAI gpt-5-chat-latest model (RooCodeInc#7058)
  Make enhance with task history default to true (RooCodeInc#7140)
  Bump cloud version to 0.16.0 (RooCodeInc#7135)
  Release: v1.51.0 (RooCodeInc#7130)
  Add an API for resuming tasks by ID (RooCodeInc#7122)
  Add support for task page event population (RooCodeInc#7117)
  fix: add type check before calling .match() on diffItem.content (RooCodeInc#6905) (RooCodeInc#6906)
  Fix: Enable save button for provider dropdown and checkbox changes (RooCodeInc#7113)
  fix: Use cline.cwd as primary source for workspace path in codebaseSearchTool (RooCodeInc#6902)
  Hotfix multiple folder workspace checkpoint (RooCodeInc#6903)
  fix: prevent XML entity decoding in diff tools (RooCodeInc#7107) (RooCodeInc#7108)
  Refactor task execution system: improve call stack management (RooCodeInc#7035)
  Changeset version bump (RooCodeInc#7104)
  feat(web): fill missing SEO-related values (RooCodeInc#7096)
  Update contributors list (RooCodeInc#6883)
  Release v3.25.15 (RooCodeInc#7103)
  fix: add /evals page to sitemap generation (RooCodeInc#7102)
  feat: implement sitemap generation in TypeScript and remove XML file (RooCodeInc#6206)
  fix: reset condensing state when switching tasks (RooCodeInc#6922)
  ...
This PR adds support for the OpenAI gpt-5-chat-latest model for non-reasoning tasks.

Changes
- Added the gpt-5-chat-latest model configuration to openAiNativeModels in packages/types/src/providers/openai.ts
- Updated OpenAiNativeHandler to recognize gpt-5-chat-latest as a Responses API model
- Added tests in src/api/providers/__tests__/openai-native-gpt5-chat.spec.ts

Details
The gpt-5-chat-latest model is optimized for conversational AI and non-reasoning tasks. It:
- Does not expose reasoning effort (supportsReasoningEffort: false)
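As a rough orientation, the new openAiNativeModels entry could look something like the sketch below. The capability flags mirror those listed for this model elsewhere in this PR (images, prompt caching, verbosity, no reasoning); the numeric limits and prices are placeholders, not the values shipped in the actual diff.

```typescript
// Illustrative sketch only – not the actual configuration from openai.ts.
// Capability flags follow the PR summary; numbers are placeholders.
const gpt5ChatLatest = {
	maxTokens: 16_384, // placeholder
	contextWindow: 128_000, // placeholder
	supportsImages: true,
	supportsPromptCache: true,
	supportsReasoningEffort: false, // chat-optimized, non-reasoning model
	supportsVerbosity: true,
	inputPrice: 0, // placeholder – see the PR diff for real pricing
	outputPrice: 0, // placeholder
}
```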
Testing
Fixes #7057
Important
Add support for the gpt-5-chat-latest model in the OpenAI provider, optimized for conversational AI and non-reasoning tasks, with tests and handler updates.
- Add gpt-5-chat-latest to openAiNativeModels in openai.ts with specific features: no reasoning support, supports images, prompt caching, verbosity, and specific pricing.
- Update OpenAiNativeHandler to recognize gpt-5-chat-latest as a Responses API model.
- Add tests for gpt-5-chat-latest in openai-native-gpt5-chat.spec.ts.