Fix GPT-5 Responses API issues with condensing and image support #7067
Conversation
…port

- Unified all OpenAI models to use the /v1/responses endpoint
- Fixed image handling using the structured format (`input_text`/`input_image`)
- Added a `supportsTemperature` capability to ModelInfo
- Configured temperature support for each model (disabled for GPT-5, o1/o3/o4, codex-mini)
- Removed all model-specific handlers and GPT-5 references
- Updated package.json to use OpenAI SDK v5.12.2

Fixes #7012 - OpenAI image attachments not working
- Fix manual condensing bug by setting the `skipPrevResponseIdOnce` flag in Task.ts
- Revert to the string-based format for full conversations after condensing
- Add proper image support with the structured format when using `previous_response_id`
- Update the test suite to match the new implementation where all models use the Responses API
- Handle 400 errors gracefully when `previous_response_id` is not found

Fixes issues introduced in PR #6864
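The one-shot flag this commit describes can be sketched roughly as follows. This is illustrative, not the PR's actual code: only `skipPrevResponseIdOnce` and `previous_response_id` are names from the PR; the class and method names are assumptions.

```typescript
// Shape of a Responses API request body (simplified for the sketch).
interface RequestBody {
  model: string;
  input: string;
  previous_response_id?: string;
}

// Hypothetical holder for the per-task conversation state.
class ConversationState {
  lastResponseId?: string;
  skipPrevResponseIdOnce = false;

  condense(): void {
    // After manual condensing, the stored response id no longer matches
    // the (now condensed) conversation, so skip it exactly once.
    this.skipPrevResponseIdOnce = true;
  }

  buildRequest(input: string): RequestBody {
    const body: RequestBody = { model: "gpt-5", input };
    if (this.lastResponseId && !this.skipPrevResponseIdOnce) {
      body.previous_response_id = this.lastResponseId;
    }
    // One-shot: the flag only suppresses the id for the next request,
    // which carries the full conversation context instead.
    this.skipPrevResponseIdOnce = false;
    return body;
  }
}
```

The key design point is that the flag self-clears: the first request after condensing sends the full context, and subsequent requests resume using `previous_response_id` as usual.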
Thank you for your contribution! I've reviewed the changes and the implementation correctly addresses both the manual condensing bug and image support issues.
✅ What Works Well
- The fix for manual condensing using the `skipPrevResponseIdOnce` flag is well implemented
- Image support with proper content types (`input_text`, `output_text`, `input_image`) is correctly handled
- Automatic retry mechanism when `previous_response_id` is not found
- Comprehensive test coverage for the new structured format
- Clear separation of concerns between SDK and SSE fallback paths
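For context, the structured content types mentioned above follow this general shape. This is a minimal sketch with assumed type definitions and a hypothetical `toInputItem` helper, not the SDK's actual types or the PR's code:

```typescript
// Structured content parts for the Responses API: user input uses
// input_text / input_image, assistant output uses output_text.
type ContentPart =
  | { type: "input_text"; text: string }
  | { type: "output_text"; text: string }
  | { type: "input_image"; image_url: string };

interface InputItem {
  role: "user" | "assistant";
  content: ContentPart[];
}

// Map one conversation turn to a structured input item.
function toInputItem(
  role: "user" | "assistant",
  text: string,
  imageDataUrl?: string,
): InputItem {
  const content: ContentPart[] =
    role === "assistant"
      ? [{ type: "output_text", text }]
      : [{ type: "input_text", text }];
  if (role === "user" && imageDataUrl) {
    content.push({ type: "input_image", image_url: imageDataUrl });
  }
  return { role, content };
}
```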
💭 Suggestions for Improvement
- Console.log statements: There are many console.log statements throughout openai-native.ts (lines 92, 129, 153, 163, 172-173, 175, 281, 621, 913-914, 1040, etc.). Consider using a debug flag or removing them before merging to production.
- Method naming clarity: The methods `formatStructuredInput` and `prepareStructuredInput` have very similar names, which could be confusing. Consider renaming for better clarity.
- API documentation: The new `store` parameter in ApiHandlerCreateMessageMetadata would benefit from more detailed documentation about its purpose (storing responses for 30 days) and default behavior.
- Type safety: The GPT-5 metadata persistence uses `any` type casting extensively in Task.ts. Consider defining proper TypeScript types for the metadata structure.
- Magic strings: The string "gpt-5" is used in multiple places to check model type. Consider extracting this to a constant or utility function for better maintainability.
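The magic-string extraction suggested here might look like the following. `GPT5_MODEL_PREFIX` is the constant name a later commit in this PR reports introducing; the `isGpt5Model` helper is our illustration, not necessarily the PR's actual utility:

```typescript
// Single source of truth for the GPT-5 model-id prefix, instead of
// scattering the literal "gpt-5" across model-type checks.
const GPT5_MODEL_PREFIX = "gpt-5";

// Hypothetical helper: centralizes the prefix check.
function isGpt5Model(modelId: string): boolean {
  return modelId.startsWith(GPT5_MODEL_PREFIX);
}
```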
Overall, this is a solid fix that addresses the critical issues. The suggestions above are mostly for code quality and maintainability improvements. Great work on fixing these GPT-5 API issues! 🎉
- Remove console.log statements from openai-native.ts
- Remove console.log statements from Task.ts related to GPT-5 response IDs
- Remove console.log statement for manual condense operation
- Clean up debugging artifacts from PR #7067
- Remove console.log statements from Task.ts (use comments instead)
- Fix method naming confusion in openai-native.ts:
  - Renamed formatStructuredInput to formatFullConversation (clearer purpose)
  - Renamed prepareStructuredInput to prepareResponsesApiInput (clearer purpose)
- Add proper TypeScript types for GPT-5 metadata (new types.ts file)
- Fix error message inconsistency (GPT-5 -> Responses API)
- Extract magic string 'gpt-5' to constant GPT5_MODEL_PREFIX
- Remove dead code (isResponsesApiModel method that always returned true)
- Improve JSDoc for store parameter with detailed explanation
- Update tests to use new method names

All tests passing (30/30)
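The typed metadata replacing the `any` casts might look like the sketch below. The field names are assumptions drawn from the behavior this PR describes; the actual shape of the new types.ts is not shown on this page:

```typescript
// Hypothetical typed shape for persisted GPT-5 task metadata.
interface Gpt5TaskMetadata {
  previousResponseId?: string;
  skipPrevResponseIdOnce?: boolean;
}

// Narrow an untyped persisted blob instead of casting with `any`:
// unknown fields are dropped, wrongly-typed fields become undefined.
function readMetadata(raw: unknown): Gpt5TaskMetadata {
  if (typeof raw !== "object" || raw === null) return {};
  const obj = raw as Record<string, unknown>;
  return {
    previousResponseId:
      typeof obj.previousResponseId === "string"
        ? obj.previousResponseId
        : undefined,
    skipPrevResponseIdOnce:
      typeof obj.skipPrevResponseIdOnce === "boolean"
        ? obj.skipPrevResponseIdOnce
        : undefined,
  };
}
```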
Summary
This PR fixes critical issues with the GPT-5 Responses API implementation introduced in PR #6864.
Issues Fixed
1. Manual Condensing Bug 🐛
Problem: Manual condensing (when users click the condense button) wasn't working properly: the API was still sending the old `previous_response_id` after condensing, causing errors.

Solution: Added a `skipPrevResponseIdOnce` flag in Task.ts that gets set after manual condensing, ensuring the next API call sends the full conversation context without the `previous_response_id`.

2. Image Support 🖼️
Problem: Images were not being sent correctly to the API. The format was using incorrect content types (`text` instead of `input_text`, `output_text`, `input_image`).

Solution: Use `input_text` for user text, `output_text` for assistant text, and `input_image` for images.

3. Error Handling 🔄

Gracefully handle 400 errors when `previous_response_id` is not found.

Changes Made
- Set `skipPrevResponseIdOnce = true` after manual condensing

Testing
Impact
This fix ensures that:
Fixes issues reported after PR #6864
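The graceful-retry behavior summarized above could be sketched as follows. All names here are illustrative: `send` stands in for the real API call, and matching on the error message text is an assumption about how a missing-id 400 is detected.

```typescript
// Simplified Responses API request body for the sketch.
interface ResponsesRequest {
  model: string;
  input: unknown;
  previous_response_id?: string;
}

// If the API rejects the request because previous_response_id is
// unknown, retry once without the id, sending the full conversation.
async function createWithFallback(
  send: (req: ResponsesRequest) => Promise<string>,
  req: ResponsesRequest,
  fullConversation: unknown,
): Promise<string> {
  try {
    return await send(req);
  } catch (err) {
    const msg = err instanceof Error ? err.message : String(err);
    if (req.previous_response_id && msg.includes("Previous response")) {
      // Drop the stale id and resend the complete context instead.
      const { previous_response_id, ...rest } = req;
      return send({ ...rest, input: fullConversation });
    }
    throw err; // unrelated errors still propagate
  }
}
```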
Important
Fixes GPT-5 Responses API issues with manual condensing, image support, and error handling, including updates to `Task.ts`, `openai-native.ts`, and tests.

- `skipPrevResponseIdOnce` flag in `Task.ts` to manage `previous_response_id`.
- Structured image support in `openai-native.ts` (`input_text`, `output_text`, `input_image`).
- Graceful 400 error handling in `openai-native.ts` by retrying without `previous_response_id`.
- `supportsTemperature` flag added to `modelInfoSchema` in `model.ts`.
- `openAiNativeModels` in `openai.ts` updated to include `supportsTemperature`.
- `openai-native.spec.ts` updated to reflect changes in API handling and error management.
- `package.json` updated to use `openai` version `^5.12.2`.

This description was created automatically for 52c6b8b.
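The `supportsTemperature` capability mentioned in this summary can be sketched like this. The model table below is illustrative (the PR states temperature is disabled for GPT-5, o1/o3/o4, and codex-mini); the function name is ours:

```typescript
// Per-model capability flag, as added to ModelInfo in this PR.
interface ModelInfo {
  supportsTemperature?: boolean;
}

// Illustrative subset of a model registry.
const models: Record<string, ModelInfo> = {
  "gpt-4.1": { supportsTemperature: true },
  "gpt-5": { supportsTemperature: false },
  "o3": { supportsTemperature: false },
};

// Only attach the temperature parameter when the model supports it;
// models like GPT-5 reject requests that include it.
function withTemperature(
  body: Record<string, unknown>,
  info: ModelInfo,
  temperature: number,
): Record<string, unknown> {
  return info.supportsTemperature ? { ...body, temperature } : body;
}
```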