
fix(codex): return proper error instead of empty response on context overflow#1786

Open
SethBurkart123 wants to merge 6 commits into router-for-me:main from SethBurkart123:fix/codex-sse-error-forwarding

Conversation

@SethBurkart123

Currently, when a client requests a chat completion from a Codex model and the request exceeds the model's context window, an empty response is returned, which breaks a large number of clients such as Droid. These changes propagate and emit the upstream error, which fully fixes long conversations for many third-party CLI tools.

If you run into any issues, please reach out; I'll be happy to fix them. (I'd love for this issue to be fully fixed.)

Ensure context overflow and similar upstream terminal SSE failures are surfaced as OpenAI-compatible errors instead of falling through to empty successful completions.
@gemini-code-assist

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical issue where Codex model chat completions exceeding the context window would result in an empty response, disrupting client applications. The changes introduce proper handling and propagation of terminal Server-Sent Event (SSE) errors, such as context length overruns and rate limits, directly within the response stream. This ensures that client-side tools receive meaningful error messages instead of ambiguous empty responses, significantly improving the reliability and user experience for long conversations.

Highlights

  • Error Propagation: Implemented robust error propagation for Server-Sent Events (SSE) in Codex responses, specifically handling "error" and "response.failed" event types.
  • Context Window Errors: Ensured that "context_length_exceeded" errors are correctly parsed and returned as "http.StatusBadRequest".
  • Rate Limit Handling: Mapped "rate_limit_exceeded" and "usage_limit_reached" errors to "http.StatusTooManyRequests".
  • New Error Parsing Function: Introduced "parseCodexSSEError" to standardize the interpretation of SSE error payloads.
  • Comprehensive Testing: Added new unit tests to validate the "parseCodexSSEError" function's behavior across different error scenarios.
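
The helper described above could look roughly like this. This is a minimal sketch, not the PR's actual implementation: the `parseCodexSSEError` name comes from the PR, but the signature, the `sseErrorPayload` struct, the `usage_limit` prefix match, and the 502 fallback are assumptions:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"strings"
)

// sseErrorPayload mirrors the subset of a Codex SSE event this sketch
// inspects; field placement beyond type/code/message is an assumption.
type sseErrorPayload struct {
	Type  string `json:"type"`
	Error struct {
		Code    string `json:"code"`
		Message string `json:"message"`
	} `json:"error"`
	Response struct {
		Error struct {
			Code    string `json:"code"`
			Message string `json:"message"`
		} `json:"error"`
	} `json:"response"`
}

// parseCodexSSEError maps terminal SSE events ("error" and
// "response.failed") to an HTTP status and message; ok is false for
// every other event type so normal stream chunks pass through.
func parseCodexSSEError(data []byte) (status int, message string, ok bool) {
	var p sseErrorPayload
	if err := json.Unmarshal(data, &p); err != nil {
		return 0, "", false
	}
	code, msg := p.Error.Code, p.Error.Message
	switch p.Type {
	case "error":
		// top-level error object already selected
	case "response.failed":
		// error is nested under the failed response object
		code, msg = p.Response.Error.Code, p.Response.Error.Message
	default:
		return 0, "", false
	}
	if msg == "" {
		msg = "upstream codex stream failed" // human-readable fallback
	}
	switch {
	case code == "context_length_exceeded":
		status = http.StatusBadRequest
	case code == "rate_limit_exceeded" || strings.HasPrefix(code, "usage_limit"):
		status = http.StatusTooManyRequests
	default:
		status = http.StatusBadGateway
	}
	return status, msg, true
}

func main() {
	raw := []byte(`{"type":"error","error":{"code":"context_length_exceeded","message":"context too long"}}`)
	if status, msg, ok := parseCodexSSEError(raw); ok {
		fmt.Println(status, msg) // 400 context too long
	}
}
```

Both `Execute` and `ExecuteStream` can then short-circuit as soon as `ok` is true, rather than letting the stream end as an empty success.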


Changelog
  • internal/runtime/executor/codex_executor.go
    • Imported the "encoding/json" package.
    • Modified the "Execute" function to use a "switch" statement for SSE event types, explicitly handling "error" and "response.failed" events to return parsed errors.
    • Integrated "parseCodexSSEError" into the "ExecuteStream" function to detect and propagate SSE errors immediately.
    • Added a new helper function "parseCodexSSEError" to extract error details from SSE payloads, map them to appropriate HTTP status codes (e.g., 400 for context length, 429 for rate limits), and wrap them in a standardized JSON format.
  • internal/runtime/executor/codex_executor_retry_test.go
    • Imported "encoding/json" and "strings" packages.
    • Added "TestParseCodexSSEError" with multiple sub-tests to verify the correct parsing of "context_length_exceeded" and "rate_limit_exceeded" errors, as well as "response.failed" events with nested errors, and to confirm that non-error events are ignored.
Activity
  • No specific activity (comments, reviews, progress) has been recorded for this pull request yet.

@SethBurkart123 SethBurkart123 changed the title fix(codex): propagate terminal SSE errors in responses stream fix(codex): return proper error instead of empty response on context overflow Mar 2, 2026

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request correctly propagates Server-Sent Event (SSE) errors from the codex backend to the client, which is a great improvement for error handling in streaming responses. My review includes suggestions to improve the implementation's robustness and maintainability. Specifically, I've recommended using structs instead of map[string]any for building JSON, refining the error message fallback logic, and making type assertions in tests safer to prevent panics.

SethBurkart123 and others added 2 commits March 2, 2026 14:56
Use a human-readable fallback message when upstream omits error.message, and harden test type assertions for clearer failures.
@SethBurkart123

bump


@luispater luispater left a comment


Thanks for fixing the empty-success response path here — surfacing terminal SSE failures is the right direction. I found one blocking gap before I’d approve: the new SSE error path drops retry-after semantics for quota/usage-limit events. The existing HTTP error path preserves that via parseCodexRetryAfter / newCodexStatusErr, and the auth conductor uses RetryAfter() to mark credentials unavailable until the real reset time. With this patch, usage_limit_reached delivered over SSE falls back to generic quota backoff instead of the upstream reset window. Please preserve or compute retryAfter in parseCodexSSEError and add a test covering a usage_limit_reached payload with resets_at / resets_in_seconds.

@SethBurkart123 SethBurkart123 force-pushed the fix/codex-sse-error-forwarding branch 2 times, most recently from e207c39 to 7913502 Compare March 8, 2026 06:47
@SethBurkart123 SethBurkart123 requested a review from luispater March 8, 2026 06:48