Conversation

roomote[bot]

@roomote roomote bot commented Aug 22, 2025

Related GitHub Issue

Closes: #7334

Roo Code Task Context (Optional)

This PR was created with assistance from Roo Code to fix the GPT-5 prompt enhancement issue.

Description

This PR fixes the "completePrompt is not supported" error that occurs when using the Enhance Prompt feature with GPT-5 models (gpt-5, gpt-5-mini, gpt-5-nano) and Codex Mini.

Key implementation details:

  • Modified the completePrompt method in the OpenAI Native provider to support models that use the Responses API
  • Implemented a new completePromptViaResponsesApi method that collects streaming responses into a complete string
  • Added an SSE fallback implementation (completePromptViaResponsesApiSSE) for robustness when the SDK fails
  • The solution maintains backward compatibility with existing models while enabling prompt enhancement for GPT-5 and Codex Mini

Design choices:

  • Instead of throwing an error for Responses API models, we now collect the stream and return the complete text
  • Properly handles both text content and reasoning responses (for o1/o3 models)
  • Includes comprehensive error handling and fallback mechanisms
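The core design choice above can be sketched as follows. This is an illustrative reduction, not the provider's actual code: the event names and shapes are assumptions based on the event types mentioned later in this review (response.text.delta, response.reasoning.delta), and collectStream stands in for completePromptViaResponsesApi.

```typescript
// Hypothetical sketch: collecting a Responses API event stream into one string.
// Event names/shapes are assumptions, not the actual provider implementation.
type ResponseEvent =
	| { type: "response.text.delta"; delta: string }
	| { type: "response.reasoning.delta"; delta: string }
	| { type: "response.completed" }

async function collectStream(stream: AsyncIterable<ResponseEvent>): Promise<string> {
	let text = ""
	for await (const event of stream) {
		// Only user-visible text is returned; reasoning deltas (o1/o3) are observed
		// but deliberately excluded from the completed prompt text.
		if (event.type === "response.text.delta") {
			text += event.delta
		}
	}
	return text
}

// Usage with a mocked stream:
async function* mockStream(): AsyncGenerator<ResponseEvent> {
	yield { type: "response.reasoning.delta", delta: "thinking..." }
	yield { type: "response.text.delta", delta: "Hello, " }
	yield { type: "response.text.delta", delta: "world" }
	yield { type: "response.completed" }
}

collectStream(mockStream()).then((s) => console.log(s)) // prints "Hello, world"
```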

Test Procedure

How I tested:

  1. Added comprehensive unit tests for all GPT-5 model variants (gpt-5, gpt-5-mini, gpt-5-nano)
  2. Updated existing Codex Mini tests to verify the new behavior
  3. All tests pass successfully: npx vitest run api/providers/__tests__/openai-native.spec.ts
  4. Linting passes with no warnings: npx eslint api/providers/openai-native.ts --max-warnings 0

How reviewers can verify:

  1. Run the test suite: cd src && npx vitest run api/providers/__tests__/openai-native.spec.ts
  2. Test the Enhance Prompt feature with a GPT-5 model configured in the extension
  3. Verify that prompt enhancement now works without throwing errors

Pre-Submission Checklist

  • Issue Linked: This PR is linked to an approved GitHub Issue (see "Related GitHub Issue" above).
  • Scope: My changes are focused on the linked issue (one major feature/fix per PR).
  • Self-Review: I have performed a thorough self-review of my code.
  • Testing: New and/or updated tests have been added to cover my changes (if applicable).
  • Documentation Impact: I have considered if my changes require documentation updates (see "Documentation Updates" section below).
  • Contribution Guidelines: I have read and agree to the Contributor Guidelines.

Screenshots / Videos

N/A - This is a backend fix for API compatibility.

Documentation Updates

  • No documentation updates are required.

The fix is transparent to users: the Enhance Prompt feature simply starts working with GPT-5 models, with no user-facing changes needed.

Additional Notes

This implementation follows the same pattern used for other streaming-to-completion conversions in the codebase. The SSE fallback ensures robustness even if the OpenAI SDK has issues with the Responses API.

Get in Touch

Available for any questions about this implementation.


Important

Fixes completePrompt for GPT-5 and Codex Mini models by supporting streaming responses and adding robust error handling.

  • Behavior:
    • Fixes "completePrompt is not supported" error for GPT-5 and Codex Mini models by modifying completePrompt in openai-native.ts.
    • Introduces completePromptViaResponsesApi to handle streaming responses for these models.
    • Adds SSE fallback with completePromptViaResponsesApiSSE for robustness.
  • Testing:
    • Adds unit tests for GPT-5 variants and Codex Mini in openai-native.spec.ts.
    • Tests cover successful completions, API errors, and reasoning responses.
  • Design:
    • Collects streaming responses into a complete string instead of throwing errors.
    • Handles text and reasoning responses, ensuring backward compatibility.

This description was created by Ellipsis for 6af1f2f.

- Implement completePromptViaResponsesApi to collect streaming responses
- Add SSE fallback for robustness when SDK fails
- Support all GPT-5 variants (gpt-5, gpt-5-mini, gpt-5-nano) and Codex Mini
- Add comprehensive test coverage for new functionality
- Fixes #7334
@roomote roomote bot requested review from mrubens, cte and jr as code owners August 22, 2025 19:15
@dosubot dosubot bot added size:L This PR changes 100-499 lines, ignoring generated files. bug Something isn't working labels Aug 22, 2025
@hannesrudolph hannesrudolph added the Issue/PR - Triage New issue. Needs quick review to confirm validity and assign labels. label Aug 22, 2025
@roomote roomote bot left a comment

Reviewing my own code is like debugging in a mirror - everything looks backwards but the bugs are still mine.

```typescript
 * Handles non-streaming completion for models that use the Responses API (GPT-5 and Codex Mini)
 * by collecting the streaming response into a complete string.
 */
private async completePromptViaResponsesApi(prompt: string): Promise<string> {
```
These two methods have significant duplication in their event handling logic. Consider extracting the common event processing into a shared helper method to reduce code duplication and improve maintainability. Both methods handle the same event types (response.text.delta, response.reasoning.delta, response.output_item.added, etc.) with identical logic.
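The extraction suggested above could look something like this. All names here are hypothetical; the sketch only shows the shape of a shared per-event reducer that both the SDK path and the SSE fallback could call.

```typescript
// Hypothetical refactor sketch: per-event logic shared by the SDK path and the
// SSE fallback, extracted into one helper. Event names are assumptions.
interface Accumulator {
	text: string
}

function processEvent(acc: Accumulator, event: { type: string; delta?: string }): void {
	switch (event.type) {
		case "response.text.delta":
			acc.text += event.delta ?? ""
			break
		case "response.reasoning.delta":
			// Reasoning output (o1/o3) is intentionally not included in the result.
			break
		case "response.output_item.added":
			// Structured output items could be handled here if needed.
			break
	}
}

// Both completePromptViaResponsesApi and completePromptViaResponsesApiSSE could
// then reduce their event streams through the same helper:
const acc: Accumulator = { text: "" }
for (const e of [
	{ type: "response.text.delta", delta: "a" },
	{ type: "response.text.delta", delta: "b" },
]) {
	processEvent(acc, e)
}
console.log(acc.text) // prints "ab"
```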


```typescript
if (typeof (stream as any)[Symbol.asyncIterator] !== "function") {
	// Fall back to SSE if SDK doesn't return an AsyncIterable
	return await this.completePromptViaResponsesApiSSE(requestBody)
```
When the SDK doesn't return an AsyncIterable, we fall back to SSE silently. Consider adding a console.warn here to help with debugging:

Suggested change:

```diff
-	return await this.completePromptViaResponsesApiSSE(requestBody)
+	// Fall back to SSE if SDK doesn't return an AsyncIterable
+	console.warn('[GPT-5] SDK did not return AsyncIterable, falling back to SSE implementation');
+	return await this.completePromptViaResponsesApiSSE(requestBody)
```

```typescript
	reader.releaseLock()
} catch (error) {
	if (error instanceof Error) {
		throw new Error(`Failed to complete prompt via GPT-5 API: ${error.message}`)
```
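For context, the core of an SSE fallback like completePromptViaResponsesApiSSE is typically a buffer-and-split loop over the raw response body. The sketch below is a generic illustration of that parsing step, not the PR's actual implementation; field names and the data: framing follow the standard SSE wire format.

```typescript
// Hypothetical sketch of an SSE read loop's parsing step: split the buffered
// body on blank lines and extract each `data:` payload, keeping any trailing
// incomplete event for the next read. Not the PR's actual code.
function parseSseChunk(buffer: string): { events: string[]; rest: string } {
	const events: string[] = []
	const parts = buffer.split("\n\n")
	// The last part may be an incomplete event; keep it for the next chunk.
	const rest = parts.pop() ?? ""
	for (const part of parts) {
		for (const line of part.split("\n")) {
			if (line.startsWith("data: ") && line !== "data: [DONE]") {
				events.push(line.slice("data: ".length))
			}
		}
	}
	return { events, rest }
}

const { events, rest } = parseSseChunk('data: {"delta":"hi"}\n\ndata: {"del')
console.log(events, rest) // events: ['{"delta":"hi"}'], rest: 'data: {"del'
```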
The error handling here wraps errors with a specific message, but in the main method (line 1393) the fallback happens silently. Consider consistent error handling - either log both fallbacks or wrap both errors consistently.
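One way to make the two paths consistent is to route both through a single helper that logs the fallback and applies the same error wrapping. This is a sketch under assumed names, not a change the PR makes:

```typescript
// Hypothetical sketch: symmetric fallback handling so the SDK→SSE fallback is
// logged and errors are wrapped consistently. All names are illustrative.
async function withFallback<T>(
	primary: () => Promise<T>,
	fallback: () => Promise<T>,
	label: string,
): Promise<T> {
	try {
		return await primary()
	} catch (error) {
		// Log the fallback instead of switching paths silently.
		console.warn(`[${label}] primary path failed, falling back:`, error)
		try {
			return await fallback()
		} catch (fallbackError) {
			const msg = fallbackError instanceof Error ? fallbackError.message : String(fallbackError)
			throw new Error(`Failed to complete prompt via ${label}: ${msg}`)
		}
	}
}

// Usage: primary (SDK) path fails, SSE fallback succeeds.
withFallback(
	() => Promise.reject(new Error("sdk stream not iterable")),
	() => Promise.resolve("ok"),
	"GPT-5",
).then((v) => console.log(v)) // prints "ok"
```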

```diff
@@ -470,6 +470,182 @@ describe("OpenAiNativeHandler", () => {
 	})
 })
```

Great test coverage! Consider adding a few edge case tests:

  • Network timeout scenarios during streaming
  • Partial response handling when stream is interrupted
  • Mixed content types in a single response (text + reasoning together)

These would help ensure robustness in production edge cases.
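The interrupted-stream case is worth spelling out, since a collector can either lose or preserve the text gathered before the failure. A minimal sketch, with all names illustrative:

```typescript
// Hypothetical sketch of the "partial response" edge case: the collector keeps
// whatever text arrived before the stream failed, and flags the interruption
// so the caller can decide whether to use or discard the partial result.
async function collectWithPartial(
	stream: AsyncIterable<{ delta: string }>,
): Promise<{ text: string; interrupted: boolean }> {
	let text = ""
	try {
		for await (const chunk of stream) {
			text += chunk.delta
		}
		return { text, interrupted: false }
	} catch {
		return { text, interrupted: true }
	}
}

// A mock stream that fails after two chunks, simulating a dropped connection.
async function* interruptedStream(): AsyncGenerator<{ delta: string }> {
	yield { delta: "partial " }
	yield { delta: "answer" }
	throw new Error("connection reset")
}

collectWithPartial(interruptedStream()).then((r) => console.log(r.text, r.interrupted))
// prints "partial answer true"
```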

```diff
@@ -1276,4 +1277,254 @@ export class OpenAiNativeHandler extends BaseProvider implements SingleCompletio
 		throw error
 	}
 }
```

Consider adding JSDoc comments to document the purpose and behavior of these new methods. This would help future maintainers understand the fallback strategy and the dual-path approach (SDK with SSE fallback).

@daniel-lxs daniel-lxs moved this from Triage to PR [Needs Prelim Review] in Roo Code Roadmap Aug 23, 2025
@hannesrudolph hannesrudolph added PR - Needs Preliminary Review and removed Issue/PR - Triage New issue. Needs quick review to confirm validity and assign labels. labels Aug 23, 2025
@daniel-lxs
Copy link
Collaborator

Closing in favor of #7067

@daniel-lxs daniel-lxs closed this Aug 25, 2025
@github-project-automation github-project-automation bot moved this from New to Done in Roo Code Roadmap Aug 25, 2025
@github-project-automation github-project-automation bot moved this from PR [Needs Prelim Review] to Done in Roo Code Roadmap Aug 25, 2025
Linked issue: "Failed to enhance prompt" with GPT-5 models; "completePrompt" not supported