Conversation


@hannesrudolph hannesrudolph commented Oct 10, 2025

Summary

Adds GPT-5 Pro model with OpenAI Responses API background mode support. Background requests can take several minutes, so this implements resilient streaming with automatic recovery.

Why

GPT-5 Pro is a slow, reasoning-focused model that can take several minutes to respond. The standard streaming approach times out or appears stuck. OpenAI's Responses API background mode is designed for these long-running requests.

What Changed

Model Addition

  • Added gpt-5-pro-2025-10-06 with backgroundMode: true flag in model metadata

Background Mode Implementation

  • Auto-enables background: true, stream: true, store: true for flagged models
  • Emits status events: queued → in_progress → completed/failed
  • Shows status labels in UI spinner ("background mode (queued)…", etc.)
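The status-to-spinner-label mapping described above can be sketched roughly as follows. This is an illustrative assumption of what `backgroundStatus.ts` might look like; the label strings beyond "background mode (queued)…" are guesses, not the shipped copy.

```typescript
// Hypothetical sketch of the status → spinner-label mapping; the real
// implementation lives in webview-ui/src/utils/backgroundStatus.ts.
type BackgroundStatus =
  | "queued" | "in_progress" | "completed" | "failed"
  | "canceled" | "reconnecting" | "polling";

function backgroundStatusLabel(status: BackgroundStatus): string {
  const labels: Record<BackgroundStatus, string> = {
    queued: "background mode (queued)…",
    in_progress: "background mode (in progress)…",
    completed: "background mode (completed)",
    failed: "background mode (failed)",
    canceled: "background mode (canceled)",
    reconnecting: "background mode (reconnecting)…",
    polling: "background mode (polling)…",
  };
  return labels[status];
}
```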

Resilient Streaming

  • Auto-resume: If stream drops, resumes from last sequence number using GET /v1/responses/{id}?starting_after={seq}
  • Poll fallback: If resume fails after 3 retries, polls every 2s until completion (up to 20 minutes)
  • Synthesizes final output and usage data when polling completes
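The resume-then-poll control flow above can be sketched like this. `fetchSse` and `fetchJson` are hypothetical stand-ins for the provider's real HTTP/SSE helpers; the endpoint paths follow the PR description.

```typescript
// Illustrative sketch of the recovery flow: try SSE resume up to 3 times,
// then fall back to polling every 2s for up to 20 minutes.
async function recoverBackgroundResponse(
  responseId: string,
  lastSeq: number,
  fetchSse: (url: string) => Promise<boolean>, // true = stream completed
  fetchJson: (url: string) => Promise<{ status: string }>,
): Promise<string> {
  // 1) Attempt to resume the SSE stream from the last seen sequence number.
  for (let attempt = 0; attempt < 3; attempt++) {
    const ok = await fetchSse(
      `/v1/responses/${responseId}?starting_after=${lastSeq}`,
    ).catch(() => false);
    if (ok) return "resumed";
  }
  // 2) Resume exhausted: poll the response until it reaches a terminal state.
  const deadline = Date.now() + 20 * 60 * 1000;
  while (Date.now() < deadline) {
    const { status } = await fetchJson(`/v1/responses/${responseId}`);
    if (status === "completed" || status === "failed") return status;
    await new Promise((r) => setTimeout(r, 2_000));
  }
  return "timed_out";
}
```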

Files Changed

  • packages/types/src/providers/openai.ts - Model metadata
  • src/api/providers/openai-native.ts - Background mode logic, auto-resume, polling
  • src/core/task/Task.ts - Status event handling
  • webview-ui/src/utils/backgroundStatus.ts - Status label mapping
  • webview-ui/src/components/chat/* - UI status display

Testing

  • Background mode status emission and lifecycle
  • Auto-resume on stream drop with exponential backoff
  • Poll fallback when resume exhausts retries
  • Usage tracking parity with non-background requests
  • UI status label mapping

Important

Adds GPT-5 Pro model with background mode, implementing resilient streaming, auto-resume, and UI status updates.

  • Model Addition:
    • Added gpt-5-pro-2025-10-06 with backgroundMode: true in openai.ts.
  • Background Mode Implementation:
    • Auto-enables background: true, stream: true, store: true for flagged models in openai-native.ts.
    • Emits status events: queued → in_progress → completed/failed.
    • Shows status labels in UI spinner in ChatRow.tsx and ChatView.tsx.
  • Resilient Streaming:
    • Auto-resume: Resumes from last sequence number using GET /v1/responses/{id}?starting_after={seq} in openai-native.ts.
    • Poll fallback: Polls every 2s until completion if resume fails, up to 20 minutes.
  • Files Changed:
    • openai.ts, openai-native.ts, Task.ts - Background mode logic, auto-resume, polling.
    • ChatRow.tsx, ChatView.tsx - UI status display.
    • backgroundStatus.ts, backgroundStatus.spec.ts - Status label mapping.
  • Testing:
    • Background mode status emission and lifecycle.
    • Auto-resume on stream drop with exponential backoff.
    • Poll fallback when resume exhausts retries.
    • Usage tracking parity with non-background requests.
    • UI status label mapping.

This description was created by Ellipsis for 3a0add7.

…for long-running models (e.g., gpt-5-pro)

- Introduce ModelInfo.disableTimeout to opt out of request timeouts on a per-model basis
- Apply in OpenAI-compatible, Ollama, and LM Studio providers (timeout=0 when flag is true)
- Preserve global “API Request Timeout” behavior (0 still disables globally); per-model flag takes precedence for that model
- Motivation: gpt-5-pro often requires longer runtimes; per-model override avoids forcing a global setting that impacts all models
- Add/extend unit tests to validate provider behavior
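The precedence rule described in this commit (per-model `disableTimeout` wins for that model; otherwise the global "API Request Timeout" applies, where 0 already means "no timeout") could be sketched as below. `resolveTimeoutMs` is a hypothetical helper, not the provider's actual code.

```typescript
// Hypothetical sketch of the timeout-resolution rule from the commit message.
interface ModelInfo {
  disableTimeout?: boolean;
}

function resolveTimeoutMs(model: ModelInfo, globalTimeoutMs: number): number {
  if (model.disableTimeout) return 0; // per-model flag takes precedence
  return globalTimeoutMs;             // 0 still disables globally
}
```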
…nd non‑streaming notice

- Add GPT‑5 Pro to model registry with:
  - contextWindow: 400k, maxTokens: 272k
  - supportsImages: true, supportsPromptCache: true, supportsVerbosity: true, supportsTemperature: false
  - reasoningEffort: high (Responses API only)
  - pricing: $15/1M input tokens, $120/1M output tokens
- Set disableTimeout: true to avoid requiring a global timeout override
- Description clarifies: this is a slow, reasoning‑focused model designed for tough problems; requests may take several minutes; it does not stream (UI may appear idle until completion)
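As a worked example of the pricing listed above ($15 per 1M input tokens, $120 per 1M output tokens), a generic cost helper, not project code, might look like:

```typescript
// Cost of a request at GPT-5 Pro's listed rates; purely illustrative.
function costUsd(inputTokens: number, outputTokens: number): number {
  const INPUT_PER_M = 15;   // $ per 1M input tokens
  const OUTPUT_PER_M = 120; // $ per 1M output tokens
  return (inputTokens / 1_000_000) * INPUT_PER_M +
         (outputTokens / 1_000_000) * OUTPUT_PER_M;
}
// e.g. a 10k-input / 2k-output request costs roughly $0.15 + $0.24 ≈ $0.39
```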
…verride for long-running models (e.g., gpt-5-pro)"

This reverts commit ed2a17a.
…-5-pro model entry (server-side timeouts). Prep for background mode approach.
Enable OpenAI Responses background mode with resilient streaming for GPT‑5 Pro and any model flagged via metadata.

Key changes:

- Background mode enablement

  • Auto-enable for models with info.backgroundMode === true (e.g., gpt-5-pro-2025-10-06) defined in [packages/types/src/providers/openai.ts](packages/types/src/providers/openai.ts).

  • Also respects manual override (openAiNativeBackgroundMode) from ProviderSettings/ApiHandlerOptions.

- Request shape (Responses API)

  • background:true, stream:true, store:true set in [OpenAiNativeHandler.buildRequestBody()](src/api/providers/openai-native.ts:224).
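The request shape above can be sketched as follows. `buildBody` is an illustrative stand-in for `OpenAiNativeHandler.buildRequestBody()`; only the three flags and the `backgroundMode` metadata check come from the PR description.

```typescript
// Sketch: for models flagged backgroundMode in metadata, the Responses API
// request enables background/stream/store together.
interface ModelMeta {
  backgroundMode?: boolean;
}

function buildBody(model: string, meta: ModelMeta, input: string) {
  const body: Record<string, unknown> = { model, input };
  if (meta.backgroundMode) {
    body.background = true;
    body.stream = true;
    body.store = true; // background mode requires store=true
  }
  return body;
}
```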

- Streaming UX and status events

  • New ApiStreamStatusChunk in [src/api/transform/stream.ts](src/api/transform/stream.ts) with statuses: queued, in_progress, completed, failed, canceled, reconnecting, polling.

  • Provider emits status chunks in SDK + SSE paths via [OpenAiNativeHandler.processEvent()](src/api/providers/openai-native.ts:1100) and [OpenAiNativeHandler.handleStreamResponse()](src/api/providers/openai-native.ts:651).

  • UI spinner shows background lifecycle labels in [webview-ui/src/components/chat/ChatRow.tsx](webview-ui/src/components/chat/ChatRow.tsx) using [webview-ui/src/utils/backgroundStatus.ts](webview-ui/src/utils/backgroundStatus.ts).
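A minimal sketch of the `ApiStreamStatusChunk` shape described above, plus a type guard consumers might use when filtering the stream; field names beyond `type` and `status` are assumptions.

```typescript
// Assumed shape of the new status chunk in src/api/transform/stream.ts.
type BackgroundStatus =
  | "queued" | "in_progress" | "completed" | "failed"
  | "canceled" | "reconnecting" | "polling";

interface ApiStreamStatusChunk {
  type: "status";
  status: BackgroundStatus;
}

// Narrow a generic stream chunk down to a status chunk.
function isStatusChunk(chunk: { type: string }): chunk is ApiStreamStatusChunk {
  return chunk.type === "status";
}
```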

- Resilience: auto-resume + poll fallback

  • On stream drop for background tasks, attempt SSE resume using response.id and last sequence_number with exponential backoff in [OpenAiNativeHandler.attemptResumeOrPoll()](src/api/providers/openai-native.ts:1215).

  • If resume fails, poll GET /v1/responses/{id} every 2s until terminal and synthesize final output/usage.

  • Deduplicate resumed events via resumeCutoffSequence in [handleStreamResponse()](src/api/providers/openai-native.ts:737).
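Two of the resilience pieces above, exponential backoff between resume attempts and dropping already-seen events at or below the resume cutoff, can be sketched like this. Function names and the base delay are hypothetical.

```typescript
// Exponential backoff: attempt 0 → 1s, 1 → 2s, 2 → 4s (base delay assumed).
function backoffDelayMs(attempt: number, baseDelayMs = 1_000): number {
  return baseDelayMs * 2 ** attempt;
}

// After a resume, discard events whose sequence number was already streamed.
function dedupeResumedEvents<T extends { sequence_number: number }>(
  events: T[],
  resumeCutoffSequence: number,
): T[] {
  return events.filter((e) => e.sequence_number > resumeCutoffSequence);
}
```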

- Settings (no new UI switch)

  • Added optional provider settings and ApiHandlerOptions: autoResume, resumeMaxRetries, resumeBaseDelayMs, pollIntervalMs, pollMaxMinutes in [packages/types/src/provider-settings.ts](packages/types/src/provider-settings.ts) and [src/shared/api.ts](src/shared/api.ts).

- Cleanup

  • Removed VS Code contributes toggle for background mode; behavior now model-driven + programmatic override.

- Tests

  • Provider: coverage for background status emission, auto-resume success, resume→poll fallback, and a non-background negative case in [src/api/providers/__tests__/openai-native.spec.ts](src/api/providers/__tests__/openai-native.spec.ts).

  • Usage-tracking parity validated as unchanged in [src/api/providers/__tests__/openai-native-usage.spec.ts](src/api/providers/__tests__/openai-native-usage.spec.ts).

  • UI: label mapping tests for background statuses in [webview-ui/src/utils/__tests__/backgroundStatus.spec.ts](webview-ui/src/utils/__tests__/backgroundStatus.spec.ts).

Notes:

- Aligns with TEMP_OPENAI_BACKGROUND_TASK_DOCS.MD: background requires store=true; supports streaming resume via response.id + sequence_number.

- Default behavior unchanged for non-background models; no breaking changes.
…description, remove duplicate test, revert gitignore
…ded dep to useMemo; test: remove duplicate GPT-5 Pro background-mode test; chore(core): remove temp debug log
…background labels; fix deps warning in ChatRow useMemo
…assify permanent vs transient errors; chore(task): remove temporary debug log
…core/task): avoid full-state refresh on each background status chunk to reduce re-renders

@hannesrudolph hannesrudolph left a comment


Implemented minor fixes and a small performance improvement based on the review. See inline notes.


roomote bot commented Nov 3, 2025


Status: Posted targeted inline review comments. Focused on metadata consistency and text style. Background-mode implementation (auto-resume/poll) appears solid with explicit logging and error classification.

  • Resolve reasoning effort config mismatch for GPT-5 Pro model metadata (supportsReasoningEffort is false while a default reasoningEffort of "high" is set). Decide whether to remove the default or enable configurability.
  • Replace em dash in GPT‑5 Pro description with the standard hyphen form to match existing UI copy style.
  • Verified background-mode resilience and logging (resume backoff, polling classification) in provider path.


contextWindow: 400000,
supportsImages: true,
supportsPromptCache: false,
supportsReasoningEffort: false, // Set to false to prevent the UI from displaying the reasoning-effort selector

Reasoning effort config is contradictory here: supportsReasoningEffort is false but reasoningEffort is set to "high". With this combination the UI hides the selector while the backend still injects a reasoning parameter (see src/shared/api.ts and src/api/providers/openai-native.ts). This can be confusing for users and maintainers. Consider either enabling supportsReasoningEffort to reflect configurability or removing the default reasoningEffort and documenting that the model runs with provider defaults.


inputPrice: 15.0,
outputPrice: 120.0,
description:
"GPT-5 Pro: A slow, reasoning-focused model for complex problems. Uses background mode with resilient streaming — requests may take some time and will automatically reconnect if they time out.",

Style nit: The description uses an em dash (—). Project text typically avoids em dashes for consistency with UI strings. Consider replacing with a spaced hyphen form to match style elsewhere, e.g.:

"GPT-5 Pro: A slow, reasoning-focused model for complex problems. Uses background mode with resilient streaming - requests may take some time and will automatically reconnect if they time out."

