feat(openai-native): background mode + auto-resume and poll fallback
Enable OpenAI Responses background mode with resilient streaming for GPT‑5 Pro and any model flagged via metadata.
Key changes:
- Background mode enablement
• Auto-enable for models with info.backgroundMode === true (e.g., gpt-5-pro-2025-10-06) defined in [packages/types/src/providers/openai.ts](packages/types/src/providers/openai.ts).
• Also respects manual override (openAiNativeBackgroundMode) from ProviderSettings/ApiHandlerOptions.
- Request shape (Responses API)
• background: true, stream: true, store: true set in [OpenAiNativeHandler.buildRequestBody()](src/api/providers/openai-native.ts:224).
- Streaming UX and status events
• New ApiStreamStatusChunk in [src/api/transform/stream.ts](src/api/transform/stream.ts) with statuses: queued, in_progress, completed, failed, canceled, reconnecting, polling.
• Provider emits status chunks in SDK + SSE paths via [OpenAiNativeHandler.processEvent()](src/api/providers/openai-native.ts:1100) and [OpenAiNativeHandler.handleStreamResponse()](src/api/providers/openai-native.ts:651).
• UI spinner shows background lifecycle labels in [webview-ui/src/components/chat/ChatRow.tsx](webview-ui/src/components/chat/ChatRow.tsx) using [webview-ui/src/utils/backgroundStatus.ts](webview-ui/src/utils/backgroundStatus.ts).
- Resilience: auto-resume + poll fallback (sketched after this list)
• On stream drop for background tasks, attempt an SSE resume using response.id and the last sequence_number, with exponential backoff, in [OpenAiNativeHandler.attemptResumeOrPoll()](src/api/providers/openai-native.ts:1215).
• If resume fails, poll GET /v1/responses/{id} every 2s until a terminal state, then synthesize the final output/usage.
• Deduplicate resumed events via resumeCutoffSequence in [handleStreamResponse()](src/api/providers/openai-native.ts:737).
- Settings (no new UI switch)
• Added optional provider settings and ApiHandlerOptions: autoResume, resumeMaxRetries, resumeBaseDelayMs, pollIntervalMs, pollMaxMinutes in [packages/types/src/provider-settings.ts](packages/types/src/provider-settings.ts) and [src/shared/api.ts](src/shared/api.ts).
- Cleanup
• Removed VS Code contributes toggle for background mode; behavior now model-driven + programmatic override.
- Tests
• Provider: coverage for background status emission, auto-resume success, resume→poll fallback, and the non-background negative case in [src/api/providers/__tests__/openai-native.spec.ts](src/api/providers/__tests__/openai-native.spec.ts).
• Usage parity validated as unchanged in [src/api/providers/__tests__/openai-native-usage.spec.ts](src/api/providers/__tests__/openai-native-usage.spec.ts).
• UI: label mapping tests for background statuses in [webview-ui/src/utils/__tests__/backgroundStatus.spec.ts](webview-ui/src/utils/__tests__/backgroundStatus.spec.ts).
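A compressed sketch of the resume → poll behavior (illustrative only, not the actual implementation: `resumeStream` and `fetchResponse` are hypothetical stand-ins for the SSE re-attach and GET /v1/responses/{id} calls; the option names mirror the new settings):

```typescript
type BackgroundStatus =
  | "queued"
  | "in_progress"
  | "completed"
  | "failed"
  | "canceled"
  | "reconnecting"
  | "polling";

interface ResumeOptions {
  resumeMaxRetries: number; // attempts before falling back to polling
  resumeBaseDelayMs: number; // backoff base, doubled per attempt
  pollIntervalMs: number; // e.g. 2000
  pollMaxMinutes: number; // give up polling after this long
}

const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

declare function resumeStream(id: string, afterSequence: number): Promise<void>; // hypothetical
declare function fetchResponse(id: string): Promise<{ status: BackgroundStatus }>; // hypothetical

async function* attemptResumeOrPollSketch(
  responseId: string,
  lastSequence: number,
  opts: ResumeOptions,
): AsyncGenerator<{ type: "status"; status: BackgroundStatus }> {
  // 1) Try to re-attach to the SSE stream, replaying only events after the
  //    last seen sequence_number, with exponential backoff between attempts.
  for (let attempt = 0; attempt < opts.resumeMaxRetries; attempt++) {
    yield { type: "status", status: "reconnecting" };
    try {
      await resumeStream(responseId, lastSequence);
      return;
    } catch {
      await sleep(opts.resumeBaseDelayMs * 2 ** attempt);
    }
  }
  // 2) Resume failed: poll the response until it reaches a terminal state.
  const deadline = Date.now() + opts.pollMaxMinutes * 60_000;
  while (Date.now() < deadline) {
    yield { type: "status", status: "polling" };
    const resp = await fetchResponse(responseId);
    if (resp.status === "completed" || resp.status === "failed" || resp.status === "canceled") {
      yield { type: "status", status: resp.status };
      return;
    }
    await sleep(opts.pollIntervalMs);
  }
}
```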
Notes:
- Aligns with TEMP_OPENAI_BACKGROUND_TASK_DOCS.DM: background requires store=true; supports streaming resume via response.id + sequence_number.
- Default behavior unchanged for non-background models; no breaking changes.
Run long-running tasks asynchronously in the background.
Agents like [Codex](https://openai.com/index/introducing-codex/) and [Deep Research](https://openai.com/index/introducing-deep-research/) show that reasoning models can take several minutes to solve complex problems. Background mode enables you to execute long-running tasks on models like o3 and o1-pro reliably, without having to worry about timeouts or other connectivity issues.
Background mode kicks off these tasks asynchronously, and developers can poll response objects to check status over time. To start response generation in the background, make an API request with `background` set to `true`:
Generate a response in the background
```bash
curl https://api.openai.com/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "o3",
    "input": "Write a very long novel about otters in space.",
    "background": true
  }'
```
```javascript
import OpenAI from "openai";

const client = new OpenAI();

const resp = await client.responses.create({
  model: "o3",
  input: "Write a very long novel about otters in space.",
  background: true,
});

console.log(resp.status);
```
```python
from openai import OpenAI

client = OpenAI()

resp = client.responses.create(
    model="o3",
    input="Write a very long novel about otters in space.",
    background=True,
)

print(resp.status)
```
Polling background responses
----------------------------
To check the status of background requests, use the GET endpoint for Responses. Keep polling while the request is in the `queued` or `in_progress` state. When it leaves these states, it has reached a final (terminal) state.
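A minimal sketch of such a polling loop with the Node SDK; the two-second interval is an illustrative choice:

```typescript
import OpenAI from "openai";

const client = new OpenAI();

// Kick off a background response, then poll until it leaves
// the queued/in_progress states.
let resp = await client.responses.create({
  model: "o3",
  input: "Write a very long novel about otters in space.",
  background: true,
});

while (resp.status === "queued" || resp.status === "in_progress") {
  console.log("Current status:", resp.status);
  await new Promise((resolve) => setTimeout(resolve, 2000)); // wait 2 seconds
  resp = await client.responses.retrieve(resp.id);
}

console.log("Final status:", resp.status);
```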
Cancelling a background response twice is idempotent; subsequent calls simply return the final `Response` object.
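A sketch of cancelling a background response, assuming the SDK's `responses.cancel` helper (the response ID below is a placeholder):

```typescript
import OpenAI from "openai";

const client = new OpenAI();

// Only background responses can be cancelled; cancelling one that has
// already reached a terminal state just returns that final Response.
const cancelled = await client.responses.cancel("resp_123");
console.log(cancelled.status); // terminal status once cancellation lands
```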
Streaming a background response
-------------------------------
You can create a background Response and start streaming events from it right away. This may be helpful if you expect the client to drop the stream and want the option of picking it back up later. To do this, create a Response with both `background` and `stream` set to `true`. You will want to keep track of a "cursor" corresponding to the `sequence_number` you receive in each streaming event.
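For example, a sketch of tracking that cursor with the Node SDK (each streamed event is assumed to carry a `sequence_number`, as described above):

```typescript
import OpenAI from "openai";

const client = new OpenAI();

const stream = await client.responses.create({
  model: "o3",
  input: "Write a very long novel about otters in space.",
  background: true,
  stream: true,
});

// Remember the last sequence_number seen so the stream can be
// resumed from that point if the connection drops.
let cursor: number | null = null;
for await (const event of stream) {
  console.log(event);
  cursor = event.sequence_number;
}
```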
Currently, the time to first token you receive from a background response is higher than what you receive from a synchronous one. We are working to reduce this latency gap in the coming weeks.
Generate and stream a background response
```bash
curl https://api.openai.com/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{
    "model": "o3",
    "input": "Write a very long novel about otters in space.",
    "background": true,
    "stream": true
  }'
```
"GPT-5 Pro: a slow, reasoning-focused model built to tackle tough problems. Requests can take several minutes to finish. Responses API only; no streaming, so it may appear stuck until the reply is ready.",