Description
Confirm this is a Node library issue and not an underlying OpenAI API issue
- This is an issue with the Node library
Describe the bug
When using the OpenAI Node.js SDK with the new Responses API in streaming mode, the `finalResponse()` method returns an object that doesn't include the `output_text` field, even though this field is present in non-streaming responses. This creates an inconsistency between streaming and non-streaming modes.
Current Behavior
When using `client.responses.stream()` and then calling `finalResponse()`:
```js
const stream = client.responses.stream({
  model: "gpt-5",
  reasoning: { effort: "low" },
  input: [
    {
      role: "user",
      content: "Are semicolons optional in JavaScript?",
    },
  ],
});

// Process stream events...
for await (const event of stream) {
  // Handle events
}

const finalResponse = await stream.finalResponse();
console.log(finalResponse.output_text); // undefined
```
The `finalResponse` object contains:
- `id`: the response ID
- `model`: the model name
- `usage`: token usage statistics
- `reasoning_text`: the reasoning text (if applicable)
- but NOT `output_text`
Expected Behavior
For consistency with non-streaming responses, `finalResponse()` should include the `output_text` field containing the complete assembled response text:
```js
// Non-streaming mode
const response = await client.responses.create({
  model: "gpt-5",
  input: [/* ... */],
});
console.log(response.output_text); // "The response text"

// Streaming mode (expected)
const stream = client.responses.stream({/* ... */});
const finalResponse = await stream.finalResponse();
console.log(finalResponse.output_text); // Should also contain "The response text"
```
Use Case
This inconsistency makes it difficult to write code that works with both streaming and non-streaming modes. Users have to manually accumulate the text from stream events even when they want to use `finalResponse()` to get the complete response object.
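For illustration, this is the kind of mode-agnostic helper the inconsistency breaks. The sketch below is hypothetical (the `getResponseText` helper and its `streaming` option are not SDK API); it assumes `output_text` would be present on both response objects:

```js
// Hypothetical helper (not part of the SDK): fetch a response's text
// through one code path for both streaming and non-streaming modes.
// With the current behavior, the streaming branch returns undefined
// because finalResponse() omits output_text.
async function getResponseText(client, request, { streaming = false } = {}) {
  if (streaming) {
    const stream = client.responses.stream(request);
    for await (const event of stream) {
      // Deltas could be rendered here as they arrive.
    }
    const final = await stream.finalResponse();
    return final.output_text; // expected to match the non-streaming field
  }
  const response = await client.responses.create(request);
  return response.output_text;
}
```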
Workaround
Currently, users must manually accumulate the output text from the stream's `response.output_text.delta` events.
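A sketch of that workaround, assuming the `response.output_text.delta` event shape shown in the reproduction below (the helper name is ours, not the SDK's):

```js
// Accumulate output text from a stream's delta events, reproducing
// what the output_text convenience field would contain.
async function collectOutputText(stream) {
  let text = "";
  for await (const event of stream) {
    if (event.type === "response.output_text.delta") {
      text += event.delta;
    }
  }
  return text;
}

// Usage (sketch):
// const stream = client.responses.stream({ /* ... */ });
// const outputText = await collectOutputText(stream);
// const finalResponse = await stream.finalResponse(); // still lacks output_text
```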
Environment
- OpenAI Node.js SDK version: [latest]
- Node.js version: 22.14.0
- Model: gpt-5
To Reproduce
- Create a streaming response using `client.responses.stream()`
- Iterate through the stream events (or skip iteration)
- Call `await stream.finalResponse()`
- Check the `output_text` field; it will be `undefined`
See the code snippets below.

I apologize in advance if this is an API issue rather than a Node library issue; I could not tell.
Code snippets
```js
import OpenAI from "openai";

const client = new OpenAI();

console.log("=== Testing Streaming Mode with client.responses.stream() ===\n");

// Use the stream() method, which returns a ResponseStream with finalResponse()
const stream = client.responses.stream({
  model: "gpt-5",
  reasoning: { effort: "low" },
  input: [
    {
      role: "developer",
      content: "Talk like a pirate.",
    },
    {
      role: "user",
      content: "Are semicolons optional in JavaScript?",
    },
  ],
});

console.log("Streaming chunks as they arrive:");
console.log("---------------------------------");

let chunkCount = 0;
let fullText = "";

for await (const event of stream) {
  chunkCount++;
  switch (event.type) {
    case "response.output_text.delta":
      process.stdout.write(event.delta);
      fullText += event.delta;
      break;
    case "response.reasoning_text.delta":
      // Reasoning chunks (if you want to see them)
      // console.log("[REASONING]", event.delta);
      break;
    default:
      break;
  }
}

console.log("\n\n=== Stream Complete ===");
console.log(`Total chunks/events received: ${chunkCount}`);

// Now get the final response object
console.log("\n=== Getting Final Response ===");
const finalResponse = await stream.finalResponse();

console.log("\nFinal response details:");
console.log("ID:", finalResponse.id);
console.log("Model:", finalResponse.model);
console.log("Output text:", finalResponse.output_text);
console.log("Finish reason:", finalResponse.finish_reason);

if (finalResponse.usage) {
  console.log("\nToken usage:");
  console.log(JSON.stringify(finalResponse.usage, null, 2));
}

if (finalResponse.reasoning_text) {
  console.log("\nReasoning text:");
  console.log(finalResponse.reasoning_text);
}

console.log("\n\n=== Testing Another Stream Example ===\n");

const stream2 = client.responses.stream({
  model: "gpt-5",
  reasoning: { effort: "low" },
  input: [
    {
      role: "developer",
      content: "Talk like a pirate.",
    },
    {
      role: "user",
      content: "What is 2 + 2?",
    },
  ],
});

// Skip iteration, just get the final response directly
console.log("Skipping iteration, getting final response directly...");
const finalResponse2 = await stream2.finalResponse();
console.log("\nFinal answer:", finalResponse2.output_text);
```
```js
import OpenAI from "openai";

const client = new OpenAI();

const response = await client.responses.create({
  model: "gpt-5",
  reasoning: { effort: "low" },
  input: [
    {
      role: "developer",
      content: "Talk like a pirate.",
    },
    {
      role: "user",
      content: "Are semicolons optional in JavaScript?",
    },
  ],
});

console.log(response.output_text);
```
OS
macOS/unix
Node version
22.14.0
Library version
5.23.1