Update dependencies and refactor OpenAI stream handling #216
- Updated various dependencies in `package.json` and `bun.lock`, including `@react-spring/web`, `ai`, and `openai`.
- Refactored the OpenAI stream handling in `landing-page-quote.tsx` to use a new `ReadableStream` implementation for better performance and error handling.
- Reordered the lint commands in `package.json` for consistency.
Pull Request Overview
This PR updates several key dependencies and refactors the OpenAI streaming implementation to use a native `ReadableStream` instead of the deprecated `OpenAIStream` helper from the `ai` package.
Key Changes
- Updated the `ai` package from 3.1.12 to 5.0.97 (major version upgrade)
- Updated the `openai` package from 5.12.2 to 6.9.1 (major version upgrade)
- Updated `@react-spring/web` from 9.7.5 to 10.0.3 (major version upgrade)
- Replaced the `OpenAIStream` helper with a custom `ReadableStream` implementation (see the sketch after this list)
- Removed the text extraction regex that handled quoted strings and escape sequences
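For context, here is a minimal sketch of what the refactored streaming path looks like, assuming the standard `openai` v6 chat-completions streaming API. The `streamQuote` wrapper, model name, and response headers are illustrative rather than copied from the PR; the chunk handling and `fullCompletion` accumulator mirror the diff excerpts reviewed below.

```tsx
import OpenAI from "openai";

// Sketch only: stream chat-completion deltas through a native ReadableStream
// instead of the deprecated OpenAIStream helper from the `ai` package.
const openai = new OpenAI(); // assumes OPENAI_API_KEY is set in the environment

export async function streamQuote(prompt: string): Promise<Response> {
  const response = await openai.chat.completions.create({
    model: "gpt-4o-mini", // illustrative model choice
    stream: true,
    messages: [{ role: "user", content: prompt }],
  });

  const encoder = new TextEncoder(); // created once, outside the loop
  let fullCompletion = "";

  const stream = new ReadableStream<Uint8Array>({
    async start(controller) {
      try {
        for await (const chunk of response) {
          // Chunks without content (e.g. role-only deltas) are skipped.
          const content = chunk.choices[0]?.delta?.content;
          if (content) {
            fullCompletion += content;
            controller.enqueue(encoder.encode(content));
          }
        }
        controller.close();
      } catch (err) {
        controller.error(err);
      }
    },
  });

  return new Response(stream, {
    headers: { "Content-Type": "text/plain; charset=utf-8" },
  });
}
```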
Reviewed Changes
Copilot reviewed 2 out of 3 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| `package.json` | Updated dependency versions for `ai`, `openai`, `@react-spring/web`, and `@vercel/node`; reordered lint commands to run Biome before ESLint |
| `bun.lock` | Lockfile updates reflecting the new dependency versions and their transitive dependencies |
| `app/landing-page-quote.tsx` | Refactored from the `OpenAIStream` helper to a custom `ReadableStream` implementation with manual chunk processing; removed an unused import and the text extraction regex |
```tsx
try {
  for await (const chunk of response) {
    const content = chunk.choices[0]?.delta?.content;
    if (content) {
      fullCompletion += content;
      const encoder = new TextEncoder();
```
The `TextEncoder` is instantiated inside the loop for each chunk, which creates unnecessary object allocations during streaming. Move the encoder instantiation outside the loop, before the `for await` statement, to avoid the per-chunk allocation.
Example:

```tsx
const encoder = new TextEncoder();
for await (const chunk of response) {
  const content = chunk.choices[0]?.delta?.content;
  if (content) {
    fullCompletion += content;
    controller.enqueue(encoder.encode(content));
  }
}
```

Suggested change:

```diff
-try {
-  for await (const chunk of response) {
-    const content = chunk.choices[0]?.delta?.content;
-    if (content) {
-      fullCompletion += content;
-      const encoder = new TextEncoder();
+const encoder = new TextEncoder();
+try {
+  for await (const chunk of response) {
+    const content = chunk.choices[0]?.delta?.content;
+    if (content) {
+      fullCompletion += content;
```
```tsx
let fullCompletion = "";
const stream = new ReadableStream({
  async start(controller) {
```
The variable `fullCompletion` is declared in the outer scope but mutated inside the async stream handler. This pattern makes the variable's lifecycle and ownership unclear. Consider encapsulating this state within the `start` function's closure, or document why it is scoped outside if there is a specific reason (in this case there isn't one).
Suggested change:

```diff
-let fullCompletion = "";
-const stream = new ReadableStream({
-  async start(controller) {
+const stream = new ReadableStream({
+  async start(controller) {
+    let fullCompletion = "";
```
```tsx
const content = chunk.choices[0]?.delta?.content;
if (content) {
```
The code uses optional chaining (`chunk.choices[0]?.delta?.content`) but doesn't distinguish the case where `chunk.choices[0]` exists while `delta` or `content` is null or undefined; chunks without content are silently skipped. This may be intentional, since OpenAI streams can include chunks without content, but it would be clearer to add a comment explaining the behavior or to explicitly check `choices.length > 0` for readability.
Suggested change:

```diff
-const content = chunk.choices[0]?.delta?.content;
-if (content) {
+// OpenAI streams can include chunks without content; skip those.
+if (
+  Array.isArray(chunk.choices) &&
+  chunk.choices.length > 0 &&
+  chunk.choices[0].delta &&
+  typeof chunk.choices[0].delta.content === "string"
+) {
+  const content = chunk.choices[0].delta.content;
```
```diff
 return (
   <>
-    {extractedText}
+    {text}
```
The removal of the text extraction logic that previously parsed quoted strings and replaced `\n` with actual newlines may cause formatting issues. The old code handled escaped newlines from the AI response, but now raw text is displayed directly. If the OpenAI API returns escape sequences like `\n` as literal strings, they won't be converted to actual line breaks. Verify that the stream format has changed with the new `openai` package version, or restore the text processing logic if needed.
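If the deltas do still arrive with literal escape sequences, a small post-processing helper on the accumulated text could restore the old behavior. A minimal sketch, assuming the streamed text is available as a single string; the helper name `unescapeQuote` is illustrative, not from the PR:

```tsx
// Illustrative helper, not from the PR: strip a JSON-style quoted wrapper
// and convert literal escape sequences into real characters.
function unescapeQuote(raw: string): string {
  const trimmed = raw.trim();
  // If the model returned a quoted JSON string, parse it directly.
  if (trimmed.startsWith('"') && trimmed.endsWith('"')) {
    try {
      return JSON.parse(trimmed);
    } catch {
      // Malformed input: fall through to manual unescaping below.
    }
  }
  // Otherwise replace the common literal escapes the old regex handled.
  return raw.replace(/\\n/g, "\n").replace(/\\"/g, '"');
}
```

Rendering `{unescapeQuote(text)}` instead of `{text}` would reproduce the previous formatting without reintroducing the full extraction regex.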