Update dependencies and refactor OpenAI stream handling #216
Changes from all commits
```diff
@@ -1,7 +1,6 @@
 import { documentToHtmlString } from "@contentful/rich-text-html-renderer";
 import { RiOpenaiFill } from "@remixicon/react";
 import { kv } from "@vercel/kv";
-import { OpenAIStream } from "ai";
 import type { AllInOnePageQuery } from "gql/graphql";
 import Link from "next/link";
 import OpenAi from "openai";

@@ -84,12 +83,26 @@ Strikte Regeln
   ],
 });

-// Convert the response into a friendly text-stream
-// @ts-expect-error TODO: migrate to new OpenAIStream API
-const stream = OpenAIStream(response, {
-  async onCompletion(completion) {
-    await kv.set(cacheKey, completion);
-    await kv.expire(cacheKey, 60 * 60);
+// Convert OpenAI stream to ReadableStream and handle completion callback
+let fullCompletion = "";
+const stream = new ReadableStream({
+  async start(controller) {
+    try {
+      for await (const chunk of response) {
+        const content = chunk.choices[0]?.delta?.content;
+        if (content) {
```
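Putting the pieces of the new hunk together, the refactored handler presumably works along the lines of the sketch below. The function name `toTextStream`, the `onCompletion` callback, and the mock chunk iterable are illustrative stand-ins (not from the PR); `onCompletion` takes the place of the `kv.set`/`kv.expire` caching that the removed `OpenAIStream` callback performed.

```javascript
// Sketch of the refactored stream handling, assuming the OpenAI SDK's
// async-iterable chunk shape: { choices: [{ delta: { content } }] }.
// `mockResponse` stands in for the real OpenAI streaming response.
async function* mockResponse() {
  yield { choices: [{ delta: { content: "Hello, " } }] };
  yield { choices: [{ delta: {} }] }; // chunk without content is skipped
  yield { choices: [{ delta: { content: "world!" } }] };
}

function toTextStream(response, onCompletion) {
  let fullCompletion = "";
  const encoder = new TextEncoder(); // created once, outside the loop
  return new ReadableStream({
    async start(controller) {
      try {
        for await (const chunk of response) {
          const content = chunk.choices[0]?.delta?.content;
          if (content) {
            fullCompletion += content;
            controller.enqueue(encoder.encode(content));
          }
        }
        await onCompletion(fullCompletion); // e.g. cache the full answer
        controller.close();
      } catch (err) {
        controller.error(err);
      }
    },
  });
}

// Usage: collect the stream back into a string to observe the behaviour.
(async () => {
  const cache = new Map(); // stands in for @vercel/kv
  const stream = toTextStream(mockResponse(), async (completion) => {
    cache.set("cacheKey", completion);
  });
  const decoder = new TextDecoder();
  let out = "";
  for await (const part of stream) out += decoder.decode(part);
  console.log(out); // "Hello, world!"
  console.log(cache.get("cacheKey")); // "Hello, world!"
})();
```

This mirrors the diff's structure: the completion callback that `OpenAIStream` used to provide is now an explicit step after the `for await` loop.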
Comment on lines +92 to +93:

```diff
-const content = chunk.choices[0]?.delta?.content;
-if (content) {
+// OpenAI streams can include chunks without content; skip those.
+if (
+  Array.isArray(chunk.choices) &&
+  chunk.choices.length > 0 &&
+  chunk.choices[0].delta &&
+  typeof chunk.choices[0].delta.content === "string"
+) {
+  const content = chunk.choices[0].delta.content;
```
Copilot AI · Nov 20, 2025
The `TextEncoder` is instantiated inside the loop for each chunk. This creates unnecessary object allocations during streaming. Move the encoder instantiation outside the loop, before the `for await` statement, to improve performance.

Example:

```js
const encoder = new TextEncoder();
for await (const chunk of response) {
  const content = chunk.choices[0]?.delta?.content;
  if (content) {
    fullCompletion += content;
    controller.enqueue(encoder.encode(content));
  }
}
```

Suggested change:

```diff
+const encoder = new TextEncoder();
 try {
   for await (const chunk of response) {
     const content = chunk.choices[0]?.delta?.content;
     if (content) {
       fullCompletion += content;
-      const encoder = new TextEncoder();
```
Copilot AI · Nov 20, 2025
The removal of the text extraction logic that previously parsed quoted strings and replaced `\n` with actual newlines may cause formatting issues. The old code handled escaped newlines from the AI response, but now the raw text is displayed directly. If the OpenAI API returns escape sequences such as `\n` as literal strings, they won't be converted to actual line breaks. Verify that the stream format has changed with the new `openai` package version, or restore the text-processing logic if needed.
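If the new stream does turn out to deliver literal escape sequences, a minimal guard could unescape them before the text is enqueued. The helper name below is hypothetical (not from the PR), and only the `\n` case the comment mentions is handled:

```javascript
// Hypothetical helper: convert literal "\n" escape sequences in a streamed
// string into real newlines, as the removed extraction logic reportedly did.
function unescapeNewlines(text) {
  return text.replace(/\\n/g, "\n");
}

// A string containing a literal backslash-n becomes a two-line string.
console.log(unescapeNewlines("line1\\nline2")); // prints "line1" and "line2" on separate lines
```

Whether this is needed at all depends on the stream format of the new `openai` package version, which the comment asks to verify first.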
The variable `fullCompletion` is declared in the outer scope but mutated inside the async stream handler. This pattern makes the variable's lifecycle and ownership unclear. Consider encapsulating this state within the `start` function's closure, or documenting why it's scoped outside if there's a specific reason (though in this case, it doesn't need to be).
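Encapsulating the accumulator as this comment suggests could look like the sketch below. The factory name `createCompletionStream` and the `onCompletion` callback are hypothetical stand-ins; `onCompletion` represents the kv caching step from the PR:

```javascript
// Sketch: the accumulator lives entirely inside start(), so there is no
// outer mutable state whose ownership a reader has to puzzle over.
function createCompletionStream(response, onCompletion) {
  const encoder = new TextEncoder(); // instantiated once, per the review
  return new ReadableStream({
    async start(controller) {
      let fullCompletion = ""; // scoped to start(): clear lifecycle
      try {
        for await (const chunk of response) {
          const content = chunk.choices[0]?.delta?.content;
          if (content) {
            fullCompletion += content;
            controller.enqueue(encoder.encode(content));
          }
        }
        await onCompletion(fullCompletion); // caching happens here
        controller.close();
      } catch (err) {
        controller.error(err);
      }
    },
  });
}
```

Because the completion is only consumed by the callback after the loop finishes, nothing outside `start` ever needs to read the variable, which is why the outer-scope declaration in the PR is unnecessary.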