OpenAI introduced Structured Outputs, but Langfuse does not seem to log calls that use it. Or am I doing something wrong? See the attached image: the first completion appears in the trace, but the second does not.

import OpenAI from "openai";
import { z } from "zod";
import { zodResponseFormat } from "openai/helpers/zod";
import { Langfuse, observeOpenAI } from "langfuse";

const langfuse = new Langfuse();

export async function testLangfuse() {
  const trace = langfuse.trace({ name: "testLangfuse" });
  const span = trace.span({ name: "testLangfuseSpan" });
  const llm = observeOpenAI(
    new OpenAI({
      apiKey: process.env["OPENAI_API_KEY"],
    }),
    {
      parent: span,
      generationName: "testLangfuse",
    }
  );
  // This completion appears in the span as expected.
  const chatCompletion = await llm.chat.completions.create({
    messages: [{ role: "system", content: "Say this is a test!" }],
    model: "gpt-3.5-turbo",
    user: "langfuse",
    max_tokens: 300,
  });
  console.log("chatCompletion", chatCompletion);

  // Does not appear in the span. zodResponseFormat requires
  // gpt-4o-2024-08-06 or gpt-4o-mini.
  const GeneratedSchema = z.object({
    output: z.string(),
  });
  const chatCompletion2 = await llm.chat.completions.create({
    messages: [{ role: "system", content: "Say this is a test!" }],
    model: "gpt-4o-2024-08-06",
    user: "langfuse",
    max_tokens: 300,
    response_format: zodResponseFormat(GeneratedSchema, "output"),
  });
  console.log("chatCompletion2", chatCompletion2);
  console.log("TraceURL:", trace.getTraceUrl());
  langfuse.flush();
}
Replies: 3 comments 2 replies
Thanks for sharing. We are currently extending support in our Python libraries and will look into adding this in JS as well.
Issue to track this: #2862
Hi @fabstu - we have just released a new JS SDK version that parses the response_format into the generation's metadata. Also, please add await llm.flushAsync() to the end of your script to also flush the Langfuse client utilized by the OpenAI integration.
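A dependency-free sketch of why the explicit flush matters: observability SDKs like Langfuse typically queue events in memory and deliver them asynchronously in batches, so a short-lived script can exit before the queue drains and silently drop the last events. The EventBuffer class below is purely illustrative, not the Langfuse API; it only mimics the queue-then-flush pattern.

```typescript
// Illustrative event buffer: events are queued on log() and only "delivered"
// when flushAsync() runs, mirroring how batching SDK clients behave.
class EventBuffer {
  private queue: string[] = [];
  sent: string[] = [];

  log(event: string): void {
    this.queue.push(event); // buffered, not yet delivered
  }

  async flushAsync(): Promise<void> {
    // Deliver everything still sitting in the queue.
    this.sent.push(...this.queue);
    this.queue = [];
  }
}

async function main(): Promise<void> {
  const buffer = new EventBuffer();
  buffer.log("chatCompletion");
  buffer.log("chatCompletion2");
  // Without awaiting a flush here, the process could exit with both
  // events still queued and nothing would reach the backend.
  await buffer.flushAsync();
  console.log(buffer.sent.length); // 2
}

main();
```

The same reasoning applies to the script in the question: it creates two clients that buffer events (the manually constructed langfuse instance and the one inside the OpenAI integration), so both need to be flushed before the function returns.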