how to return trace_id in openai completion response when using langfuse as a proxy in litellm? #2933
Hi everyone, I'm using the OpenAI Completion API with Langfuse as the callback, via the LiteLLM proxy. I have included a `trace_id` in my request, but I'm not seeing the `trace_id` in the response. I expected it to be echoed back for tracking purposes. Here's a summary of what I've done:

Request: I'm passing the `trace_id` as part of the request payload.

Thanks in advance for your help!
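For context, this is roughly the shape of payload involved. This is a minimal sketch, assuming LiteLLM's convention of forwarding a top-level `metadata` object to its Langfuse callback (the helper name and field placement are illustrative, not an official API):

```python
import json

def build_request_body(trace_id: str) -> dict:
    """Build a chat-completion payload for a LiteLLM proxy.

    Assumption: LiteLLM forwards the top-level `metadata` object to its
    Langfuse callback, and `trace_id` inside it names the Langfuse trace.
    """
    return {
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Hello"}],
        "metadata": {"trace_id": trace_id},
    }

body = build_request_body("my-trace-id")
print(json.dumps(body["metadata"]))  # {"trace_id": "my-trace-id"}
```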
Replies: 2 comments 3 replies
Yes, you can set a custom Langfuse trace ID via the `metadata` parameter. Here is an example of how to do this:

```python
from litellm import completion

# set custom langfuse trace params and generation params
response = completion(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Hi 👋 - i'm openai"}
    ],
    metadata={
        "trace_id": "trace-id",  # set langfuse Trace ID
    },
)
print(response)
```

Additionally, if you are on the JS/TS SDK, ensure that you wrap the client with `observeOpenAI`:

```typescript
const client = observeOpenAI(openai, { generationName: "your-generation-name" });

const res = await client.chat.completions.create({
  model: "gpt-3.5-turbo",
  messages: [{ role: "user", content: "Explain how to assemble a PC" }],
  metadata: { trace_id: "trace-id" },
  user: "langfuse-user@gmail.com",
  max_tokens: 300,
});

await client.flushAsync();

const response = await getGeneration("your-generation-name");
const traceId = response.data.data[0].traceId;
console.log(traceId);
```

To further verify that the trace ID is recorded, you can fetch the generation back from the Langfuse API, as in this test:

```python
def test_openai_chat_completion_with_trace():
    api = get_api()
    generation_name = create_uuid()
    trace_id = create_uuid()
    langfuse = Langfuse()
    langfuse.trace(id=trace_id)
    chat_func(
        name=generation_name,
        model="gpt-3.5-turbo",
        trace_id=trace_id,
        messages=[{"role": "user", "content": "1 + 1 = "}],
        temperature=0,
        metadata={"someKey": "someResponse"},
    )
    openai.flush_langfuse()
    generation = api.observations.get_many(name=generation_name, type="GENERATION")
    assert len(generation.data) != 0
    assert generation.data[0].name == generation_name
    assert generation.data[0].trace_id == trace_id
```

This will ensure that the `trace_id` you pass in the request is attached to the generation in Langfuse.
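Note that the OpenAI-compatible completion response itself does not carry the trace ID back; since you choose the ID when you send the request, you can generate and keep it client-side. A minimal sketch (the Langfuse host and the trace URL format are assumptions for illustration; check your project for the real link format):

```python
import uuid

LANGFUSE_HOST = "https://cloud.langfuse.com"  # assumed default host

def new_trace() -> tuple[str, str]:
    """Generate a client-side trace id and an (assumed) link to view it.

    Because the caller picks the id, there is no need for the completion
    response to echo it back -- log it alongside the request instead.
    """
    trace_id = uuid.uuid4().hex
    url = f"{LANGFUSE_HOST}/trace/{trace_id}"  # URL shape is an assumption
    return trace_id, url

trace_id, url = new_trace()
metadata = {"trace_id": trace_id}  # pass this with the completion request
```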
I'm unsure whether I understand your problem. Do you use the LiteLLM proxy, or how exactly do you integrate with Langfuse? Please provide some code. Happy to have a look.
Have a look here for the different options that are available: https://github.com/BerriAI/litellm/blob/main/litellm/integrations/langfuse.py
I think it is `trace_id` and `existing_trace_id`.
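A small helper sketch of how the two metadata keys might be chosen. The behaviour is assumed from the linked integration source: `trace_id` names a new trace, while `existing_trace_id` attaches the generation to a trace created beforehand.

```python
import uuid

def langfuse_metadata(trace_id: str, attach_to_existing: bool = False) -> dict:
    """Pick between the two metadata keys LiteLLM's Langfuse callback reads.

    Assumption from the integration source: `trace_id` creates a trace with
    this id, `existing_trace_id` attaches to an already-created trace.
    """
    key = "existing_trace_id" if attach_to_existing else "trace_id"
    return {key: trace_id}

tid = str(uuid.uuid4())
meta = langfuse_metadata(tid, attach_to_existing=True)
# litellm.completion(model="gpt-3.5-turbo", messages=[...], metadata=meta)
```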