How to emulate the same trace format in an experiment run via the SDK as in a Prompt Experiment in the UI. #11700
Replies: 5 comments 1 reply
-
The key difference is that Prompt Experiments run via the UI automatically execute prompts from Prompt Management and create detailed traces showing the system and user prompts (1). When using Experiments via the SDK, you define your own task function and control what gets traced (2). To emulate the same trace format as UI Prompt Experiments in SDK experiments, you need to instrument the LLM call inside your task function, for example:
```python
from langfuse import get_client
from langfuse.openai import OpenAI

langfuse = get_client()

def my_task(*, item, **kwargs):
    question = item["input"]
    response = OpenAI().chat.completions.create(
        model="gpt-4.1",
        messages=[{"role": "user", "content": question}]
    )
    return response.choices[0].message.content
```
If you're using a different LLM provider or framework, ensure you're using Langfuse's integration for that provider so the call is traced.
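To see why the import swap matters, here is a rough, hypothetical sketch of the wrapper pattern a tracing integration like langfuse.openai uses: the wrapped client intercepts each call, records the request kwargs (including the full messages list) on the trace, then delegates to the real client. The names below (traced, fake_create, captured_traces) are illustrative stand-ins, not Langfuse's actual implementation.

```python
# Minimal sketch of a tracing wrapper: capture the request payload,
# then delegate to the underlying (here faked) completion call.
captured_traces = []

def traced(create_fn):
    def wrapper(**kwargs):
        # Record the exact messages sent to the model, like a trace input.
        captured_traces.append({"input": kwargs.get("messages")})
        return create_fn(**kwargs)
    return wrapper

def fake_create(**kwargs):
    # Stand-in for the real OpenAI chat.completions.create call.
    return {"choices": [{"message": {"content": "hi"}}]}

create = traced(fake_create)
result = create(
    model="gpt-4.1",
    messages=[{"role": "user", "content": "What is Langfuse?"}],
)
```

With the plain openai client there is no such wrapper, so only whatever your task function explicitly logs (e.g. the dataset input) ends up on the trace.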
-
@inkeep def task(*, item, **kwargs): I am already doing that. Still, I can only see the dataset input, not the prompt and user messages.
-
The issue is likely that you're using the standard OpenAI client, so the call is not traced. Replace your OpenAI client initialization with:

```python
from langfuse.openai import OpenAI
```

Then in your task function, use:

```python
def task(*, item, **kwargs):
    # Get the prompt from Prompt Management
    prompt = self.langfuse.get_prompt(prompt_name)
    # Get input data
    input_data = item.input
    # Compile prompt with input data
    compiled_prompt = prompt.compile(**input_data)
    # Call the OpenAI API using the Langfuse-wrapped client,
    # linking the prompt to the generation
    response = OpenAI().chat.completions.create(
        model=self.model_name,
        messages=compiled_prompt,
        langfuse_prompt=prompt
    )
    return response.choices[0].message.content
```

The key difference is that the Langfuse-wrapped client records the full request, including the compiled system and user messages, on the trace. Additionally, ensure you have a proper OpenTelemetry setup if you're using the Python SDK v3, as tracing relies on it (2). Make sure to call langfuse.flush() at the end of short-lived scripts so all events are sent before the process exits.
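For clarity on why the compiled messages show up in the trace: for a chat prompt, prompt.compile(**input_data) substitutes the dataset item's variables into the stored message templates and returns the final messages list, which is exactly what the wrapped client sends and records. The snippet below is a simplified, hypothetical re-implementation of that substitution (the compile_chat_prompt helper is illustrative, not the Langfuse API):

```python
# Sketch of chat-prompt compilation: replace {{variables}} in each
# message template with values from the dataset item's input.
def compile_chat_prompt(template_messages, **variables):
    compiled = []
    for msg in template_messages:
        content = msg["content"]
        for name, value in variables.items():
            content = content.replace("{{" + name + "}}", str(value))
        compiled.append({"role": msg["role"], "content": content})
    return compiled

template = [
    {"role": "system", "content": "You are a {{persona}} assistant."},
    {"role": "user", "content": "{{question}}"},
]
messages = compile_chat_prompt(
    template, persona="helpful", question="What is tracing?"
)
```

Passing this compiled list as messages= is what makes the system and user prompts appear on the trace, just like in UI Prompt Experiments.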
-
@inkeep I am already using from langfuse.openai import OpenAI.
-
Hi @savitha-suresh, the LLM call in the task function you are passing to the experiment method needs to be instrumented. Is this the case? Could you share what your task function looks like?
-
Describe your question
When I run a dataset run from the UI, the trace shows the user prompt and system prompt. But when I run an experiment from the SDK, the trace shows only the dataset input, not the system and user prompts sent to the LLM. Can someone please explain how to emulate the traces and exact flow of a UI Prompt Experiment using the SDK?
https://langfuse.com/docs/evaluation/experiments/experiments-via-ui
Langfuse Cloud or Self-Hosted?
Langfuse Cloud
If Self-Hosted
No response
If Langfuse Cloud
No response
SDK and integration versions
No response
Pre-Submission Checklist