Commit c4f0bf1: Lotte/gtm 1790 trace inputoutput missing troubleshooting (#2461)

* Updated page
* add link in document
* small change

1 file changed: pages/faq/all/empty-trace-input-and-output.mdx (+348, -24 lines)
---
description: This article explains why the input and output of a trace might be
tags: [observability, observability-get-started]
---

# Why are the input and output of my trace empty?

Having input and output on your traces affects several important features:

- **Browsing traces**: The traces list shows input/output previews, making it easier to find what you're looking for
- **Evaluations**: LLM-as-a-judge evaluators use trace input/output to assess quality. Empty fields mean evaluations won't work
- **Search**: You can search traces by their input/output content, but only if it's populated

## How trace input/output works

Before jumping into solutions, it helps to understand how Langfuse populates trace input/output.

### Traces and observations

In Langfuse, a **trace** represents a complete request or operation. Inside each trace, you have **observations**.

```
Trace: "User asks about weather"
├── Observation: Parse user intent
├── Observation: Call weather API
└── Observation: Generate response (LLM call)
```

Both traces and observations can have their own input/output fields. They serve different purposes:

|   | Trace input/output | Observation input/output |
|---|---|---|
| **What it represents** | The overall request and response | Each individual step |
| **Where you see it** | Traces list, evaluations | Inside the trace detail view |
| **How it's set** | Inherited from root observation OR set explicitly | Set on each observation |

### The "root observation" rule

By default, trace input/output is copied from the observation at the top level (called the "root observation").

This means:

- If your root observation has input/output → the trace will too
- If your root observation has no input/output → your trace will be empty (unless you set it explicitly, see [below](#solution-b))

This is why your trace might show empty fields even though you can see data in the individual observations inside it.
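
The fallback rule can be sketched in plain Python (an illustrative model only; `resolve_trace_io` is a hypothetical helper, not Langfuse's actual implementation):

```python
from typing import Any, Optional

def resolve_trace_io(
    explicit_trace_io: Optional[Any],
    root_observation_io: Optional[Any],
) -> Optional[Any]:
    """Illustrative fallback: an explicitly set trace value wins,
    otherwise the root observation's value is inherited."""
    if explicit_trace_io is not None:
        return explicit_trace_io
    return root_observation_io  # may be None -> trace shows empty

# Root observation has output, so the trace inherits it:
print(resolve_trace_io(None, {"answer": "Sunny"}))  # {'answer': 'Sunny'}
# Neither is set, so the trace field stays empty:
print(resolve_trace_io(None, None))                 # None
```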

## Troubleshooting

The most common reasons for empty trace input/output are:

### 1. You're using a short-lived application (scripts, serverless, notebooks)

**Symptoms**: Traces sometimes appear with missing data, or don't appear at all.

[Langfuse sends data in the background to keep your application fast](/docs/observability/data-model#background-processing). If your script or serverless function exits before the data is sent, it gets lost.
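
Conceptually, that failure mode looks like this (a simplified, stdlib-only sketch of background batching; `BackgroundSender` is hypothetical and not Langfuse's actual internals):

```python
import queue
import threading
import time

class BackgroundSender:
    """Buffers events and ships them from a worker thread."""

    def __init__(self):
        self.buffer: "queue.Queue[dict]" = queue.Queue()
        self.sent: list[dict] = []
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def _run(self):
        while True:
            event = self.buffer.get()
            time.sleep(0.05)  # simulated network latency
            self.sent.append(event)
            self.buffer.task_done()

    def track(self, event: dict):
        self.buffer.put(event)  # returns immediately; send happens later

    def flush(self):
        self.buffer.join()  # block until every buffered event is sent

sender = BackgroundSender()
sender.track({"trace": "t1", "input": "hello"})

# Without flush(), a short-lived script can exit here while the daemon
# worker still holds the event -> the trace never arrives.
sender.flush()
print(len(sender.sent))  # 1
```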

**Solution**: Call `flush()` before your application exits.

<Tabs items={["Python", "JS/TS"]}>
<Tab>
```python
from langfuse import get_client

langfuse = get_client()

# Your code here...

# Before your script ends:
langfuse.flush()
```

If using the `@observe()` decorator:

```python
from langfuse import observe, get_client

@observe()
def main():
    # Your code here...
    pass

main()
get_client().flush()
```
</Tab>
<Tab>
```typescript
import { Langfuse } from "langfuse";

const langfuse = new Langfuse();

async function main() {
  // Your code here...
}

// Ensure data is sent before exit
main().finally(() => langfuse.shutdown());
```
</Tab>
</Tabs>

---

### 2. You haven't set input/output on your root span

**Symptoms**: Trace input/output is always empty, but observations inside the trace have data.

If you're manually creating spans, you need to either:

- Set input/output on your root span, OR
- Explicitly set input/output on the trace itself

**Solution A**: Set input/output on your root span

<Tabs items={["Python", "JS/TS"]}>
<Tab>
```python
from langfuse import get_client

langfuse = get_client()

with langfuse.start_as_current_observation(
    as_type="span",
    name="my-pipeline"
) as root_span:
    user_input = "What's the weather like?"
    result = process_request(user_input)

    # Set input/output on the root span
    # This will automatically populate the trace
    root_span.update(
        input={"query": user_input},
        output={"response": result}
    )
```
</Tab>
<Tab>
```typescript
import { Langfuse } from "langfuse";

const langfuse = new Langfuse();

const trace = langfuse.trace({ name: "my-pipeline" });
const span = trace.span({ name: "process-request" });

const userInput = "What's the weather like?";
const result = await processRequest(userInput);

// Set input/output on the span
span.update({
  input: { query: userInput },
  output: { response: result },
});

span.end();
```
</Tab>
</Tabs>

<span id="solution-b"></span>

**Solution B**: Set input/output directly on the trace

Sometimes the trace input/output should be different from any observation (e.g., you want a clean summary). You can set it explicitly:

<Tabs items={["Python", "JS/TS"]}>
<Tab>
```python
from langfuse import get_client

langfuse = get_client()

with langfuse.start_as_current_observation(
    as_type="span",
    name="my-pipeline"
) as root_span:
    user_input = "What's the weather like?"
    result = process_request(user_input)

    # Set trace input/output explicitly (separate from span)
    root_span.update_trace(
        input={"user_question": user_input},
        output={"answer": result}
    )
```
</Tab>
<Tab>
```typescript
import { Langfuse } from "langfuse";

const langfuse = new Langfuse();

const trace = langfuse.trace({ name: "my-pipeline" });

const userInput = "What's the weather like?";
const result = await processRequest(userInput);

// Set input/output directly on the trace
trace.update({
  input: { user_question: userInput },
  output: { answer: result },
});
```
</Tab>
</Tabs>

---

### 3. You're using the `@observe()` decorator but input/output capture is disabled

**Symptoms**: Decorated functions don't show input/output, even though they return values.

The `@observe()` decorator [automatically captures function arguments as input and return values as output](/docs/observability/sdk/instrumentation#observe-wrapper). But this can be disabled.

**Check if capture is disabled**:

```bash
# Check your environment variables
echo $LANGFUSE_OBSERVE_DECORATOR_IO_CAPTURE_ENABLED
```

If this is set to `false`, input/output won't be captured.

**Solution**: Enable capture

```bash
# In your environment
export LANGFUSE_OBSERVE_DECORATOR_IO_CAPTURE_ENABLED=true
```

Or enable it per-function:

```python
from langfuse import observe

@observe(capture_input=True, capture_output=True)
def my_function(data):
    return process(data)
```

---

### 4. You're using an OpenTelemetry-based integration

**Symptoms**: Traces from OTEL integrations (OpenLLMetry, Logfire, etc.) show empty input/output.

Different OpenTelemetry providers use different attribute names for input/output. Langfuse looks for specific attributes and may not find them if your provider uses different names.

**Solution**: Set the attributes Langfuse expects

Langfuse maps these OTEL span attributes to observation input/output (checked in this order):

| Observation field | OTEL attributes (in priority order) |
|-------------------|-------------------------------------|
| `input` | `langfuse.observation.input`, `gen_ai.prompt`, `input.value`, `mlflow.spanInputs` |
| `output` | `langfuse.observation.output`, `gen_ai.completion`, `output.value`, `mlflow.spanOutputs` |

For trace-level input/output, Langfuse looks for:

- `langfuse.trace.input` / `langfuse.trace.output`, OR
- The root span's observation input/output (using the attributes above)

See the [complete property mapping reference](/integrations/native/opentelemetry#property-mapping) for all supported attributes.
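
The priority order amounts to a first-match lookup over the span's attributes. A minimal sketch (illustrative only; `resolve_observation_input` is not part of any SDK):

```python
from typing import Any, Optional

# Priority order from the mapping table above
INPUT_KEYS = [
    "langfuse.observation.input",
    "gen_ai.prompt",
    "input.value",
    "mlflow.spanInputs",
]

def resolve_observation_input(attributes: dict[str, Any]) -> Optional[Any]:
    """Return the first matching attribute in priority order."""
    for key in INPUT_KEYS:
        if key in attributes:
            return attributes[key]
    return None  # no recognized attribute -> input stays empty

# An OpenInference-style span is picked up via "input.value":
print(resolve_observation_input({"input.value": "What's the weather?"}))
# A span with only unrecognized attribute names resolves to None:
print(resolve_observation_input({"my.custom.input": "..."}))  # None
```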

**Example**: Manually set the attributes Langfuse recognizes:

<Tabs items={["Python", "JS/TS"]}>
<Tab>
```python
from opentelemetry import trace
import json

tracer = trace.get_tracer(__name__)

with tracer.start_as_current_span("my-operation") as span:
    # Set attributes that Langfuse recognizes
    span.set_attribute("input.value", str(input_data))
    span.set_attribute("output.value", str(output_data))

    # Or use the langfuse namespace for guaranteed mapping
    span.set_attribute("langfuse.observation.input", json.dumps(input_data))
    span.set_attribute("langfuse.observation.output", json.dumps(output_data))
```
</Tab>
<Tab>
```typescript
import { trace } from "@opentelemetry/api";

const tracer = trace.getTracer("my-service");

tracer.startActiveSpan("my-operation", (span) => {
  // Set attributes that Langfuse recognizes
  span.setAttribute("input.value", JSON.stringify(inputData));
  span.setAttribute("output.value", JSON.stringify(outputData));

  // Or use the langfuse namespace for guaranteed mapping
  span.setAttribute("langfuse.observation.input", JSON.stringify(inputData));
  span.setAttribute("langfuse.observation.output", JSON.stringify(outputData));

  span.end();
});
```
</Tab>
</Tabs>

<details>
<summary>**Which attributes does my OTEL provider use?**</summary>

Different providers use different semantic conventions. Here's how to find out what your provider sends:

1. **Enable debug logging** in your OTEL exporter to see the raw span attributes
2. **Check the trace in Langfuse**: Open a trace, click on an observation, and look at the "Metadata" tab to see which attributes were received
3. **Consult your provider's documentation** for their semantic conventions

Common providers and their conventions:

- **OpenLLMetry**: Uses `gen_ai.prompt` and `gen_ai.completion`
- **OpenInference**: Uses `input.value` and `output.value`
- **MLflow**: Uses `mlflow.spanInputs` and `mlflow.spanOutputs`
- **Pydantic Logfire**: Uses custom attributes (Langfuse has specific support since PR #5841)

If your provider uses different attribute names, you have two options:

1. Manually set the attributes Langfuse expects (as shown above)
2. Open a [GitHub issue](https://github.com/langfuse/langfuse/issues) requesting support for your provider's conventions

</details>

Many OTEL-specific issues have been fixed in recent Langfuse versions. If you're self-hosting, make sure you're on the latest version.

---

## Still having issues?

If none of the above solutions work:

1. **Enable debug logging** to see what's being sent:

<Tabs items={["Python", "JS/TS"]}>
<Tab>
```python
from langfuse import Langfuse

langfuse = Langfuse(debug=True)
```
</Tab>
<Tab>
```typescript
const langfuse = new Langfuse({ debug: true });
```
</Tab>
</Tabs>

2. **Check your SDK version** and update if needed:

<Tabs items={["Python", "JS/TS"]}>
<Tab>
```bash
pip install --upgrade langfuse
```
</Tab>
<Tab>
```bash
npm update langfuse
```
</Tab>
</Tabs>

3. **Look at the trace structure** in the Langfuse dashboard:
   - Open a trace and look at the observation tree
   - Is there a single root observation?
   - Do the individual observations have input/output?

4. **Ask for help**: Open a [GitHub discussion](https://github.com/langfuse/langfuse/discussions) with:
   - Your SDK version
   - A code snippet showing how you're creating traces
   - A screenshot of the trace structure in the dashboard
