Description
Is there an example of how the AI SDK and Trigger.dev can be used together, with proper logging of long prompts?
When using Vercel AI SDK's experimental_telemetry feature in Trigger.dev v4 tasks, OpenTelemetry span attributes are being truncated despite setting the recommended environment variables (TRIGGER_OTEL_SPAN_ATTRIBUTE_VALUE_LENGTH_LIMIT, etc.) to very high values.
The attributes containing LLM messages, responses, and tool call data are cut off, making it difficult to debug AI-powered workflows.
Environment

- Trigger.dev Version: @trigger.dev/[email protected]
- AI SDK Version: [email protected]
- Runtime: Node.js
- Deployment: Trigger.dev Cloud
Environment Variables Set

Set in Trigger.dev project environment variables (synced from Vercel):

```
OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT=1048576
TRIGGER_OTEL_SPAN_ATTRIBUTE_VALUE_LENGTH_LIMIT=1048576
TRIGGER_OTEL_LOG_ATTRIBUTE_VALUE_LENGTH_LIMIT=1048576
```

Confirmed these are present in the task runtime via:

```ts
console.log("OTEL limits:", {
  standard: process.env.OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT,
  trigger_span: process.env.TRIGGER_OTEL_SPAN_ATTRIBUTE_VALUE_LENGTH_LIMIT,
  trigger_log: process.env.TRIGGER_OTEL_LOG_ATTRIBUTE_VALUE_LENGTH_LIMIT,
});
// Output: all three show "1048576"
```

Task Code Example
```ts
import { task } from "@trigger.dev/sdk";
import { generateText } from "ai";
import { anthropic } from "@ai-sdk/anthropic";

export const analyzeChart = task({
  id: "analyze-chart",
  run: async (payload) => {
    const result = await generateText({
      model: anthropic("claude-sonnet-4-5"),
      prompt: "Analyze this large dataset...", // Long prompt with lots of context
      experimental_telemetry: {
        isEnabled: true,
        functionId: "analyze-chart",
        recordInputs: true,
        recordOutputs: true,
      },
    });
    return result.text;
  },
});
```

Expected Behavior
With TRIGGER_OTEL_SPAN_ATTRIBUTE_VALUE_LENGTH_LIMIT=1048576 (1MB), span attributes in the Trigger.dev trace view should contain:
- Full input prompts (up to 1MB)
- Full output text (up to 1MB)
- Complete tool call arguments and results (up to 1MB)
According to Trigger.dev's changelog, these limits were increased from 1,024 bytes to 131,072 bytes (128KB) and made fully configurable.
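For context on what truncation at these limits looks like, OpenTelemetry-style attribute limits are applied by slicing string values to the configured length. A minimal self-contained sketch of that behavior (an illustration, not Trigger.dev's actual implementation; treating a limit of 0 or less as "no limit" is an assumption here):

```typescript
// Illustration of OpenTelemetry-style attribute value truncation.
// Assumption: a limit <= 0 means "no limit"; the real SDK's handling
// of 0 may differ.
function truncateAttribute(value: string, limit: number): string {
  if (limit <= 0 || value.length <= limit) return value;
  return value.slice(0, limit);
}

const prompt = "x".repeat(200_000); // a ~200KB prompt

console.log(truncateAttribute(prompt, 1024).length);      // 1024 — the old 1KB default
console.log(truncateAttribute(prompt, 1_048_576).length); // 200000 — a 1MB limit keeps it whole
```

The behavior we observe matches the first case (the old 1KB default), not the second, despite the configured 1MB limit.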
Actual Behavior

Span attributes are truncated at what appears to be a much lower limit (~1,024 bytes), which makes debugging LLM conversations extremely difficult.

In Trigger.dev Trace View

- ✅ `ai.generateText` span appears correctly
- ✅ `ai.toolCall` spans appear correctly
- ✅ Span hierarchy is correct
- ❌ Attribute values are truncated (messages cut off mid-sentence)
- ❌ No indication that the 1MB limit is being respected
What We've Tried

1. Set Standard + Trigger.dev Environment Variables

```
OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT=1048576
TRIGGER_OTEL_SPAN_ATTRIBUTE_VALUE_LENGTH_LIMIT=1048576
TRIGGER_OTEL_LOG_ATTRIBUTE_VALUE_LENGTH_LIMIT=1048576
```

Result: still truncated at ~1KB.

2. Increased to 10MB

```
TRIGGER_OTEL_SPAN_ATTRIBUTE_VALUE_LENGTH_LIMIT=10485760
```

Result: no change, still truncated at the same point.

3. Set to Unlimited (0)

```
TRIGGER_OTEL_SPAN_ATTRIBUTE_VALUE_LENGTH_LIMIT=0
```

Result: no change, still truncated.

4. Verified Environment Variables Are Present

Added logging to confirm the env vars are loaded:

```ts
console.log("Limits:", process.env.TRIGGER_OTEL_SPAN_ATTRIBUTE_VALUE_LENGTH_LIMIT);
// Output: "1048576" ✅
```

5. Redeployed Multiple Times

- Deleted and recreated environment variables
- Redeployed task code
- Waited for changes to propagate

Result: no change.

6. Specified All Environment Variables from the Changelog

Set every variable listed at https://trigger.dev/changelog/increased-otel-attribute-limits, but still no change.
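As a stopgap while this is unresolved, one workaround we've considered is chunking long values so each piece fits under the observed ~1KB cap and logging the pieces separately. This is a hypothetical helper of our own, not something Trigger.dev provides:

```typescript
// Hypothetical workaround: split a long string into pieces that each
// fit under the observed ~1KB attribute cap, so they can be emitted as
// separate log lines without being truncated mid-sentence.
function chunkForLogging(value: string, maxLen = 1000): string[] {
  const chunks: string[] = [];
  for (let i = 0; i < value.length; i += maxLen) {
    chunks.push(value.slice(i, i + maxLen));
  }
  return chunks;
}

const longOutput = "y".repeat(2500);
const parts = chunkForLogging(longOutput);
console.log(parts.length);               // 3
console.log(parts.map((p) => p.length)); // [1000, 1000, 500]
```

This loses the convenience of seeing the full conversation on the AI SDK spans themselves, which is why a real fix for the limits would be preferable.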
Questions

Ideally, a working example would be best. If not, answers to these questions could help us debug:

1. Are the `TRIGGER_OTEL_*` environment variables actually being applied to the TracerProvider?
   - We can see they're set in `process.env`, but are they being read during OpenTelemetry SDK initialization?
   - Is there a specific timing requirement (must they be set before SDK initialization)?
2. Does Trigger.dev's TracerProvider initialization happen before or after environment variables are loaded?
   - Could there be a race condition where the provider is initialized with defaults before our env vars are available?
3. Is there a different configuration required for AI SDK spans specifically?
   - Do spans created by external libraries (like the AI SDK) inherit the same limits?
   - Is there additional configuration needed for third-party instrumentation?
4. Can we verify the actual limits being applied?
   - Is there a way to inspect the TracerProvider configuration at runtime?
   - Can we log the actual `spanLimits` being used?
5. Is the truncation happening at span creation or at span export?
   - Are attributes truncated when the AI SDK creates spans?
   - Or during batch processing/export to the Trigger.dev backend?
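One diagnostic that would answer question 5: inspect attribute lengths on finished spans in-process and compare them with what the trace view shows. Sketched below with a local stand-in type; in a real project the `ReadableSpan`/`SpanProcessor` types come from `@opentelemetry/sdk-trace-base`, and whether Trigger.dev exposes a hook to register such a processor is part of what we're asking:

```typescript
// Local stand-in for the parts of ReadableSpan this sketch needs.
interface ReadableSpanLike {
  name: string;
  attributes: Record<string, unknown>;
}

// Report the length of every string attribute on a finished span.
// If lengths are already ~1024 here, truncation happens at span
// creation; if they are full-length here but short in the trace view,
// it happens during export or ingestion.
function attributeLengths(span: ReadableSpanLike): Record<string, number> {
  const lengths: Record<string, number> = {};
  for (const [key, value] of Object.entries(span.attributes)) {
    if (typeof value === "string") lengths[key] = value.length;
  }
  return lengths;
}

console.log(
  attributeLengths({
    name: "ai.generateText",
    attributes: { "ai.prompt": "x".repeat(5000), "gen_ai.system": "anthropic" },
  }),
); // → { "ai.prompt": 5000, "gen_ai.system": 9 }
```

If there's a supported way to plug something like this into Trigger.dev's TracerProvider, that alone would let us localize the truncation.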