This project demonstrates how to integrate Trigger.dev with LangSmith for AI telemetry and observability using the LangSmith SDK's OTEL integration. It includes a sample task that generates text with OpenAI's GPT-4o model via AI SDK 4 while capturing comprehensive telemetry data in LangSmith.
- Trigger.dev Integration: Background task processing with retry logic and scaling
- LangSmith SDK Integration: Native LangSmith SDK for enhanced telemetry and tracing
- AI SDK 4: Modern AI SDK for OpenAI model interactions
- OpenAI Integration: Text generation using GPT-4o model
- OTEL Telemetry: OpenTelemetry-based observability with LangSmith
- TypeScript Support: Fully typed codebase for better development experience
Before getting started, make sure you have:
- Node.js (version 18 or higher)
- npm or yarn package manager
- Trigger.dev account - Sign up here
- OpenAI API key - Get one here
- LangSmith account - Sign up here
```bash
# Install dependencies
npm install
```
Create a `.env` file in the root directory with the following variables:
```bash
# Trigger.dev
TRIGGER_PROJECT_REF=your_trigger_project_ref_here

# OpenAI
OPENAI_API_KEY=your_openai_api_key_here

# LangSmith
LANGSMITH_API_KEY=your_langsmith_api_key_here
LANGSMITH_PROJECT=your_project_name_here
```
Trigger.dev:
- Go to your Trigger.dev dashboard
- Navigate to your project
- Copy the "Project Reference" (starts with
proj_
) from the project settings
OpenAI:
- Visit OpenAI API Keys
- Create a new secret key
- Copy the key (starts with `sk-`)
LangSmith:
- Go to LangSmith Settings
- Create an API key in the "API Keys" section
- Create a project or use an existing one
Start the development server:
```bash
npm run dev
```
This will:
- Start the Trigger.dev development environment
- Watch for file changes and reload automatically
- Connect to your Trigger.dev project
- Enable local task execution
Once the development server is running, you can trigger the task:
- Via Trigger.dev Dashboard:
  - Go to your project dashboard
  - Find the "test-telemetry" task
  - Click "Test" and provide a payload like `{"input": "test message"}`
- Via API (using curl):

  ```bash
  curl -X POST "https://api.trigger.dev/api/v1/tasks/test-telemetry/trigger" \
    -H "Authorization: Bearer YOUR_TRIGGER_SECRET_KEY" \
    -H "Content-Type: application/json" \
    -d '{"input": "Hello from API"}'
  ```
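You can also trigger the task from your own backend code with the Trigger.dev SDK. A minimal sketch, assuming the task is exported as `testTelemetry` from `src/trigger/task.ts` and `TRIGGER_SECRET_KEY` is set in the calling process:

```ts
import { tasks } from "@trigger.dev/sdk/v3";
import type { testTelemetry } from "./src/trigger/task";

// Typed trigger: the payload is checked against the task's input type.
const handle = await tasks.trigger<typeof testTelemetry>("test-telemetry", {
  input: "Hello from the SDK",
});
console.log(`Triggered run ${handle.id}`);
```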
```
├── src/
│   └── trigger/
│       └── task.ts       # Main task definition
├── trigger.config.ts     # Trigger.dev configuration
├── package.json          # Dependencies and scripts
├── tsconfig.json         # TypeScript configuration
├── .env                  # Environment variables (create this)
└── README.md             # This file
```
The main task does the following:
- Receives Input: Takes a string input parameter
- Creates Trace: Uses Trigger.dev's logger to create a trace span
- Calls OpenAI: Uses AI SDK 4 to generate text with GPT-4o
- Captures Telemetry: Automatically sends trace data to LangSmith via OTEL
- Returns Results: Logs the generated text
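Putting those steps together, a minimal sketch of what `src/trigger/task.ts` might look like (the export name, span label, and `experimental_telemetry` flag are assumptions, not a copy of the actual file):

```ts
import { logger, task } from "@trigger.dev/sdk/v3";
import { openai } from "@ai-sdk/openai";
import { generateText } from "ai";

export const testTelemetry = task({
  id: "test-telemetry",
  run: async (payload: { input: string }) => {
    // Wrap the model call in a named span so it shows up as a trace in LangSmith
    return await logger.trace("generate-text", async () => {
      const { text } = await generateText({
        model: openai("gpt-4o"),
        prompt: payload.input,
        // Ask the AI SDK to emit OTEL spans for this call (assumed enabled here)
        experimental_telemetry: { isEnabled: true },
      });

      logger.info("Generated text", { text });
      return { text };
    });
  },
});
```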
The `trigger.config.ts` file configures:
- LangSmith OTEL Integration: Uses the LangSmith SDK's `initializeOTEL()` for seamless telemetry setup
- Automatic Span Export: Configures the default LangSmith span exporter
- Retry Logic: Handles failures with exponential backoff
- Process Management: Optimizes performance with keep-alive settings
- Machine Scaling: Uses `small-2x` machines for processing
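A sketch of what that configuration could look like, assuming a LangSmith SDK version whose `initializeOTEL()` returns a default span exporter and a Trigger.dev SDK version that supports the `telemetry.exporters` and `machine` options (check both SDKs before copying):

```ts
// trigger.config.ts
import { defineConfig } from "@trigger.dev/sdk/v3";
import { initializeOTEL } from "langsmith/experimental/otel/setup";

// Sets up LangSmith's OTEL globals; the destructured name is an assumption
// and may differ between langsmith SDK versions.
const { DEFAULT_LANGSMITH_SPAN_EXPORTER } = initializeOTEL();

export default defineConfig({
  project: process.env.TRIGGER_PROJECT_REF!,
  dirs: ["./src/trigger"],
  // Forward Trigger.dev's OTEL spans to LangSmith
  telemetry: {
    exporters: [DEFAULT_LANGSMITH_SPAN_EXPORTER],
  },
  // Exponential backoff for failed runs
  retries: {
    enabledInDev: false,
    default: {
      maxAttempts: 3,
      minTimeoutInMs: 1_000,
      maxTimeoutInMs: 10_000,
      factor: 2,
      randomize: true,
    },
  },
  // Default machine preset for runs (option name assumed)
  machine: "small-2x",
});
```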
After running tasks, you can view detailed telemetry in your LangSmith dashboard:
- Traces: See complete execution traces with timing
- Model Calls: Monitor OpenAI API usage and performance
- Errors: Debug failed requests with full context
- Metrics: Track usage patterns and costs
Monitor your tasks in the Trigger.dev dashboard:
- Task Runs: View execution history and status
- Logs: Access detailed execution logs
- Performance: Monitor execution times and resource usage
- Scheduling: Set up recurring or delayed task execution
To deploy your tasks to production:
```bash
npm run deploy
```
This will:
- Build your TypeScript code
- Deploy to Trigger.dev's infrastructure
- Make your tasks available for production use
| Variable | Description | Required |
|---|---|---|
| `TRIGGER_PROJECT_REF` | Trigger.dev project reference ID | ✅ |
| `OPENAI_API_KEY` | OpenAI API key for GPT model access | ✅ |
| `LANGSMITH_API_KEY` | LangSmith API key for telemetry | ✅ |
| `LANGSMITH_PROJECT` | LangSmith project name | ✅ |
- "Project not found" error
  - Verify your `TRIGGER_PROJECT_REF` is correct
  - Check that the project reference in your `.env` file matches your Trigger.dev project
- OpenAI API errors
  - Ensure your `OPENAI_API_KEY` is valid and has sufficient credits
  - Check that you have access to the GPT-4o model
- LangSmith telemetry not appearing
  - Verify your `LANGSMITH_API_KEY` and `LANGSMITH_PROJECT` are correct
  - Check that your LangSmith project exists
- Development server won't start
  - Make sure you're using Node.js 18 or higher
  - Try deleting `node_modules` and running `npm install` again
- Telemetry configuration issues
  - Ensure the LangSmith SDK is properly initialized in `trigger.config.ts`
  - Verify that the OTEL setup is not conflicting with other telemetry tools
- Trigger.dev: Documentation | Discord
- LangSmith: Documentation | Support
- OpenAI: Documentation | Support
This project uses AI SDK version 4.3.19, which provides:
- Improved OpenAI integration
- Better TypeScript support
- Enhanced streaming capabilities
- Optimized performance
The LangSmith SDK (v0.3.67) enables:
- Native OTEL integration via `initializeOTEL()`
- Automatic span collection and export
- Seamless telemetry configuration
- Enhanced debugging capabilities
- Explore more complex AI workflows
- Set up scheduled tasks for regular processing (see the sketch after this list)
- Add error handling and retry logic
- Implement webhook triggers for real-time processing
- Scale to multiple tasks and workflows
- Experiment with different AI models and providers
- Add custom telemetry and metrics
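For example, a scheduled task is just another task definition with a cron expression attached; a minimal sketch (the id, cron string, and body are hypothetical):

```ts
import { logger, schedules } from "@trigger.dev/sdk/v3";

export const dailyReport = schedules.task({
  id: "daily-report",   // hypothetical task id
  cron: "0 9 * * *",    // every day at 09:00 UTC
  run: async (payload) => {
    // payload.timestamp is the time this schedule fired
    logger.info("Scheduled run", { scheduledAt: payload.timestamp });
  },
});
```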