---
title: "Langfuse"
description: "Integrate Langfuse observability with Portkey's AI gateway for comprehensive LLM monitoring and advanced routing capabilities"
---

Langfuse is an open-source LLM observability platform that helps you monitor, debug, and analyze your LLM applications. When combined with Portkey, you get the best of both worlds: Langfuse's detailed observability and Portkey's advanced AI gateway features.

This integration allows you to:
- Track all LLM requests in Langfuse while routing through Portkey
- Use Portkey's 250+ LLM providers with Langfuse observability
- Implement advanced features like caching, fallbacks, and load balancing
- Maintain detailed traces and analytics in both platforms

## Quick Start Integration

Since Portkey provides an OpenAI-compatible API, integrating with Langfuse is straightforward using Langfuse's OpenAI wrapper.

### Installation

```bash
pip install portkey-ai langfuse openai
```

### Basic Setup

```python
import os
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL

# Set your Langfuse credentials
os.environ["LANGFUSE_PUBLIC_KEY"] = "YOUR_LANGFUSE_PUBLIC_KEY"
os.environ["LANGFUSE_SECRET_KEY"] = "YOUR_LANGFUSE_SECRET_KEY"

# Import the Langfuse-instrumented OpenAI client
from langfuse.openai import OpenAI

# Initialize the client, pointing it at Portkey's gateway
client = OpenAI(
    api_key="YOUR_OPENAI_API_KEY",  # Your LLM provider API key
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_VIRTUAL_KEY",  # Optional: use virtual keys
        # config="YOUR_CONFIG_ID",       # Optional: use saved configs
        # trace_id="YOUR_TRACE_ID",      # Optional: custom trace ID
    )
)

# Make a request
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello, world!"}],
)

print(response.choices[0].message.content)
```
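
Under the hood, `createHeaders` simply assembles the `x-portkey-*` HTTP headers that the gateway reads. If you ever need to construct them without the SDK helper, a minimal sketch looks like the following (the header names are assumed from Portkey's documented `x-portkey-*` scheme; verify against the current API reference before relying on them):

```python
import json

def portkey_headers(api_key, virtual_key=None, config=None,
                    trace_id=None, metadata=None):
    """Build Portkey gateway headers by hand (sketch, not the official SDK)."""
    headers = {"x-portkey-api-key": api_key}
    if virtual_key:
        headers["x-portkey-virtual-key"] = virtual_key
    if config:
        # A config may be a saved config ID (string) or an inline JSON object
        headers["x-portkey-config"] = (
            config if isinstance(config, str) else json.dumps(config)
        )
    if trace_id:
        headers["x-portkey-trace-id"] = trace_id
    if metadata:
        headers["x-portkey-metadata"] = json.dumps(metadata)
    return headers

headers = portkey_headers("YOUR_PORTKEY_API_KEY", virtual_key="YOUR_VIRTUAL_KEY")
```

In practice the SDK helper is preferable, since it stays in sync with the gateway's expected header format.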

<Note>
This integration automatically logs requests to both Langfuse and Portkey, giving you observability data in both platforms.
</Note>

## Using Portkey Features with Langfuse

### 1. Virtual Keys

Virtual Keys in Portkey allow you to securely manage API keys and set usage limits. Use them with Langfuse for better security:

```python
from langfuse.openai import OpenAI
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL

client = OpenAI(
    api_key="dummy_key",  # Not used when a virtual key is provided
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_VIRTUAL_KEY"
    )
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Explain quantum computing"}]
)
```

### 2. Multiple Providers

Switch between 250+ LLM providers while maintaining Langfuse observability:

<Tabs>
<Tab title="OpenAI">
```python
client = OpenAI(
    api_key="YOUR_OPENAI_KEY",
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        provider="openai"
    )
)
```
</Tab>
<Tab title="Anthropic">
```python
client = OpenAI(
    api_key="YOUR_ANTHROPIC_KEY",
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        provider="anthropic",
        metadata={"model_override": "claude-3-opus-20240229"}
    )
)
```
</Tab>
<Tab title="Azure OpenAI">
```python
client = OpenAI(
    api_key="dummy_key",
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="YOUR_AZURE_VIRTUAL_KEY"
    )
)
```
</Tab>
</Tabs>

### 3. Advanced Routing with Configs

Use Portkey's config system for advanced features while tracking in Langfuse:

```python
# Create a config in the Portkey dashboard first, then reference it by ID
client = OpenAI(
    api_key="dummy_key",
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        config="pc-langfuse-prod"  # Your saved config ID
    )
)
```

Example config for fallback between providers:
```json
{
  "strategy": {
    "mode": "fallback"
  },
  "targets": [
    {
      "virtual_key": "openai-key",
      "override_params": {"model": "gpt-4o"}
    },
    {
      "virtual_key": "anthropic-key",
      "override_params": {"model": "claude-3-opus-20240229"}
    }
  ]
}
```
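
The same config shape supports other routing strategies. As a hedged sketch (the `loadbalance` mode name and `weight` field follow Portkey's config schema as we understand it, and the 70/30 split is purely illustrative), a weighted load-balancing config might look like:

```json
{
  "strategy": {
    "mode": "loadbalance"
  },
  "targets": [
    {
      "virtual_key": "openai-key",
      "weight": 0.7
    },
    {
      "virtual_key": "anthropic-key",
      "weight": 0.3
    }
  ]
}
```

Both strategies are fully compatible with Langfuse logging, since every request still flows through the same gateway endpoint.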

### 4. Caching for Cost Optimization

Enable caching to reduce costs while maintaining full observability:

```python
config = {
    "cache": {
        "mode": "semantic",
        "max_age": 3600
    },
    "virtual_key": "YOUR_VIRTUAL_KEY"
}

client = OpenAI(
    api_key="dummy_key",
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        config=config
    )
)
```
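
Semantic caching matches prompts that are similar in meaning; Portkey also offers a simple mode that only serves exact matches, which is cheaper to evaluate and avoids false positives. A minimal config fragment (same structure as above, only the `mode` changes):

```json
{
  "cache": {
    "mode": "simple",
    "max_age": 3600
  },
  "virtual_key": "YOUR_VIRTUAL_KEY"
}
```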

### 5. Custom Metadata and Tracing

Add custom metadata visible in both Langfuse and Portkey:

```python
client = OpenAI(
    api_key="YOUR_API_KEY",
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        provider="openai",
        metadata={
            "user_id": "user_123",
            "session_id": "session_456",
            "environment": "production"
        },
        trace_id="langfuse-trace-001"
    )
)
```

Portkey's gateway adds further capabilities you can layer onto this integration:

<CardGroup cols={3}>
<Card title="Fallbacks" icon="life-ring" href="/product/ai-gateway/fallbacks">
Automatically switch to backup targets if the primary target fails.
</Card>
<Card title="Conditional Routing" icon="route" href="/product/ai-gateway/conditional-routing">
Route requests to different targets based on specified conditions.
</Card>
<Card title="Load Balancing" icon="key" href="/product/ai-gateway/load-balancing">
Distribute requests across multiple targets based on defined weights.
</Card>
<Card title="Caching" icon="database" href="/product/ai-gateway/caching">
Enable caching of responses to improve performance and reduce costs.
</Card>
<Card title="Smart Retries" icon="database" href="/product/ai-gateway/retries">
Automatic retry handling with exponential backoff for failed requests.
</Card>
<Card title="Budget Limits" icon="shield-check" href="/product/ai-gateway/virtual-keys/budget-limits">
Set and manage budget limits across teams and departments. Control costs with granular budget limits and usage tracking.
</Card>
</CardGroup>

## Observability Features

With this integration, you get:

### In Langfuse:
- Request/response logging
- Latency tracking
- Token usage analytics
- Cost calculation
- Trace visualization

### In Portkey:
- Request logs with provider details
- Advanced analytics across providers
- Cost tracking and budgets
- Performance metrics
- Custom dashboards
- Token usage analytics

<Frame>
<img src="/images/integrations/observability.png" width="600"/>
</Frame>

## Migration Guide

If you're already using Langfuse with OpenAI, migrating to route through Portkey is simple:

<CodeGroup>
```python Before
from langfuse.openai import OpenAI

client = OpenAI(
    api_key="YOUR_OPENAI_KEY"
)
```

```python After
from langfuse.openai import OpenAI
from portkey_ai import createHeaders, PORTKEY_GATEWAY_URL

client = OpenAI(
    api_key="YOUR_OPENAI_KEY",
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        provider="openai"
    )
)
```
</CodeGroup>

## Next Steps

- [Create Virtual Keys](/product/ai-gateway/virtual-keys) for secure API key management
- [Build Configs](/product/ai-gateway/configs) for advanced routing
- [Set up Guardrails](/product/guardrails) for content filtering
- [Implement Caching](/product/ai-gateway/cache-simple-and-semantic) for cost optimization

## Resources

- [Langfuse Documentation](https://langfuse.com/docs)
- [Portkey AI Gateway Guide](/product/ai-gateway)
- [Portkey Python SDK Reference](/api-reference/portkey-sdk-client)

<Note>
For enterprise support and custom features, contact our [enterprise team](https://calendly.com/portkey-ai).
</Note>
