
Commit 9631bfa

Merge pull request #354 from Portkey-AI/arize-phenix-support
arize phoenix docs
2 parents ee2a6c1 + 366e9e2 commit 9631bfa

File tree

6 files changed: +381 additions, −1 deletion

docs.json

Lines changed: 4 additions & 0 deletions

```diff
@@ -406,6 +406,10 @@
         "integrations/libraries/openai-compatible",
         "integrations/libraries/microsoft-semantic-kernel"
       ]
+    },
+    {
+      "group": "Tracing Providers",
+      "pages": ["integrations/tracing-providers/arize"]
     }
   ]
 },
```

integrations/ecosystem.mdx

Lines changed: 2 additions & 0 deletions

```diff
@@ -9,6 +9,8 @@ title: "Integrations"
 <Card icon="brain-circuit" title="AI Apps" href="/integrations/ai-apps" />
 <Card icon="book" title="Libraries" href="/integrations/libraries" />
 <Card icon="cloud" title="Cloud Platforms" href="/integrations/cloud" />
+<Card icon="list" title="Tracing Providers" href="/integrations/tracing-providers" />
+
 </CardGroup>

 # Preferred Partners
```

integrations/tracing-providers.mdx

Lines changed: 9 additions & 0 deletions

```mdx
---
title: "Overview"
---

<CardGroup cols={3}>

<Card title="Arize Phoenix" href="/integrations/tracing-providers/arize" />

</CardGroup>
```
Lines changed: 363 additions & 0 deletions

@@ -0,0 +1,363 @@
---
title: 'Arize Phoenix'
description: 'Extend Portkey’s powerful AI Gateway with Arize Phoenix for unified LLM observability, tracing, and analytics across your ML stack.'
---

Portkey is a **production-grade AI Gateway and Observability platform** for AI applications. It offers built-in observability, reliability features, and 40+ key LLM metrics. For teams standardizing on Arize Phoenix for observability, Portkey also supports seamless integration.

<Note>
Portkey provides comprehensive observability out-of-the-box. This integration is for teams who want to consolidate their ML observability in Arize Phoenix alongside Portkey's AI Gateway capabilities.
</Note>

## Why Portkey + Arize Phoenix?

[Arize Phoenix](https://github.com/Arize-ai/openinference/tree/main/python/instrumentation/openinference-instrumentation-portkey) brings observability to LLM workflows with tracing, prompt debugging, and performance monitoring.

Thanks to Phoenix's OpenInference instrumentation, Portkey can emit structured traces automatically, with no extra setup needed. This gives you clear visibility into every LLM call, making it easier to debug and improve your app.

<CardGroup cols={2}>
<Card title="AI Gateway Features" icon="shield">
- **1600+ LLM Providers**: Single API for OpenAI, Anthropic, AWS Bedrock, and more
- **Advanced Routing**: Fallbacks, load balancing, conditional routing
- **Cost Optimization**: Semantic caching
- **Security**: PII detection, content filtering, compliance controls
</Card>

<Card title="Built-in Observability" icon="chart-line">
- **40+ Key Metrics**: Cost, latency, tokens, error rates
- **Detailed Logs & Traces**: Request/response bodies and custom tracing
- **Custom Metadata**: Attach custom metadata to your requests
- **Custom Alerts**: Real-time monitoring and notifications
</Card>
</CardGroup>

With this integration, you can route LLM traffic through Portkey and gain deep observability in Arize Phoenix, bringing together the best of gateway orchestration and ML observability.

## Getting Started

### Installation

Install the required packages to enable Arize Phoenix integration with your Portkey deployment:

<CodeGroup>

```bash arize
pip install portkey-ai openinference-instrumentation-portkey arize-otel
```

```bash phoenix
pip install portkey-ai openinference-instrumentation-portkey arize-phoenix-otel
```
</CodeGroup>

### Setting up the Integration

<Steps>
<Step title="Configure Arize Phoenix">
First, set up the OpenTelemetry configuration for your chosen backend:

<CodeGroup>
```python arize
from arize.otel import register

# Configure Arize as your telemetry backend
tracer_provider = register(
    space_id="your-space-id",       # Found in Arize app settings
    api_key="your-api-key",         # Your Arize API key
    project_name="portkey-gateway"  # Name your project
)
```

```python phoenix
from phoenix.otel import register

# Configure Phoenix as your telemetry backend
tracer_provider = register(
    project_name="portkey-gateway",             # Name your project
    endpoint="http://localhost:6006/v1/traces"  # Your Phoenix collector endpoint
)
```
</CodeGroup>
</Step>

<Step title="Enable Portkey Instrumentation">
Initialize the Portkey instrumentor to format traces for Arize or Phoenix:

```python
from openinference.instrumentation.portkey import PortkeyInstrumentor

# Enable instrumentation
PortkeyInstrumentor().instrument(tracer_provider=tracer_provider)
```
</Step>

<Step title="Configure Portkey AI Gateway">
Set up Portkey with all its powerful features:

```python
from portkey_ai import Portkey

# Initialize Portkey client
portkey = Portkey(
    api_key="your-portkey-api-key",        # Optional for self-hosted deployments
    virtual_key="your-openai-virtual-key"  # Or use provider-specific virtual keys
)

response = portkey.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Explain machine learning"}]
)

print(response.choices[0].message.content)
```
</Step>
</Steps>

## Complete Integration Example

Here's a complete working example that connects Portkey's AI Gateway with Arize Phoenix for centralized monitoring:

```python [expandable]
import os

from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders
from arize.otel import register  # OR: from phoenix.otel import register
from openinference.instrumentation.portkey import PortkeyInstrumentor

# Step 1: Configure Arize Phoenix
tracer_provider = register(
    space_id="your-space-id",
    api_key="your-arize-api-key",
    project_name="portkey-production"
)

# Step 2: Enable Portkey instrumentation
PortkeyInstrumentor().instrument(tracer_provider=tracer_provider)

# Step 3: Configure Portkey's advanced AI Gateway
advanced_config = {
    "strategy": {
        "mode": "loadbalance"  # Distribute load across providers
    },
    "targets": [
        {
            "virtual_key": "openai-vk",
            "weight": 0.7,
            "override_params": {"model": "gpt-4o"}
        },
        {
            "virtual_key": "anthropic-vk",
            "weight": 0.3,
            "override_params": {"model": "claude-3-opus-20240229"}
        }
    ],
    "cache": {
        "mode": "semantic",  # Intelligent caching
        "max_age": 3600
    },
    "retry": {
        "attempts": 3,
        "on_status_codes": [429, 500, 502, 503, 504]
    },
    "request_timeout": 30000
}

# Initialize the Portkey-powered OpenAI client
client = OpenAI(
    api_key="not-needed",  # Virtual keys handle auth
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        api_key=os.environ.get("PORTKEY_API_KEY"),
        config=advanced_config,
        metadata={
            "user_id": "user-123",
            "session_id": "session-456",
            "feature": "chat-assistant"
        }
    )
)

# Make requests through Portkey's AI Gateway
response = client.chat.completions.create(
    model="gpt-4o",  # Portkey handles provider routing
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing in simple terms."}
    ],
    temperature=0.7
)

print(response.choices[0].message.content)
```

## Portkey AI Gateway Features

While Arize Phoenix provides observability, Portkey delivers a complete AI infrastructure platform. Here's everything you get with Portkey:

### 🚀 Core Gateway Capabilities

<CardGroup cols={2}>
<Card title="1600+ LLM Providers" icon="layer-group" href="/integrations">
Access OpenAI, Anthropic, Google, Cohere, Mistral, Llama, and 1600+ models through a single unified API. No more managing different SDKs or endpoints.
</Card>

<Card title="Universal API" icon="key" href="/product/ai-gateway/universal-api">
Use the same code to call any LLM provider. Switch between models and providers without changing your application code.
</Card>

<Card title="Virtual Keys" icon="key" href="/product/ai-gateway/virtual-keys">
Secure vault for API keys with budget limits, rate limiting, and access controls. Never expose raw API keys in your code.
</Card>

<Card title="Advanced Configs" icon="cog" href="/product/ai-gateway/configs">
Define routing strategies, model parameters, and reliability settings in reusable configurations. Version control your AI infrastructure.
</Card>
</CardGroup>

### 🛡️ Reliability & Performance

<CardGroup cols={3}>
<Card title="Smart Fallbacks" icon="life-ring" href="/product/ai-gateway/fallbacks">
Automatically switch to backup providers when the primary fails. Define fallback chains across multiple providers.
</Card>

<Card title="Load Balancing" icon="shield" href="/product/ai-gateway/load-balancing">
Distribute requests across multiple API keys or providers based on custom weights and strategies.
</Card>

<Card title="Automatic Retries" icon="rotate-ccw" href="/product/ai-gateway/automatic-retries">
Configurable retry logic with exponential backoff for transient failures and rate limits.
</Card>

<Card title="Request Timeouts" icon="clock" href="/product/ai-gateway/request-timeouts">
Set custom timeouts to prevent hanging requests and improve application responsiveness.
</Card>

<Card title="Conditional Routing" icon="route" href="/product/ai-gateway/conditional-routing">
Route requests to different models based on content, metadata, or custom conditions.
</Card>

<Card title="Canary Testing" icon="flask" href="/product/ai-gateway/canary-testing">
Gradually roll out new models or providers with percentage-based traffic splitting.
</Card>
</CardGroup>
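
Fallbacks use the same config-object shape as the load-balancing example earlier in this guide, with the strategy mode switched to `fallback`. A minimal sketch; the virtual key names are placeholders, not real keys:

```python
# Minimal fallback config sketch for Portkey's gateway.
# Virtual key names are placeholders -- substitute your own.
fallback_config = {
    "strategy": {
        "mode": "fallback"  # Try targets in order until one succeeds
    },
    "targets": [
        {"virtual_key": "openai-vk", "override_params": {"model": "gpt-4o"}},
        {"virtual_key": "anthropic-vk", "override_params": {"model": "claude-3-opus-20240229"}}
    ],
    "retry": {
        "attempts": 3,
        "on_status_codes": [429, 500, 502, 503, 504]  # Retry before falling back
    }
}
```

Pass this dict as the `config` argument to `createHeaders` (as shown in the complete example above) and the gateway will try the Anthropic target whenever the OpenAI target errors out.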

### 💰 Cost Optimization

<CardGroup cols={2}>
<Card title="Semantic Caching" icon="database" href="/product/ai-gateway/cache-simple-and-semantic">
Intelligent caching that understands semantic similarity. Reduce costs by up to 90% on repeated queries.
</Card>

<Card title="Budget Limits" icon="dollar-sign" href="/product/ai-gateway/virtual-keys/budget-limits">
Set spending limits per API key, team, or project. Get alerts before hitting limits.
</Card>

<Card title="Rate Limits" icon="clock" href="/product/ai-gateway/virtual-keys/rate-limits">
Cap request throughput per API key, team, or project to control usage and protect upstream providers.
</Card>

<Card title="Cost Analytics" icon="chart-pie" href="/product/observability/analytics">
Real-time cost tracking across all providers with detailed breakdowns by model, user, and feature.
</Card>
</CardGroup>
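
Semantic caching is enabled through the same `cache` block shown in the complete integration example above; a minimal standalone sketch:

```python
# Minimal cache config sketch: semantic caching with a 1-hour TTL.
cache_config = {
    "cache": {
        "mode": "semantic",  # Match semantically similar prompts, not just exact repeats
        "max_age": 3600      # Cached responses expire after one hour (seconds)
    }
}
```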

### 📊 Built-in Observability

<CardGroup cols={2}>
<Card title="Comprehensive Metrics" icon="chart-bar" href="/product/observability/metrics">
Track 40+ metrics including latency, tokens, costs, cache hits, error rates, and more in real-time.
</Card>

<Card title="Detailed Logs" icon="list" href="/product/observability/logs">
Full request/response logging with advanced filtering, search, and export capabilities.
</Card>

<Card title="Distributed Tracing" icon="list" href="/product/observability/traces">
Trace requests across your entire AI pipeline with correlation IDs and custom metadata.
</Card>

<Card title="Custom Alerts" icon="bell" href="/product/observability/alerts">
Set up alerts on any metric with webhook, email, or Slack notifications.
</Card>
</CardGroup>

### 🔒 Security & Compliance

<CardGroup cols={3}>
<Card title="PII Detection" icon="shield-check" href="/product/guardrails">
Automatically detect and redact sensitive information like SSN, credit cards, and personal data.
</Card>

<Card title="Content Filtering" icon="filter" href="/product/guardrails">
Block harmful, toxic, or inappropriate content in real-time based on custom policies.
</Card>

<Card title="Access Controls" icon="users" href="/product/enterprise-offering/access-control-management">
Fine-grained RBAC with team management, user permissions, and audit logs.
</Card>

<Card title="SOC2 Compliance" icon="certificate" href="/security">
Enterprise-grade security with SOC2 Type II certification and GDPR compliance.
</Card>

<Card title="Audit Logs" icon="history" href="/product/enterprise-offering/access-control-management#audit-logs">
Complete audit trail of all API usage, configuration changes, and user actions.
</Card>

<Card title="Data Privacy" icon="lock" href="/security">
Zero data retention options and deployment in your own VPC for maximum privacy.
</Card>
</CardGroup>

### 🏢 Enterprise Features

<CardGroup cols={2}>
<Card title="SSO Integration" icon="key" href="/product/enterprise-offering/org-management/sso">
SAML 2.0 support for Okta, Azure AD, Google Workspace, and custom IdPs.
</Card>

<Card title="Organization Management" icon="building" href="/product/enterprise-offering/org-management">
Multi-workspace support with hierarchical teams and department-level controls.
</Card>

<Card title="SLA Guarantees" icon="handshake" href="/product/enterprise-offering">
99.9% uptime SLA with dedicated support and custom deployment options.
</Card>

<Card title="Private Deployments" icon="server" href="/product/enterprise-offering/private-cloud-deployments">
Deploy Portkey in your own AWS, Azure, or GCP environment with full control.
</Card>
</CardGroup>

## Next Steps

<CardGroup cols={2}>
<Card title="Explore Portkey Features" icon="rocket" href="/product/ai-gateway">
Discover all AI Gateway capabilities beyond observability
</Card>

<Card title="Virtual Keys Setup" icon="key" href="/product/ai-gateway/virtual-keys">
Secure your API keys and set budgets
</Card>

<Card title="Advanced Routing" icon="route" href="/product/ai-gateway/configs">
Configure fallbacks, load balancing, and more
</Card>

<Card title="Built-in Analytics" icon="chart-bar" href="/product/observability">
Use Portkey's native observability features
</Card>
</CardGroup>

**Need help?** Join our [Discord community](https://portkey.sh/discord)
