Commit 21aa3d1

arize phoenix docs
1 parent ee2a6c1 commit 21aa3d1

2 files changed

+364
-0
lines changed


docs.json: 4 additions & 0 deletions

```diff
@@ -406,6 +406,10 @@
         "integrations/libraries/openai-compatible",
         "integrations/libraries/microsoft-semantic-kernel"
       ]
+    },
+    {
+      "group": "Tracing Providers",
+      "pages": ["integrations/tracing-providers/arize"]
     }
   ]
 },
```
Lines changed: 360 additions & 0 deletions
---
title: 'Arize Phoenix'
description: 'Extend Portkey’s powerful AI Gateway with Arize Phoenix for unified LLM observability, tracing, and analytics across your ML stack.'
---

Portkey is a **production-grade AI Gateway and Observability platform** for AI applications. It offers built-in observability, reliability features, and 40+ key LLM metrics. For teams standardizing observability on Arize Phoenix, Portkey also supports seamless integration.

<Note>
Portkey provides comprehensive observability out-of-the-box. This integration is for teams who want to consolidate their ML observability in Arize Phoenix alongside Portkey's AI Gateway capabilities.
</Note>

## Why Portkey + Arize Phoenix?

While Arize Phoenix focuses on observability, Portkey is a complete AI infrastructure solution that includes:

<CardGroup cols={2}>
<Card title="AI Gateway Features" icon="shield">
- **1600+ LLM Providers**: Single API for OpenAI, Anthropic, AWS Bedrock, and more
- **Advanced Routing**: Fallbacks, load balancing, conditional routing
- **Cost Optimization**: Semantic caching, budget and rate limits
- **Security**: PII detection, content filtering, compliance controls
</Card>

<Card title="Built-in Observability" icon="chart-line">
- **40+ Key Metrics**: Cost, latency, tokens, error rates
- **Detailed Logs & Traces**: Request/response bodies and custom tracing
- **Custom Metadata**: Attach custom metadata to your requests
- **Custom Alerts**: Real-time monitoring and notifications
</Card>
</CardGroup>

With this integration, you can route LLM traffic through Portkey and gain deep observability in Arize Phoenix, bringing together the best of gateway orchestration and ML observability.

## Getting Started

### Installation

Install the required packages to enable Arize Phoenix integration with your Portkey deployment:

<CodeGroup>

```bash arize
pip install portkey-ai openinference-instrumentation-portkey arize-otel
```

```bash phoenix
pip install portkey-ai openinference-instrumentation-portkey arize-phoenix-otel
```
</CodeGroup>
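If you prefer to keep credentials out of code, you can export them as environment variables first. `PORTKEY_API_KEY` is the name the later example reads via `os.environ`; the Arize variable names below are placeholders you would pass to `register()` yourself, not names the SDKs read automatically:

```shell
# PORTKEY_API_KEY is read by the gateway example further down;
# the Arize values are placeholders for your own credentials
export PORTKEY_API_KEY="your-portkey-api-key"
export ARIZE_SPACE_ID="your-space-id"
export ARIZE_API_KEY="your-arize-api-key"
```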

### Setting up the Integration

<Steps>
<Step title="Configure Arize Phoenix">
First, set up the OpenTelemetry configuration for your backend:

<CodeGroup>
```python arize
from arize.otel import register

# Configure Arize as your telemetry backend
tracer_provider = register(
    space_id="your-space-id",       # Found in Arize app settings
    api_key="your-api-key",         # Your Arize API key
    project_name="portkey-gateway"  # Name your project
)
```

```python phoenix
from phoenix.otel import register

# Configure Phoenix as your telemetry backend
tracer_provider = register(
    project_name="portkey-gateway",                 # Name your project
    endpoint="http://localhost:6006/v1/traces"      # Your Phoenix collector endpoint
)
```
</CodeGroup>
</Step>

<Step title="Enable Portkey Instrumentation">
Initialize the Portkey instrumentor to format traces for Arize:

```python
from openinference.instrumentation.portkey import PortkeyInstrumentor

# Enable instrumentation
PortkeyInstrumentor().instrument(tracer_provider=tracer_provider)
```
</Step>

<Step title="Configure Portkey AI Gateway">
Set up Portkey with all its powerful features:

```python
from portkey_ai import Portkey

# Initialize Portkey client
portkey = Portkey(
    api_key="your-portkey-api-key",        # Optional for self-hosted
    virtual_key="your-openai-virtual-key"  # Or use provider-specific virtual keys
)

response = portkey.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Explain machine learning"}]
)

print(response.choices[0].message.content)
```
</Step>
</Steps>

## Complete Integration Example

Here's a complete working example that connects Portkey's AI Gateway with Arize Phoenix for centralized monitoring:

```python [expandable]
import os
from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders
from arize.otel import register  # OR: from phoenix.otel import register
from openinference.instrumentation.portkey import PortkeyInstrumentor

# Step 1: Configure Arize Phoenix
tracer_provider = register(
    space_id="your-space-id",
    api_key="your-arize-api-key",
    project_name="portkey-production"
)

# Step 2: Enable Portkey instrumentation
PortkeyInstrumentor().instrument(tracer_provider=tracer_provider)

# Step 3: Configure Portkey's advanced AI Gateway features
advanced_config = {
    "strategy": {
        "mode": "loadbalance"  # Distribute load across providers
    },
    "targets": [
        {
            "virtual_key": "openai-vk",
            "weight": 0.7,
            "override_params": {"model": "gpt-4o"}
        },
        {
            "virtual_key": "anthropic-vk",
            "weight": 0.3,
            "override_params": {"model": "claude-3-opus-20240229"}
        }
    ],
    "cache": {
        "mode": "semantic",  # Intelligent caching
        "max_age": 3600
    },
    "retry": {
        "attempts": 3,
        "on_status_codes": [429, 500, 502, 503, 504]
    },
    "request_timeout": 30000
}

# Initialize a Portkey-powered OpenAI client
client = OpenAI(
    api_key="not-needed",  # Virtual keys handle auth
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        api_key=os.environ.get("PORTKEY_API_KEY"),
        config=advanced_config,
        metadata={
            "user_id": "user-123",
            "session_id": "session-456",
            "feature": "chat-assistant"
        }
    )
)

# Make requests through Portkey's AI Gateway
response = client.chat.completions.create(
    model="gpt-4o",  # Portkey handles provider routing
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain quantum computing in simple terms."}
    ],
    temperature=0.7
)

print(response.choices[0].message.content)
```

## Portkey AI Gateway Features

While Arize Phoenix provides observability, Portkey delivers a complete AI infrastructure platform. Here's everything you get with Portkey:

### 🚀 Core Gateway Capabilities

<CardGroup cols={2}>
<Card title="1600+ LLM Providers" icon="layer-group" href="/integrations">
Access OpenAI, Anthropic, Google, Cohere, Mistral, Llama, and 1600+ models through a single unified API. No more managing different SDKs or endpoints.
</Card>

<Card title="Universal API" icon="key" href="/product/ai-gateway/universal-api">
Use the same code to call any LLM provider. Switch between models and providers without changing your application code.
</Card>

<Card title="Virtual Keys" icon="key" href="/product/ai-gateway/virtual-keys">
Secure vault for API keys with budget limits, rate limiting, and access controls. Never expose raw API keys in your code.
</Card>

<Card title="Advanced Configs" icon="cog" href="/product/ai-gateway/configs">
Define routing strategies, model parameters, and reliability settings in reusable configurations. Version control your AI infrastructure.
</Card>
</CardGroup>
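To make the Universal API idea concrete, here is a minimal sketch in plain Python (the virtual key names are placeholders): the application-level request body never changes; only the gateway config decides which provider serves it.

```python
# A minimal sketch: the same request shape targets different providers
# just by swapping the virtual key in the gateway config.
# Virtual key names and models here are illustrative placeholders.
def gateway_config(virtual_key: str, model: str) -> dict:
    return {
        "targets": [
            {"virtual_key": virtual_key, "override_params": {"model": model}}
        ]
    }

openai_route = gateway_config("openai-vk", "gpt-4o")
anthropic_route = gateway_config("anthropic-vk", "claude-3-opus-20240229")

# The application-level request stays identical either way:
request = {"messages": [{"role": "user", "content": "Hello"}]}
```

Either config can then be passed as the `config` argument to `createHeaders`, as shown in the complete example above.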

### 🛡️ Reliability & Performance

<CardGroup cols={3}>
<Card title="Smart Fallbacks" icon="life-ring" href="/product/ai-gateway/fallbacks">
Automatically switch to backup providers when the primary fails. Define fallback chains across multiple providers.
</Card>

<Card title="Load Balancing" icon="shield" href="/product/ai-gateway/load-balancing">
Distribute requests across multiple API keys or providers based on custom weights and strategies.
</Card>

<Card title="Automatic Retries" icon="rotate-ccw" href="/product/ai-gateway/automatic-retries">
Configurable retry logic with exponential backoff for transient failures and rate limits.
</Card>

<Card title="Request Timeouts" icon="clock" href="/product/ai-gateway/request-timeouts">
Set custom timeouts to prevent hanging requests and improve application responsiveness.
</Card>

<Card title="Conditional Routing" icon="route" href="/product/ai-gateway/conditional-routing">
Route requests to different models based on content, metadata, or custom conditions.
</Card>

<Card title="Canary Testing" icon="flask" href="/product/ai-gateway/canary-testing">
Gradually roll out new models or providers with percentage-based traffic splitting.
</Card>
</CardGroup>
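Combining the fallback and retry features above, a fallback chain can be expressed as a single gateway config. A minimal sketch, assuming the two virtual keys already exist in your Portkey vault:

```python
# Fallback strategy: targets are tried in order. If the OpenAI target
# fails (after retries), the request is sent to the Anthropic target.
# Virtual key names are placeholders.
fallback_config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {"virtual_key": "openai-vk", "override_params": {"model": "gpt-4o"}},
        {"virtual_key": "anthropic-vk", "override_params": {"model": "claude-3-opus-20240229"}},
    ],
    "retry": {
        "attempts": 3,
        "on_status_codes": [429, 500, 502, 503, 504],  # retry transient failures
    },
}
```

Like the load-balancing config in the complete example, this is passed to `createHeaders(config=...)`.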

### 💰 Cost Optimization

<CardGroup cols={2}>
<Card title="Semantic Caching" icon="database" href="/product/ai-gateway/cache-simple-and-semantic">
Intelligent caching that understands semantic similarity. Reduce costs by up to 90% on repeated queries.
</Card>

<Card title="Budget Limits" icon="dollar-sign" href="/product/ai-gateway/virtual-keys/budget-limits">
Set spending limits per API key, team, or project. Get alerts before hitting limits.
</Card>

<Card title="Rate Limits" icon="clock" href="/product/ai-gateway/virtual-keys/rate-limits">
Cap request rates per API key, team, or project to prevent abuse and stay within provider quotas.
</Card>

<Card title="Cost Analytics" icon="chart-pie" href="/product/observability/analytics">
Real-time cost tracking across all providers with detailed breakdowns by model, user, and feature.
</Card>
</CardGroup>
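The caching behavior described above is driven by the same config object as everything else. A minimal sketch of a semantic cache with a one-hour TTL, matching the `cache` block from the complete example:

```python
# Semantic caching: responses can be reused for semantically similar
# prompts until max_age (in seconds) expires.
cache_config = {
    "cache": {
        "mode": "semantic",  # or "simple" for exact-match caching
        "max_age": 3600,     # cache entries live for one hour
    }
}
```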

### 📊 Built-in Observability

<CardGroup cols={2}>
<Card title="Comprehensive Metrics" icon="chart-bar" href="/product/observability/metrics">
Track 40+ metrics including latency, tokens, costs, cache hits, error rates, and more in real-time.
</Card>

<Card title="Detailed Logs" icon="list" href="/product/observability/logs">
Full request/response logging with advanced filtering, search, and export capabilities.
</Card>

<Card title="Distributed Tracing" icon="list" href="/product/observability/traces">
Trace requests across your entire AI pipeline with correlation IDs and custom metadata.
</Card>

<Card title="Custom Alerts" icon="bell" href="/product/observability/alerts">
Set up alerts on any metric with webhook, email, or Slack notifications.
</Card>
</CardGroup>

### 🔒 Security & Compliance

<CardGroup cols={3}>
<Card title="PII Detection" icon="shield-check" href="/product/guardrails">
Automatically detect and redact sensitive information like SSNs, credit cards, and personal data.
</Card>

<Card title="Content Filtering" icon="filter" href="/product/guardrails">
Block harmful, toxic, or inappropriate content in real-time based on custom policies.
</Card>

<Card title="Access Controls" icon="users" href="/product/enterprise-offering/access-control-management">
Fine-grained RBAC with team management, user permissions, and audit logs.
</Card>

<Card title="SOC2 Compliance" icon="certificate" href="/security">
Enterprise-grade security with SOC2 Type II certification and GDPR compliance.
</Card>

<Card title="Audit Logs" icon="history" href="/product/enterprise-offering/access-control-management#audit-logs">
Complete audit trail of all API usage, configuration changes, and user actions.
</Card>

<Card title="Data Privacy" icon="lock" href="/security">
Zero data retention options and deployment in your own VPC for maximum privacy.
</Card>
</CardGroup>

### 🏢 Enterprise Features

<CardGroup cols={2}>
<Card title="SSO Integration" icon="key" href="/product/enterprise-offering/org-management/sso">
SAML 2.0 support for Okta, Azure AD, Google Workspace, and custom IdPs.
</Card>

<Card title="Organization Management" icon="building" href="/product/enterprise-offering/org-management">
Multi-workspace support with hierarchical teams and department-level controls.
</Card>

<Card title="SLA Guarantees" icon="handshake" href="/product/enterprise-offering">
99.9% uptime SLA with dedicated support and custom deployment options.
</Card>

<Card title="Private Deployments" icon="server" href="/product/enterprise-offering/private-cloud-deployments">
Deploy Portkey in your own AWS, Azure, or GCP environment with full control.
</Card>
</CardGroup>

## Next Steps

<CardGroup cols={2}>
<Card title="Explore Portkey Features" icon="rocket" href="/product/ai-gateway">
Discover all AI Gateway capabilities beyond observability
</Card>

<Card title="Virtual Keys Setup" icon="key" href="/product/ai-gateway/virtual-keys">
Secure your API keys and set budgets
</Card>

<Card title="Advanced Routing" icon="route" href="/product/ai-gateway/configs">
Configure fallbacks, load balancing, and more
</Card>

<Card title="Built-in Analytics" icon="chart-bar" href="/product/observability">
Use Portkey's native observability features
</Card>
</CardGroup>

**Need help?** Join our [Discord community](https://portkey.sh/discord)
