Infra shouldn't slow you down. In May, we shipped the kind of upgrades that help you move your AI agents into production fast and stay in control, whether you're scaling agents, securing AI behavior, or bringing new models to your apps.

We launched deep integrations with agent frameworks like PydanticAI and the OpenAI Agents SDK, added enterprise-grade controls to Claude Code, made it simpler to call remote MCP servers, and much more!

Here's everything new this month:

| Area | Key Updates |
| :-- | :-- |
| **AI agent infrastructure** | • Integrations with PydanticAI, OpenAI Agents SDK, and Strands Agents<br/>• Remote MCP server support via the Responses API<br/>• Arize Phoenix tracing integration |
| **AI tools** | • Integrations with Claude Code, Cline, and Roo Code |
| **Platform** | • Deep integration into the Azure AI ecosystem<br/>• Multi-label support for prompt versions<br/>• Full support for `GET`, `PUT`, and `DELETE` HTTP methods<br/>• OTel analytics export |
| **New models & providers** | • Claude 4 now live<br/>• Grok 3 and Grok 3 Mini on Azure<br/>• Lepton AI and Nscale now integrated<br/>• PDF support for Claude via Anthropic and Bedrock<br/>• WorkersAI supports image generation<br/>• Tool calling enabled for Mistral and OpenRouter<br/>• MIME type support for Vertex and Google |
| **Guardrails** | • Prompt Security guardrails for injection detection and sensitive data protection<br/>• JWT validator input guardrail<br/>• PANW Prisma AIRS plugin for real-time prompt/response risk blocking<br/>• Model whitelist guardrail for org/environment/request-level control |

---
## AI Agent Infrastructure
AI agent frameworks are helping teams prototype faster, but taking agents to production requires real infrastructure. Portkey integrates with leading frameworks to bring interoperability, observability, reliability, and cost management to your agent workflows.

**PydanticAI**

Portkey now integrates with PydanticAI, a Python framework that brings FastAPI-like ergonomics to building AI agents. With Portkey, you can:

- Build modular, testable agents with a clean developer experience.
- Route all agent calls through Portkey for observability and debugging.
- Add retries, fallbacks, guardrails, and cost tracking without extra infra.

See how it's done [here](https://portkey.ai/docs/integrations/agents/pydantic-ai#pydantic-ai).

**OpenAI Agents SDK**

Portkey integrates with the OpenAI Agents SDK to help teams ship production-grade agents with built-in planning, memory, and tool use. You can now:

- Monitor and debug each step of the agent's reasoning and tool use.
- Automatically track usage and cost for each agent call.
- Apply guardrails to both agent input and output.
- Scale agent-based workflows across environments with versioned control (see the sketch below).
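Here is a minimal sketch of the wiring: the OpenAI client points at Portkey's gateway and is registered as the Agents SDK's default client. The gateway URL, the `x-portkey-*` header, and the virtual key are assumptions based on Portkey's usual setup, not values from this post; check the Portkey docs for the exact configuration.

```python
# Minimal sketch: route an OpenAI Agents SDK agent through Portkey's gateway.
# The base URL and x-portkey-* header below are assumptions -- confirm them
# against the Portkey docs before using this in a real project.
from openai import AsyncOpenAI
from agents import Agent, Runner, set_default_openai_client, set_tracing_disabled

portkey_client = AsyncOpenAI(
    base_url="https://api.portkey.ai/v1",                    # assumed gateway endpoint
    api_key="PORTKEY_API_KEY",                               # placeholder Portkey API key
    default_headers={"x-portkey-virtual-key": "openai-vk"},  # hypothetical virtual key
)

set_default_openai_client(portkey_client)  # all agent LLM calls now go through Portkey
set_tracing_disabled(True)                 # don't send SDK traces to OpenAI with this key

agent = Agent(
    name="support-triage",
    instructions="Classify the user's message as a bug, billing issue, or feature request.",
)

result = Runner.run_sync(agent, "I was charged twice for my subscription this month.")
print(result.final_output)
```

Depending on your gateway configuration you may need to switch the SDK from the Responses API to Chat Completions; the PydanticAI and Strands integrations follow the same pattern of pointing the framework's OpenAI-compatible client at the gateway.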
**Strands Agents**

Strands Agents is a lightweight agent framework built by AWS to simplify agent development. Portkey integrates with Strands Agents to make them production-ready. With this integration, you get:

- Full observability into agent steps, tool calls, and interactions
- Built-in reliability through fallbacks, retries, and load balancing
- Cost tracking and spend optimization

See how it's done [here](https://portkey.ai/docs/integrations/agents/strands).

**Tracing Integrations: Arize AI**

For teams consolidating observability into Arize, you can now view Portkey's logs directly in Arize Phoenix to get unified trace views across your LLM workflows.
## Remote MCP servers
Portkey now supports calling remote MCP servers, maintained by developers and organizations across the internet, that expose their tools to MCP clients via the Responses API.

Read more about the integration [here](https://portkey.ai/docs/product/ai-gateway/remote-mcp).
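For illustration, here is a rough sketch of what a remote MCP call can look like through the gateway, using the Responses API's `mcp` tool type. The server URL and label, the gateway URL, and the `x-portkey-*` header are placeholders and assumptions, not values from this post.

```python
# Sketch: calling a remote MCP server through Portkey via the Responses API.
# The MCP server URL/label and the x-portkey-* header are illustrative
# placeholders -- swap in your own values per the Portkey and OpenAI docs.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.portkey.ai/v1",                    # assumed gateway endpoint
    api_key="PORTKEY_API_KEY",                               # placeholder Portkey API key
    default_headers={"x-portkey-virtual-key": "openai-vk"},  # hypothetical virtual key
)

response = client.responses.create(
    model="gpt-4.1",
    tools=[
        {
            "type": "mcp",
            "server_label": "docs_search",             # example label
            "server_url": "https://example.com/mcp",   # example remote MCP server
            "require_approval": "never",
        }
    ],
    input="Use the docs_search tools to explain how request retries are configured.",
)

print(response.output_text)
```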
## Azure AI ecosystem
More than half of Fortune 500 companies use Azure OpenAI. But building GenAI apps in the enterprise is still messy: cost attribution, routing logic, usage tracking, model evaluation... all scattered.

With Portkey's deep integration into the Azure AI ecosystem (OpenAI, Foundry, APIM, Marketplace), teams can now build, scale, and govern GenAI apps without leaving their existing cloud setup.

Our customers are vouching for it!

## AI tools

Plug Portkey into [Cline](https://portkey.ai/docs/integrations/libraries/cline) or [Roo Code](https://portkey.ai/docs/integrations/libraries/roo-code) to add security, compliance, and real-time analytics to your code assistant workflows:

- Access to the latest models from OpenAI, Anthropic, Mistral, and more
- Full observability: log every prompt, tool use, and response with metadata
- Access control with scoped API keys and JWT-based authentication
- Built-in governance and cost tracking per user, project, or team

You can also bring these essential enterprise controls to Goose's autonomous coding capabilities.
## Multimodal embeddings

Portkey now supports embedding APIs from Vertex AI for text, image, and video, across multiple languages. This unlocks the ability to:

- Build multimodal search and retrieval
- Power multimodal RAG pipelines
- Track, route, and optimize embedding usage at scale

Read more about the implementation [here](https://portkey.ai/docs/integrations/llms/vertex-ai/embeddings).
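As a quick sketch, text embeddings can be requested through the gateway's OpenAI-compatible `embeddings` endpoint; the model name, gateway URL, and header are assumptions for illustration, and image or video inputs follow Vertex AI's own request format (see the linked docs).

```python
# Sketch: multilingual text embeddings from Vertex AI routed through Portkey.
# Model name, gateway URL, and header name are illustrative assumptions.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.portkey.ai/v1",                    # assumed gateway endpoint
    api_key="PORTKEY_API_KEY",                               # placeholder Portkey API key
    default_headers={"x-portkey-virtual-key": "vertex-vk"},  # hypothetical virtual key
)

resp = client.embeddings.create(
    model="text-multilingual-embedding-002",   # example Vertex AI embedding model
    input=[
        "How do I rotate my API keys?",
        "¿Cómo roto mis claves de API?",
    ],
)

print(len(resp.data[0].embedding))  # dimensionality of the first embedding vector
```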
## Platform
**Multi-label support for prompts**

You can now assign multiple labels to a single prompt version, making it easy to promote a version across environments like staging and production.
**Gateway to any API**

Portkey now supports `GET`, `PUT`, and `DELETE` HTTP methods in addition to `POST`, allowing you to route requests to any external or self-hosted provider endpoint. This means you can connect to custom APIs directly through Portkey with full observability for every call.
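As a rough sketch, a non-POST call to a self-hosted endpoint could be proxied like this; the target path and the custom-host header name are assumptions here, so confirm the exact routing options in Portkey's docs.

```python
# Sketch: sending a GET request to a self-hosted endpoint through the gateway.
# The downstream path and the x-portkey-* header names are assumptions for
# illustration -- check Portkey's docs for the exact routing mechanism.
import requests

resp = requests.get(
    "https://api.portkey.ai/v1/my-service/status",                # hypothetical downstream path
    headers={
        "x-portkey-api-key": "PORTKEY_API_KEY",                   # placeholder Portkey API key
        "x-portkey-custom-host": "https://internal.example.com",  # assumed custom-host header
    },
    timeout=30,
)

print(resp.status_code)
print(resp.text)
```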
**OTel analytics export**

You can now export Portkey analytics to any OpenTelemetry (OTel)-compatible collector, so usage data flows straight into your existing observability stack.

**Other improvements**
- Ping messages are removed from streamed responses.
- You can now resize metadata columns in logs.
## New models and providers

**New**

- **Claude 4** is now live for advanced reasoning and coding.
- **Grok 3 and Grok 3 Mini** are available on Azure.
- **Lepton AI** is now live.
- **Nscale models** can now be accessed through Portkey.

**Updates**

- **PDF support for Claude** via Anthropic and Bedrock.
- **Gemini 2.5 Thinking Mode** is now supported in Prompt Playground.
- **Extended thinking** is available for Claude 3.7 and Claude 4.
- Image generation is now supported on WorkersAI.
- **Tool calling and function calling for Mistral** are now live.
- **MIME type support** is now available for Vertex AI.
## Guardrails
- **Prompt Security guardrails**: Integrate with Prompt Security to detect prompt injection and prevent sensitive data exposure in both prompts and responses.
- **JWT validator guardrail**: Added as an input guardrail to validate incoming JWT tokens before requests are sent to the LLM.
- **PANW Prisma AIRS Plugin**: Portkey now integrates with Palo Alto Networks' AIRS (AI Runtime Security) to enforce guardrails that block risky prompts or model responses based on real-time security analysis.
- **Model whitelist guardrail**: Restrict or deny specific models at the org, environment, or request level using a flexible whitelist/blacklist guardrail.
## Customer love!
From powering reliable provider failovers at Hedy to equipping AI policy analysts, Portkey is becoming the trusted backbone for builders!