Commit 17d56b5

changelog new organization
1 parent 2a92575 commit 17d56b5

1 file changed: changelog/2025/may.mdx (87 additions, 53 deletions)
@@ -4,9 +4,9 @@ title: "May"
 
 **May-king it production ready✨**
 
-Infra shouldn’t slow you down. In May, we shipped the kind of upgrades that help you move fast into productiion and stay in controlwhether you're scaling agents, securing AI behavior, or managing costs across teams.
+Infra shouldn’t slow you down. In May, we shipped the kind of upgrades that help you move fast into production and stay in control, whether you're scaling agents, securing AI behavior, or managing costs across teams.
 
-From deeper integrations with agent frameworks to support for newer models Portkey keeps evolving as the AI infra layer teams can rely on. We also shipped observability upgrades, expanded our provider network, and added tighter controls for cost, access, and security.
+From deeper integrations with agent frameworks to support for newer models, Portkey keeps evolving as the AI infra layer teams can rely on. We also shipped observability upgrades and added tighter controls for cost, access, and security.
 
 Here’s everything new this month:
 

@@ -23,7 +23,7 @@ Here’s everything new this month:
 | **Documentation** | • Guardrail documentation moved under “Integrations”<br/>• New solution pages for AWS Bedrock and GovCloud<br/>• Cookbook: OpenAI Computer Use tool<br/>• Cookbook: Optimizing Prompts using Llama Prompt Ops |
 ---
 
-## AI agent infrastructure
+## AI Agent Infrastructure
 AI agent frameworks are helping teams prototype faster, but taking agents to production requires real infrastructure. Portkey integrates with leading frameworks to bring interoperability, observability, reliability, and cost management to your agent workflows.
 
 **PydanticAI**
@@ -84,30 +84,46 @@ Our customers are vouching for it!
 <img width="700" src="/images/changelog/testimonial3.png" />
 </Frame>
 
-Working with Azure? Read more [here](https://portkey.ai/for/azure)
-
-**Claude Code**
+<Card horizontal title="Working with Azure? Read more here." href="https://portkey.ai/for/azure">
+</Card>
 
-Bring visibility, governance, and control to Anthropic’s agentic coding assistant with Portkey.
+### Portkey for AI Tools
 
-With this integration, you can:
+<CardGroup cols={2}>
 
-- Avoid system overload by enforcing rate limits
-- Monitor usage by tagging and filtering with metadata (e.g., user ID, workspace)
-- Debug and trace issues faster with detailed logs for every interaction
-- Share controlled, secure access by issuing virtual API keys per user
-- Use Claude code in your existing AWS Bedrock or Vertex AI setup, with granular governance and access control
+<Card
+  title="Claude Code"
+  icon="terminal"
+  href="/docs/integrations/libraries/claude-code"
+>
+Bring enterprise-grade visibility, governance, and access control to Anthropic’s agentic coding assistant. Enforce rate limits, monitor usage with rich metadata, debug faster with detailed logs, and issue virtual keys for secure access across teams and infrastructures (Bedrock, Vertex AI).
+</Card>
 
-Start using Portkey with [Claude Code](https://portkey.ai/docs/integrations/libraries/claude-code)
+<Card
+  title="Cline"
+  icon="code"
+  href="/docs/integrations/libraries/cline"
+>
+Supercharge your AI-powered terminal with unified logging, granular cost tracking, access controls, and advanced observability. Portkey lets you audit every prompt, tool invocation, and generation for full developer productivity oversight.
+</Card>
 
-**AI coding assistants**
+<Card
+  title="Roo Code"
+  icon="rocket"
+  href="/docs/integrations/libraries/roo-code"
+>
+Add security, compliance, and real-time analytics to your code assistant workflows. Track usage, control spend, and manage access across all Roo deployments—ensuring safe and optimized coding environments at scale.
+</Card>
 
-Plug Portkey into [Cline](https://portkey.ai/docs/integrations/libraries/cline) or [Roo Code](https://portkey.ai/docs/integrations/libraries/roo-code) and enable:
+<Card
+  title="Goose"
+  icon="feather"
+  href="/docs/integrations/libraries/goose"
+>
+Enable enterprise features in Goose—AI code review and generation—by routing through Portkey. Gain full observability, cost controls, and secure team access for responsible and accountable AI coding, with seamless integration into your workflows.
+</Card>
 
-- Access to the latest models from OpenAI, Anthropic, Mistral, and more
-- Full observability—log every prompt, tool use, and response with metadata
-- Access control with scoped API keys and JWT-based authentication
-- Built-in governance and cost tracking per user, project, or team
+</CardGroup>
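All four tools plug in through the same underlying pattern: point the tool's OpenAI-compatible client at Portkey's gateway and attach a Portkey API key, a virtual key, and request metadata so every generation is logged, rate-limited, and attributable. Below is a minimal sketch of that pattern using the OpenAI Python SDK with Portkey's `createHeaders` helper; the virtual key, metadata values, and model slug are placeholders rather than values taken from this changelog.

```python
# Minimal sketch: route an OpenAI-compatible client through Portkey's gateway.
# The virtual key, metadata values, and model slug below are placeholders.
from openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

client = OpenAI(
    api_key="not-used",  # the provider credential lives in the Portkey virtual key
    base_url=PORTKEY_GATEWAY_URL,
    default_headers=createHeaders(
        api_key="YOUR_PORTKEY_API_KEY",
        virtual_key="anthropic-vk-xxx",  # scoped, rate-limited key issued per user or team
        metadata={"_user": "dev-42", "workspace": "platform"},  # surfaces in logs and analytics
    ),
)

response = client.chat.completions.create(
    model="claude-sonnet-4-20250514",
    messages=[{"role": "user", "content": "Explain this stack trace."}],
)
print(response.choices[0].message.content)
```

The tool-specific setup (environment variables, settings files, and so on) is covered in the integration docs linked from each card.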
 
 **Multimodal embeddings**
 
@@ -150,21 +166,50 @@ You can now export Portkey analytics to any OpenTelemetry (OTel)-compatible coll
 - Ping messages are removed from streamed responses.
 - Resizing metadata columns in logs
 
-## New Models and Providers
+<CardGroup cols={3}>
+  <Card title="Claude 4">
+    Now live on Portkey for advanced reasoning and coding.
+  </Card>
+  <Card title="Grok 3 & Grok 3 Mini">
+    Available on Azure for high-performance inference.
+  </Card>
+  <Card title="Lepton AI Integration">
+    Integrate Lepton AI into your Portkey workflows.
+  </Card>
+  <Card title="Nscale Models">
+    Access Nscale models through Portkey.
+  </Card>
+</CardGroup>
 
-- Claude 4 is now live on Portkey.
-- PDFs can be sent to Claude via Anthropic and Bedrock.
-- OpenAI’s Computer Use Tool works via the Responses API.
-- Grok 3 and Grok 3 Mini are available on Azure.
-- Gemini 2.5 supports Thinking Mode in Prompt Playground.
-- Extended thinking added for Claude 3.7 and Claude 4.
-- Mistral now supports function calling.
-- Image generation is now available on WorkersAI.
-- Lepton AI is now integrated with Portkey.
-- Nscale models can be accessed via Portkey.
-- Tool calling is live for Mistral and OpenRouter.
-- MIME types are now handled for Vertex and Google.
-- PDFs are supported via Anthropic and Bedrock routes.
+<CardGroup cols={3}>
+  <Card title="PDF Support for Claude">
+    Send PDFs to Claude via Anthropic and Bedrock.
+  </Card>
+  <Card title="OpenAI Computer Use Tool">
+    Access Computer Use Tool via the Responses API.
+  </Card>
+  <Card title="Gemini 2.5 Thinking Mode">
+    Thinking Mode now supported in Prompt Playground.
+  </Card>
+  <Card title="Extended Thinking for Claude">
+    Claude 3.7 and Claude 4 support extended thinking.
+  </Card>
+  <Card title="Mistral Function Calling">
+    Mistral now supports function calling.
+  </Card>
+  <Card title="WorkersAI Image Generation">
+    Generate images directly using WorkersAI.
+  </Card>
+  <Card title="Tool Calling for Mistral & OpenRouter">
+    Tool calling now live for Mistral and OpenRouter.
+  </Card>
+  <Card title="MIME Type Support">
+    MIME types now handled for Vertex and Google.
+  </Card>
+  <Card title="PDFs via Anthropic/Bedrock">
+    PDF routes available via Anthropic and Bedrock.
+  </Card>
+</CardGroup>
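For reference, calling one of the newly supported models through the Portkey SDK looks roughly like the sketch below; the virtual key and the Claude 4 model slug are illustrative placeholders, not values confirmed by this changelog.

```python
# Illustrative sketch: calling a newly added model (e.g. Claude 4) via the Portkey SDK.
# "anthropic-vk-xxx" and the model slug are placeholders; use the IDs from your own setup.
from portkey_ai import Portkey

portkey = Portkey(
    api_key="YOUR_PORTKEY_API_KEY",
    virtual_key="anthropic-vk-xxx",  # virtual key holding the Anthropic (or Bedrock/Vertex) credentials
)

completion = portkey.chat.completions.create(
    model="claude-opus-4-20250514",  # assumed Claude 4 slug; check the model list for your provider
    max_tokens=512,
    messages=[{"role": "user", "content": "Summarize the trade-offs of event sourcing."}],
)
print(completion.choices[0].message.content)
```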
 
 
 ## Guardrails
@@ -177,31 +222,20 @@ You can now export Portkey analytics to any OpenTelemetry (OTel)-compatible coll
 
 - **Model whitelist guardrail**: Restrict or deny specific models at the org, environment, or request level using a flexible whitelist/blacklist guardrail.
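Like other Portkey guardrails, the model whitelist is created in the dashboard and then referenced from a gateway config that is attached to requests. The sketch below shows only that request-side wiring; the `input_guardrails` key and the guardrail ID are assumptions and placeholders, so confirm the exact config schema against the guardrails documentation.

```python
# Rough sketch of attaching a guardrail-bearing config to a request.
# The config shape ("input_guardrails") and the guardrail ID are assumptions,
# not the confirmed schema; create the guardrail in the Portkey dashboard first.
import json
from portkey_ai import Portkey

config = {
    "input_guardrails": ["pg-model-allowlist-xxx"],  # hypothetical model-whitelist guardrail ID
}

portkey = Portkey(
    api_key="YOUR_PORTKEY_API_KEY",
    virtual_key="openai-vk-xxx",   # placeholder virtual key
    config=json.dumps(config),     # a saved config ID (e.g. "pc-...") can be passed here instead
)

response = portkey.chat.completions.create(
    model="gpt-4o",  # requests for models outside the whitelist would be blocked by the guardrail
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```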
 
-## Documentation and Guides
-
-**Optimizing Prompts using LLama Prompt Ops**
-
-Need to try out or switch to the latest Llama models? There's an easier way to do it.
-Llama Prompt Ops transforms prompts that work well with other LLMs into ones that are optimized specifically for Llama models. This helps you get better performance and more reliable results without having to rewrite everything yourself.
-
-If you work in customer support, we've put together a helpful guide that will show you how to build a system that analyzes support messages for urgency and sentiment, and helps categorize them properly.
+## Resources
 
-Check it out [here](https://portkey.ai/docs/guides/prompts/llama-prompts)
+**Llama Prompt Ops: Optimizing Prompts**
 
-**OpenAI’s Computer Use tool**
+Looking to upgrade to the latest Llama models? Llama Prompt Ops makes it easy—transform your existing prompts for optimal performance with Llama models automatically, no manual rewriting needed.
 
-Build production-grade browser automation with enterprise-level controls. Our latest cookbook shows you how to:
-
-- Route and monitor Computer Use API calls
-- Build a complete Playwright-based browser automation solution
-- Add observability, logging, and cost controls with Portkey
+For customer support teams, we provide a comprehensive guide to building systems that analyze support messages for urgency, sentiment, and categorization.
 
-[Explore OpenAI Computer use tool](https://portkey.ai/docs/guides/use-cases/openai-computer-use)
+[Read the Llama Prompt Ops guide](https://portkey.ai/docs/guides/prompts/llama-prompts)
 
-**Other updates**
+**More Resources**
 
-- Guardrail documentation moved under “Integrations”.
-- Expanded guides for agent frameworks like CrewAI and LangGraph
+- Guardrail documentation is now located under “Integrations”.
+- Expanded guides for agent frameworks, including CrewAI and LangGraph.
 
 
 ## Customer love!
