diff --git a/Addcree.png b/Addcree.png
new file mode 100644
index 00000000..14d0b02f
Binary files /dev/null and b/Addcree.png differ
diff --git a/Screenshot2025-07-21at5.29.57PM.png b/Screenshot2025-07-21at5.29.57PM.png
new file mode 100644
index 00000000..4fc5185f
Binary files /dev/null and b/Screenshot2025-07-21at5.29.57PM.png differ
diff --git a/Screenshot2025-07-21at5.31.49PM.png b/Screenshot2025-07-21at5.31.49PM.png
new file mode 100644
index 00000000..55c15975
Binary files /dev/null and b/Screenshot2025-07-21at5.31.49PM.png differ
diff --git a/Screenshot2025-07-21at5.34.13PM.png b/Screenshot2025-07-21at5.34.13PM.png
new file mode 100644
index 00000000..14d0b02f
Binary files /dev/null and b/Screenshot2025-07-21at5.34.13PM.png differ
diff --git a/Screenshot2025-07-21at5.36.14PM.png b/Screenshot2025-07-21at5.36.14PM.png
new file mode 100644
index 00000000..76525777
Binary files /dev/null and b/Screenshot2025-07-21at5.36.14PM.png differ
diff --git a/Screenshot2025-07-21at5.39.59PM.png b/Screenshot2025-07-21at5.39.59PM.png
new file mode 100644
index 00000000..ce443ceb
Binary files /dev/null and b/Screenshot2025-07-21at5.39.59PM.png differ
diff --git a/images/Screenshot2025-07-21at5.29.57PM.png b/images/Screenshot2025-07-21at5.29.57PM.png
new file mode 100644
index 00000000..4fc5185f
Binary files /dev/null and b/images/Screenshot2025-07-21at5.29.57PM.png differ
diff --git a/images/Screenshot2025-07-21at5.31.49PM.png b/images/Screenshot2025-07-21at5.31.49PM.png
new file mode 100644
index 00000000..55c15975
Binary files /dev/null and b/images/Screenshot2025-07-21at5.31.49PM.png differ
diff --git a/images/Screenshot2025-07-21at5.36.14PM.png b/images/Screenshot2025-07-21at5.36.14PM.png
new file mode 100644
index 00000000..76525777
Binary files /dev/null and b/images/Screenshot2025-07-21at5.36.14PM.png differ
diff --git a/images/Screenshot2025-07-21at5.39.59PM.png b/images/Screenshot2025-07-21at5.39.59PM.png
new file mode 100644
index 00000000..ce443ceb
Binary files /dev/null and b/images/Screenshot2025-07-21at5.39.59PM.png differ
diff --git a/images/providersandmodels.gif b/images/providersandmodels.gif
new file mode 100644
index 00000000..c525743f
Binary files /dev/null and b/images/providersandmodels.gif differ
diff --git a/product/ai-gateway/mcp-connector.mdx b/product/ai-gateway/mcp-connector.mdx
new file mode 100644
index 00000000..7003325a
--- /dev/null
+++ b/product/ai-gateway/mcp-connector.mdx
@@ -0,0 +1,289 @@
+---
+title: "MCP Connector"
+description: "Portkey’s Model Context Protocol (MCP) connector lets you connect to remote MCP servers directly from the Chat Completions API, without a separate MCP client."
+---
+
+[Model Context Protocol](https://modelcontextprotocol.io/introduction) (MCP) is an open protocol that standardizes how applications provide tools and context to LLMs. The MCP tool in the Chat Completions API lets developers give the model access to tools hosted on **remote MCP servers**: servers maintained by developers and organizations across the internet that expose their tools over the protocol.
+
+**Key features**
+
+- **Direct API integration**: Connect to MCP servers without implementing an MCP client
+- **Tool calling support**: Access MCP tools through the Chat Completions API
+- **OAuth authentication**: Support for OAuth Bearer tokens for authenticated servers
+- **Multiple servers**: Connect to multiple MCP servers in a single request
+
+
+ **Limitations**
+
+ - Of the feature set of the [**MCP specification**](https://modelcontextprotocol.io/introduction#explore-mcp), only [**tool calls**](https://modelcontextprotocol.io/docs/concepts/tools) are currently supported.
+ - The server must be publicly exposed through HTTP (supports both Streamable HTTP and SSE transports). Local STDIO servers cannot be connected directly.
+ - The MCP connector is currently not supported on the Completions or Messages endpoints.
+
+
+## Adding MCP Tools
+
+Calling a remote MCP server with the Chat Completions API is straightforward. For example, here's how you can use the [DeepWiki](https://deepwiki.com/) MCP server to ask questions about nearly any public GitHub repository.
+
+
+
+```bash cURL
+curl https://api.portkey.ai/v1/chat/completions \
+ -H "Content-Type: application/json" \
+ -H "Authorization: Bearer $PORTKEY_API_KEY" \
+ -d '{
+ "model": "@openai-prod/gpt-4.1",
+ "tools": [
+ {
+ "type": "mcp",
+ "server_label": "deepwiki",
+ "server_url": "https://mcp.deepwiki.com/mcp",
+ "require_approval": "never"
+ }
+ ],
+ "messages": [{
+ "content": "What transport protocols are supported in the 2025-03-26 version of the MCP spec?",
+ "role": "user"
+ }]
+}'
+```
+
+
+```javascript JavaScript SDK
+import { Portkey } from 'portkey-ai';
+const client = new Portkey({ apiKey: "PORTKEY_API_KEY" });
+
+const resp = await client.chat.completions.create({
+ model: '@openai-prod/gpt-4o',
+ messages: [{ role: 'user', content: 'What transport protocols are supported in the 2025-03-26 version of the MCP spec?' }],
+ tools: [{
+ "type": "mcp",
+ "server_label": "deepwiki",
+ "server_url": "https://mcp.deepwiki.com/mcp",
+ "require_approval": "never"
+ }]
+});
+```
+
+
+```python Python SDK
+from portkey_ai import Portkey
+
+portkey = Portkey(api_key="PORTKEY_API_KEY")
+
+resp = portkey.chat.completions.create(
+ model="@openai-prod/gpt-4o",
+ messages=[{"role":"user","content":"What transport protocols are supported in the 2025-03-26 version of the MCP spec?"}],
+ tools=[{
+ "type": "mcp",
+ "server_label": "deepwiki",
+ "server_url": "https://mcp.deepwiki.com/mcp",
+ "require_approval": "never"
+ }]
+)
+```
+
+
+```python OpenAI Python SDK
+from openai import OpenAI
+
+client = OpenAI(api_key="PORTKEY_API_KEY", base_url="https://api.portkey.ai/v1")
+
+client.chat.completions.create(
+ model="@openai-prod/gpt-4o",
+ messages=[…],
+ tools=[{
+ "type": "mcp",
+ "server_label": "deepwiki",
+ "server_url": "https://mcp.deepwiki.com/mcp",
+ "require_approval": "never"
+ }]
+)
+```
+
+
+```javascript OpenAI JS SDK
+import OpenAI from 'openai';
+
+const openai = new OpenAI({
+ apiKey: "PORTKEY_API_KEY",
+ baseURL: "https://api.portkey.ai/v1"
+});
+
+const completion = await openai.chat.completions.create({
+ model: "@openai-prod/gpt-4o",
+ messages: [{ role: 'user', content: 'What transport protocols are supported in the 2025-03-26 version of the MCP spec?' }],
+ tools: [{
+ "type": "mcp",
+ "server_label": "deepwiki",
+ "server_url": "https://mcp.deepwiki.com/mcp",
+ "require_approval": "never"
+ }]
+});
+```
+
+
+
+## MCP Tool Configuration
+
+Each tool with `type` set to `mcp` supports the following configuration fields:
+
+```json
+{
+ "type": "mcp",
+ "server_label": "deepwiki",
+ "server_url": "https://mcp.deepwiki.com/mcp",
+ "require_approval": "never",
+ "allowed_tools": ["ask_question"]
+}
+```
+
+### Field Descriptions
+
+| Property | Type | Required | Description |
+| ---------------- | ------ | -------- | --------------------------------------------------------------------------------------------------------------- |
+| type | string | yes | The type of the tool MUST be `mcp` |
+| server_label     | string | yes      | The name of the MCP server to be called. It must not contain spaces, special characters, or underscores.         |
+| server_url       | string | yes      | The URL of the MCP server. Must start with `https://`                                                            |
+| require_approval | string | no       | Currently, the only accepted value is `"never"`                                                                   |
+| allowed_tools    | array  | no       | List of tool names to allow (by default, all tools are allowed)                                                   |
+| headers          | object | no       | Optional HTTP headers to send to the MCP server, typically used for authentication.                              |
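+
+For instance, here's a minimal Python sketch combining `allowed_tools` and `require_approval` in a request (the `ask_question` tool name comes from the configuration example above):
+
+```python
+from portkey_ai import Portkey
+
+portkey = Portkey(api_key="PORTKEY_API_KEY")
+
+# Only the ask_question tool is offered to the model; any other tools
+# the DeepWiki server exposes are filtered out by allowed_tools.
+resp = portkey.chat.completions.create(
+    model="@openai-prod/gpt-4o",
+    messages=[{"role": "user", "content": "What does the portkey-ai/gateway repo do?"}],
+    tools=[{
+        "type": "mcp",
+        "server_label": "deepwiki",
+        "server_url": "https://mcp.deepwiki.com/mcp",
+        "require_approval": "never",
+        "allowed_tools": ["ask_question"]
+    }]
+)
+```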
+
+## Authentication
+
+Unlike the DeepWiki MCP server, most MCP servers require authentication. The MCP tool in the Chat Completions API lets you flexibly specify headers to include in every request made to a remote MCP server. These headers can carry API keys, OAuth access tokens, or any other authentication scheme the remote MCP server implements.
+
+The most common header used by remote MCP servers is the `Authorization` header. Here's what passing it looks like:
+
+
+
+```bash cURL
+curl https://api.portkey.ai/v1/chat/completions \
+ -H "Content-Type: application/json" \
+ -H "Authorization: Bearer $PORTKEY_API_KEY" \
+ -d '{
+ "model": "@openai-prod/gpt-4.1",
+ "tools": [{
+ "type": "mcp",
+ "server_label": "stripe",
+ "server_url": "https://mcp.stripe.com",
+ "headers": {
+ "Authorization": "Bearer $STRIPE_API_KEY"
+ }
+ }],
+ "messages": [{
+ "content": "Create a payment link for $20",
+ "role": "user"
+ }]
+}'
+```
+
+
+```javascript JavaScript SDK
+import { Portkey } from 'portkey-ai';
+const client = new Portkey({ apiKey: "PORTKEY_API_KEY" });
+
+const resp = await client.chat.completions.create({
+ model: '@openai-prod/gpt-4o',
+ messages: [{ role: 'user', content: 'Create a payment link for $20' }],
+ tools: [{
+ "type": "mcp",
+ "server_label": "stripe",
+ "server_url": "https://mcp.stripe.com",
+ "headers": {
+ "Authorization": "Bearer $STRIPE_API_KEY"
+ }
+ }]
+});
+```
+
+
+```python Python SDK
+from portkey_ai import Portkey
+
+portkey = Portkey(api_key="PORTKEY_API_KEY")
+
+resp = portkey.chat.completions.create(
+ model="@openai-prod/gpt-4o",
+ messages=[{"role":"user","content":"Create a payment link for $20"}],
+ tools=[{
+ "type": "mcp",
+ "server_label": "stripe",
+ "server_url": "https://mcp.stripe.com",
+ "headers": {
+ "Authorization": "Bearer $STRIPE_API_KEY"
+ }
+ }]
+)
+```
+
+
+```python OpenAI Python SDK
+from openai import OpenAI
+
+client = OpenAI(api_key="PORTKEY_API_KEY", base_url="https://api.portkey.ai/v1")
+
+client.chat.completions.create(
+ model="@openai-prod/gpt-4o",
+ messages=[…],
+ tools=[{
+ "type": "mcp",
+ "server_label": "stripe",
+ "server_url": "https://mcp.stripe.com",
+ "headers": {
+ "Authorization": "Bearer $STRIPE_API_KEY"
+ }
+ }]
+)
+```
+
+
+```javascript OpenAI JS SDK
+import OpenAI from 'openai';
+
+const openai = new OpenAI({
+ apiKey: "PORTKEY_API_KEY",
+ baseURL: "https://api.portkey.ai/v1"
+});
+
+const completion = await openai.chat.completions.create({
+ model: "@openai-prod/gpt-4o",
+ messages: [{ role: 'user', content: 'Create a payment link for $20' }],
+ tools: [{
+ "type": "mcp",
+ "server_label": "stripe",
+ "server_url": "https://mcp.stripe.com",
+ "headers": {
+ "Authorization": "Bearer $STRIPE_API_KEY"
+ }
+ }]
+});
+```
+
+
+
+API consumers are expected to handle the OAuth flow and obtain the access token prior to making the API call, as well as refreshing the token as needed.
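+
+As a rough sketch, that flow might look like the following (the token endpoint, client credentials, and MCP server URL here are hypothetical placeholders; use whatever OAuth flow your server actually implements):
+
+```python
+import requests
+from portkey_ai import Portkey
+
+# Hypothetical client-credentials exchange; substitute your server's
+# real OAuth token endpoint and credentials.
+token_resp = requests.post(
+    "https://auth.example.com/oauth/token",
+    data={
+        "grant_type": "client_credentials",
+        "client_id": "YOUR_CLIENT_ID",
+        "client_secret": "YOUR_CLIENT_SECRET"
+    }
+)
+access_token = token_resp.json()["access_token"]
+
+# Pass the freshly obtained token to the MCP server via headers.
+portkey = Portkey(api_key="PORTKEY_API_KEY")
+resp = portkey.chat.completions.create(
+    model="@openai-prod/gpt-4o",
+    messages=[{"role": "user", "content": "List my recent invoices"}],
+    tools=[{
+        "type": "mcp",
+        "server_label": "example",
+        "server_url": "https://mcp.example.com/mcp",
+        "headers": {"Authorization": f"Bearer {access_token}"}
+    }]
+)
+```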
+
+## Multiple MCP Servers
+
+You can connect to multiple MCP servers by including multiple objects in the `tools` array:
+
+```json
+[{
+ "type": "mcp",
+ "server_label": "stripe",
+ "server_url": "https://mcp.stripe.com",
+ "headers": {
+ "Authorization": "Bearer $STRIPE_API_KEY"
+ }
+}, {
+ "type": "mcp",
+ "server_label": "deepwiki",
+ "server_url": "https://mcp.deepwiki.com/mcp"
+}]
+```
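+
+A single request can then draw on tools from both servers. Here's a sketch reusing the array above (the prompt is illustrative):
+
+```python
+from portkey_ai import Portkey
+
+portkey = Portkey(api_key="PORTKEY_API_KEY")
+
+# The model may call Stripe and DeepWiki tools in the same conversation;
+# server_label keeps the two tool namespaces distinct in the response.
+resp = portkey.chat.completions.create(
+    model="@openai-prod/gpt-4o",
+    messages=[{"role": "user", "content": "Create a $20 payment link, then explain what the stripe/stripe-node repo does."}],
+    tools=[
+        {
+            "type": "mcp",
+            "server_label": "stripe",
+            "server_url": "https://mcp.stripe.com",
+            "headers": {"Authorization": "Bearer $STRIPE_API_KEY"}
+        },
+        {
+            "type": "mcp",
+            "server_label": "deepwiki",
+            "server_url": "https://mcp.deepwiki.com/mcp"
+        }
+    ]
+)
+```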
+
+## Observability
+
+Every request the gateway makes to MCP servers and LLMs is logged and stitched into a trace, so you can follow the full flow of execution and get deep observability into the MCP agent loop.
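+
+To make related requests easy to find, you can group them under a single trace. A minimal sketch, assuming the Portkey SDK's `trace_id` request option (the ID value is arbitrary):
+
+```python
+from portkey_ai import Portkey
+
+portkey = Portkey(api_key="PORTKEY_API_KEY")
+
+# Tag the request with a trace ID so the LLM call and the MCP tool
+# calls made on its behalf appear under one trace in the logs.
+resp = portkey.with_options(trace_id="mcp-demo-001").chat.completions.create(
+    model="@openai-prod/gpt-4o",
+    messages=[{"role": "user", "content": "What does the portkey-ai/gateway repo do?"}],
+    tools=[{
+        "type": "mcp",
+        "server_label": "deepwiki",
+        "server_url": "https://mcp.deepwiki.com/mcp",
+        "require_approval": "never"
+    }]
+)
+```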
+
\ No newline at end of file
diff --git a/product/model-catalog.mdx b/product/model-catalog.mdx
index 5b217a4c..6e27d1ce 100644
--- a/product/model-catalog.mdx
+++ b/product/model-catalog.mdx
@@ -1,152 +1,306 @@
---
-title: Model Catalog
-description: Explore and query every AI model available to your workspace, with instant code snippets for all supported providers.
+title: "Model Catalog"
+description: "A single pane to view and manage every AI provider and model in your organization. It merges and supersedes Virtual Keys, providing centralized governance, discovery, and usage."
+sidebarTitle: "Model Catalog"
---
-The **Model Catalog** is the evolution of Virtual Keys, providing a centralized and powerful way to manage, discover, and use AI models within your workspace. It consists of two main sections: **AI Providers**, where you manage your connections, and **Models**, where you explore what you can use.
+The Model Catalog is a centralized hub for viewing and managing all AI providers and models within your organization. It serves as the evolution of Virtual Keys, providing a more powerful and streamlined way to control your AI resources.
-### **How it Works: Inheritance from the Organization**
+It abstracts raw API keys and scattered environment variables into governed Provider Integrations and Models.
-The most significant change with the Model Catalog is the concept of inheritance. Think of it this way:
+
+ **Upgrading from Virtual Keys**\
+ The Model Catalog upgrades the Virtual Key experience by introducing a centralized, organization-level management layer, offering advantages like:
-1. Your **Organization Admin** creates a master **Integration** at the company level (e.g., for "Azure Production"). They add the credentials and can set default budgets, rate limits, and an allow-list of approved models for that integration.
-2. When they provision this integration to your workspace, a corresponding **AI Provider** is automatically created in your Model Catalog.
-3. This new AI Provider in your workspace *inherits* all the settings from the organization-level integration, including its credentials, model access, and spending limits.
+ - Centralized provider and model management: no more duplicate configs across workspaces.
+ - Fine-grained control: budgets, rate limits, and model allow-lists at both the org and workspace level.
+ - Inline usage: just pass `model="@provider/model_slug"`.
-This "create once, provision many" approach provides central governance while giving workspaces the flexibility they need.
+ **Need help?** See our [Migration Guide ➜](/support/upgrade-to-model-catalog)
+
----
+
-### **The Model Catalog Experience by Role**
+
+
+ AI Providers represent connections to AI services. Each AI Provider has:
-Your experience with the Model Catalog will differ based on your role within the Portkey organization.
+ - ✔️ A unique slug (e.g., `@openai-prod`)
+ - ✔️ Securely stored credentials
+ - ✔️ Budget and rate limits
+ - ✔️ Access to specific models
+
+
+  The Models section is a gallery of every AI model available to you. Each Model entry includes:
-#### **For Workspace Members (Developers): Discover and Build**
+ - ✔️ Model slug (`@openai-prod/gpt-4o`)
+ - ✔️ Ready-to-use code snippets
+ - ✔️ Input/output token limits
+ - ✔️ Pricing information (where available)
+
+
-As a developer, your experience is simplified and streamlined. You primarily interact with the **Models** tab, which acts as your personal "Model Garden."
+## Adding an AI Provider
-- **Discover Models:** The "Models" tab is a complete gallery of every single model you have been given access to by your admins.
-- **Get Code Snippets:** Click on any model, and Portkey will generate the exact code snippet you need to start making calls, with the correct provider and model slugs already included.
-- **Simplified API Calls:** You can now call any model directly using the `model` parameter, formatted as `@{provider_slug}/{model_slug}`. This lets you switch between providers and models on the fly with a single Portkey API key.
+You can add providers via the **UI** (follow the steps below) or the [**API**](/api-reference/admin-api/introduction).
-```python
-# Switch between a model on OpenAI and one on Bedrock seamlessly
-client.chat.completions.create(
- model="@openai-prod/gpt-4o",
- messages=[...]
-)
+
+
+ 
+
+
+  Choose from the list (OpenAI, Anthropic, etc.) or _Self-hosted / Custom_.
-client.chat.completions.create(
- model="@bedrock-us/claude-3-sonnet-v1",
- messages=[...]
+ 
+
+
+ Choose existing credentials or create new ones.
+
+ 
+
+
+  Choose a name and slug for this provider. The slug cannot be changed later and is used to reference the provider's models.
+
+ 
+
+
+
+## Using Provider Models
+
+Once you have AI Providers set up, you can use their models in your applications through various methods.
+
+### 1. Model String Composition (Recommended)
+
+In Portkey, model strings follow this format:
+
+`@provider_slug/model_name`
+
+
+
+For example: `@openai-prod/gpt-4o`, `@anthropic/claude-sonnet-3.7`, `@bedrock-us/claude-3-sonnet-v1`.
+
+
+
+```javascript JavaScript SDK
+import { Portkey } from 'portkey-ai';
+const client = new Portkey({ apiKey: "PORTKEY_API_KEY" });
+
+const resp = await client.chat.completions.create({
+ model: '@openai-prod/gpt-4o',
+ messages: [{ role: 'user', content: 'Hello!' }]
+});
+```
+
+
+```python Python SDK
+from portkey_ai import Portkey
+
+portkey = Portkey(api_key="PORTKEY_API_KEY")
+
+resp = portkey.chat.completions.create(
+ model="@openai-prod/gpt-4o",
+ messages=[{"role":"user","content":"Hello"}]
)
```
-#### **For Workspace Admins: Manage and Customize**
-As a Workspace Admin, you have more control over the providers within your workspace via the **AI Providers** tab.
+```python OpenAI Python SDK
+from openai import OpenAI
-You will see a list of providers that have been inherited from the organization. From here, you have two primary options when you click **Create Provider**:
+client = OpenAI(api_key="PORTKEY_API_KEY", base_url="https://api.portkey.ai/v1")
-1. **Inherit from an Org Integration:** You can create *additional* providers that are based on an existing org-level integration. This is useful for subdividing access within your team. For example, if your workspace has a $1000 budget on the main "Azure Prod" integration, you could create a new provider from it named "azure-prod-experimental" and give it a stricter $100 budget for a specific project.
-2. **Create a New Workspace-Exclusive Integration:** If your Org Admin has enabled the permission, you can create a brand new integration from scratch. This provider is exclusive to your workspace and functions just like the old Virtual Keys did.
+client.chat.completions.create(
+ model="@openai-prod/gpt-4o",
+ messages=[…]
+)
+```
-#### **For Organization Admins: A View into Workspaces**
-While Org Admins primarily work in the main **[Integrations](/product/integrations)** dashboard, the Model Catalog provides a crucial feedback loop:
+```javascript OpenAI JS SDK
+import OpenAI from 'openai';
-When a Workspace Admin creates a new, workspace-exclusive integration (option #2 above), you gain full visibility. This new integration will automatically appear on your main Integrations page under the **"Workspace-Created"** tab, ensuring you always have a complete audit trail of all provider credentials being used across the organization.
+const openai = new OpenAI({
+ apiKey: "PORTKEY_API_KEY",
+ baseURL: "https://api.portkey.ai/v1"
+});
-### **SDK Integration and Advanced Usage**
+const completion = await openai.chat.completions.create({
+ model: "@openai-prod/gpt-4o",
+ messages: [{ role: 'user', content: 'Hello!' }]
+});
+```
-While the new `model` parameter is the recommended approach for its simplicity, Portkey maintains full backward compatibility and offers flexible integration options for various SDKs.
-
-Remember, the term "Virtual Key" is now synonymous with the **AI Provider slug** found in your Model Catalog.
-
+```bash cURL
+curl https://api.portkey.ai/v1/chat/completions \
+ -H "Content-Type: application/json" \
+ -H "x-portkey-api-key: $PORTKEY_API_KEY" \
+ -d '{
+ "model": "@openai-prod/gpt-4o",
+ "messages": [{"role": "user", "content": "Hello!"}]
+ }'
+```
-#### **Using with Portkey SDK**
+
-You can set a default AI Provider at initialization or override it per request.
+### 2. Using the `provider` header
-
-
+You can also specify the provider in a header instead of in the model string, similar to the earlier Virtual Keys approach. Remember to add `@` before your provider slug.
-```js
-import Portkey from 'portkey-ai';
+
-// Set a default AI Provider for the client
-const portkey = new Portkey({
- apiKey: process.env.PORTKEY_API_KEY,
- provider:"@YOUR_AI_PROVIDER_SLUG"
+```javascript JavaScript SDK
+import { Portkey } from 'portkey-ai';
+const client = new Portkey({
+ apiKey: "PORTKEY_API_KEY",
+ provider: "@openai-prod"
});
-// Or, override it for a specific call
-const chatCompletion = await portkey.chat.completions.create({
- messages: [{ role: 'user', content: 'Say this is a test' }],
- model: '@openai/gpt-4.1',
+const resp = await client.chat.completions.create({
+ model: 'gpt-4o',
+ messages: [{ role: 'user', content: 'Hello!' }]
});
```
-
-
-```python
+```python Python SDK
from portkey_ai import Portkey
-# Set a default AI Provider for the client
portkey = Portkey(
- api_key="PORTKEY_API_KEY",
- provider="@YOUR_AI_PROVIDER_SLUG"
+ api_key="PORTKEY_API_KEY",
+ provider="@openai-prod"
)
-# Or, override it for a specific call
-completion = portkey.chat.completions.create(
- messages = [{ "role": 'user', "content": 'Say this is a test' }],
- model = '@openai/gpt-4.1'
+resp = portkey.chat.completions.create(
+ model="gpt-4o",
+ messages=[{"role":"user","content":"Hello"}]
)
```
-
-
-
-#### **Using with OpenAI SDK**
-Simply point the OpenAI client to Portkey's gateway and pass your AI Provider slug in the headers.
-
-
-
-
-```python
+```python OpenAI Python SDK
from openai import OpenAI
-from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders
+from portkey_ai import createHeaders
client = OpenAI(
- api_key="", # can be left blank
- base_url=PORTKEY_GATEWAY_URL,
- default_headers=createHeaders(
- api_key="PORTKEY_API_KEY",
- provider="@YOUR_AI_PROVIDER_SLUG"
- )
+ api_key="PORTKEY_API_KEY",
+ base_url="https://api.portkey.ai/v1",
+ default_headers=createHeaders(
+ provider="@openai-prod"
+ )
+)
+
+client.chat.completions.create(
+ model="gpt-4o",
+ messages=[…]
)
```
-
-
-```javascript
-import OpenAI from "openai";
-import { PORTKEY_GATEWAY_URL, createHeaders } from 'portkey-ai';
+```javascript OpenAI JS SDK
+import OpenAI from 'openai';
+import { createHeaders } from 'portkey-ai'
const openai = new OpenAI({
- apiKey: '', // can be left blank
- baseURL: PORTKEY_GATEWAY_URL,
- defaultHeaders: createHeaders({
apiKey: "PORTKEY_API_KEY",
- provider:"@YOUR_AI_PROVIDER_SLUG"
- })
+ baseURL: "https://api.portkey.ai/v1",
+ defaultHeaders: {
+ provider: "@openai-prod"
+ }
+});
+
+const completion = await openai.chat.completions.create({
+ model: "gpt-4o",
+ messages: [{ role: 'user', content: 'Hello!' }]
});
```
-
-
\ No newline at end of file
+
+```bash cURL
+curl https://api.portkey.ai/v1/chat/completions \
+ -H "Content-Type: application/json" \
+ -H "x-portkey-api-key: $PORTKEY_API_KEY" \
+  -H "x-portkey-provider: @openai-prod" \
+ -d '{
+ "model": "gpt-4o",
+ "messages": [{"role": "user", "content": "Hello!"}]
+ }'
+```
+
+
+
+### 3. Specify `provider` in the config
+
+Specifying the `provider` in a [config](/product/ai-gateway/configs) sets it for all requests that use the config. You can also use the model string format in `override_params`.
+
+```json
+// Specify provider in the config
+{
+ "provider": "@openai-prod"
+}
+
+// and/or specify the model string in "override_params"
+{
+ "strategy": { "mode": "fallback" },
+ "targets": [{
+ "override_params": { "model": "@openai-prod/gpt-4o" }
+ }, {
+ "override_params": { "model": "@anthropic/claude-sonnet-3.7" }
+ }]
+}
+```
+
+> **Ordering:** `config` (if provided) defines the base; `override_params` merges on top (last write wins for scalars, deep merge for objects like `metadata`).
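+
+As a usage sketch, the fallback config above can be attached at client initialization (assuming the Python SDK's `config` parameter, which also accepts a saved config's ID):
+
+```python
+from portkey_ai import Portkey
+
+# Attach the config inline; each request then inherits the fallback
+# strategy, and override_params supplies the model for each target.
+portkey = Portkey(
+    api_key="PORTKEY_API_KEY",
+    config={
+        "strategy": {"mode": "fallback"},
+        "targets": [
+            {"override_params": {"model": "@openai-prod/gpt-4o"}},
+            {"override_params": {"model": "@anthropic/claude-sonnet-3.7"}}
+        ]
+    }
+)
+
+resp = portkey.chat.completions.create(
+    messages=[{"role": "user", "content": "Hello!"}]
+)
+```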
+
+## Integrations and Workspaces
+
+The Model Catalog enables seamless integration across your organization's structure:
+
+- **Organization-Level**: Create and manage integrations centrally
+- **Workspace-Level**: Provision specific integrations to workspaces
+- **Developer-Level**: Use provisioned models through simple API calls
+
+This hierarchical approach provides central governance while giving workspaces the flexibility they need.
+
+
+  Admins can centrally manage AI service credentials across workspaces through Integrations. Click to learn more.
+
+
+## Budgets & Limits
+
+Portkey allows you to set and manage budget limits at various levels:
+
+- **Workspace-Level**: Set specific budgets for each workspace
+- **Provider-Level**: Set budgets for individual AI Providers
+
+Budget limits can be:
+
+- **Cost-Based**: Set a maximum spend in USD
+- **Token-Based**: Set a maximum number of tokens that can be consumed
+- **Rate-Based**: Set maximum requests per minute/hour/day
+
+You can also configure periodic resets (weekly or monthly) for these limits, which is perfect for managing recurring team budgets.
+
+[Learn more about Budgets and Limits here](/product/administration/enforce-budget-and-rate-limit).
+
+## Model Management
+
+
+  You can manage your own custom models in the Model Catalog, including fine-tuned, custom-hosted, and private models. Click to see how to create custom models.
+
+
+
+  For models with custom pricing arrangements, you can configure input and output token pricing at the integration level. Click to see how to add custom pricing for models.
+
+
+## Self-hosted AI Providers
+
+TBD
\ No newline at end of file
diff --git a/providersandmodels.gif b/providersandmodels.gif
new file mode 100644
index 00000000..c525743f
Binary files /dev/null and b/providersandmodels.gif differ