@@ -20,23 +20,6 @@ Change your base URL and API key:
 const apiKey = process.env.LLM_GATEWAY_API_KEY;
 ```
 
-## Why Teams Switch to LLM Gateway
-
-| Feature                  | OpenRouter                   | LLM Gateway               |
-| ------------------------ | ---------------------------- | ------------------------- |
-| Gateway fee (Pro)        | 5%                           | **2.5%** (50% lower)      |
-| OpenAI-compatible API    | Yes                          | Yes                       |
-| Model coverage           | 300+ models                  | 180+ models               |
-| Analytics dashboard      | Via third-party integrations | **Built-in, per-request** |
-| Required headers         | HTTP-Referer, X-Title        | **Just Authorization**    |
-| Self-hosting option      | No                           | **Yes (AGPLv3)**          |
-| Anthropic-compatible API | No                           | **Yes (/v1/messages)**    |
-| Native AI SDK provider   | Yes                          | Yes                       |
-
-The biggest differences: lower fees, built-in analytics, simpler API (no extra headers), and the option to self-host.
-
-For a detailed breakdown, see [LLM Gateway vs OpenRouter](/compare/open-router).
-
 ## Migration Steps
 
 ### 1. Get Your LLM Gateway API Key
@@ -50,7 +33,7 @@ Sign up at [llmgateway.io/signup](/signup) and create an API key from your dashb
 # OPENROUTER_API_KEY=sk-or-...
 
 # Add LLM Gateway credentials
-export LLM_GATEWAY_API_KEY=llmgtwy_your_key_here
+LLM_GATEWAY_API_KEY=llmgtwy_your_key_here
 ```
 
 ### 3. Update Your Code
@@ -63,13 +46,11 @@ const response = await fetch("https://openrouter.ai/api/v1/chat/completions", {
   method: "POST",
   headers: {
     Authorization: `Bearer ${process.env.OPENROUTER_API_KEY}`,
-    "Content-Type": "application/json",
-    "HTTP-Referer": "https://your-site.com",
-    "X-Title": "Your App Name",
+    "Content-Type": "application/json"
   },
   body: JSON.stringify({
-    model: "anthropic/claude-3-5-sonnet",
-    messages: [{ role: "user", content: "Hello!" }],
+    model: "openai/gpt-5.2",
+    messages: [{ role: "user", content: "Hello!" }]
   }),
 });
 
@@ -78,11 +59,11 @@ const response = await fetch("https://api.llmgateway.io/v1/chat/completions", {
   method: "POST",
   headers: {
     Authorization: `Bearer ${process.env.LLM_GATEWAY_API_KEY}`,
-    "Content-Type": "application/json",
+    "Content-Type": "application/json"
   },
   body: JSON.stringify({
-    model: "anthropic/claude-3-5-sonnet-20241022",
-    messages: [{ role: "user", content: "Hello!" }],
+    model: "gpt-5.2",
+    messages: [{ role: "user", content: "Hello!" }]
   }),
 });
 ```
@@ -95,17 +76,13 @@ import OpenAI from "openai";
 // Before (OpenRouter)
 const client = new OpenAI({
   baseURL: "https://openrouter.ai/api/v1",
-  apiKey: process.env.OPENROUTER_API_KEY,
-  defaultHeaders: {
-    "HTTP-Referer": "https://your-site.com",
-    "X-Title": "Your App Name",
-  },
+  apiKey: process.env.OPENROUTER_API_KEY
 });
 
 // After (LLM Gateway)
 const client = new OpenAI({
   baseURL: "https://api.llmgateway.io/v1",
-  apiKey: process.env.LLM_GATEWAY_API_KEY,
+  apiKey: process.env.LLM_GATEWAY_API_KEY
 });
 
 // Usage remains the same
@@ -153,8 +130,7 @@ Most model names are compatible, but here are some common mappings:
 
 | OpenRouter Model                 | LLM Gateway Model                                                 |
 | -------------------------------- | ----------------------------------------------------------------- |
-| gpt-5.2                          | gpt-5.2 or openai/gpt-5.2                                         |
-| claude-opus-4-5-20251101         | claude-opus-4-5-20251101 or anthropic/claude-opus-4-5-20251101    |
+| openai/gpt-5.2                   | gpt-5.2 or openai/gpt-5.2                                         |
 | gemini/gemini-3-flash-preview    | gemini-3-flash-preview or google-ai-studio/gemini-3-flash-preview |
 | bedrock/claude-opus-4-5-20251101 | claude-opus-4-5-20251101 or aws-bedrock/claude-opus-4-5-20251101  |
 
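Every row of the mapping table above follows the same rule: dropping the OpenRouter-style provider prefix yields an ID that LLM Gateway accepts (the prefixed forms in the right column also work). A minimal sketch of that normalization, using a hypothetical helper name `toGatewayModel` that is not part of either SDK:

```typescript
// Hypothetical helper (not from any SDK): normalize an OpenRouter-style
// model ID to LLM Gateway's plain form by dropping the provider prefix.
// IDs without a prefix pass through unchanged.
function toGatewayModel(model: string): string {
  const slash = model.indexOf("/");
  return slash === -1 ? model : model.slice(slash + 1);
}
```

For example, `toGatewayModel("bedrock/claude-opus-4-5-20251101")` yields `claude-opus-4-5-20251101`, matching the last row of the table.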
@@ -176,15 +152,6 @@ for await (const chunk of stream) {
 }
 ```
 
-## What You Get After Switching
-
-- **50% lower gateway fees** on Pro plan (2.5% vs OpenRouter's 5%)
-- **Per-request analytics** — See exactly what each API call costs
-- **Simpler integration** — No HTTP-Referer or X-Title headers required
-- **Response caching** — Automatic caching reduces costs for repeated requests
-- **Self-hosting option** — Run on your own infrastructure if you need full control
-- **Anthropic API support** — Use `/v1/messages` for Anthropic-native integrations
-
 ## Full Comparison
 
 Want to see a detailed breakdown of all features? Check out our [LLM Gateway vs OpenRouter comparison page](/compare/open-router).