
Commit 1042df2

feat: add 9 new OpenRouter models and update aliases (upstream PR BeehiveInnovations#411)
Cherry-picked model additions from BeehiveInnovations#411. New models: Claude 4.6 (Opus/Sonnet), Gemini 3.1 Pro, GPT-5.4/5.4-Pro, GPT-5.3-Codex, Devstral, DeepSeek V3.2, Qwen 3.5, MiniMax M2.5. Updated generic aliases (opus→4.6, sonnet→4.6, pro→3.1, gpt5→5.4, codex→5.3) with version-specific aliases for backward compatibility. Fixed no-API-keys test to account for ADC fallback from PR BeehiveInnovations#306.
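The alias remapping follows a simple convention: generic aliases (`opus`, `sonnet`, `pro`, `gpt5`, `codex`) track the newest release, while version-suffixed aliases (`opus4.5`, `gemini3.0`, `codex-5.0`) pin the older models for backward compatibility. A minimal sketch of how such a registry could be resolved — the `resolve_model` helper and the trimmed `MODELS` data are illustrative only, not the project's actual code:

```python
# Illustrative subset of conf/openrouter_models.json; the real file has
# many more entries and capability flags.
MODELS = [
    {"model_name": "anthropic/claude-opus-4.5", "aliases": ["opus4.5"]},
    {"model_name": "anthropic/claude-opus-4.6",
     "aliases": ["opus", "opus4.6", "claude-opus"]},
    {"model_name": "openai/gpt-5.4", "aliases": ["gpt5", "gpt5.4", "gpt-5.4"]},
]

def resolve_model(name: str) -> str:
    """Map an alias or canonical name (case-insensitive) to the canonical model name."""
    needle = name.lower()
    for entry in MODELS:
        if needle == entry["model_name"].lower():
            return entry["model_name"]
        if needle in (alias.lower() for alias in entry["aliases"]):
            return entry["model_name"]
    raise KeyError(f"unknown model or alias: {name}")

print(resolve_model("opus"))     # generic alias now resolves to the 4.6 entry
print(resolve_model("opus4.5"))  # version-specific alias keeps 4.5 reachable
```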
1 parent 74032f4 commit 1042df2

6 files changed: +274 −100 lines

conf/openrouter_models.json

Lines changed: 189 additions & 48 deletions
@@ -27,9 +27,7 @@
   {
     "model_name": "anthropic/claude-opus-4.5",
     "aliases": [
-      "opus",
-      "opus4.5",
-      "claude-opus"
+      "opus4.5"
     ],
     "context_window": 200000,
     "max_output_tokens": 64000,
@@ -41,10 +39,26 @@
     "description": "Claude Opus 4.5 - Anthropic's frontier reasoning model for complex software engineering and agentic workflows",
     "intelligence_score": 18
   },
+  {
+    "model_name": "anthropic/claude-opus-4.6",
+    "aliases": [
+      "opus",
+      "opus4.6",
+      "claude-opus"
+    ],
+    "context_window": 1000000,
+    "max_output_tokens": 128000,
+    "supports_extended_thinking": false,
+    "supports_json_mode": false,
+    "supports_function_calling": false,
+    "supports_images": true,
+    "max_image_size_mb": 5.0,
+    "description": "Claude Opus 4.6 - Anthropic's strongest model for coding, long-running professional tasks, and agentic workflows",
+    "intelligence_score": 18
+  },
   {
     "model_name": "anthropic/claude-sonnet-4.5",
     "aliases": [
-      "sonnet",
       "sonnet4.5"
     ],
     "context_window": 200000,
@@ -57,6 +71,22 @@
     "description": "Claude Sonnet 4.5 - High-performance model with exceptional reasoning and efficiency",
     "intelligence_score": 12
   },
+  {
+    "model_name": "anthropic/claude-sonnet-4.6",
+    "aliases": [
+      "sonnet",
+      "sonnet4.6"
+    ],
+    "context_window": 1000000,
+    "max_output_tokens": 128000,
+    "supports_extended_thinking": false,
+    "supports_json_mode": false,
+    "supports_function_calling": false,
+    "supports_images": true,
+    "max_image_size_mb": 5.0,
+    "description": "Claude Sonnet 4.6 - Frontier Sonnet with coding, agents, and professional task performance",
+    "intelligence_score": 13
+  },
   {
     "model_name": "anthropic/claude-opus-4.1",
     "aliases": [
@@ -104,12 +134,30 @@
   },
   {
     "model_name": "google/gemini-3-pro-preview",
+    "aliases": [
+      "gemini3.0",
+      "gemini-3.0",
+      "pro-openrouter"
+    ],
+    "context_window": 1048576,
+    "max_output_tokens": 65536,
+    "supports_extended_thinking": true,
+    "supports_json_mode": true,
+    "supports_function_calling": true,
+    "supports_images": true,
+    "max_image_size_mb": 20.0,
+    "allow_code_generation": true,
+    "description": "Google's Gemini 3.0 Pro via OpenRouter with vision",
+    "intelligence_score": 17
+  },
+  {
+    "model_name": "google/gemini-3.1-pro-preview",
     "aliases": [
       "pro",
       "gemini-pro",
       "gemini",
       "gemini3",
-      "pro-openrouter"
+      "gemini3.1"
     ],
     "context_window": 1048576,
     "max_output_tokens": 65536,
@@ -119,8 +167,8 @@
     "supports_images": true,
     "max_image_size_mb": 20.0,
     "allow_code_generation": true,
-    "description": "Google's Gemini 3.0 Pro via OpenRouter with vision",
-    "intelligence_score": 18
+    "description": "Google's Gemini 3.1 Pro - Frontier reasoning with enhanced software engineering and agentic capabilities",
+    "intelligence_score": 19
   },
   {
     "model_name": "google/gemini-2.5-pro",
@@ -171,25 +219,6 @@
     "description": "Mistral's largest model (text-only)",
     "intelligence_score": 11
   },
-  {
-    "model_name": "meta-llama/llama-3-70b",
-    "aliases": [
-      "llama",
-      "llama3",
-      "llama3-70b",
-      "llama-70b",
-      "llama3-openrouter"
-    ],
-    "context_window": 8192,
-    "max_output_tokens": 8192,
-    "supports_extended_thinking": false,
-    "supports_json_mode": false,
-    "supports_function_calling": false,
-    "supports_images": false,
-    "max_image_size_mb": 0.0,
-    "description": "Meta's Llama 3 70B model (text-only)",
-    "intelligence_score": 9
-  },
   {
     "model_name": "deepseek/deepseek-r1-0528",
     "aliases": [
@@ -208,23 +237,6 @@
     "description": "DeepSeek R1 with thinking mode - advanced reasoning capabilities (text-only)",
     "intelligence_score": 15
   },
-  {
-    "model_name": "perplexity/llama-3-sonar-large-32k-online",
-    "aliases": [
-      "perplexity",
-      "sonar",
-      "perplexity-online"
-    ],
-    "context_window": 32768,
-    "max_output_tokens": 32768,
-    "supports_extended_thinking": false,
-    "supports_json_mode": false,
-    "supports_function_calling": false,
-    "supports_images": false,
-    "max_image_size_mb": 0.0,
-    "description": "Perplexity's online model with web search (text-only)",
-    "intelligence_score": 9
-  },
   {
     "model_name": "openai/o3",
     "aliases": [
@@ -316,7 +328,8 @@
   {
     "model_name": "openai/gpt-5",
     "aliases": [
-      "gpt5"
+      "gpt-5.0",
+      "gpt5.0"
     ],
     "context_window": 400000,
     "max_output_tokens": 128000,
@@ -327,15 +340,14 @@
     "max_image_size_mb": 20.0,
     "supports_temperature": true,
     "temperature_constraint": "range",
-    "description": "GPT-5 (400K context, 128K output) - Advanced model with reasoning support",
+    "description": "GPT-5.0 (400K context, 128K output) - Advanced model with reasoning support",
     "intelligence_score": 16
   },
   {
     "model_name": "openai/gpt-5.2-pro",
     "aliases": [
       "gpt5.2-pro",
-      "gpt5.2pro",
-      "gpt5pro"
+      "gpt5.2pro"
     ],
     "context_window": 400000,
     "max_output_tokens": 272000,
@@ -352,10 +364,53 @@
     "description": "GPT-5.2 Pro - Advanced reasoning model with highest quality responses (text+image input, text output only)",
     "intelligence_score": 18
   },
+  {
+    "model_name": "openai/gpt-5.4-pro",
+    "aliases": [
+      "gpt5.4-pro",
+      "gpt5.4pro",
+      "gpt5pro"
+    ],
+    "context_window": 1050000,
+    "max_output_tokens": 128000,
+    "supports_extended_thinking": true,
+    "supports_json_mode": true,
+    "supports_function_calling": true,
+    "supports_images": true,
+    "max_image_size_mb": 20.0,
+    "supports_temperature": false,
+    "temperature_constraint": "fixed",
+    "use_openai_response_api": true,
+    "default_reasoning_effort": "high",
+    "allow_code_generation": true,
+    "description": "GPT-5.4 Pro - OpenAI's most advanced model with enhanced reasoning and 1M context window",
+    "intelligence_score": 19
+  },
+  {
+    "model_name": "openai/gpt-5.4",
+    "aliases": [
+      "gpt5",
+      "gpt5.4",
+      "gpt-5.4"
+    ],
+    "context_window": 1050000,
+    "max_output_tokens": 128000,
+    "supports_extended_thinking": true,
+    "supports_json_mode": true,
+    "supports_function_calling": true,
+    "supports_images": true,
+    "max_image_size_mb": 20.0,
+    "supports_temperature": false,
+    "temperature_constraint": "fixed",
+    "default_reasoning_effort": "medium",
+    "allow_code_generation": true,
+    "description": "GPT-5.4 - OpenAI's unified frontier model (1M context, 128K output) combining Codex and GPT capabilities",
+    "intelligence_score": 19
+  },
   {
     "model_name": "openai/gpt-5-codex",
     "aliases": [
-      "codex",
+      "codex-5.0",
       "gpt5codex"
     ],
     "context_window": 400000,
@@ -450,6 +505,28 @@
     "description": "GPT-5.1 Codex (400K context, 128K output) - Agentic coding specialization available through the Responses API",
     "intelligence_score": 19
   },
+  {
+    "model_name": "openai/gpt-5.3-codex",
+    "aliases": [
+      "codex",
+      "codex-5.3",
+      "gpt5.3-codex"
+    ],
+    "context_window": 400000,
+    "max_output_tokens": 128000,
+    "supports_extended_thinking": true,
+    "supports_json_mode": true,
+    "supports_function_calling": true,
+    "supports_images": true,
+    "max_image_size_mb": 20.0,
+    "supports_temperature": false,
+    "temperature_constraint": "fixed",
+    "use_openai_response_api": true,
+    "default_reasoning_effort": "high",
+    "allow_code_generation": true,
+    "description": "GPT-5.3 Codex - OpenAI's most advanced agentic coding model with frontier software engineering performance",
+    "intelligence_score": 19
+  },
   {
     "model_name": "openai/gpt-5.1-codex-mini",
     "aliases": [
@@ -507,6 +584,70 @@
     "temperature_constraint": "range",
     "description": "xAI's Grok 4.1 Fast Reasoning via OpenRouter (2M context) with vision and advanced reasoning",
     "intelligence_score": 15
+  },
+  {
+    "model_name": "deepseek/deepseek-v3.2-exp",
+    "aliases": [
+      "deepseek-v3",
+      "deepseek-v3.2",
+      "dsv3"
+    ],
+    "context_window": 163840,
+    "max_output_tokens": 65536,
+    "supports_extended_thinking": true,
+    "supports_json_mode": true,
+    "supports_function_calling": false,
+    "supports_images": false,
+    "max_image_size_mb": 0.0,
+    "description": "DeepSeek V3.2 Experimental - Strong reasoning capabilities (text-only)",
+    "intelligence_score": 16
+  },
+  {
+    "model_name": "mistralai/devstral-2512",
+    "aliases": [
+      "devstral"
+    ],
+    "context_window": 262144,
+    "max_output_tokens": 32768,
+    "supports_extended_thinking": false,
+    "supports_json_mode": true,
+    "supports_function_calling": true,
+    "supports_images": false,
+    "max_image_size_mb": 0.0,
+    "description": "Devstral 2 - Mistral's 123B parameter model specialized for agentic coding and codebase exploration",
+    "intelligence_score": 15
+  },
+  {
+    "model_name": "qwen/qwen3.5-397b-a17b",
+    "aliases": [
+      "qwen",
+      "qwen3.5"
+    ],
+    "context_window": 262144,
+    "max_output_tokens": 65536,
+    "supports_extended_thinking": true,
+    "supports_json_mode": true,
+    "supports_function_calling": true,
+    "supports_images": true,
+    "max_image_size_mb": 20.0,
+    "description": "Qwen 3.5 397B - Frontier reasoning model with vision, hybrid architecture (text+image+video input)",
+    "intelligence_score": 16
+  },
+  {
+    "model_name": "minimax/minimax-m2.5",
+    "aliases": [
+      "minimax",
+      "m2.5"
+    ],
+    "context_window": 196608,
+    "max_output_tokens": 32768,
+    "supports_extended_thinking": false,
+    "supports_json_mode": true,
+    "supports_function_calling": true,
+    "supports_images": false,
+    "max_image_size_mb": 0.0,
+    "description": "MiniMax M2.5 - SWE-Bench 80.2%, optimized for agent workflows and real-world productivity (API allows up to 196K output)",
+    "intelligence_score": 16
   }
 ]
}

docs/custom_models.md

Lines changed: 13 additions & 8 deletions
@@ -52,18 +52,23 @@ The curated defaults in `conf/openrouter_models.json` include popular entries su
 
 | Alias | Canonical Model | Highlights |
 |-------|-----------------|------------|
-| `opus`, `claude-opus` | `anthropic/claude-opus-4.1` | Flagship Claude reasoning model with vision |
-| `sonnet`, `sonnet4.5` | `anthropic/claude-sonnet-4.5` | Balanced Claude with high context window |
+| `opus`, `claude-opus` | `anthropic/claude-opus-4.6` | Latest Anthropic flagship (1M context, vision). `opus4.5` → 4.5, `opus4.1` → 4.1 |
+| `sonnet` | `anthropic/claude-sonnet-4.6` | Frontier Sonnet (1M context, vision). `sonnet4.5` → 4.5 |
 | `haiku` | `anthropic/claude-3.5-haiku` | Fast Claude option with vision |
-| `pro`, `gemini` | `google/gemini-2.5-pro` | Frontier Gemini with extended thinking |
+| `pro`, `gemini` | `google/gemini-3.1-pro-preview` | Latest Gemini Pro with 1M context, thinking. `gemini3.0` → 3.0 |
 | `flash` | `google/gemini-2.5-flash` | Ultra-fast Gemini with vision |
-| `mistral` | `mistralai/mistral-large-2411` | Frontier Mistral (text only) |
-| `llama3` | `meta-llama/llama-3-70b` | Large open-weight text model |
-| `deepseek-r1` | `deepseek/deepseek-r1-0528` | DeepSeek reasoning model |
-| `perplexity` | `perplexity/llama-3-sonar-large-32k-online` | Search-augmented model |
+| `gpt5`, `gpt5.4` | `openai/gpt-5.4` | Unified frontier model (1M context, 128K output). `gpt5.0` → 5.0 |
+| `gpt5pro` | `openai/gpt-5.4-pro` | Enhanced reasoning variant (1M context). `gpt5.2-pro` → 5.2 Pro |
+| `codex`, `codex-5.3` | `openai/gpt-5.3-codex` | Latest agentic coding model (Responses API). `codex-5.0` → 5.0 |
 | `gpt5.2`, `gpt-5.2`, `5.2` | `openai/gpt-5.2` | Flagship GPT-5.2 with reasoning and vision |
 | `gpt5.1-codex`, `codex-5.1` | `openai/gpt-5.1-codex` | Agentic coding specialization (Responses API) |
-| `codex-mini`, `gpt5.1-codex-mini` | `openai/gpt-5.1-codex-mini` | Cost-efficient Codex variant with streaming |
+| `codex-mini` | `openai/gpt-5.1-codex-mini` | Cost-efficient Codex variant with streaming |
+| `mistral` | `mistralai/mistral-large-2411` | Frontier Mistral (text only) |
+| `devstral` | `mistralai/devstral-2512` | 123B agentic coding model (262K context) |
+| `deepseek-r1` | `deepseek/deepseek-r1-0528` | DeepSeek reasoning model |
+| `deepseek-v3`, `dsv3` | `deepseek/deepseek-v3.2-exp` | DeepSeek V3.2 with strong reasoning (164K context) |
+| `qwen`, `qwen3.5` | `qwen/qwen3.5-397b-a17b` | Frontier 397B MoE reasoning model (262K context) |
+| `minimax`, `m2.5` | `minimax/minimax-m2.5` | SWE-Bench 80.2%, agent-optimized (197K context) |
 
 Consult the JSON file for the full list, aliases, and capability flags. Add new entries as OpenRouter releases additional models.
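Because every alias must resolve to exactly one canonical model, taking an alias over for a newer release (as this commit does with `gpt5pro` and `codex`) requires deleting it from the older entry in the same change. A quick, illustrative way to sanity-check a config like this for alias collisions — `duplicate_aliases` and the trimmed `MODELS` data are a sketch, not part of the project:

```python
from collections import defaultdict

# Trimmed, illustrative subset of the config after this commit.
MODELS = [
    {"model_name": "openai/gpt-5.2-pro", "aliases": ["gpt5.2-pro", "gpt5.2pro"]},
    {"model_name": "openai/gpt-5.4-pro", "aliases": ["gpt5.4-pro", "gpt5.4pro", "gpt5pro"]},
]

def duplicate_aliases(models):
    """Return aliases claimed by more than one model entry."""
    owners = defaultdict(list)
    for entry in models:
        for alias in entry["aliases"]:
            owners[alias.lower()].append(entry["model_name"])
    return {alias: names for alias, names in owners.items() if len(names) > 1}

print(duplicate_aliases(MODELS))  # empty dict: every alias has exactly one owner
```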
