
feat: Add 9 new models, remove 2 obsolete, update aliases #411

Closed
timeleft-- wants to merge 6 commits into BeehiveInnovations:main from MachineWisdomAI:feat/update-openrouter-models-2026-03

Conversation

@timeleft--

Summary

  • 9 new models added: Claude Opus 4.6, Claude Sonnet 4.6, Gemini 3.1 Pro Preview, GPT-5.4 Pro, GPT-5.3 Codex, DeepSeek V3.2 Exp, Devstral 2512, Qwen 3.5 397B, MiniMax M2.5
  • 2 obsolete models removed: meta-llama/llama-3-70b (8K context), perplexity/llama-3-sonar-large-32k-online (legacy)
  • Alias migrations: Generic aliases (opus, sonnet, pro, gpt5pro) moved to newest model versions. Version-specific aliases preserved for backward compatibility (opus4.5, sonnet4.5, gemini3.0, gpt5.2-pro)
  • Version bump: 9.8.2 → 9.9.0 (pyproject.toml, config.py, uv.lock)
  • All model IDs verified against OpenRouter API (/api/v1/models)
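The ID verification step in the summary can be sketched as a simple set comparison. This is an illustrative helper, not the project's actual script; in the real check the live ID list would come from `GET https://openrouter.ai/api/v1/models` (e.g. `[m["id"] for m in response["data"]]`).

```python
def find_unknown_models(configured_ids, live_ids):
    """Return configured model IDs absent from the live OpenRouter catalog.

    Comparison is case-sensitive, since OpenRouter model IDs are.
    """
    live = set(live_ids)
    return sorted(m for m in configured_ids if m not in live)


# Sample data standing in for the live API response:
configured = ["anthropic/claude-opus-4.6", "minimax/minimax-m2.5"]
live = ["anthropic/claude-opus-4.6", "minimax/minimax-m2.5", "openai/gpt-5.4-pro"]
assert find_unknown_models(configured, live) == []
```

Any non-empty result would mean a typo in `conf/openrouter_models.json` or a model that OpenRouter no longer serves.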

Details

New models

| Model | Context | Score | Key capabilities |
| --- | --- | --- | --- |
| anthropic/claude-opus-4.6 | 1M | 18 | Vision |
| anthropic/claude-sonnet-4.6 | 1M | 13 | Vision |
| google/gemini-3.1-pro-preview | 1M | 19 | Thinking, vision, function calling, code-gen |
| openai/gpt-5.4-pro | 1.05M | 19 | Responses API, reasoning, code-gen |
| openai/gpt-5.3-codex | 400K | 19 | Responses API, reasoning, code-gen |
| deepseek/deepseek-v3.2-exp | 164K | 16 | Thinking |
| mistralai/devstral-2512 | 262K | 15 | Function calling, coding |
| qwen/qwen3.5-397b-a17b | 262K | 16 | Thinking, vision, function calling |
| minimax/minimax-m2.5 | 197K | 16 | Function calling, SWE-Bench 80.2% |

Alias migration (backward compatible)

| Generic alias | Old target | New target | Version-specific alias |
| --- | --- | --- | --- |
| opus | claude-opus-4.5 | claude-opus-4.6 | opus4.5 still works |
| sonnet | claude-sonnet-4.5 | claude-sonnet-4.6 | sonnet4.5 still works |
| pro, gemini | gemini-3-pro-preview | gemini-3.1-pro-preview | gemini3.0 still works |
| gpt5pro | gpt-5.2-pro | gpt-5.4-pro | gpt5.2-pro still works |
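The migration scheme above amounts to a two-tier alias map: generic names float to the newest release, version-pinned names stay put. A minimal sketch, with alias names taken from the table but a hypothetical `resolve()` helper (the project's real registry lives in `conf/openrouter_models.json`):

```python
# Generic aliases track the newest version; version-specific aliases
# keep resolving to their original targets for backward compatibility.
ALIASES = {
    "opus": "anthropic/claude-opus-4.6",
    "sonnet": "anthropic/claude-sonnet-4.6",
    "pro": "google/gemini-3.1-pro-preview",
    "gemini": "google/gemini-3.1-pro-preview",
    "gpt5pro": "openai/gpt-5.4-pro",
    # pinned aliases, unchanged by the migration
    "opus4.5": "anthropic/claude-opus-4.5",
    "sonnet4.5": "anthropic/claude-sonnet-4.5",
    "gemini3.0": "google/gemini-3-pro-preview",
    "gpt5.2-pro": "openai/gpt-5.2-pro",
}


def resolve(name: str) -> str:
    """Aliases match case-insensitively; unknown names pass through as model IDs."""
    return ALIASES.get(name.lower(), name)


assert resolve("OPUS") == "anthropic/claude-opus-4.6"      # generic -> newest
assert resolve("opus4.5") == "anthropic/claude-opus-4.5"   # pinned -> unchanged
```

Keeping the pinned entries is what makes the migration non-breaking: existing configs that named `opus4.5` explicitly keep getting exactly that model.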

Test plan

  • All 870 unit tests pass (14 skipped, 0 failures)
  • Alias resolution tests updated and cover all new models
  • Backward compatibility verified (version-specific aliases resolve correctly)
  • No duplicate aliases (76 total, case-insensitive uniqueness enforced)
  • Model IDs verified against live OpenRouter API
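The "case-insensitive uniqueness" item in the test plan can be expressed as a small check. A sketch only; the function name and shape are hypothetical, not the project's test code:

```python
from collections import Counter


def duplicate_aliases(aliases):
    """Return aliases that collide when compared case-insensitively.

    Mirrors the test-plan rule: two aliases differing only in case
    count as a duplicate.
    """
    counts = Counter(a.lower() for a in aliases)
    return sorted(a for a, n in counts.items() if n > 1)


assert duplicate_aliases(["opus", "Opus", "sonnet"]) == ["opus"]
assert duplicate_aliases(["opus", "sonnet", "pro"]) == []
```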

🤖 Generated with Claude Code

New models: Claude Opus 4.6, Claude Sonnet 4.6, Gemini 3.1 Pro,
GPT-5.4 Pro, GPT-5.3 Codex, DeepSeek V3.2, Devstral 2512,
Qwen 3.5 397B, MiniMax M2.5.

Removed: meta-llama/llama-3-70b (8K context, obsolete),
perplexity/llama-3-sonar-large-32k-online (legacy).

Generic aliases (opus, sonnet, pro, gpt5pro) migrated to newest
model versions. Version-specific aliases preserved for backward
compatibility (opus4.5, sonnet4.5, gemini3.0, gpt5.2-pro).

Version bumped to 9.9.0. All 870 tests pass.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request updates the project's AI model registry to incorporate the latest offerings from OpenRouter. It introduces nine new, advanced models, removes two outdated ones, and intelligently reconfigures generic aliases to point to the newest versions while preserving older, version-specific aliases for seamless backward compatibility. The changes aim to enhance the system's capabilities with cutting-edge models and ensure the model configurations remain current and robust, supported by a minor version increment and thorough validation.

Highlights

  • New Model Additions: Added 9 new AI models, including Claude Opus 4.6, Claude Sonnet 4.6, Gemini 3.1 Pro Preview, GPT-5.4 Pro, GPT-5.3 Codex, DeepSeek V3.2 Exp, Devstral 2512, Qwen 3.5 397B, and MiniMax M2.5, significantly expanding the available model capabilities.
  • Obsolete Model Removals: Removed 2 obsolete models, meta-llama/llama-3-70b (8K context) and perplexity/llama-3-sonar-large-32k-online (legacy), to maintain an up-to-date and efficient model list.
  • Alias Migrations and Backward Compatibility: Updated generic aliases (e.g., 'opus', 'sonnet', 'pro', 'gpt5pro') to point to their newest model versions, while preserving version-specific aliases (e.g., 'opus4.5', 'sonnet4.5', 'gemini3.0', 'gpt5.2-pro') to ensure backward compatibility.
  • Version Bump: Bumped the project version from 9.8.2 to 9.9.0 across pyproject.toml, config.py, and uv.lock, indicating a minor feature release.
  • Model ID Verification: Verified all model IDs against the live OpenRouter API (/api/v1/models) to ensure accuracy and functionality of the updated configurations.


Changelog
  • conf/openrouter_models.json
    • Added configurations for 9 new models including Claude Opus 4.6, Gemini 3.1 Pro Preview, and GPT-5.4 Pro.
    • Removed configurations for meta-llama/llama-3-70b and perplexity/llama-3-sonar-large-32k-online.
    • Updated aliases for existing models to shift generic aliases to newer versions while retaining version-specific ones.
  • config.py
    • Updated the project version to 9.9.0.
    • Updated the last update date to 2026-03-06.
  • docs/custom_models.md
    • Updated the model alias table to reflect new generic alias mappings and the inclusion of new models.
    • Removed documentation entries for deprecated models and their aliases.
  • pyproject.toml
    • Updated the project version to 9.9.0.
    • Updated the minimum required Python version to >=3.10.
  • tests/test_custom_provider.py
    • Modified alias resolution tests to reflect the removal of the 'llama' alias and updated 'deepseek' alias behavior.
  • tests/test_model_resolution_bug.py
    • Updated the Gemini alias resolution test to confirm generic 'gemini' and 'pro' aliases resolve to 'google/gemini-3.1-pro-preview', while 'gemini3.0' remains for backward compatibility.
  • tests/test_openrouter_provider.py
    • Updated alias resolution tests to reflect new generic alias targets for 'opus', 'sonnet', 'pro', and 'gemini3.0'.
    • Updated capability tests to reflect the 'opus' alias now pointing to 'anthropic/claude-opus-4.6' with its increased context window.
    • Updated alias resolution for 'sonnet' to point to 'anthropic/claude-sonnet-4.6'.
  • tests/test_openrouter_registry.py
    • Updated alias resolution test cases to include new alias mappings for 'opus', 'sonnet', 'pro', 'gemini3.1', 'gemini3.0', and new model aliases.
    • Updated model capabilities conversion test to reflect 'opus' alias pointing to 'anthropic/claude-opus-4.6' and its new context window.
Activity
  • All 870 unit tests passed, with 14 skipped and 0 failures, ensuring stability.
  • Alias resolution tests were updated and confirmed to cover all newly added models.
  • Backward compatibility for version-specific aliases was thoroughly verified.
  • A check confirmed no duplicate aliases exist across the 76 total aliases.
  • Model IDs were successfully verified against the live OpenRouter API.

Keep version 9.9.0 from our feature branch (upstream synced to 9.8.2).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds 9 new models, removes 2 obsolete models, and updates various aliases and tests accordingly. The changes are extensive and mostly look good. However, I've found a few issues that need attention. In pyproject.toml, the required Python version is bumped to >=3.10 without justification, which is a breaking change. Additionally, in conf/openrouter_models.json, there are a couple of configuration issues: one new model has a max_output_tokens value that leaves no room for an input prompt, and another has a contradictory setting for temperature support. Please see my detailed comments for suggestions.

Comment thread conf/openrouter_models.json Outdated
Comment thread conf/openrouter_models.json Outdated
Comment thread pyproject.toml

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: f428374514


Comment thread pyproject.toml
Comment thread conf/openrouter_models.json
- gpt-5.3-codex: supports_temperature true→false (contradicts fixed constraint)
- minimax-m2.5: max_output_tokens 196608→32768 (practical default; API allows up to 196K)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@timeleft--
Author

Addressed Review Feedback

Fixed in 830f732:

Temperature contradiction (gpt-5.3-codex) ✅

Changed supports_temperature from true to false — contradicted temperature_constraint: "fixed".

Note: The same contradiction exists in pre-existing models (gpt-5-mini, gpt-5-nano, gpt-5.2, gpt-5.1-codex, gpt-5.1-codex-mini). Those are out of scope for this PR but could be a good follow-up.
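The contradiction the reviewer caught is mechanically checkable. Below is a hypothetical validator sketch, using the field names from the review comments (`supports_temperature`, `temperature_constraint`); it is not part of the project's code:

```python
def temperature_fields_consistent(model_cfg: dict) -> bool:
    """A model whose temperature_constraint is "fixed" must not also
    claim supports_temperature == True."""
    if model_cfg.get("temperature_constraint") == "fixed":
        return not model_cfg.get("supports_temperature", False)
    return True


# The flagged gpt-5.3-codex configuration before the fix:
assert not temperature_fields_consistent(
    {"supports_temperature": True, "temperature_constraint": "fixed"}
)
# After commit 830f732:
assert temperature_fields_consistent(
    {"supports_temperature": False, "temperature_constraint": "fixed"}
)
```

Running a check like this over the whole catalog would also surface the pre-existing contradictions noted above (gpt-5-mini, gpt-5-nano, etc.) for the follow-up PR.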

MiniMax M2.5 max_output_tokens ✅

Changed from 196608 to 32768. OpenRouter API reports max_completion_tokens=196608 (matching context_window), but a practical default of 32K leaves room for input prompts. Added note in description field.
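The arithmetic behind the cap, using the numbers from this thread (the helper itself is illustrative):

```python
def input_budget(context_window: int, max_output_tokens: int) -> int:
    """Tokens left for the input prompt once the output reservation is taken."""
    return context_window - max_output_tokens


# MiniMax M2.5: OpenRouter reports max_completion_tokens == context_window,
# which leaves zero tokens for the prompt if used as a default.
assert input_budget(196608, 196608) == 0
# The practical 32K default adopted in the fix:
assert input_budget(196608, 32768) == 163840
```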

requires-python >=3.10 — Intentional, not accidental

This change is necessary: mcp>=1.0.0 (a core dependency) requires Python >=3.10. Running uv lock with >=3.9 fails:

No solution found when resolving dependencies for split (python_full_version == '3.9.*'):
Because mcp>=1.0.0 depends on Python>=3.10...

The project has never actually worked on 3.9 due to this transitive requirement. The requires-python metadata was simply incorrect.

gpt5pro alias cross-provider drift — Acknowledged

Valid observation. conf/openai_models.json still maps gpt5pro → gpt-5.2-pro, and OpenAI native takes priority over OpenRouter. This PR focuses on the OpenRouter catalog; updating the OpenAI native config (adding gpt-5.4-pro) would be a separate PR.

timeleft-- and others added 3 commits March 6, 2026 09:32
10 files were failing the black --check CI step before this PR.
None are files modified by this PR — fixing them here to unblock CI.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Add openai/gpt-5.4 with aliases gpt5, gpt5.4, gpt-5.4
- Move gpt5 alias from openai/gpt-5 to openai/gpt-5.4 (newest base)
- Keep gpt-5.0/gpt5.0 on old openai/gpt-5 for backward compat
- Update tests and docs

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- codex → openai/gpt-5.3-codex (was gpt-5-codex)
- Old gpt-5-codex keeps codex-5.0 for backward compat

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
cchapman added a commit to cchapman/zen-mcp-server that referenced this pull request Mar 10, 2026
…hiveInnovations#411)

Cherry-picked model additions from BeehiveInnovations#411.
New models: Claude 4.6 (Opus/Sonnet), Gemini 3.1 Pro, GPT-5.4/5.4-Pro,
GPT-5.3-Codex, Devstral, DeepSeek V3.2, Qwen 3.5, MiniMax M2.5.
Updated generic aliases (opus→4.6, sonnet→4.6, pro→3.1, gpt5→5.4, codex→5.3)
with version-specific aliases for backward compatibility.
Fixed no-API-keys test to account for ADC fallback from PR BeehiveInnovations#306.
@illera88

ship it! 💯

@JCMais

JCMais commented Mar 30, 2026

@guidedways Can we get this in pls? Looking forward to using gpt 5.4 with consensus / codereview

crowe452 pushed a commit to crowe452/pal-mcp-server that referenced this pull request Mar 31, 2026
…im to 5 core tools

Merged community PR BeehiveInnovations#411 adding GPT-5.4, Gemini 3.1 Pro, Claude Opus 4.6,
Grok 4.1, and 5 other frontier models to the OpenRouter catalog.

Deleted 13 tools, keeping only the 5 we use:
- consensus (multi-model debate)
- chat (direct model access)
- debug (root cause analysis)
- thinkdeep (extended reasoning)
- codereview (structured code review)

Removed: precommit, planner, challenge, apilookup, analyze, refactor,
testgen, secaudit, docgen, tracer, version, listmodels, clink.
Each replaced by existing tools in our stack (Context7, deploy hooks,
EnterPlanMode, CLAUDE.md directives, PreToolUse hook).

9,757 lines removed. Clean, focused fork.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@timeleft--
Author

Closing — all changes from this PR are already merged into the MachineWisdomAI fork and superseded by #430 (April 2026 model refresh).

@timeleft-- timeleft-- closed this Apr 14, 2026