This MCP server bridges Claude Code to OpenRouter, allowing you to query other AI models (GPT-5.2-Codex, Gemini, DeepSeek, Llama, etc.) without leaving your Claude session.
Before installing, you need:
| Requirement | Description | Get it at |
|---|---|---|
| Claude Code CLI | Requires a Pro or Max subscription OR an Anthropic API key | claude.ai/code |
| Node.js 18+ | JavaScript runtime | nodejs.org |
| OpenRouter account | With credits for API usage | openrouter.ai |
| OpenRouter API key | For authenticating requests | openrouter.ai/keys |
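You can verify the Node.js requirement from your shell before running setup (a quick sketch; `setup.sh` performs a similar check itself):

```shell
# Check that the installed Node.js major version is at least 18.
node_major() {
  # Strip the leading "v" and everything after the first dot, e.g. "v18.20.3" -> "18"
  printf '%s' "$1" | sed 's/^v\([0-9][0-9]*\)\..*/\1/'
}

ver=$(node --version 2>/dev/null || echo "v0.0.0")
if [ "$(node_major "$ver")" -ge 18 ]; then
  echo "Node.js OK ($ver)"
else
  echo "Node.js 18+ required (found $ver)" >&2
fi
```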
```shell
# Clone and set up
git clone <this-repo>
cd Claude_GPT_MCP
./setup.sh
```

The setup script will:
- Check Node.js version
- Install dependencies
- Build the TypeScript
- Prompt for your OpenRouter API key
- Show the Claude Code configuration
```shell
./setup.sh              # Fresh install (build + prompt for key)
./setup.sh --set-key    # Add or change API key
./setup.sh --show-key   # Show current key (masked: sk-or-v1...xxxx)
./setup.sh --remove-key # Remove stored API key
./setup.sh --help       # Show help
```

Add to `~/.claude.json`:
```json
{
  "mcpServers": {
    "openrouter": {
      "command": "node",
      "args": ["/path/to/Claude_GPT_MCP/dist/index.js"]
    }
  }
}
```

Then restart Claude Code: `claude --mcp-debug`
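If you prefer not to store the key in your shell rc file, MCP server entries can also pass environment variables directly. This assumes the server reads `OPENROUTER_API_KEY` from its environment, which is the usual convention:

```json
{
  "mcpServers": {
    "openrouter": {
      "command": "node",
      "args": ["/path/to/Claude_GPT_MCP/dist/index.js"],
      "env": {
        "OPENROUTER_API_KEY": "sk-or-v1-..."
      }
    }
  }
}
```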
Once configured, Claude has access to these tools:
Query any OpenRouter model:
"Ask GPT-5.2-Codex to review this function for edge cases"
"Get Gemini's opinion on this architecture"
"Ask DeepSeek for an alternative implementation"
List available models with pricing:
"What models are available on OpenRouter?"
"List models that match 'llama'"
Set your preferred default model:
"Set my default model to GPT-5.2-Codex"
View current configuration:
"Show my OpenRouter config"
Create custom model shortcuts:
"Add a shortcut 'fast' for openai/gpt-4o-mini"
Manage your favorites list:
"Add claude-3-opus to my favorites"
If you copy .claude/commands/ to your project, you get:
| Command | Description |
|---|---|
| `/models` | List available OpenRouter models |
| `/models gpt` | Filter models by name |
| `/model` | Show current config |
| `/model GPT-5.2-Codex` | Set default model |
| Shortcut | OpenRouter Model ID |
|---|---|
| `GPT-5.2-Codex` | `openai/gpt-5.2-codex` |
| `gpt-4-turbo` | `openai/gpt-4-turbo` |
| `claude-3-opus` | `anthropic/claude-3-opus` |
| `claude-3-sonnet` | `anthropic/claude-3-sonnet` |
| `gemini-pro` | `google/gemini-pro` |
| `deepseek-chat` | `deepseek/deepseek-chat` |
| `llama-3-70b` | `meta-llama/llama-3-70b-instruct` |
You can also use any full OpenRouter model ID directly.
| File | Purpose |
|---|---|
| `~/.config/openrouter-mcp/config.json` | User preferences (default model, favorites, shortcuts) |
| `~/.bashrc` or `~/.zshrc` | API key environment variable |
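The preference file is plain JSON. Its exact schema is defined by the server, but a representative shape (field names here are assumed for illustration only) might look like:

```json
{
  "defaultModel": "openai/gpt-5.2-codex",
  "favorites": ["anthropic/claude-3-opus", "deepseek/deepseek-chat"],
  "shortcuts": {
    "fast": "openai/gpt-4o-mini"
  }
}
```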
You: "I'm not sure if this caching strategy is optimal. Ask GPT-5.2-Codex for a second opinion."
Claude: [Uses ask_model tool with your code as context]
Claude: "GPT-5.2-Codex suggests using an LRU cache instead of TTL-based expiration because..."
```shell
./setup.sh --show-key   # Check if key is set
./setup.sh --set-key    # Set/update the key
source ~/.bashrc        # Reload shell config
claude --mcp-debug      # See detailed MCP loading logs
```

Use `list_models` to see available models, or check openrouter.ai/models.
OpenRouter enforces per-model rate limits. If you hit one, wait a moment before retrying, or switch to a different model.
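A simple client-side mitigation is exponential backoff. This is a generic sketch, not part of the server; the commented `curl` call shows how it might wrap a direct OpenRouter request:

```shell
# Retry a command with exponential backoff; useful when a model returns HTTP 429.
retry_with_backoff() {
  local attempts=$1; shift
  local delay=1
  local n=1
  while true; do
    "$@" && return 0
    if [ "$n" -ge "$attempts" ]; then
      echo "giving up after $attempts attempts" >&2
      return 1
    fi
    sleep "$delay"
    delay=$((delay * 2))   # 1s, 2s, 4s, ...
    n=$((n + 1))
  done
}

# Example: retry a direct OpenRouter API call up to 3 times
# (assumes OPENROUTER_API_KEY is exported in your shell):
# retry_with_backoff 3 curl -sf https://openrouter.ai/api/v1/models \
#   -H "Authorization: Bearer $OPENROUTER_API_KEY"
```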