| Provider | Plan | Description | Links |
|---|---|---|---|
| Zhipu AI | GLM CODING PLAN | You've been invited to join the GLM Coding Plan! Enjoy full support for Claude Code, Cline, and 10+ top coding tools — starting at just $3/month. Subscribe now and grab the limited-time deal! | English / 中文 |
| Volcengine | CODING PLAN | Ark Coding Plan supports Doubao, GLM, DeepSeek, Kimi and other models. Compatible with unlimited tools. Subscribe now for an extra 10% off — as low as $1.2/month. The more you subscribe, the more you save! | Link / Code: LXKDZK3W |
AxonHub is the AI gateway that lets you switch between model providers without changing a single line of code.
Whether you're using OpenAI SDK, Anthropic SDK, or any AI SDK, AxonHub transparently translates your requests to work with any supported model provider. No refactoring, no SDK swaps—just change a configuration and you're done.
What it solves:
- 🔒 Vendor lock-in - Switch from GPT-4 to Claude or Gemini instantly
- 🔧 Integration complexity - One API format for 10+ providers
- 📊 Observability gap - Complete request tracing out of the box
- 💸 Cost control - Real-time usage tracking and budget management
| Feature | What You Get |
|---|---|
| 🔄 Any SDK → Any Model | Use OpenAI SDK to call Claude, or Anthropic SDK to call GPT. Zero code changes. |
| 🔍 Full Request Tracing | Complete request timelines with thread-aware observability. Debug faster. |
| 🔐 Enterprise RBAC | Fine-grained access control, usage quotas, and data isolation. |
| ⚡ Smart Load Balancing | Auto failover in <100ms. Always route to the healthiest channel. |
| 💰 Real-time Cost Tracking | Per-request cost breakdown. Input, output, cache tokens—all tracked. |
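The "Smart Load Balancing" row above can be sketched as a priority-ordered failover loop. This is an illustrative model only, not AxonHub's actual routing code; the channel names and health fields are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Channel:
    name: str
    priority: int   # lower value = preferred
    healthy: bool   # updated by background health checks

def pick_channel(channels: list[Channel]) -> Channel:
    """Route to the healthiest channel, failing over in priority order."""
    for ch in sorted(channels, key=lambda c: c.priority):
        if ch.healthy:
            return ch
    raise RuntimeError("no healthy channel available")

channels = [
    Channel("openai-primary", priority=0, healthy=False),  # simulated outage
    Channel("anthropic-backup", priority=1, healthy=True),
]
print(pick_channel(channels).name)  # fails over to the backup
```

The real gateway also tracks latency and error rates per channel; the point here is only that routing is a deterministic, priority-driven selection with automatic fallback.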
For detailed technical documentation, API references, architecture design, and more, please visit
Try AxonHub live at our demo instance!
Note: The demo instance is configured with Zhipu and OpenRouter free models.
- Email: demo@example.com
- Password: 12345678
Here are some screenshots of AxonHub in action:
- System Dashboard
- Channel Management
- Model Price
- Models
- Trace Viewer
- Request Monitoring
| API Type | Status | Description | Document |
|---|---|---|---|
| Text Generation | ✅ Done | Conversational interface | OpenAI API, Anthropic API, Gemini API |
| Image Generation | ✅ Done | Image generation | Image Generation |
| Rerank | ✅ Done | Results ranking | Rerank API |
| Embedding | ✅ Done | Vector embedding generation | Embedding API |
| Realtime | 📝 Todo | Live conversation capabilities | - |
| Provider | Status | Supported Models | Compatible APIs |
|---|---|---|---|
| OpenAI | ✅ Done | GPT-4, GPT-4o, GPT-5, etc. | OpenAI, Anthropic, Gemini, Embedding, Image Generation |
| Anthropic | ✅ Done | Claude 3.5, Claude 3.0, etc. | OpenAI, Anthropic, Gemini |
| Zhipu AI | ✅ Done | GLM-4.5, GLM-4.5-air, etc. | OpenAI, Anthropic, Gemini |
| Moonshot AI (Kimi) | ✅ Done | kimi-k2, etc. | OpenAI, Anthropic, Gemini |
| DeepSeek | ✅ Done | DeepSeek-V3.1, etc. | OpenAI, Anthropic, Gemini |
| ByteDance Doubao | ✅ Done | doubao-1.6, etc. | OpenAI, Anthropic, Gemini, Image Generation |
| Gemini | ✅ Done | Gemini 2.5, etc. | OpenAI, Anthropic, Gemini, Image Generation |
| Jina AI | ✅ Done | Embeddings, Reranker, etc. | Jina Embedding, Jina Rerank |
| OpenRouter | ✅ Done | Various models | OpenAI, Anthropic, Gemini, Image Generation |
| ZAI | ✅ Done | - | Image Generation |
| AWS Bedrock | 🔄 Testing | Claude on AWS | OpenAI, Anthropic, Gemini |
| Google Cloud | 🔄 Testing | Claude on GCP | OpenAI, Anthropic, Gemini |
| NanoGPT | ✅ Done | Various models, Image Gen | OpenAI, Anthropic, Gemini, Image Generation |
```bash
# Download and extract (macOS ARM64 example)
curl -sSL https://github.com/looplj/axonhub/releases/latest/download/axonhub_darwin_arm64.tar.gz | tar xz
cd axonhub_*

# Run with SQLite (default)
./axonhub

# Open http://localhost:8090
# First run: follow the setup wizard to initialize the system
# (create an admin account; the password must be at least 6 characters)
```

That's it! Now configure your first AI channel and start calling models through AxonHub.
Your existing code works without any changes. Just point your SDK to AxonHub:
```python
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8090/v1",  # Point to AxonHub
    api_key="your-axonhub-api-key",       # Use an AxonHub API key
)

# Call Claude using the OpenAI SDK!
response = client.chat.completions.create(
    model="claude-3-5-sonnet",  # Or gpt-4, gemini-pro, deepseek-chat...
    messages=[{"role": "user", "content": "Hello!"}],
)
```

Switch models by changing one line: `model="gpt-4"` → `model="claude-3-5-sonnet"`. No SDK changes needed.
Deploy AxonHub with 1-click on Render for free.
Perfect for individual developers and small teams. No complex configuration required.
1. Download the latest release from GitHub Releases and choose the appropriate version for your operating system.

2. Extract and run:

```bash
# Extract the downloaded file
unzip axonhub_*.zip
cd axonhub_*

# Add execution permissions (Linux/macOS only)
chmod +x axonhub

# Run directly with the default SQLite database
./axonhub

# Or install AxonHub as a system service
sudo ./install.sh

# Start the AxonHub service
./start.sh

# Stop the AxonHub service
./stop.sh
```

3. Access the application at http://localhost:8090
For production environments, high availability, and enterprise deployments.
AxonHub supports multiple databases to meet different scale deployment needs:
| Database | Supported Versions | Recommended Scenario | Auto Migration | Links |
|---|---|---|---|---|
| TiDB Cloud | Starter | Serverless, Free tier, Auto Scale | ✅ Supported | TiDB Cloud |
| TiDB Cloud | Dedicated | Distributed deployment, large scale | ✅ Supported | TiDB Cloud |
| TiDB | V8.0+ | Distributed deployment, large scale | ✅ Supported | TiDB |
| Neon DB | - | Serverless, Free tier, Auto Scale | ✅ Supported | Neon DB |
| PostgreSQL | 15+ | Production environment, medium-large deployments | ✅ Supported | PostgreSQL |
| MySQL | 8.0+ | Production environment, medium-large deployments | ✅ Supported | MySQL |
| SQLite | 3.0+ | Development environment, small deployments | ✅ Supported | SQLite |
AxonHub uses YAML configuration files with environment variable override support:
```yaml
# config.yml
server:
  port: 8090
  name: "AxonHub"
  debug: false

db:
  dialect: "tidb"
  dsn: "<USER>.root:<PASSWORD>@tcp(gateway01.us-west-2.prod.aws.tidbcloud.com:4000)/axonhub?tls=true&parseTime=true&multiStatements=true&charset=utf8mb4"

log:
  level: "info"
  encoding: "json"
```

Environment variables:

```bash
AXONHUB_SERVER_PORT=8090
AXONHUB_DB_DIALECT="tidb"
AXONHUB_DB_DSN="<USER>.root:<PASSWORD>@tcp(gateway01.us-west-2.prod.aws.tidbcloud.com:4000)/axonhub?tls=true&parseTime=true&multiStatements=true&charset=utf8mb4"
AXONHUB_LOG_LEVEL=info
```

For detailed configuration instructions, please refer to the configuration documentation.
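The override convention can be sketched as follows. This is an illustration of the naming scheme, not AxonHub's actual config loader: an `AXONHUB_<SECTION>_<KEY>` variable replaces the corresponding `section.key` value from the YAML file.

```python
import os

def apply_env_overrides(config: dict, environ=os.environ) -> dict:
    """Overlay AXONHUB_<SECTION>_<KEY> variables onto a two-level config dict."""
    for name, value in environ.items():
        if not name.startswith("AXONHUB_"):
            continue
        parts = name[len("AXONHUB_"):].lower().split("_", 1)
        if len(parts) == 2 and parts[0] in config:
            config[parts[0]][parts[1]] = value
    return config

config = {"server": {"port": 8090}, "log": {"level": "info"}}
merged = apply_env_overrides(config, {"AXONHUB_LOG_LEVEL": "debug"})
print(merged["log"]["level"])  # the env var wins over the YAML value
```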
```bash
# Clone the project
git clone https://github.com/looplj/axonhub.git
cd axonhub

# Set environment variables
export AXONHUB_DB_DIALECT="tidb"
export AXONHUB_DB_DSN="<USER>.root:<PASSWORD>@tcp(gateway01.us-west-2.prod.aws.tidbcloud.com:4000)/axonhub?tls=true&parseTime=true&multiStatements=true&charset=utf8mb4"

# Start services
docker-compose up -d

# Check status
docker-compose ps
```

Deploy AxonHub on Kubernetes using the official Helm chart:
```bash
# Quick installation
git clone https://github.com/looplj/axonhub.git
cd axonhub
helm install axonhub ./deploy/helm

# Production deployment
helm install axonhub ./deploy/helm -f ./deploy/helm/values-production.yaml

# Access AxonHub
kubectl port-forward svc/axonhub 8090:8090
# Visit http://localhost:8090
```

Key configuration options:
| Parameter | Description | Default |
|---|---|---|
| `axonhub.replicaCount` | Number of replicas | `1` |
| `axonhub.dbPassword` | Database password | `axonhub_password` |
| `postgresql.enabled` | Deploy embedded PostgreSQL | `true` |
| `ingress.enabled` | Enable ingress | `false` |
| `persistence.enabled` | Enable data persistence | `false` |
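For example, a minimal override file using the parameters above might look like this (an illustrative sketch; adjust the values for your environment):

```yaml
# custom-values.yaml (hypothetical example)
axonhub:
  replicaCount: 2
  dbPassword: "use-a-strong-password"
ingress:
  enabled: true
persistence:
  enabled: true
```

Apply it with `helm install axonhub ./deploy/helm -f custom-values.yaml`.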
For detailed configuration and troubleshooting, see Helm Chart Documentation.
Download the latest release from GitHub Releases
```bash
# Extract and run
unzip axonhub_*.zip
cd axonhub_*

# Set environment variables
export AXONHUB_DB_DIALECT="tidb"
export AXONHUB_DB_DSN="<USER>.root:<PASSWORD>@tcp(gateway01.us-west-2.prod.aws.tidbcloud.com:4000)/axonhub?tls=true&parseTime=true&multiStatements=true&charset=utf8mb4"

# Install AxonHub to the system
sudo ./install.sh

# Check the configuration file
axonhub config check

# For simplicity, we recommend managing the service with the helper scripts:
# Start
./start.sh

# Stop
./stop.sh
```

AxonHub provides a unified API gateway that supports both the OpenAI Chat Completions and Anthropic Messages APIs. This means you can:
- Use OpenAI API to call Anthropic models - Keep using your OpenAI SDK while accessing Claude models
- Use Anthropic API to call OpenAI models - Use Anthropic's native API format with GPT models
- Use Gemini API to call OpenAI models - Use Gemini's native API format with GPT models
- Automatic API translation - AxonHub handles format conversion automatically
- Zero code changes - Your existing OpenAI or Anthropic client code continues to work
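The translation described above can be illustrated with a minimal sketch. This is not AxonHub's actual converter; it only shows the shape of the problem: Anthropic's Messages API keeps the system prompt outside the `messages` array and requires `max_tokens`.

```python
def openai_to_anthropic(req: dict) -> dict:
    """Convert an OpenAI chat.completions request body to the
    Anthropic Messages shape (simplified illustration)."""
    system_parts = [m["content"] for m in req["messages"] if m["role"] == "system"]
    out = {
        "model": req["model"],
        "max_tokens": req.get("max_tokens", 1024),  # required by Anthropic
        "messages": [m for m in req["messages"] if m["role"] != "system"],
    }
    if system_parts:
        out["system"] = "\n".join(system_parts)  # system prompt is a top-level field
    return out

req = {
    "model": "claude-3-5-sonnet",
    "messages": [
        {"role": "system", "content": "You are terse."},
        {"role": "user", "content": "Hello!"},
    ],
}
converted = openai_to_anthropic(req)
```

The gateway performs this kind of mapping in both directions (including streaming responses), which is why existing client code keeps working unchanged.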
1. Access the management interface at http://localhost:8090
2. Configure AI providers:
   - Add API keys in the management interface
   - Test connections to ensure correct configuration
3. Create users and roles:
   - Set up permission management
   - Assign appropriate access permissions
Configure AI provider channels in the management interface. For detailed information on channel configuration, including model mappings, parameter overrides, and troubleshooting, see the Channel Configuration Guide.
AxonHub provides a flexible model management system that supports mapping abstract models to specific channels and model implementations through Model Associations. This enables:
- Unified Model Interface - Use abstract model IDs (e.g., `gpt-4`, `claude-3-opus`) instead of channel-specific names
- Intelligent Channel Selection - Automatically route requests to optimal channels based on association rules and load balancing
- Flexible Mapping Strategies - Support for precise channel-model matching, regex patterns, and tag-based selection
- Priority-based Fallback - Configure multiple associations with priorities for automatic failover
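A minimal sketch of the association idea (a hypothetical data model, not AxonHub's schema): each association maps an abstract model ID to a channel and an upstream model name, and resolution walks matching associations in priority order, skipping unhealthy channels.

```python
import re

# Hypothetical associations: (abstract-model pattern, channel, upstream model, priority)
ASSOCIATIONS = [
    (r"^gpt-4$", "openai-main", "gpt-4", 0),
    (r"^gpt-4$", "openrouter", "openai/gpt-4", 1),  # lower-priority fallback
    (r"^claude-.*", "anthropic-main", None, 0),     # None = pass model ID through
]

def resolve(model: str, healthy: set[str]) -> tuple[str, str]:
    """Return (channel, upstream_model) for an abstract model ID."""
    matches = [a for a in ASSOCIATIONS if re.match(a[0], model)]
    for _pattern, channel, upstream, _prio in sorted(matches, key=lambda a: a[3]):
        if channel in healthy:
            return channel, upstream or model
    raise LookupError(f"no healthy channel for {model}")

# With the primary channel down, gpt-4 falls back to the OpenRouter association
print(resolve("gpt-4", healthy={"openrouter", "anthropic-main"}))
```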
For comprehensive information on model management, including association types, configuration examples, and best practices, see the Model Management Guide.
Create API keys to authenticate your applications with AxonHub. Each API key can be configured with multiple profiles that define:
- Model Mappings - Transform user-requested models to actual available models using exact match or regex patterns
- Channel Restrictions - Limit which channels an API key can use by channel IDs or tags
- Model Access Control - Control which models are accessible through a specific profile
- Profile Switching - Change behavior on-the-fly by activating different profiles
For detailed information on API key profiles, including configuration examples, validation rules, and best practices, see the API Key Profile Guide.
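To make the mapping rules concrete, here is an illustrative sketch (the field names are hypothetical, not AxonHub's configuration schema): a profile rewrites requested model names via exact match first, then regex patterns, and passes unmatched names through unchanged.

```python
import re

# Hypothetical profile attached to an API key
profile = {
    "exact": {"gpt-4": "gpt-4o"},                       # pin gpt-4 requests to gpt-4o
    "regex": [(r"^claude-3-.*", "claude-3-5-sonnet")],  # collapse the claude-3 family
}

def map_model(requested: str, profile: dict) -> str:
    """Apply exact-match rules first, then regex rules, else pass through."""
    if requested in profile["exact"]:
        return profile["exact"][requested]
    for pattern, target in profile["regex"]:
        if re.match(pattern, requested):
            return target
    return requested

print(map_model("gpt-4", profile))          # exact-match rewrite
print(map_model("claude-3-opus", profile))  # regex rewrite
```

Because the mapping lives in the profile rather than in client code, switching a whole application to a different model is a profile change, not a deploy.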
See the dedicated guides for detailed setup steps, troubleshooting, and tips on combining these tools with AxonHub model profiles:
For detailed SDK usage examples and code samples, please refer to the API documentation:
For detailed development instructions, architecture design, and contribution guidelines, please see docs/en/development/development.md.
- 🙏 musistudio/llms - LLM transformation framework, source of inspiration
- 🎨 satnaing/shadcn-admin - Admin interface template
- 🔧 99designs/gqlgen - GraphQL code generation
- 🌐 gin-gonic/gin - HTTP framework
- 🗄️ ent/ent - ORM framework
- 🔧 air-verse/air - Auto reload Go service
- ☁️ Render - Free cloud deployment platform for hosting our demo
- 🗃️ TiDB Cloud - Serverless database platform for demo deployment
This project is licensed under multiple licenses (Apache-2.0 and LGPL-3.0). See LICENSE file for the detailed licensing overview and terms.
AxonHub - All-in-one AI Development Platform, making AI development simpler
🏠 Homepage • 📚 Documentation • 🐛 Issue Feedback
Built with ❤️ by the AxonHub team