A high-performance Rust-based proxy server that converts GitHub Copilot into OpenAI-compatible and Ollama-compatible APIs.
This project lets you use GitHub Copilot models from Rig and any other OpenAI- or Ollama-compatible framework:
use rig::providers::ollama;

let client = ollama::Client::builder()
    .api_key("unused") // any value works; the proxy does not validate keys
    .base_url("http://127.0.0.1:8081/v1")
    .build()?;
let model = client.completion_model("claude-sonnet-4.5");
let agent = AgentBuilder::new(model)
    .preamble("You're an AI assistant powered by GitHub Copilot")
    .name("copilot-agent")
    .max_tokens(2000)
    .build();

or, using the OpenAI provider:
use rig::providers::openai;

let client = openai::Client::builder()
    .api_key("no key") // any non-empty value; the proxy does not validate keys
    .base_url("http://127.0.0.1:8081/v1")
    .build()?;
let model = client.completion_model("claude-sonnet-4.5");
let agent = AgentBuilder::new(model)
    .preamble("You're an AI assistant powered by GitHub Copilot")
    .name("copilot-agent")
    .max_tokens(2000)
    .build();

The proxy supports streaming and can be used with Open WebUI as a chat interface over GitHub Copilot models.
As a local Ollama connection:
Point Open WebUI at the proxy using its Ollama connection setting:
http://127.0.0.1:8081
Open WebUI will discover available models via GET /api/tags and stream responses via POST /api/chat.
As a local OpenAI connection:
Alternatively, configure Open WebUI with a custom OpenAI-compatible endpoint:
http://127.0.0.1:8081/v1
Set any non-empty string as the API key (the proxy does not validate it). Open WebUI will use GET /v1/models to list models and POST /v1/chat/completions for streaming chat.
- GitHub OAuth Authentication: Secure device flow authentication with GitHub
- Token Management: Automatic token caching, validation, and refresh
- OpenAI Compatibility: Drop-in replacement for OpenAI API clients
- Ollama Compatibility: Ollama-format responses via the /v1/api/chat endpoint
- Custom Token Paths: Flexible token storage locations
- Health Monitoring: Built-in health check endpoint
- Request/Response Transformation: Seamless conversion between OpenAI, Ollama, and Copilot formats
- Quick Start
- Installation
- Running as a System Service
- Usage
- Configuration
- Architecture
- API Endpoints
- CLI Reference
- Development
- Testing
- Troubleshooting
Download a pre-built binary from the releases page, or install the packaged version for Ubuntu or Arch Linux.
chmod +x ./passenger-rs   # if using the binary directly
./passenger-rs --login

This will:
- Display a GitHub device code and URL
- Open your browser to https://github.com/login/device
- After authorization, save tokens to ~/.config/passenger-rs/
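Under the hood, the device flow boils down to a poll loop against GitHub's token endpoint. A std-only sketch of that loop, with a closure standing in for the real HTTPS request (the `Poll` type and `gho_example` token are illustrative, not names from the proxy's source):

```rust
/// Outcome of one poll against the OAuth token endpoint.
enum Poll {
    Pending,            // user has not authorized yet
    Granted(String),    // access token issued
}

/// Poll until the user authorizes, up to `max_attempts` tries.
/// `request` stands in for the real HTTPS POST to oauth_token_url.
fn poll_for_token(mut request: impl FnMut() -> Poll, max_attempts: u32) -> Option<String> {
    for _ in 0..max_attempts {
        if let Poll::Granted(token) = request() {
            return Some(token);
        }
        // A real client sleeps for the server-provided `interval` here.
    }
    None
}

fn main() {
    // Simulate a user who authorizes on the third poll.
    let mut n = 0;
    let token = poll_for_token(
        || {
            n += 1;
            if n < 3 { Poll::Pending } else { Poll::Granted("gho_example".into()) }
        },
        10,
    );
    println!("{token:?}"); // Some("gho_example")
}
```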
./passenger-rs

The server will start on http://127.0.0.1:8081 by default.
OpenAI format:
curl http://127.0.0.1:8081/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4",
"messages": [
{"role": "user", "content": "Hello, how are you?"}
]
}'

Ollama format:
curl http://127.0.0.1:8081/v1/api/chat \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4",
"messages": [
{"role": "user", "content": "Hello, how are you?"}
]
}'

git clone https://github.com/yourusername/passenger-rs.git
cd passenger-rs
cargo build --release

The binary will be available at target/release/passenger-rs.
- Rust 1.70 or later
- Active GitHub Copilot subscription
- Internet connection for GitHub OAuth and Copilot API
Pre-built packages for Ubuntu and Arch Linux are available on the releases page.
Install using your AUR helper:
yay -U passenger-rs-0.0.1-1-x86_64.pkg.tar.zst

On Ubuntu:

sudo dpkg -i passenger-rs-0.0.1-x86_64.deb

The package includes a systemd user service that can be managed with standard systemctl commands:
# Start the service
systemctl --user start passenger-rs.service
# Enable auto-start on login
systemctl --user enable passenger-rs.service
# Check service status
systemctl --user status passenger-rs.service

Example output:
● passenger-rs.service - passenger-rs - GitHub Copilot Proxy
Loaded: loaded (/usr/lib/systemd/user/passenger-rs.service; disabled; preset: enabled)
Active: active (running) since Tue 2026-02-03 22:44:17 CET; 1s ago
[...]
INFO passenger_rs: OpenAI API endpoint: http://127.0.0.1:8081/v1/chat/completions
INFO passenger_rs: Ollama API endpoint: http://127.0.0.1:8081/v1/api/chat
INFO passenger_rs: Models endpoint: http://127.0.0.1:8081/v1/models
Note: Before starting the service, you must authenticate with GitHub Copilot using --login (see Usage).
# Start the server with default configuration
./passenger-rs
# Use custom configuration file
./passenger-rs --config /path/to/config.toml
# Authenticate with GitHub
./passenger-rs --login
# Refresh expired token
./passenger-rs --refresh-token

You can specify custom locations for token storage:
# Login with custom token paths
./passenger-rs --login \
--access-token-path /custom/path/access_token.json \
--copilot-token-path /custom/path/copilot_token.json
# Refresh token using custom paths
./passenger-rs --refresh-token \
--access-token-path /custom/path/access_token.json \
--copilot-token-path /custom/path/copilot_token.json
# Start server with custom copilot token path
./passenger-rs --copilot-token-path /custom/path/copilot_token.json

Edit config.toml to customize the proxy behavior:
[github]
# GitHub OAuth device code endpoint
device_code_url = "https://github.com/login/device/code"
# GitHub OAuth access token endpoint
oauth_token_url = "https://github.com/login/oauth/access_token"
# GitHub Copilot token endpoint
copilot_token_url = "https://api.github.com/copilot_internal/v2/token"
# GitHub Copilot models catalog
copilot_models_url = "https://models.github.ai/catalog/models"
# GitHub Copilot public client ID (same for all users)
client_id = "Iv1.b507a08c87ecfe98"
[copilot]
# GitHub Copilot API base URL
api_base_url = "https://api.githubcopilot.com"
[server]
# Port to listen on
port = 8081
# Host to bind to
host = "127.0.0.1"

Currently, configuration is file-based. Environment variable support may be added in future versions.
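For illustration, the `[server]` values above can be pulled out with a few lines of std-only Rust. The real proxy presumably uses a proper TOML parser; this `lookup` helper is a hypothetical sketch that only handles the flat `key = value` lines shown here:

```rust
/// Extract a `key = value` entry from simple TOML-like text.
/// Skips comments and section headers; handles only flat scalar lines.
fn lookup<'a>(config: &'a str, key: &str) -> Option<&'a str> {
    config
        .lines()
        .map(str::trim)
        .filter(|l| !l.starts_with('#') && !l.starts_with('['))
        .find_map(|l| {
            let (k, v) = l.split_once('=')?;
            (k.trim() == key).then(|| v.trim().trim_matches('"'))
        })
}

fn main() {
    let config = r#"
[server]
# Port to listen on
port = 8081
host = "127.0.0.1"
"#;
    let port: u16 = lookup(config, "port").unwrap().parse().unwrap();
    let host = lookup(config, "host").unwrap();
    println!("{host}:{port}"); // 127.0.0.1:8081
}
```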
┌─────────────────┐          ┌──────────────────┐          ┌──────────────────┐
│  OpenAI Client  │  OpenAI  │   passenger-rs   │  Copilot │  GitHub Copilot  │
│    (Any SDK)    │─────────►│   Proxy Server   │─────────►│       API        │
│                 │  Format  │                  │  Format  │                  │
└─────────────────┘          └──────────────────┘          └──────────────────┘
                                      │
                                      │ OAuth Flow
                                      ▼
                             ┌──────────────────┐
                             │   GitHub OAuth   │
                             │   Device Flow    │
                             └──────────────────┘
                                      │
                                      │ Token Storage
                                      ▼
                             ┌──────────────────┐
                             │   Token Cache    │
                             │    ~/.config/    │
                             │  passenger-rs/   │
                             └──────────────────┘
OpenAI-compatible chat completions endpoint.
Request:
{
"model": "gpt-4",
"messages": [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Hello!"
}
],
"temperature": 0.7,
"max_tokens": 100
}

Response:
{
"id": "chatcmpl-123",
"object": "chat.completion",
"created": 1677652288,
"model": "gpt-4",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Hello! How can I help you today?"
},
"finish_reason": "stop"
}
],
"usage": {
"prompt_tokens": 12,
"completion_tokens": 10,
"total_tokens": 22
}
}

Note: Streaming is supported. When "stream": true is set, the response is returned as server-sent events (SSE) using text/event-stream.
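When streaming, each SSE `data:` line carries a JSON chunk whose choices[0].delta.content holds the next text fragment, terminated by a `data: [DONE]` sentinel. A std-only sketch of reassembling the reply from such a stream; the hand-rolled `delta_content` extractor is a stand-in for real JSON parsing:

```rust
/// Pull the `"content":"..."` fragment out of one SSE `data:` line.
/// Illustrative only: ignores JSON escapes inside the string.
fn delta_content(line: &str) -> Option<&str> {
    let payload = line.strip_prefix("data:")?.trim();
    if payload == "[DONE]" {
        return None; // end-of-stream sentinel
    }
    let start = payload.find("\"content\":\"")? + "\"content\":\"".len();
    let rest = &payload[start..];
    let end = rest.find('"')?;
    Some(&rest[..end])
}

fn main() {
    // Shape of a (truncated) streamed chat.completion.chunk sequence.
    let stream = [
        r#"data: {"choices":[{"delta":{"role":"assistant","content":"Hel"}}]}"#,
        r#"data: {"choices":[{"delta":{"content":"lo!"}}]}"#,
        r#"data: [DONE]"#,
    ];
    let reply: String = stream.iter().filter_map(|l| delta_content(l)).collect();
    println!("{reply}"); // Hello!
}
```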
Ollama-compatible chat endpoint.
Request:
{
"model": "gpt-4",
"messages": [
{
"role": "user",
"content": "Hello!"
}
],
"temperature": 0.7,
"max_tokens": 100
}

Response:
{
"model": "gpt-4",
"created_at": "2023-11-07T05:31:56Z",
"message": {
"role": "assistant",
"content": "Hello! How can I help you today?"
},
"done": true,
"done_reason": "stop",
"prompt_eval_count": 12,
"eval_count": 10
}

Note: This endpoint accepts OpenAI-format requests but returns Ollama-format responses for compatibility with Ollama clients.
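The field mapping between the two response formats is mechanical. A sketch of the correspondence, using only the field names visible in the two examples above (the struct names are illustrative, not taken from the proxy's source):

```rust
/// OpenAI-side fields, as in the /v1/chat/completions example above.
struct OpenAiResult<'a> {
    finish_reason: &'a str,
    prompt_tokens: u32,
    completion_tokens: u32,
}

/// Ollama-side fields, as in the /v1/api/chat example above.
#[derive(Debug, PartialEq)]
struct OllamaResult<'a> {
    done: bool,
    done_reason: &'a str,
    prompt_eval_count: u32,
    eval_count: u32,
}

/// finish_reason -> done_reason, usage counts -> eval counts.
fn to_ollama<'a>(r: &OpenAiResult<'a>) -> OllamaResult<'a> {
    OllamaResult {
        done: true, // a non-streaming completion is always final
        done_reason: r.finish_reason,
        prompt_eval_count: r.prompt_tokens,
        eval_count: r.completion_tokens,
    }
}

fn main() {
    let openai = OpenAiResult { finish_reason: "stop", prompt_tokens: 12, completion_tokens: 10 };
    println!("{:?}", to_ollama(&openai));
}
```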
Lists available models from GitHub Copilot catalog.
Response:
{
"object": "list",
"data": [
{
"id": "gpt-4",
"object": "model",
"created": 1677652288,
"owned_by": "openai"
}
]
}

passenger-rs - GitHub Copilot to OpenAI API Proxy
Usage: passenger-rs [OPTIONS]
Options:
-c, --config <CONFIG>
Path to the configuration file
[default: config.toml]
--login
Perform GitHub OAuth device flow login
Initiates interactive authentication with GitHub
--refresh-token
Refresh Copilot token using existing access token
Useful when Copilot token expires
--access-token-path <ACCESS_TOKEN_PATH>
Path to the access token file
[default: ~/.config/passenger-rs/access_token.json]
--copilot-token-path <COPILOT_TOKEN_PATH>
Path to the Copilot token file
[default: ~/.config/passenger-rs/token.json]
-h, --help
Print help information
-V, --version
Print version information
# Install Rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Verify installation
rustc --version
cargo --version

# Development build
cargo build
# Release build (optimized)
cargo build --release
# Check without building (fast)
cargo check

# Format code
cargo fmt
# Check formatting
cargo fmt --check
# Run clippy linter
cargo clippy --all-targets --all-features -- -D warnings
# Fix clippy warnings automatically
cargo clippy --fix

# Run all tests
cargo test
# Run with output
cargo test -- --nocapture
# Run specific test
cargo test test_chat_completions_without_auth
# Run only unit tests
cargo test --lib
# Run only integration tests
cargo test --test '*'
# Run ignored tests (require real authentication)
cargo test -- --ignored

You have not authenticated with GitHub yet.

Solution:
./passenger-rs --login

You specified a custom access token path but the file doesn't exist.
Solution:
# Login will create the token at the default location
./passenger-rs --login
# Then copy to your custom location, or re-login with custom path
./passenger-rs --login --access-token-path /custom/path/access.json

Your access token has expired or is invalid.
Solution:
./passenger-rs --login

Another process is using port 8081.
Solutions:
# Option 1: Change port in config.toml
[server]
port = 8082   # any free port
# Option 2: Find and kill the process
lsof -ti:8081 | xargs kill -9

Server is not running.
Solution:
./passenger-rs

Enable debug logging:
RUST_LOG=debug ./passenger-rs

# View token details
cat ~/.config/passenger-rs/token.json | jq
# Check expiration
jq '.expires_at' ~/.config/passenger-rs/token.json

By default, tokens are stored in:
- Access Token: ~/.config/passenger-rs/access_token.json
- Copilot Token: ~/.config/passenger-rs/token.json
- Access Token: Long-lived, used to obtain Copilot tokens
- Copilot Token: Short-lived (~25 minutes), auto-refreshed
- Expiration Buffer: Tokens refresh 60 seconds before expiration
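The 60-second buffer means a token is treated as expired slightly early, so a request never goes out with a token about to lapse. A sketch of that check, with the buffer value taken from the bullet above (the function and field names are illustrative, not from the proxy's source):

```rust
use std::time::{SystemTime, UNIX_EPOCH};

/// Refresh 60 seconds before the recorded expiry, per the note above.
const EXPIRY_BUFFER_SECS: u64 = 60;

/// True if `expires_at` (unix seconds) is within the buffer of `now`.
fn needs_refresh(expires_at: u64, now: u64) -> bool {
    now + EXPIRY_BUFFER_SECS >= expires_at
}

fn main() {
    let now = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .as_secs();
    // A token with only 30 seconds left falls inside the buffer.
    println!("refresh? {}", needs_refresh(now + 30, now)); // refresh? true
}
```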
# Refresh using default paths
./passenger-rs --refresh-token
# Refresh using custom paths
./passenger-rs --refresh-token \
--access-token-path /path/to/access.json \
--copilot-token-path /path/to/copilot.json

- Tokens contain sensitive credentials
- Store tokens in secure locations with appropriate permissions
- Consider using encrypted filesystems for token storage
- Never commit tokens to version control
# Set secure permissions
chmod 600 ~/.config/passenger-rs/*.json

- Language: Rust for memory safety and performance
- Async Runtime: Tokio for efficient concurrency
- Web Framework: Axum for fast HTTP handling
- HTTP Client: Reqwest with connection pooling
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the GNU General Public License v3.0 - see the LICENSE file for details.
This means you can:
- ✅ Use the software for any purpose
- ✅ Study and modify the source code
- ✅ Share the software with others
- ✅ Share your modifications
Important: If you distribute modified versions, you must:
- 📋 Make the source code available
- 📋 License it under GPL-3.0
- 📋 Document your changes
- 📋 Include the original copyright notice
- Based on the copilot-to-api project
- Built with Axum web framework
- Uses Tokio async runtime
- CLI powered by Clap
- Issues: GitHub Issues
- Discussions: GitHub Discussions
This project is for educational purposes. Make sure you comply with GitHub's Terms of Service and Copilot's usage policies.