grumlimited/passenger-rs

passenger-rs


A high-performance, Rust-based proxy server that exposes GitHub Copilot through OpenAI-compatible and Ollama-compatible APIs.

πŸ’‘ Use Case: Rig Integration

This project enables using GitHub Copilot models with Rig and other Ollama-compatible and OpenAI-compatible frameworks:

use rig::providers::ollama;

let client = ollama::Client::builder()
    .api_key("unused") // any value works; the proxy does not validate it
    .base_url("http://127.0.0.1:8081/v1")
    .build()?;

let model = client.completion_model("claude-sonnet-4.5");

let agent = AgentBuilder::new(model)
    .preamble("You're an AI assistant powered by GitHub Copilot")
    .name("copilot-agent")
    .max_tokens(2000)
    .build();

or

use rig::providers::openai;

let client = openai::Client::builder()
    .api_key("no key") // any value works; the proxy does not validate it
    .base_url("http://127.0.0.1:8081/v1")
    .build()?;

let model = client.completion_model("claude-sonnet-4.5");

let agent = AgentBuilder::new(model)
    .preamble("You're an AI assistant powered by GitHub Copilot")
    .name("copilot-agent")
    .max_tokens(2000)
    .build();

πŸ’‘ Use Case: Open WebUI Integration

The proxy supports streaming and can be used with Open WebUI as a chat interface over GitHub Copilot models.

As a local Ollama connection:

Point Open WebUI at the proxy using its Ollama connection setting:

http://127.0.0.1:8081

Open WebUI will discover available models via GET /api/tags and stream responses via POST /api/chat.

As a local OpenAI connection:

Alternatively, configure Open WebUI with a custom OpenAI-compatible endpoint:

http://127.0.0.1:8081/v1

Set any non-empty string as the API key (the proxy does not validate it). Open WebUI will use GET /v1/models to list models and POST /v1/chat/completions for streaming chat.

πŸš€ Features

  • GitHub OAuth Authentication: Secure device flow authentication with GitHub
  • Token Management: Automatic token caching, validation, and refresh
  • OpenAI Compatibility: Drop-in replacement for OpenAI API clients
  • Ollama Compatibility: Ollama-format responses via /v1/api/chat endpoint
  • Custom Token Paths: Flexible token storage locations
  • Health Monitoring: Built-in health check endpoint
  • Request/Response Transformation: Seamless conversion between OpenAI, Ollama, and Copilot formats

🏁 Quick Start

1. Download

Download a pre-built binary from the releases page, or install the packaged version for Ubuntu/Debian or Arch Linux.

chmod +x ./passenger-rs # if running the standalone binary directly

2. Authenticate with GitHub

./passenger-rs --login

This will:

  1. Display a GitHub device code and URL
  2. Open your browser to https://github.com/login/device
  3. After authorization, save tokens to ~/.config/passenger-rs/
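
The authorization step above relies on GitHub's OAuth device flow: the client repeatedly polls the token endpoint until the user approves (or the code expires). A minimal sketch of the polling decision logic — the `PollOutcome` enum and `interpret_poll_response` function are illustrative names, not this project's actual code:

```rust
/// Possible outcomes of one poll of GitHub's OAuth token endpoint
/// during the device flow (illustrative sketch).
#[derive(Debug, PartialEq)]
enum PollOutcome {
    /// User has not authorized yet; keep polling at the current interval.
    Pending,
    /// GitHub asked us to slow down; increase the polling interval.
    SlowDown,
    /// Authorization succeeded; carries the access token.
    Authorized(String),
    /// The device code expired; the login must be restarted.
    Expired,
}

/// Interpret the `error` / `access_token` fields of a poll response.
/// Real responses are JSON; here we take the relevant field values directly.
fn interpret_poll_response(error: Option<&str>, access_token: Option<&str>) -> PollOutcome {
    match (error, access_token) {
        (Some("authorization_pending"), _) => PollOutcome::Pending,
        (Some("slow_down"), _) => PollOutcome::SlowDown,
        (Some("expired_token"), _) => PollOutcome::Expired,
        (None, Some(token)) => PollOutcome::Authorized(token.to_string()),
        _ => PollOutcome::Expired, // treat anything else as fatal
    }
}

fn main() {
    assert_eq!(
        interpret_poll_response(Some("authorization_pending"), None),
        PollOutcome::Pending
    );
    assert_eq!(
        interpret_poll_response(None, Some("gho_abc")),
        PollOutcome::Authorized("gho_abc".into())
    );
    println!("ok");
}
```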

3. Start the proxy server

./passenger-rs

The server will start on http://127.0.0.1:8081 by default.

4. Test the connection

OpenAI format:

curl http://127.0.0.1:8081/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "messages": [
      {"role": "user", "content": "Hello, how are you?"}
    ]
  }'

Ollama format:

curl http://127.0.0.1:8081/v1/api/chat \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "messages": [
      {"role": "user", "content": "Hello, how are you?"}
    ]
  }'

πŸ“¦ Installation

From Source

git clone https://github.com/grumlimited/passenger-rs.git
cd passenger-rs
cargo build --release

The binary will be available at target/release/passenger-rs.

System Requirements

  • Rust 1.70 or later
  • Active GitHub Copilot subscription
  • Internet connection for GitHub OAuth and Copilot API

πŸ”§ Running as a System Service

Pre-built packages for Ubuntu and Arch Linux are available on the releases page.

Arch Linux Installation

Install using your AUR helper:

yay -U passenger-rs-0.0.1-1-x86_64.pkg.tar.zst

Ubuntu/Debian Installation

sudo dpkg -i passenger-rs-0.0.1-x86_64.deb

Managing the Service

The package includes a systemd user service that can be managed with standard systemctl commands:

# Start the service
systemctl --user start passenger-rs.service

# Enable auto-start on login
systemctl --user enable passenger-rs.service

# Check service status
systemctl --user status passenger-rs.service

Example output:

● passenger-rs.service - passenger-rs - GitHub Copilot Proxy
     Loaded: loaded (/usr/lib/systemd/user/passenger-rs.service; disabled; preset: enabled)
     Active: active (running) since Tue 2026-02-03 22:44:17 CET; 1s ago
     [...]
     INFO passenger_rs: OpenAI API endpoint: http://127.0.0.1:8081/v1/chat/completions
     INFO passenger_rs: Ollama API endpoint: http://127.0.0.1:8081/v1/api/chat
     INFO passenger_rs: Models endpoint: http://127.0.0.1:8081/v1/models

Note: Before starting the service, you must authenticate with GitHub Copilot using --login (see Usage).

🎯 Usage

Basic Usage

# Start the server with default configuration
./passenger-rs

# Use custom configuration file
./passenger-rs --config /path/to/config.toml

# Authenticate with GitHub
./passenger-rs --login

# Refresh expired token
./passenger-rs --refresh-token

Custom Token Paths

You can specify custom locations for token storage:

# Login with custom token paths
./passenger-rs --login \
  --access-token-path /custom/path/access_token.json \
  --copilot-token-path /custom/path/copilot_token.json

# Refresh token using custom paths
./passenger-rs --refresh-token \
  --access-token-path /custom/path/access_token.json \
  --copilot-token-path /custom/path/copilot_token.json

# Start server with custom copilot token path
./passenger-rs --copilot-token-path /custom/path/copilot_token.json

βš™οΈ Configuration

Edit config.toml to customize the proxy behavior:

[github]
# GitHub OAuth device code endpoint
device_code_url = "https://github.com/login/device/code"

# GitHub OAuth access token endpoint
oauth_token_url = "https://github.com/login/oauth/access_token"

# GitHub Copilot token endpoint
copilot_token_url = "https://api.github.com/copilot_internal/v2/token"

# GitHub Copilot models catalog
copilot_models_url = "https://models.github.ai/catalog/models"

# GitHub Copilot public client ID (same for all users)
client_id = "Iv1.b507a08c87ecfe98"

[copilot]
# GitHub Copilot API base URL
api_base_url = "https://api.githubcopilot.com"

[server]
# Port to listen on
port = 8081

# Host to bind to
host = "127.0.0.1"

Environment Variables

Currently, configuration is file-based. Environment variable support may be added in future versions.

πŸ—οΈ Architecture

High-Level Overview

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”         β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”         β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚   OpenAI Client β”‚ OpenAI  β”‚   passenger-rs   β”‚ Copilot β”‚ GitHub Copilot   β”‚
β”‚   (Any SDK)     β”œβ”€β”€β”€β”€β”€β”€β”€β”€β–Ίβ”‚   Proxy Server   β”œβ”€β”€β”€β”€β”€β”€β”€β”€β–Ίβ”‚  API             β”‚
β”‚                 β”‚ Format  β”‚                  β”‚ Format  β”‚                  β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜         β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜         β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                     β”‚
                                     β”‚ OAuth Flow
                                     β–Ό
                            β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                            β”‚  GitHub OAuth   β”‚
                            β”‚  Device Flow    β”‚
                            β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                     β”‚
                                     β”‚ Token Storage
                                     β–Ό
                            β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                            β”‚  Token Cache    β”‚
                            β”‚  ~/.config/     β”‚
                            β”‚  passenger-rs/  β”‚
                            β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

πŸ”Œ API Endpoints

POST /v1/chat/completions

OpenAI-compatible chat completions endpoint.

Request:

{
  "model": "gpt-4",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant."
    },
    {
      "role": "user",
      "content": "Hello!"
    }
  ],
  "temperature": 0.7,
  "max_tokens": 100
}

Response:

{
  "id": "chatcmpl-123",
  "object": "chat.completion",
  "created": 1677652288,
  "model": "gpt-4",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 10,
    "total_tokens": 22
  }
}

Note: Streaming is supported. When "stream": true is set, the response is returned as server-sent events (SSE) using text/event-stream.
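
A client consuming the stream splits the body into `data:` lines and stops at the `[DONE]` sentinel. A minimal sketch of that client-side parsing — the `parse_sse_payloads` helper is hypothetical, not part of this project:

```rust
/// Extract the JSON payloads from an SSE body as emitted by
/// OpenAI-style streaming endpoints (illustrative sketch).
/// Each event line looks like `data: {...}`; the stream ends
/// with the sentinel `data: [DONE]`.
fn parse_sse_payloads(body: &str) -> Vec<String> {
    body.lines()
        .filter_map(|line| line.strip_prefix("data: "))
        .take_while(|payload| *payload != "[DONE]")
        .map(str::to_string)
        .collect()
}

fn main() {
    let body = "data: {\"choices\":[{\"delta\":{\"content\":\"Hel\"}}]}\n\n\
                data: {\"choices\":[{\"delta\":{\"content\":\"lo\"}}]}\n\n\
                data: [DONE]\n";
    let payloads = parse_sse_payloads(body);
    assert_eq!(payloads.len(), 2);
    println!("ok");
}
```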

POST /v1/api/chat

Ollama-compatible chat endpoint.

Request:

{
  "model": "gpt-4",
  "messages": [
    {
      "role": "user",
      "content": "Hello!"
    }
  ],
  "temperature": 0.7,
  "max_tokens": 100
}

Response:

{
  "model": "gpt-4",
  "created_at": "2023-11-07T05:31:56Z",
  "message": {
    "role": "assistant",
    "content": "Hello! How can I help you today?"
  },
  "done": true,
  "done_reason": "stop",
  "prompt_eval_count": 12,
  "eval_count": 10
}

Note: This endpoint accepts OpenAI-format requests but returns Ollama-format responses for compatibility with Ollama clients.
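
The field mapping between the two formats can be sketched roughly as follows — the struct shapes below are assumptions for illustration, not the project's actual types:

```rust
/// Assumed shape of the relevant OpenAI completion fields.
struct OpenAiResult {
    finish_reason: String,
    prompt_tokens: u32,
    completion_tokens: u32,
    content: String,
}

/// Assumed shape of the corresponding Ollama response fields.
struct OllamaResult {
    done: bool,
    done_reason: String,
    prompt_eval_count: u32,
    eval_count: u32,
    content: String,
}

fn to_ollama(r: OpenAiResult) -> OllamaResult {
    OllamaResult {
        done: true,                         // a non-streaming completion is finished
        done_reason: r.finish_reason,       // "stop" -> "stop"
        prompt_eval_count: r.prompt_tokens, // usage.prompt_tokens
        eval_count: r.completion_tokens,    // usage.completion_tokens
        content: r.content,
    }
}

fn main() {
    let out = to_ollama(OpenAiResult {
        finish_reason: "stop".into(),
        prompt_tokens: 12,
        completion_tokens: 10,
        content: "Hello!".into(),
    });
    assert!(out.done);
    assert_eq!(out.prompt_eval_count, 12);
    println!("ok");
}
```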

GET /v1/models

Lists available models from GitHub Copilot catalog.

Response:

{
  "object": "list",
  "data": [
    {
      "id": "gpt-4",
      "object": "model",
      "created": 1677652288,
      "owned_by": "openai"
    }
  ]
}
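
Converting a catalog entry into the OpenAI model-list shape is a straightforward field mapping; a rough sketch under assumed field names (not the project's actual types):

```rust
/// Assumed shape of a Copilot catalog entry (illustrative).
struct CatalogModel {
    id: String,
    publisher: String,
}

/// OpenAI `/v1/models` entry shape.
struct OpenAiModel {
    id: String,
    object: &'static str,
    owned_by: String,
}

fn to_openai_model(m: CatalogModel) -> OpenAiModel {
    OpenAiModel {
        id: m.id,
        object: "model",        // fixed by the OpenAI models schema
        owned_by: m.publisher,  // e.g. "openai"
    }
}

fn main() {
    let m = to_openai_model(CatalogModel {
        id: "gpt-4".into(),
        publisher: "openai".into(),
    });
    assert_eq!(m.object, "model");
    assert_eq!(m.id, "gpt-4");
    println!("ok");
}
```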

πŸ–₯️ CLI Reference

passenger-rs - GitHub Copilot to OpenAI API Proxy

Usage: passenger-rs [OPTIONS]

Options:
  -c, --config <CONFIG>
          Path to the configuration file
          [default: config.toml]

      --login
          Perform GitHub OAuth device flow login
          Initiates interactive authentication with GitHub

      --refresh-token
          Refresh Copilot token using existing access token
          Useful when Copilot token expires

      --access-token-path <ACCESS_TOKEN_PATH>
          Path to the access token file
          [default: ~/.config/passenger-rs/access_token.json]

      --copilot-token-path <COPILOT_TOKEN_PATH>
          Path to the Copilot token file
          [default: ~/.config/passenger-rs/token.json]

  -h, --help
          Print help information

  -V, --version
          Print version information

πŸ› οΈ Development

Prerequisites

# Install Rust
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# Verify installation
rustc --version
cargo --version

Building

# Development build
cargo build

# Release build (optimized)
cargo build --release

# Check without building (fast)
cargo check

Code Quality

# Format code
cargo fmt

# Check formatting
cargo fmt --check

# Run clippy linter
cargo clippy --all-targets --all-features -- -D warnings

# Fix clippy warnings automatically
cargo clippy --fix

πŸ§ͺ Testing

Running Tests

# Run all tests
cargo test

# Run with output
cargo test -- --nocapture

# Run specific test
cargo test test_chat_completions_without_auth

# Run only unit tests
cargo test --lib

# Run only integration tests
cargo test --test '*'

# Run ignored tests (require real authentication)
cargo test -- --ignored

πŸ› Troubleshooting

Common Issues

"No authentication token found"

Solution:

./passenger-rs --login

"Access token file does not exist"

You specified a custom access token path but the file doesn't exist.

Solution:

# Login will create the token at the default location
./passenger-rs --login

# Then copy to your custom location, or re-login with custom path
./passenger-rs --login --access-token-path /custom/path/access.json

"Failed to refresh Copilot token: 401 Unauthorized"

Your access token has expired or is invalid.

Solution:

./passenger-rs --login

"Address already in use"

Another process is using port 8081.

Solutions:

# Option 1: Change to a free port in config.toml
[server]
port = 8082

# Option 2: Find and kill the process
lsof -ti:8081 | xargs kill -9

"Connection refused" when making API calls

Server is not running.

Solution:

./passenger-rs

Debug Mode

Enable debug logging:

RUST_LOG=debug ./passenger-rs

Token Inspection

# View token details
cat ~/.config/passenger-rs/token.json | jq

# Check expiration
jq '.expires_at' ~/.config/passenger-rs/token.json

πŸ“ Token Management

Token Locations

By default, tokens are stored in:

  • Access Token: ~/.config/passenger-rs/access_token.json
  • Copilot Token: ~/.config/passenger-rs/token.json

Token Lifecycle

  • Access Token: Long-lived, used to obtain Copilot tokens
  • Copilot Token: Short-lived (~25 minutes), auto-refreshed
  • Expiration Buffer: Tokens refresh 60 seconds before expiration
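
The 60-second expiration buffer above amounts to a simple timestamp comparison; a minimal sketch with a hypothetical `needs_refresh` helper (times as Unix seconds):

```rust
/// Refresh decision with a 60-second buffer before expiry
/// (illustrative sketch of the rule described above).
const EXPIRY_BUFFER_SECS: u64 = 60;

fn needs_refresh(expires_at: u64, now: u64) -> bool {
    // Refresh once we are within EXPIRY_BUFFER_SECS of expiry.
    now + EXPIRY_BUFFER_SECS >= expires_at
}

fn main() {
    // Token expires at t = 1000.
    assert!(!needs_refresh(1000, 900)); // 100s left: still valid
    assert!(needs_refresh(1000, 945));  // 55s left: refresh now
    assert!(needs_refresh(1000, 1005)); // already expired
    println!("ok");
}
```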

Manual Token Refresh

# Refresh using default paths
./passenger-rs --refresh-token

# Refresh using custom paths
./passenger-rs --refresh-token \
  --access-token-path /path/to/access.json \
  --copilot-token-path /path/to/copilot.json

Security Considerations

  • Tokens contain sensitive credentials
  • Store tokens in secure locations with appropriate permissions
  • Consider using encrypted filesystems for token storage
  • Never commit tokens to version control

# Set secure permissions
chmod 600 ~/.config/passenger-rs/*.json

πŸš€ Performance

  • Language: Rust for memory safety and performance
  • Async Runtime: Tokio for efficient concurrency
  • Web Framework: Axum for fast HTTP handling
  • HTTP Client: Reqwest with connection pooling

🀝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

πŸ“„ License

This project is licensed under the GNU General Public License v3.0 - see the LICENSE file for details.

GPL-3.0 Summary

This means you can:

  • βœ… Use the software for any purpose
  • βœ… Study and modify the source code
  • βœ… Share the software with others
  • βœ… Share your modifications

Important: If you distribute modified versions, you must:

  • πŸ“ Make the source code available
  • πŸ”“ License it under GPL-3.0
  • πŸ“‹ Document your changes
  • πŸ“„ Include the original copyright notice

πŸ™ Acknowledgments

πŸ“ž Support

⚠️ Disclaimer

This project is for educational purposes. Make sure you comply with GitHub's Terms of Service and Copilot's usage policies.
