Expose Codex Desktop's capabilities as standard OpenAI / Anthropic / Gemini APIs, seamlessly connecting any AI client.
Quick Start • Features • Models • Client Setup • Configuration
Disclaimer: This project is independently developed and maintained by a single person — built to scratch my own itch. I have my own account pipeline and am not short on tokens; this project exists because I needed it, not to freeload off anyone.
I open-source and maintain this voluntarily. Features get added when I need them; bugs get fixed as soon as I find them. But I am under no obligation to serve any individual user's demands.
Think the code is garbage? Don't use it. Think you can do better? Open a PR and join as a contributor. The issue tracker is for bug reports and suggestions — not feature demands, update nagging, or unsolicited code reviews.
Codex Proxy is a lightweight local gateway that translates the Codex Desktop Responses API into multiple standard protocol endpoints — OpenAI /v1/chat/completions, Anthropic /v1/messages, Gemini, and Codex /v1/responses passthrough. Use Codex coding models directly in Cursor, Claude Code, Continue, or any compatible client.
Just a ChatGPT account (or a third-party API relay) and this proxy — your own personal AI coding assistant gateway, running locally.
Download the installer from GitHub Releases:
| Platform | Installer |
|---|---|
| Windows | Codex Proxy Setup x.x.x.exe |
| macOS | Codex Proxy-x.x.x.dmg |
| Linux | Codex Proxy-x.x.x.AppImage |
Open the app, log in with your ChatGPT account. Dashboard at http://localhost:8080.
```bash
mkdir codex-proxy && cd codex-proxy
curl -O https://raw.githubusercontent.com/icebear0828/codex-proxy/master/docker-compose.yml
curl -O https://raw.githubusercontent.com/icebear0828/codex-proxy/master/.env.example
cp .env.example .env
docker compose up -d
# Open http://localhost:8080 to log in
```

Data persists in `data/`. For cross-container access, use the host LAN IP (e.g. `192.168.x.x:8080`), not `localhost`. Uncomment Watchtower in `docker-compose.yml` for auto-updates.
```bash
git clone https://github.com/icebear0828/codex-proxy.git
cd codex-proxy
npm install                     # Backend dependencies
cd web && npm install && cd ..  # Frontend dependencies

npm run dev                     # Dev mode (hot reload)
# Or: npm run build && npm start  # Production mode
```

Requires the Rust toolchain (for the TLS native addon):

```bash
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
cd native && npm install && npm run build && cd ..
```

Docker and the desktop app ship pre-built addons — no manual compilation needed.
After logging in, open the dashboard at http://localhost:8080 and find your API Key in the API Configuration section:
```bash
# Replace your-api-key with the key shown in the dashboard
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-api-key" \
  -d '{"model":"codex","messages":[{"role":"user","content":"Hello!"}],"stream":true}'
```

If you see streaming AI text, the setup is working. If you get a 401, double-check the API Key.
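The Anthropic-format endpoint can be smoke-tested the same way. A sketch, assuming the proxy accepts the standard Anthropic `x-api-key` header (it may also accept `Authorization: Bearer`) and the standard Messages request shape:

```shell
# Hypothetical check of the Anthropic-compatible endpoint.
# Replace your-api-key with the key shown in the dashboard.
curl http://localhost:8080/v1/messages \
  -H "Content-Type: application/json" \
  -H "x-api-key: your-api-key" \
  -d '{"model":"codex","max_tokens":256,"stream":true,"messages":[{"role":"user","content":"Hello!"}]}'
```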
- Compatible with `/v1/chat/completions` (OpenAI), `/v1/messages` (Anthropic), Gemini, and `/v1/responses` (Codex passthrough)
- SSE streaming, works with all OpenAI / Anthropic SDKs and clients
- Automatic bidirectional translation between all protocols and the Codex Responses API
- Structured Outputs — `response_format` (`json_object` / `json_schema`) and Gemini `responseMimeType`
- Function Calling — native `function_call` / `tool_calls` across all protocols
- If using custom API Keys, only the OpenAI (`/v1/chat/completions`) format is supported.
- OAuth PKCE login — one-click browser auth
- Multi-account rotation — `least_used`, `round_robin`, and `sticky` strategies
- Plan Routing — accounts on different plans (free/plus/team/business) auto-route to their supported models
- Auto token refresh — JWT renewed before expiry with exponential backoff
- Quota auto-refresh — background polling every 5 min; configurable warning thresholds; exhausted accounts auto-skip
- Ban detection — upstream 403 auto-marks banned; 401 token invalidation auto-expires and switches account
- Relay accounts — connect third-party API relays (API Key + baseUrl) with auto format detection
- Web dashboard — account management, usage stats, batch operations; dashboard login gate for remote access
- Per-account proxy routing — different upstream proxies per account
- Four assignment modes — Global Default / Direct / Auto / Specific proxy
- Health checks — scheduled + manual, reports exit IP and latency
- Auto-mark unreachable — unreachable proxies excluded from rotation
- Rust Native TLS — built-in reqwest + rustls native addon, TLS fingerprint matches real Codex Desktop exactly (pinned dependency versions)
- Desktop header replication — `originator`, `User-Agent`, `x-openai-internal-codex-residency`, `x-codex-turn-state`, `x-client-request-id` headers sent per real client behavior
- Cookie persistence — automatic Cloudflare cookie capture and replay
- Fingerprint auto-update — polls the Codex Desktop update feed, auto-syncs `app_version` and `build_number`
Codex Proxy
```
┌──────────────────────────────────────────────────────────┐
│                                                          │
│  Client (Cursor / Claude Code / Continue / SDK / ...)    │
│      │                                                   │
│      POST /v1/chat/completions  (OpenAI)                 │
│      POST /v1/messages          (Anthropic)              │
│      POST /v1/responses         (Codex passthrough)      │
│      POST /gemini/*             (Gemini)                 │
│      │                                                   │
│      ▼                                                   │
│  ┌──────────┐   ┌───────────────┐   ┌──────────────┐    │
│  │  Routes  │──▶│  Translation  │──▶│    Proxy     │    │
│  │  (Hono)  │   │  Multi→Codex  │   │  Native TLS  │    │
│  └──────────┘   └───────────────┘   └──────┬───────┘    │
│      ▲                                     │            │
│      │          ┌───────────────┐          │            │
│      └──────────│  Translation  │◀─────────┘            │
│                 │  Codex→Multi  │   SSE stream          │
│                 └───────────────┘                       │
│                                                          │
│  ┌──────────┐  ┌───────────────┐  ┌──────────────────┐  │
│  │   Auth   │  │  Fingerprint  │  │   Model Store    │  │
│  │ OAuth/JWT│  │ Rust (rustls) │  │ Static + Dynamic │  │
│  │  Relay   │  │  Headers/UA   │  │  Plan Routing    │  │
│  └──────────┘  └───────────────┘  └──────────────────┘  │
│                                                          │
└──────────────────────────────────────────────────────────┘
                          │
             Rust Native Addon (napi-rs)
          reqwest 0.12.28 + rustls 0.23.36
         (TLS fingerprint = real Codex Desktop)
                          │
                   ┌──────┴──────┐
                   ▼             ▼
              chatgpt.com    Relay providers
        /backend-api/codex   (3rd-party API)
```
| Model ID | Alias | Reasoning Efforts | Description |
|---|---|---|---|
| `gpt-5.4` | — | low / medium / high / xhigh | Latest flagship model |
| `gpt-5.4-mini` | — | low / medium / high / xhigh | 5.4 lightweight version |
| `gpt-5.3-codex` | — | low / medium / high / xhigh | 5.3 coding-optimized model |
| `gpt-5.2-codex` | `codex` | low / medium / high / xhigh | Frontier agentic coding model (default) |
| `gpt-5.2` | — | low / medium / high / xhigh | Professional work & long-running agents |
| `gpt-5.1-codex-max` | — | low / medium / high / xhigh | Extended context / deepest reasoning |
| `gpt-5.1-codex` | — | low / medium / high | GPT-5.1 coding model |
| `gpt-5.1` | — | low / medium / high | General-purpose GPT-5.1 |
| `gpt-5-codex` | — | low / medium / high | GPT-5 coding model |
| `gpt-5` | — | minimal / low / medium / high | General-purpose GPT-5 |
| `gpt-oss-120b` | — | low / medium / high | Open-source 120B model |
| `gpt-oss-20b` | — | low / medium / high | Open-source 20B model |
| `gpt-5.1-codex-mini` | — | medium / high | Lightweight, fast coding model |
| `gpt-5-codex-mini` | — | medium / high | Lightweight coding model |
Suffixes: Append `-fast` for Fast mode, `-high` / `-low` for reasoning effort. E.g. `codex-fast`, `gpt-5.2-codex-high-fast`.

Plan Routing: Accounts on different plans auto-route to their supported models. Models are dynamically fetched and auto-synced.
Dashboard model picker ≠ config file: Changing the model in the Dashboard only affects the UI display and API examples — it does not modify `model.default` in `config/default.yaml` or `data/local.yaml`. The actual model used is determined by the `model` field in each client request (Cursor, Claude Code, etc.). The `model.default` config is only a fallback when the client omits the model field.
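For reference, a sketch of what setting that fallback by hand could look like, assuming `data/local.yaml` accepts the same keys as the `model` section of `config/default.yaml` (the values here are illustrative):

```yaml
# data/local.yaml — illustrative sketch of the model fallback;
# used only when a client request omits the "model" field
model:
  default: gpt-5.2-codex
  default_reasoning_effort: high
```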
Get your API Key from the dashboard (`http://localhost:8080`). Use `codex` (defaults to `gpt-5.2-codex`) or any model ID as the model name.
```bash
export ANTHROPIC_BASE_URL=http://localhost:8080
export ANTHROPIC_API_KEY=your-api-key
# Switch model: export ANTHROPIC_MODEL=codex-fast / gpt-5.4 / gpt-5.1-codex-mini ...
claude
```

Copy env vars from the Anthropic SDK Setup card in the dashboard (includes Opus / Sonnet / Haiku tier model config).

Recommended models: Opus → `gpt-5.4`, Sonnet → `gpt-5.3-codex`, Haiku → `gpt-5.4-mini`.
`~/.codex/config.toml`:

```toml
[model_providers.proxy_codex]
name = "Codex Proxy"
base_url = "http://localhost:8080/v1"
wire_api = "responses"
env_key = "PROXY_API_KEY"

[profiles.default]
model = "gpt-5.4"
model_provider = "proxy_codex"
```

```bash
export PROXY_API_KEY=your-api-key
codex
```

Open the Claude extension settings → API Configuration:
- API Provider: Anthropic
- Base URL: `http://localhost:8080`
- API Key: your API key
- Settings → Models → OpenAI API
- Base URL: `http://localhost:8080/v1`
- API Key: your API key
- Add model `codex`
- Settings → AI Provider → OpenAI Compatible
- API Base URL: `http://localhost:8080/v1`
- API Key: your API key
- Model: `codex`
- Cline sidebar → gear icon
- API Provider: OpenAI Compatible
- Base URL: `http://localhost:8080/v1`
- API Key: your API key
- Model ID: `codex`
`~/.continue/config.json`:

```json
{
  "models": [{
    "title": "Codex",
    "provider": "openai",
    "model": "codex",
    "apiBase": "http://localhost:8080/v1",
    "apiKey": "your-api-key"
  }]
}
```

```bash
aider --openai-api-base http://localhost:8080/v1 \
      --openai-api-key your-api-key \
      --model openai/codex
```

- Settings → Model Services → Add
- Type: OpenAI
- API URL: `http://localhost:8080/v1`
- API Key: your API key
- Add model `codex`
| Setting | Value |
|---|---|
| Base URL | http://localhost:8080/v1 |
| API Key | from dashboard |
| Model | codex (or any model ID) |
SDK examples (Python / Node.js)

Python

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="your-api-key")
for chunk in client.chat.completions.create(
    model="codex", messages=[{"role": "user", "content": "Hello!"}], stream=True
):
    print(chunk.choices[0].delta.content or "", end="")
```

Node.js

```javascript
import OpenAI from "openai";

const client = new OpenAI({ baseURL: "http://localhost:8080/v1", apiKey: "your-api-key" });
const stream = await client.chat.completions.create({
  model: "codex", messages: [{ role: "user", content: "Hello!" }], stream: true,
});
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || "");
}
```

All configuration lives in `config/default.yaml`:
| Section | Key Settings | Description |
|---|---|---|
| `server` | `host`, `port`, `proxy_api_key` | Listen address and API key |
| `api` | `base_url`, `timeout_seconds` | Upstream API URL and timeout |
| `client` | `app_version`, `build_number`, `chromium_version` | Codex Desktop version to impersonate |
| `model` | `default`, `default_reasoning_effort`, `inject_desktop_context` | Default model and reasoning config |
| `auth` | `rotation_strategy`, `rate_limit_backoff_seconds` | Rotation strategy and rate limit backoff |
| `tls` | `proxy_url`, `force_http11` | TLS proxy and HTTP version |
| `quota` | `refresh_interval_minutes`, `warning_thresholds`, `skip_exhausted` | Quota refresh and warnings |
| `session` | `ttl_minutes`, `cleanup_interval_minutes` | Dashboard session management |
| Variable | Overrides |
|---|---|
| `PORT` | `server.port` |
| `CODEX_PLATFORM` | `client.platform` |
| `CODEX_ARCH` | `client.arch` |
| `HTTPS_PROXY` | `tls.proxy_url` |
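In Docker deployments, these variables can be set in `docker-compose.yml`; a sketch (the service name and values are assumptions, adjust to your environment):

```yaml
# docker-compose.yml (excerpt) — env vars take precedence over config/default.yaml
services:
  codex-proxy:
    environment:
      - PORT=8080                                     # overrides server.port
      - HTTPS_PROXY=http://host.docker.internal:7890  # overrides tls.proxy_url
```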
Click to expand full endpoint list
Protocol Endpoints
| Endpoint | Method | Description |
|---|---|---|
| `/v1/chat/completions` | POST | OpenAI format chat completions |
| `/v1/responses` | POST | Codex Responses API passthrough |
| `/v1/messages` | POST | Anthropic format chat completions |
| `/v1/models` | GET | List available models |
Auth & Accounts
| Endpoint | Method | Description |
|---|---|---|
| `/auth/login` | GET | OAuth login entry |
| `/auth/accounts` | GET | Account list (`?quota=true` / `?quota=fresh`) |
| `/auth/accounts` | POST | Add single account (token or refreshToken) |
| `/auth/accounts/import` | POST | Bulk import accounts |
| `/auth/accounts/export` | GET | Export accounts (`?format=minimal` for compact) |
| `/auth/accounts/relay` | POST | Add relay account |
| `/auth/accounts/batch-delete` | POST | Batch delete accounts |
| `/auth/accounts/batch-status` | POST | Batch update account status |
Account Import/Export Examples
```bash
# Export all accounts (full format with tokens)
curl -s http://localhost:8080/auth/accounts/export \
  -H "Authorization: Bearer your-api-key" > backup.json

# Export minimal format (refreshToken + label only, safe to share)
curl -s "http://localhost:8080/auth/accounts/export?format=minimal" \
  -H "Authorization: Bearer your-api-key" > backup-minimal.json

# Bulk import (token, refreshToken, or both)
curl -X POST http://localhost:8080/auth/accounts/import \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-api-key" \
  -d '{
    "accounts": [
      { "token": "eyJhbGciOi..." },
      { "refreshToken": "v1.abc..." },
      { "refreshToken": "v1.def...", "label": "Backup" }
    ]
  }'
# Returns: { "added": 2, "updated": 1, "failed": 0, "errors": [] }

# One-step backup restore (export file → import to another instance)
curl -X POST http://localhost:8080/auth/accounts/import \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-api-key" \
  -d @backup.json
```

Admin
| Endpoint | Method | Description |
|---|---|---|
| `/admin/rotation-settings` | GET/POST | Rotation strategy config |
| `/admin/quota-settings` | GET/POST | Quota refresh & warning config |
| `/admin/refresh-models` | POST | Trigger manual model list refresh |
| `/admin/usage-stats/summary` | GET | Usage stats summary |
| `/admin/usage-stats/history` | GET | Usage time series |
| `/health` | GET | Health check |
Proxy Pool
| Endpoint | Method | Description |
|---|---|---|
| `/api/proxies` | GET/POST | List / add proxies |
| `/api/proxies/:id` | PUT/DELETE | Update / remove proxy |
| `/api/proxies/:id/check` | POST | Health check single proxy |
| `/api/proxies/check-all` | POST | Health check all proxies |
| `/api/proxies/assign` | POST | Assign proxy to account |
- Node.js 18+ (20+ recommended)
- Rust — required for source builds (compiles TLS native addon); Docker / desktop app ship pre-built
- ChatGPT account — free account is sufficient
- Docker (optional)
- Codex API is stream-only. `stream: false` causes the proxy to stream internally and return assembled JSON.
- This project relies on Codex Desktop's public API. Upstream updates are auto-detected and fingerprints auto-synced.
- Windows source builds need the Rust toolchain for the TLS native addon. Docker deployment ships it pre-built.
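To illustrate the stream-only note: a non-streaming request still works, because the proxy consumes the upstream SSE itself and returns one assembled JSON body. A sketch (replace `your-api-key` with your key):

```shell
# stream:false — the proxy streams from upstream internally,
# then returns a single assembled chat completion JSON object
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-api-key" \
  -d '{"model":"codex","messages":[{"role":"user","content":"Hello!"}],"stream":false}'
```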
Non-Commercial license:
- Allowed: Personal learning, research, self-hosted deployment
- Prohibited: Any commercial use including selling, reselling, paid proxy services, or commercial product integration
Not affiliated with OpenAI. Users assume all risks and must comply with OpenAI's Terms of Service.


