vibecoddd/cc-flux

CC-Flux (Multimodal Coding Agent Proxy)

CC-Flux is a lightweight proxy layer that decouples the Claude Code CLI from its hardcoded Anthropic API dependency. It lets developers use any OpenAI-compatible API (such as DeepSeek or OpenAI) or local models (via Ollama) while preserving the agentic coding experience.


🌟 Key Features

  • 🚀 Seamless Model Swapping: Hot-swap between cloud providers (OpenAI, DeepSeek) and local backends (Ollama) without restarting your CLI session.
  • 🔄 Protocol Translation: Automatically translates Anthropic's tool_use content blocks into OpenAI's function_calling format (and back).
  • 🎮 TUI Controller: A beautiful Go-based Terminal User Interface (Bubble Tea) to monitor and control the proxy in real-time.
  • 🛠️ Local Model Optimization: Specialized prompt injection for Ollama/local models to ensure reliable tool-calling behavior.
  • ⚡ High-Performance Streaming: Built on Fastify with low-latency Server-Sent Events (SSE) relaying.
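The protocol translation above can be sketched as follows. The shapes follow the two public APIs (Anthropic Messages, OpenAI Chat Completions); this is an illustration, not CC-Flux's actual implementation:

```javascript
// Sketch of the Anthropic -> OpenAI tool-call translation described above.
// An Anthropic `tool_use` block carries its arguments as a JSON object;
// an OpenAI `tool_calls` entry carries them as a JSON *string*.
function anthropicToolUseToOpenAI(block) {
  return {
    id: block.id,
    type: 'function',
    function: {
      name: block.name,
      arguments: JSON.stringify(block.input),
    },
  };
}

// Example input (field values are illustrative):
const block = {
  type: 'tool_use',
  id: 'toolu_01',
  name: 'read_file',
  input: { path: 'src/index.js' },
};

console.log(anthropicToolUseToOpenAI(block));
```

The reverse direction (OpenAI tool_calls back into Anthropic tool_result blocks) is the mirror image: parse the `arguments` string back into an object.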

🏗️ System Architecture

  1. Flux Proxy (Node.js): The heart of the system. It handles traffic, performs protocol transformation, and exposes an Admin API for the TUI.
  2. TUI Controller (Go): A standalone control tower used to switch active models and monitor connectivity.
  3. Config Engine: Manages API keys and model presets via providers.json.

🛠️ Getting Started

1. Prerequisites

  • Node.js: v18.0 or higher
  • Go: v1.20 or higher (for building the TUI)
  • Claude Code CLI: Installed and ready

2. Installation

Clone the repository and install dependencies:

# Install Proxy dependencies
cd proxy
npm install

# Build the TUI Controller
cd ../tui
go build -o cc-flux .        # on Windows: go build -o cc-flux.exe .

Configuration

  • Proxy: Edit proxy/.env for default startup settings.
    • Port: set PORT (default 8080).
    • IPC (Optional): Set SOCKET_PATH for higher performance.
      • Windows: \\.\pipe\cc-flux
      • Linux/Mac: /tmp/cc-flux.sock
  • TUI: Edit tui/providers.json to add/remove model presets.
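Pulling the settings above together, a proxy/.env might look like this (values are the defaults named in this README; consult the repo for the full list of options):

```ini
# proxy/.env — example based on the settings described in this README
PORT=8080

# Optional IPC socket (Linux/macOS shown; on Windows use \\.\pipe\cc-flux)
SOCKET_PATH=/tmp/cc-flux.sock

# Retry invalid tool-call JSON from local models (see "Tuning for Local Models")
RETRY_ENABLED=true
```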

🚀 Usage

Step 1: Start CC-Flux

Option A: Quick Start

  • Windows:
    start_cc_flux.bat
  • Linux / macOS:
    chmod +x start_cc_flux.sh
    ./start_cc_flux.sh

Option B: Manual Start

  1. Start the Proxy:

    cd proxy
    npm start

    (Default port: 8080)

  2. Start the TUI (in a new terminal):

    • Windows:
      cd tui
      ./cc-flux.exe
    • Linux / macOS:
      cd tui
      go build -o cc-flux .
      ./cc-flux

Step 2: Connect Claude Code

Once the proxy is running (default: http://localhost:8080), you need to tell Claude Code to use it as the backend instead of the default Anthropic API.

Option A: Using Environment Variables (Recommended)

Set the CLAUDE_BASE_URL (or ANTHROPIC_BASE_URL depending on your version) to point to the proxy:

  • Linux / macOS:
    export ANTHROPIC_BASE_URL=http://localhost:8080/v1
    claude
  • Windows (PowerShell):
    $env:ANTHROPIC_BASE_URL="http://localhost:8080/v1"
    claude

Option B: Using the --api-url flag

If your version of the Claude CLI supports it, you can pass the URL directly:

claude --api-url http://localhost:8080/v1

Option C: Using Unix Sockets (Advanced)

If you configured SOCKET_PATH in the proxy (e.g., /tmp/cc-flux.sock), you can connect via curl-compatible tools or wrappers that support sockets, though environment variables are typically used for the CLI.


📖 Common Operations

1. Switching Models

  1. Ensure the Proxy is running.
  2. Open the TUI Controller (cc-flux).
  3. Use the Arrow Keys (Up/Down) or j/k to navigate the list.
  4. Press Enter to select a model.
    • The Proxy will instantly switch its backend.
    • Status will update to Successfully switched to [Model Name].
  5. Press q or Ctrl+C to exit the TUI (the Proxy will continue running in the background).

2. Adding New Model Providers

  1. Open tui/providers.json.
  2. Add a new JSON object to the array:
    {
      "id": "my-custom-model",
      "name": "My Custom Model",
      "provider": "openai",
      "baseUrl": "https://api.example.com/v1",
      "apiKey": "your-api-key",
      "model": "model-name-123"
    }
  3. Restart the TUI to see the new entry.

3. Tuning for Local Models (Ollama)

  • Retry Mode: If your local model often outputs invalid tool-call JSON, ensure RETRY_ENABLED=true is set in proxy/.env.
  • System Prompts: The proxy automatically injects formatting instructions for ollama providers to improve reliability.
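The prompt injection described above can be pictured like this. The instruction text and the provider check are illustrative assumptions, not CC-Flux's actual prompt:

```javascript
// Illustrative sketch of system-prompt injection for local models.
// The hint text below is a made-up example, not the proxy's real prompt.
const TOOL_FORMAT_HINT =
  'When calling a tool, respond with a single valid JSON object and nothing else.';

// Prepend a formatting instruction only for local (ollama) providers;
// cloud models generally emit well-formed tool calls without it.
function injectToolHint(provider, messages) {
  if (provider !== 'ollama') return messages;
  return [{ role: 'system', content: TOOL_FORMAT_HINT }, ...messages];
}
```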

🗺️ Implementation Roadmap

  • Phase 1 (MVP): Core Node.js proxy and Anthropic-to-OpenAI mapping.
  • Phase 2 (TUI): Go-based interactive model selector.
  • Phase 3 (Optimization): System prompt injection for improved local model (Ollama) support.
  • Phase 4 (Advanced): Support for thinking/reasoning tokens (DeepSeek R1) and conversation history compression.

🛡️ Security

All API keys are stored locally on your machine. The proxy acts as a pass-through and does not log your sensitive keys or conversation content to any external service.


📄 License

MIT
