OpenRouter 401s in “LLM Council Plus” fork despite valid key and direct API success #133

Description

@warrenzm

Hi, and thanks for releasing LLM Council – it’s a fantastic concept and has clearly inspired a lot of experimentation.

I’m running into a reproducible issue in a third‑party fork (“LLM Council Plus”) that adds a modern UI and configuration layer on top of the original project. The problem appears to be specific to that fork’s OpenRouter integration, but since it follows the original council architecture, I wanted to document the behavior here in case it points to a general pitfall with OpenRouter usage or model configuration.

Context
Base project: karpathy/llm-council

Code actually used: a fork branded “LLM Council Plus” (linked from a recent YouTube video / Reddit post)

OS: Windows 10/11

Backend: FastAPI/uvicorn server (python -m backend.main via uv)

Frontend: Vite/React UI from the fork

Provider mode: OpenRouter as the only provider (no direct OpenAI/Anthropic keys configured)

What I configured in the fork
Provider key

In the fork’s settings UI, I added an OpenRouter API key and ran its built‑in “Test” function, which reported success.

On the OpenRouter dashboard, this key is:

Status: Enabled

Credit limit: $100 (per key)

BYOK: disabled

No obvious restrictions or org policies visible.

Council configuration

Council members are selected from the fork’s dropdowns, not typed manually.

Example configuration:

Member: google/gemini-2.0-flash-001

Member: openai/chatgpt-4o-latest

Chair: one of the above

All local/Ollama and “Direct Connection” providers (OpenAI, Anthropic, etc.) are turned off; only the OpenRouter path is active.

Symptoms in the fork
Creating a New Discussion and sending a trivial prompt (“test”) triggers Stage 1.

Each member fails with UI messages like:

```text
Model Failed to Respond
Error: http_401
```
No final verdict is produced; the run bails out at Stage 1.

Backend log output
From the Python backend:

```text
DEBUG: Sending stage1_init with total=2
HTTP error querying model google/gemini-2.0-flash-001: Client error '401 Unauthorized' for url 'https://openrouter.ai/api/v1/chat/completions'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
HTTP error querying model openai/chatgpt-4o-latest: Client error '401 Unauthorized' for url 'https://openrouter.ai/api/v1/chat/completions'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/401
```
So the fork is calling OpenRouter’s chat/completions endpoint with plausible model IDs, but OpenRouter is returning 401 for every member.
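Since OpenRouter only requires a standard Bearer token (the `HTTP-Referer`/`X-Title` headers are optional attribution), a blanket 401 usually means the key never made it into the `Authorization` header. A minimal sketch of what the backend would need to build; the helper name is mine, not from either codebase:

```python
# Hypothetical helper sketching the headers an OpenRouter call needs.
# build_openrouter_headers is not code from llm-council or the fork;
# the header names are OpenRouter's documented ones.

def build_openrouter_headers(
    api_key: str,
    referer: str = "http://localhost",
    title: str = "LLM Council",
) -> dict:
    if not api_key:
        # An empty/None key produces "Bearer " -> guaranteed 401, which would
        # match the symptom here without the key itself being invalid.
        raise ValueError("OpenRouter API key is missing")
    return {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
        # Optional attribution headers; auth works without them.
        "HTTP-Referer": referer,
        "X-Title": title,
    }
```

If the fork's settings layer stores the key somewhere its request builder does not read (e.g. a UI config file vs. an environment variable), a check like the one above would fail loudly instead of sending an empty bearer token.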

Sanity check of the OpenRouter key (works)
To rule out account or key problems, I called OpenRouter directly from the same machine with the same key and model:

```powershell
$headers = @{
    "Authorization" = "Bearer "   # key redacted
    "Content-Type"  = "application/json"
    "HTTP-Referer"  = "http://localhost/test"
    "X-Title"       = "Key sanity check"
}

$body = @{
    model    = "openai/chatgpt-4o-latest"
    messages = @(@{ role = "user"; content = "Say OK." })
} | ConvertTo-Json

$response = Invoke-WebRequest -Uri "https://openrouter.ai/api/v1/chat/completions" -Headers $headers -Method POST -Body $body
$response.StatusCode        # 200
$response.Content[0..400]   # JSON shows the model reply: "content":"OK."
```
This confirms:

The key is valid and authorized for openai/chatgpt-4o-latest.

Credits and limits are configured correctly.

OpenRouter is functioning as expected when called directly.

Why I’m raising this here
I fully understand that this is happening in a third‑party fork with its own UI and request pipeline. However:

The fork is following the original council design (multiple members, chair, stages), and is likely using similar patterns for constructing provider calls.

The failure mode (401 for all OpenRouter calls, while the same key+model succeed outside the app) suggests a subtle issue in how the fork is building or routing OpenRouter requests (headers, auth, provider selection, or model mapping).

If there are recommended patterns or examples for:

How to structure OpenRouter requests from LLM Council (headers, referer/title, model naming), or

How to safely support both OpenRouter and “Direct” provider modes in the same codebase without confusing auth paths,

it would be very helpful for anyone maintaining forks like LLM Council Plus.
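For the second point, one pattern that avoids cross-wired auth in a mixed OpenRouter/direct setup is resolving the base URL and key from a single table keyed by provider mode, so a model routed through OpenRouter can never accidentally pick up a direct OpenAI/Anthropic key (a sketch with hypothetical names; neither repo necessarily structures it this way):

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class ProviderEndpoint:
    base_url: str
    api_key_env: str  # env var holding the key for this auth path

# One table, one lookup per request: each provider mode owns exactly one
# base URL and one key source, so auth paths cannot be mixed.
ENDPOINTS = {
    "openrouter": ProviderEndpoint("https://openrouter.ai/api/v1", "OPENROUTER_API_KEY"),
    "openai":     ProviderEndpoint("https://api.openai.com/v1", "OPENAI_API_KEY"),
    "anthropic":  ProviderEndpoint("https://api.anthropic.com/v1", "ANTHROPIC_API_KEY"),
}

def resolve_endpoint(provider_mode: str) -> tuple:
    """Return (base_url, api_key) for the active provider, failing loudly."""
    try:
        ep = ENDPOINTS[provider_mode]
    except KeyError:
        raise ValueError(f"Unknown provider mode: {provider_mode!r}")
    key = os.environ.get(ep.api_key_env)
    if not key:
        raise RuntimeError(f"{ep.api_key_env} is not set for provider {provider_mode!r}")
    return ep.base_url, key
```

Failing loudly on a missing key at resolution time would surface a misconfigured fork immediately, instead of surfacing it later as an opaque 401 from the provider.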

If this upstream repo already has a canonical OpenRouter integration example or a known caveat (for example: special handling for OpenRouter headers, or avoiding mixing different provider clients in the same code path), a pointer to that would be greatly appreciated, and I can relay it back to the fork maintainer.

Happy to provide additional logs, redact screenshots, or test patches if there’s anything in core that you’d like checked.
