Issue Summary
When configuring the `nano-gpt` provider through LLM-API-Key-Proxy with litellm, the standard naming patterns fail. Only a custom provider name that avoids combining "nano" and "gpt" works reliably.
Test Configurations & Results
Attempt 1: NANOGPT (non-custom method)
- .env: `NANOGPT` (added via "add api key", not custom)
- opencode.jsonc model names tried:
  - `nano-gpt` (per litellm docs: https://docs.litellm.ai/docs/providers/nano-gpt)
    - Error: "An unexpected error occurred during the stream: 'nano-gpt'"
  - `nanogpt`
    - Error: `litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=nanogpt/deepseek-v3-0324. Pass model as E.g. For 'Huggingface' inference endpoints pass in completion(model='huggingface/starcoder',..). Learn more: https://docs.litellm.ai/docs/providers`
  - `nano_gpt`
    - Error: "An unexpected error occurred during the stream: 'nano_gpt'"
Attempt 2: NANO_GPT (custom OpenAI-compatible)
- .env: `NANO_GPT` (added as custom OpenAI-compatible)
- opencode.jsonc model names tried: `nano-gpt`, `nano_gpt`
- Result: Initially worked, then became flaky (same litellm BadRequestError as above)
Working Solution
- .env: `NG` (custom OpenAI-compatible)
- opencode.jsonc: `ng/<model>` (e.g., `ng/deepseek-v3-0324`)
- Result: Works reliably (a sketch of this setup is shown below)
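For reference, a minimal sketch of what the working custom-provider block could look like on the opencode side. This is illustrative only: it assumes opencode's custom OpenAI-compatible provider schema (`npm: "@ai-sdk/openai-compatible"`), a proxy listening locally on port 8000, and a placeholder API key; none of these values are taken from the report.

```jsonc
// opencode.jsonc — hypothetical sketch, not the reporter's exact file.
// baseURL and apiKey must be adjusted to match the local proxy setup.
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "ng": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "NanoGPT (via proxy)",
      "options": {
        "baseURL": "http://localhost:8000/v1",
        "apiKey": "<proxy api key>"
      },
      "models": {
        "deepseek-v3-0324": {}
      }
    }
  }
}
```

With a block like this, models are referenced as `ng/<model>` (e.g., `ng/deepseek-v3-0324`), which is the naming that proved reliable.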
Expected Behavior
Per the litellm documentation, using `nano-gpt` model names with the `NANOGPT` or `NANO_GPT` key names should work correctly, without requiring non-standard naming.
Hypothesis
litellm appears to intercept names containing "nano" and "gpt" in combination, attempting to route them through a built-in nano-gpt provider handler instead of treating them as custom OpenAI-compatible endpoints.
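If that is what is happening, the difference should be reproducible against litellm directly, outside the proxy. A minimal sketch, assuming litellm is installed; the endpoint URL, API key, and key-configuration details are placeholders rather than values from this report:

```python
import litellm

# 1) Built-in provider routing: litellm parses the prefix before "/" and,
#    if it matches a registered provider, dispatches to that provider's
#    handler (here, the NanoGPT integration described in the litellm docs).
#    Assumes the NanoGPT API key is configured the way litellm expects.
litellm.completion(
    model="nano-gpt/deepseek-v3-0324",
    messages=[{"role": "user", "content": "hello"}],
)

# 2) Unrecognized prefix: "nanogpt" is not in litellm's provider registry,
#    so this raises BadRequestError ("LLM Provider NOT provided ...") —
#    the same error observed through the proxy.
litellm.completion(
    model="nanogpt/deepseek-v3-0324",
    messages=[{"role": "user", "content": "hello"}],
)

# 3) Custom OpenAI-compatible routing: pin the request to the generic
#    OpenAI handler and supply the endpoint explicitly. This bypasses the
#    provider-name matching entirely, which is what the custom "NG" entry
#    effectively achieves.
litellm.completion(
    model="openai/deepseek-v3-0324",
    api_base="https://nano-gpt.com/api/v1",  # placeholder endpoint
    api_key="sk-...",                        # placeholder key
    messages=[{"role": "user", "content": "hello"}],
)
```

Case 2 reproduces the BadRequestError quoted above, which would point to litellm's provider-name resolution rather than the proxy itself as the source of the failure.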