Description
When using the latest RD-Agent (0.6.x) and LiteLLM (1.72.x), it is impossible to use any non-OpenAI provider (e.g., Ollama for local inference, or the DeepSeek API).
Even with every environment variable, CLI argument, and code patch set to `provider=ollama` (or `deepseek`), RD-Agent always falls back to `deepseek-chat` and throws:

```text
litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=deepseek-chat
```

In practice this rules out local LLMs and every non-OpenAI provider, even though the official documentation claims full support.
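For context, a direct LiteLLM call does work once the provider is encoded as a prefix on the model string, which is how LiteLLM 1.x resolves providers. A minimal sketch, assuming a local Ollama serving `qwen2:7b` (this is plain LiteLLM, not RD-Agent code):

```python
import litellm

# LiteLLM infers the provider from the "<provider>/<model>" prefix,
# so no separate provider argument is needed in this form.
response = litellm.completion(
    model="ollama/qwen2:7b",  # or "deepseek/deepseek-chat" for the DeepSeek API
    messages=[{"role": "user", "content": "ping"}],
    api_base="http://localhost:11434",  # where `ollama serve` listens
)
print(response.choices[0].message.content)
```

The failure therefore looks like RD-Agent handing LiteLLM a bare `deepseek-chat` with no prefix and no provider override.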
Environment
- OS: macOS 14.x (Apple Silicon)
- Python: 3.13.x
- RD-Agent: 0.6.0 / 0.6.1 (latest)
- LiteLLM: 1.72.4 (latest)
- Ollama: Installed and running (`qwen2:7b` model pulled, `ollama serve` active)
- DeepSeek API: Also tested, same error
Reproduction Steps
- Set all environment variables for Ollama (also tried DeepSeek, same result):

  ```bash
  export LITELLM_PROVIDER=ollama
  export LITELLM_MODEL=qwen2:7b
  export OLLAMA_BASE_URL=http://localhost:11434
  export RDAGENT_LLM_BACKEND=rdagent.oai.backend.litellm.LiteLLMBackend
  ```

- Run:

  ```bash
  source rdagent_venv/bin/activate
  rdagent fin_factor --provider ollama --model qwen2:7b --max_iterations 3 --fast_mode
  ```

- Result: always fails with `LLM Provider NOT provided` and falls back to `deepseek-chat` (a LiteLLM-only repro follows below).
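The failure can also be reproduced with LiteLLM alone, which suggests the problem is the missing provider prefix rather than Ollama or DeepSeek themselves (minimal sketch against LiteLLM 1.72.x):

```python
import litellm

try:
    # A bare model name with no "<provider>/" prefix: this reproduces
    # the exact error from the log below.
    litellm.completion(
        model="deepseek-chat",
        messages=[{"role": "user", "content": "ping"}],
    )
except litellm.BadRequestError as e:
    print(e)  # LLM Provider NOT provided. ... You passed model=deepseek-chat
```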
Error Log
```text
litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=deepseek-chat
...
RuntimeError: Failed to create chat completion after 10 retries.
```
- Even after patching `rdagent/oai/backend/litellm.py` and `base.py` to forcibly inject `provider="ollama"` (or `"deepseek"`), the error persists (see the sketch after this list).
- CLI arguments, environment variables, and code patching all fail to pass the provider through to LiteLLM.
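To make the "forcibly inject" step concrete, the patch amounted to roughly the following sketch (the wrapper name `_completion` is illustrative, not the actual RD-Agent symbol). Note that LiteLLM's documented kwarg for overriding provider detection is `custom_llm_provider`; a bare `provider` key is not a recognized parameter, which may be why the injection had no effect:

```python
import litellm

def _completion(messages, model="qwen2:7b", **kwargs):
    # What the patch injected: a bare `provider` kwarg. LiteLLM does not
    # recognize this key, so provider detection still fails upstream.
    kwargs["provider"] = "ollama"
    # LiteLLM's documented override would instead be:
    #   kwargs["custom_llm_provider"] = "ollama"
    return litellm.completion(model=model, messages=messages, **kwargs)
```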
What I Tried
- Set all relevant environment variables (`LITELLM_PROVIDER`, `LITELLM_MODEL`, etc.)
- Used CLI arguments (`--provider`, `--model`)
- Patched `rdagent/oai/backend/litellm.py` and `base.py` to forcibly inject `provider` into kwargs
- Cleaned all `.env` files, restarted the shell, reinstalled packages
- Confirmed Ollama is running and accessible (a quick check is sketched after this list)
- Also tested with the DeepSeek API (same error)
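For completeness, a minimal way to confirm the Ollama server is up and the model is pulled (Ollama's `/api/tags` endpoint lists locally available models):

```python
import requests  # third-party: pip install requests

# A 200 response listing qwen2:7b confirms `ollama serve` is reachable.
tags = requests.get("http://localhost:11434/api/tags", timeout=5).json()
print([m["name"] for m in tags.get("models", [])])
```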
Expected Behavior
- RD-Agent should correctly pass the provider/model to LiteLLM and allow local LLM inference via Ollama (or DeepSeek API).
- The official documentation claims Ollama is supported, but the code does not work as described.
Additional Context
- This used to work with RD-Agent 0.5.x + LiteLLM 1.6x/1.7x, where the provider was not strictly required.
- The bug only appears after upgrading to RD-Agent 0.6.x and LiteLLM 1.72.x.
- Similar issues have been reported in the LiteLLM repo and in SWE-agent.
Request
- Please provide a working example or fix for using Ollama (or any non-OpenAI provider) with RD-Agent 0.6.x + LiteLLM 1.72.x.
- If possible, clarify the correct way to pass provider/model through all layers (env, CLI, code) so that it is not overwritten or lost.
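One possible fix on RD-Agent's side, offered only as a sketch of a suggestion rather than a claim about the current code, would be to normalize the configured model into LiteLLM's `<provider>/<model>` form before calling `litellm.completion`:

```python
def normalize_model(provider: str | None, model: str) -> str:
    """Return a LiteLLM-style model string such as "ollama/qwen2:7b".

    LiteLLM 1.x resolves the provider from this prefix; passing a bare
    "deepseek-chat" is exactly what triggers "LLM Provider NOT provided".
    """
    if provider and "/" not in model:
        return f"{provider}/{model}"
    return model

assert normalize_model("ollama", "qwen2:7b") == "ollama/qwen2:7b"
assert normalize_model(None, "deepseek/deepseek-chat") == "deepseek/deepseek-chat"
```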
Thank you for your help!
If specific patch code, or more detailed environment variables, command lines, or error stack traces would help, I can provide them at any time.