Replies: 4 comments 5 replies
-
At this time, we're optimizing Gemini CLI for Gemini models, and not building direct support for other LLM providers.
-
@Greatz08 The latest release of LiteLLM supports Gemini CLI, so you can run a local proxy, configure whatever model providers you want, and then use them from Gemini CLI. Works great.
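In case it helps, here is a rough sketch of that setup. The LiteLLM config format and the `litellm` command are standard, but the environment variables used below to point Gemini CLI at the proxy (`GOOGLE_GEMINI_BASE_URL`, `GEMINI_API_KEY`) are my assumption, so double-check them against the current LiteLLM and Gemini CLI docs:

```bash
# Terminal 1: install and run the LiteLLM proxy.
pip install 'litellm[proxy]'

# Expose a non-Gemini model under a name the CLI will request.
# model_list / litellm_params are standard LiteLLM config keys; the provider
# and model strings below are just placeholders for whatever you actually use.
cat > litellm_config.yaml <<'EOF'
model_list:
  - model_name: gemini-2.5-pro
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20241022
      api_key: os.environ/ANTHROPIC_API_KEY
EOF

litellm --config litellm_config.yaml --port 4000

# Terminal 2: point Gemini CLI at the local proxy.
# Assumption: the CLI honors GOOGLE_GEMINI_BASE_URL to override the API
# endpoint; GEMINI_API_KEY just needs to match whatever key the proxy expects.
export GOOGLE_GEMINI_BASE_URL="http://localhost:4000"
export GEMINI_API_KEY="sk-anything"
gemini
```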
-
We also forked gemini-cli and added multi-provider support for OpenAI and Anthropic (https://github.com/acoliver/llxprt-code), and we keep up with gemini-cli releases and features. It works great with gpt-oss and Qwen3-Coder. The proxy solution sounds good in theory, but the Gemini Code Assist team is right that providing a good experience across different models is a lot of work. Take the readManyFiles tool: what happens when a model with less than a 1M-token context window requests 500 files? In gemini-cli it dies and can blow the heap; in llxprt-code the model gets a warning not to do that and the response is limited. I think the gemini-cli team made the right choice: focus on the best possible Gemini 2.5 Pro experience upstream and leave multi-provider support to community-driven projects downstream.
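For what it's worth, the general idea is easy to sketch. This is not llxprt-code's actual code, just an illustration: `readManyFilesCapped`, the 4-characters-per-token estimate, and the numbers are all made up. The point is to cap the aggregated output at a budget derived from the target model's context window and report what was skipped, instead of concatenating everything and hoping it fits.

```typescript
import { promises as fs } from "node:fs";

// Very rough token estimate (~4 characters per token); a real tool would use
// the target model's tokenizer.
const approxTokens = (text: string): number => Math.ceil(text.length / 4);

// Hypothetical helper, not the actual readManyFiles tool: read files until a
// context-derived budget is hit, then skip the rest and warn.
async function readManyFilesCapped(
  paths: string[],
  modelContextTokens: number,   // e.g. 128_000 for a smaller model
  reserveTokens = 16_000        // room left for the prompt and the model's reply
): Promise<{ content: string; skipped: string[]; warning?: string }> {
  const budget = modelContextTokens - reserveTokens;
  const chunks: string[] = [];
  const skipped: string[] = [];
  let used = 0;

  for (const path of paths) {
    const text = await fs.readFile(path, "utf8");
    const cost = approxTokens(text);
    if (used + cost > budget) {
      skipped.push(path);       // don't blow the context window (or the heap)
      continue;
    }
    chunks.push(`--- ${path} ---\n${text}`);
    used += cost;
  }

  return {
    content: chunks.join("\n\n"),
    skipped,
    warning: skipped.length
      ? `Skipped ${skipped.length} file(s) to stay within the model's context window; request fewer files.`
      : undefined,
  };
}
```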
-
I used find and search on the main page and didn't find any OpenAI reference, so I thought I'd ask.