Replies: 6 comments 8 replies
At this time, we're optimizing Gemini CLI for Gemini models, and not building direct support for other LLM providers.
@Greatz08 The latest release of LiteLLM supports Gemini CLI, so you can run a local proxy, configure any model providers you want, and then use them from Gemini CLI. It works great.
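For anyone who wants a concrete starting point, here is a minimal sketch of that setup. It uses LiteLLM's standard `model_list` config format; the model names, keys, and port are placeholders, so check the current LiteLLM docs for exact syntax:

```yaml
# config.yaml -- minimal LiteLLM proxy config (placeholder models and keys)
model_list:
  - model_name: gpt-4o                 # name exposed to clients
    litellm_params:
      model: openai/gpt-4o             # upstream provider/model
      api_key: os.environ/OPENAI_API_KEY
  - model_name: claude-sonnet
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20241022
      api_key: os.environ/ANTHROPIC_API_KEY
```

Start the proxy with `litellm --config config.yaml --port 4000`, then point Gemini CLI at `http://localhost:4000` as its API base URL. The exact setting or environment variable for overriding the base URL depends on your Gemini CLI version, so check its docs for that part.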
We also forked gemini-cli and added multi-provider support for OpenAI and Anthropic: https://github.com/acoliver/llxprt-code. We keep up with gemini-cli releases and features, and it works great with gpt-oss and Qwen3-Coder.

The proxy solution sounds good in theory, but the Gemini Code Assist team is right that providing a good experience for different models is a lot of work. Take the readManyFiles tool: what happens if a model with less than a 1M-token context window requests 500 files? In gemini-cli the call dies and can potentially blow the heap; in llxprt-code the model gets a warning not to do that, and the response is limited. I think the gemini-cli team made the right choice: focus on the best possible Gemini 2.5 Pro experience upstream while community-driven projects handle other providers downstream.
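To make that concrete, here is a rough sketch of the kind of guard I mean, not llxprt-code's actual implementation; the chars-per-token heuristic and the budget fraction are placeholder assumptions:

```ts
// Sketch of a context-window guard for a read-many-files style tool.
// The 4-chars-per-token estimate and the 50% budget are rough placeholders.

interface FileContent {
  path: string;
  text: string;
}

const CHARS_PER_TOKEN = 4; // crude heuristic, not a real tokenizer

function estimateTokens(text: string): number {
  return Math.ceil(text.length / CHARS_PER_TOKEN);
}

/**
 * Returns as many files as fit in a fraction of the model's context window,
 * plus a warning when some of the requested files had to be dropped.
 */
function limitToContextWindow(
  files: FileContent[],
  contextWindowTokens: number,
  budgetFraction = 0.5,
): { included: FileContent[]; warning?: string } {
  const budget = Math.floor(contextWindowTokens * budgetFraction);
  const included: FileContent[] = [];
  let used = 0;

  for (const file of files) {
    const cost = estimateTokens(file.text);
    if (used + cost > budget) {
      return {
        included,
        warning:
          `Requested ${files.length} files, but only ${included.length} fit ` +
          `in the ~${budget}-token budget for this model. ` +
          `Narrow the request or read files in smaller batches.`,
      };
    }
    included.push(file);
    used += cost;
  }
  return { included };
}
```

The point is just that the tool, not the model, has to enforce the limit, because a smaller-context model will happily ask for more than it can hold.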
I'm using Qwen Code (a Gemini CLI fork), OpenCode, and Codex CLI for this purpose. I believe it's best for Gemini CLI to stay optimized for Gemini models. What I'd really like to see is an agent that makes the most of Gemini's unique model characteristics and shows off its full potential.
I wrote a proxy service that allows Gemini CLI to directly use OpenAI-compatible APIs.
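This isn't that service's actual code, but the core translation such a proxy has to do looks roughly like the sketch below, assuming the non-streaming generateContent request shape and the standard OpenAI chat completions shape; streaming, tool calls, and system instructions are omitted:

```ts
// Rough sketch of the request/response translation a Gemini-to-OpenAI proxy does.
// Streaming, tool calls, system instructions, and error handling are omitted.

interface GeminiPart { text?: string }
interface GeminiContent { role: "user" | "model"; parts: GeminiPart[] }
interface GeminiRequest { contents: GeminiContent[] }

interface OpenAIMessage { role: "user" | "assistant"; content: string }
interface OpenAIRequest { model: string; messages: OpenAIMessage[] }
interface OpenAIResponse { choices: { message: { content: string } }[] }

// Gemini generateContent request -> OpenAI chat completions request.
function toOpenAIRequest(req: GeminiRequest, model: string): OpenAIRequest {
  return {
    model,
    messages: req.contents.map((c) => ({
      role: c.role === "model" ? "assistant" : "user",
      content: c.parts.map((p) => p.text ?? "").join(""),
    })),
  };
}

// OpenAI chat completions response -> Gemini generateContent response.
function toGeminiResponse(res: OpenAIResponse) {
  return {
    candidates: res.choices.map((choice) => ({
      content: { role: "model", parts: [{ text: choice.message.content }] },
      finishReason: "STOP",
    })),
  };
}
```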
I used find and search on the main page and didn't find any OpenAI reference, so I thought I'd ask.