LiteLLM x gemini-cli Integration #12080
-
Hi, thanks! Can we use a non-Gemini model, like Qwen 3 hosted with vLLM, by setting GEMINI_MODEL="qwen3" for example? I get an error in the gemini-cli debug console when I try that. Thanks!
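For reference, a minimal sketch of a LiteLLM proxy `config.yaml` that might cover this case, exposing a vLLM-hosted Qwen 3 under the name gemini-cli would request via GEMINI_MODEL. The `hosted_vllm/` provider prefix and the `api_base` URL are assumptions about the vLLM deployment; adjust them to match yours.

```yaml
model_list:
  - model_name: qwen3                      # name gemini-cli would request via GEMINI_MODEL
    litellm_params:
      model: hosted_vllm/qwen3             # assumed: vLLM serving an OpenAI-compatible endpoint
      api_base: http://localhost:8000/v1   # assumed vLLM address; change to your server
```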
-
Hey, do the Gemini models need to be named in a particular way for this to work? Do we need to expose all models? We have, for example, a model named gemini_25_pro, and gemini-cli isn't working with it.
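One possibility (a sketch, not verified): gemini-cli may request a specific Gemini model name, so the proxy would also need to expose your deployment under that name. Assuming the CLI asks for gemini-2.5-pro and your existing gemini_25_pro entry points at Google AI Studio:

```yaml
model_list:
  - model_name: gemini_25_pro            # existing internal name
    litellm_params:
      model: gemini/gemini-2.5-pro       # assumed upstream mapping
  - model_name: gemini-2.5-pro           # extra alias matching what the CLI is assumed to request
    litellm_params:
      model: gemini/gemini-2.5-pro
```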
-
I don't think this is working as it should. I'm using the latest available LiteLLM version (1.73.2-nightly), and while this is the only version with which I can connect to Gemini via LiteLLM at all (prior versions aren't working), gemini-cli isn't able to run any tools. When I ask it to "create a text file with the sentence 'hello world'", I get an error, so I'm not even able to approve the shell command it generated.
-
The Gemini-CLI integration still isn't working properly, imo. I tested with the latest images (v1.73.6-stable.patch.1, v1.74.0.rc.2). It does work correctly when I use the Gemini API directly; the difference is that with the native Gemini API I'm immediately asked to confirm the action.
-
Hello! Thank you so much for providing the tutorial! I'm currently trying to use a locally hosted Ollama model as the LLM for Gemini CLI, since I have sensitive documents that I prefer not to send to a third party. Following the tutorial, I can successfully call LiteLLM's proxy from Gemini CLI, but the response from the proxy doesn't work as expected. When I send Gemini CLI a simple query, this is what is returned [screenshot]. I then checked the LiteLLM server and found this response [screenshot]. Last time I checked on Ollama's side, DeepSeek-R1 should support tool calling. Following the tutorial here, https://docs.litellm.ai/docs/tutorials/litellm_gemini_cli#use-anthropic-openai-bedrock-etc-models-on-gemini-cli, I set the model_alias to use DeepSeek-R1 rather than Gemini-Pro-2.5. Is this a problem with how LiteLLM handles some of the Gemini tool calling, or a problem with the model itself?
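For context, a sketch of the kind of mapping described above, with names and ports as assumptions: the Gemini model name gemini-cli requests is routed to a locally hosted DeepSeek-R1 through LiteLLM's `ollama_chat/` provider (the chat endpoint, which is the one that carries tool calls). The tutorial's exact alias mechanism may differ.

```yaml
model_list:
  - model_name: gemini-2.5-pro             # name gemini-cli requests (assumption)
    litellm_params:
      model: ollama_chat/deepseek-r1       # locally hosted DeepSeek-R1 via Ollama
      api_base: http://localhost:11434     # default Ollama address (assumed)
```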
-
Starting this thread for support for using litellm with gemini-cli. Send your questions/feature requests here
Get Started Doc
https://docs.litellm.ai/docs/tutorials/litellm_gemini_cli
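As a quick sketch of the client side described in that doc: point gemini-cli at the LiteLLM proxy via environment variables. The variable names, proxy address, and key below are assumptions/placeholders; the linked tutorial is authoritative.

```bash
export GOOGLE_GEMINI_BASE_URL="http://localhost:4000"   # LiteLLM proxy URL (assumed default port)
export GEMINI_API_KEY="sk-1234"                         # a LiteLLM virtual key (placeholder)
export GEMINI_MODEL="gemini-2.5-pro"                    # model name exposed on the proxy
gemini                                                  # then use gemini-cli as usual
```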
Demo video
View here
cc @jgowdy-godaddy @dotmobo @zkcpku @ffreemt @Macmee @artdent