Replies: 10 comments 23 replies
-
Hi, you need to set export GEMINI_MODEL="gemini-2.5-flash" to override the default 2.5 Pro model.
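A quick shell sketch of that override. This assumes the CLI honors the GEMINI_MODEL environment variable as described in the reply above; behavior may vary by version, so verify with your build:

```shell
# Override the default model for the current shell session.
# Assumes the CLI reads the GEMINI_MODEL environment variable,
# as suggested in this reply.
export GEMINI_MODEL="gemini-2.5-flash"
echo "GEMINI_MODEL is set to: $GEMINI_MODEL"

# Then start the CLI as usual; it should pick up the override:
#   gemini
```

The export only lasts for the current shell session; put it in your shell profile if you want it to persist.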
-
So I am having the opposite issue. It starts with Pro but then automatically reverts to Flash (2.5). My gemini.md under my username (C:\Users\USERNAME\gemini.md) has model: gemini-2.5-pro. It will start with Pro and then revert to Flash...
-
You might have authenticated with an API key, which defaults to Flash if you did not set up any billing.
-
I figured it out. Check the API keys and pick the one that says Tier 1 (paid).
Thank you,
Andreas
On Jun 28, 2025, at 07:21, Douglas wrote:
lol okay, I fixed it. I had to refresh my AI Studio API page and then link a new billing account to it that I had set up. After that I reloaded Gemini CLI and it seems to stay on Pro now. Phew...
-
I think it should be possible to switch the model from the console, for example with a "/model" command. You may want to switch from one to the other in different sessions.
-
Is there a way to automate this? If a model reaches its limit with …
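One hedged sketch of the kind of automation being asked about here: a shell wrapper that tries Pro first and falls back to Flash if the run fails. This assumes the CLI exits non-zero when a model is unavailable or rate-limited, which may not hold for every failure mode, and uses the --model flag mentioned elsewhere in this thread:

```shell
#!/bin/sh
# Hedged sketch: try Pro first, fall back to Flash if the run fails.
# Assumes 'gemini' exits non-zero when Pro is unavailable (e.g. quota
# exhausted); check your version's actual behavior before relying on this.
run_gemini() {
  gemini --model gemini-2.5-pro "$@" ||
    gemini --model gemini-2.5-flash "$@"
}
```

Invoking run_gemini instead of gemini would then retry on Flash automatically; whether a rate limit actually produces a non-zero exit is an assumption worth verifying first.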
-
So I had sent a pull request for this exact thing on about the first day, and they rejected it. So I created a multi-model, multi-provider fork. It still works for Gemini, but we made some changes that apply even to Gemini users.
Luckily, thanks to the magic of LLMs, we are able to keep up with Gemini CLI's releases (0.1.17; we added GPT-OSS support). We have a cherry-picker script, and the LLM fixes the conflicts! So if you want things like /model, custom prompts, or changing the model settings (temperature, etc.), you can have them if you swim downstream :-)
I actually think Google is making the right decision. They have a particular vision, and a very Gemini-centric one. For people who are doing more agentic work or need to control the budget, a community project downstream can satisfy them while the Gemini Code Assist team focuses on their overall roadmap.
-
Use gemini --model gemini-2.5-flash.
-
Having the option to change this after starting would be nice.
-
model name for …
-
Hi,
By default it is using gemini-2.5-pro, but is there a way to use Flash instead? It is much faster.
Thanks