Unable to change default model for Gemini Integration #567

@kevhardy

Description

Checklist

  • I'm running the newest version of LLM Vision https://github.com/valentinfrlch/ha-llmvision/releases/latest
  • I have enabled debug logging for the integration.
  • I have filled out the issue template to the best of my ability.
  • This report contains a single issue (open a separate issue for each additional problem).
  • This is a bug and not a feature request.
  • I have searched open issues for my problem.

Describe the issue

I am unable to change the default model when configuring the Gemini integration. This is also preventing me from updating the API key.

It appears that the POST request used to validate the API key always targets the gemini-2.0-flash model. Since this account is on the free billing tier, that model is not available and the request returns a quota-exceeded error. However, the same key can use other models, including the one I am attempting to enter in the default model field.
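To illustrate the expected behavior, here is a minimal sketch of building the validation URL from the endpoint template shown in the debug logs, substituting the user-configured model rather than a hard-coded default. The function and variable names are hypothetical, not the integration's actual code:

```python
# Endpoint template as it appears in the debug logs below.
BASE_URL = ("https://generativelanguage.googleapis.com/v1beta/models/"
            "{model}:generateContent?key={api_key}")

def build_validation_url(model: str, api_key: str) -> str:
    """Substitute the configured model (not a hard-coded one) into the template."""
    return BASE_URL.format(model=model, api_key=api_key)

# With the model entered in the config flow (available on the free tier):
url = build_validation_url("gemini-2.5-flash", "API_KEY")
print(url)
```

If the key validation used the configured model like this, a free-tier key paired with a free-tier model would pass instead of tripping the gemini-2.0-flash quota check.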

Reproduction steps

  1. Configure Google Gemini Integration
  2. Input API key from a free billing account
  3. Input a model available to free usage tier (e.g. gemini-2.5-flash)
  4. Hit Submit and receive an "Invalid API key" error (the model field reverts to gemini-2.0-flash after the error)

Debug logs

2025-12-18 23:41:16.330 DEBUG (MainThread) [custom_components.llmvision.providers] Provider initialized: Google(model=gemini-2.5-flash, endpoint={'base_url': 'https://generativelanguage.googleapis.com/v1beta/models/{model}:generateContent?key={api_key}'})
2025-12-18 23:41:16.330 DEBUG (MainThread) [custom_components.llmvision.providers] Request data: {'contents': [{'role': 'user', 'parts': [{'text': 'Hi'}]}], 'generationConfig': {'maxOutputTokens': 1, 'temperature': 0.5}}
2025-12-18 23:41:16.330 DEBUG (MainThread) [custom_components.llmvision.providers] Posting to https://generativelanguage.googleapis.com/v1beta/models/gemini-2.0-flash:generateContent

...

2025-12-18 23:41:16.487 ERROR (MainThread) [custom_components.llmvision.config_flow] Validation failed: You exceeded your current quota, please check your plan and billing details. For more information on this error, head to: https://ai.google.dev/gemini-api/docs/rate-limits. To monitor your current usage, head to: https://ai.dev/usage?tab=rate-limit. 
* Quota exceeded for metric: generativelanguage.googleapis.com/generate_content_free_tier_input_token_count, limit: 0, model: gemini-2.0-flash
* Quota exceeded for metric: generativelanguage.googleapis.com/generate_content_free_tier_requests, limit: 0, model: gemini-2.0-flash
* Quota exceeded for metric: generativelanguage.googleapis.com/generate_content_free_tier_requests, limit: 0, model: gemini-2.0-flash
Please retry in 43.569762747s.

Metadata

Assignees

No one assigned

    Labels

    bug (Something isn't working), stale

    Projects

    No projects

    Milestone

    No milestone

    Relationships

    None yet

    Development

    No branches or pull requests