fix: update default VLM model from deprecated gemini-2.0-flash to gemini-2.5-flash (#45)
Conversation
Force-pushed from 4521e4d to 18aa4ae
…ini-2.5-flash

The gemini-2.0-flash model has been deprecated by Google and returns a 404 for new users. Update all default references to gemini-2.5-flash, which is the current recommended replacement.

Fixes llmsresearch#44

Signed-off-by: haosenwang1018 <haosenwang1018@users.noreply.github.com>
Force-pushed from 18aa4ae to a169d75
Thanks @haosenwang1018 for the fix. The core change makes sense, and the issue linking looks good (Fixes #44).

Also, a small follow-up test for YAML-based config loading (for example, through `Settings.from_yaml(...)`) would help ensure defaults resolve to `gemini-2.5-flash` in config-driven paths too.
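A test along those lines could be sketched as follows. The `Settings` class below is a hypothetical stand-in for `paperbanana.core.config.Settings` (including its deliberately naive flat key-value parsing); only the shape of the check matters, and the real project's `from_yaml` would presumably use a proper YAML library:

```python
import os
import tempfile
from dataclasses import dataclass
from pathlib import Path

DEFAULT_VLM_MODEL = "gemini-2.5-flash"  # new default introduced by this PR


@dataclass
class Settings:
    """Hypothetical stand-in for paperbanana.core.config.Settings."""
    vlm_model: str = DEFAULT_VLM_MODEL

    @classmethod
    def from_yaml(cls, path):
        # Naive flat "key: value" parser, for illustration only.
        values = {}
        for line in Path(path).read_text().splitlines():
            if ":" in line and not line.lstrip().startswith("#"):
                key, _, value = line.partition(":")
                values[key.strip()] = value.strip()
        return cls(vlm_model=values.get("vlm_model", DEFAULT_VLM_MODEL))


def test_yaml_default_resolves_to_new_model():
    # A config file that omits the model key should fall back to the new default.
    with tempfile.TemporaryDirectory() as tmp:
        cfg = os.path.join(tmp, "config.yaml")
        Path(cfg).write_text("provider: gemini\n")  # vlm_model deliberately omitted
        assert Settings.from_yaml(cfg).vlm_model == "gemini-2.5-flash"
```

A second case asserting that an explicit `vlm_model` key in the YAML wins over the default would round out the coverage.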
@haosenwang1018 just noticed: #48 appears to supersede this PR with the same core fix plus config-file updates. To avoid duplicated work, can we consolidate into one PR? If you want to keep this PR as the canonical one, please include updates to
Closing per backlog policy; can revisit with narrower scope.
Summary
Updates the default VLM model from `gemini-2.0-flash` to `gemini-2.5-flash` across the entire codebase.

Problem

`gemini-2.0-flash` has been deprecated by Google and returns `404 NOT_FOUND` for new users, breaking the pipeline out of the box.

Changes

- `paperbanana/core/config.py`: updated default model in `VLMConfig` and `Settings`
- `paperbanana/providers/vlm/gemini.py`: updated default parameter
- `configs/config.yaml` and `configs/provider/vlm/gemini.yaml`: updated defaults
- `tests/test_providers/test_registry.py`: updated test expectations
- `examples/generate_diagram.py`: updated example
- `README.md`: updated model reference in provider table

Existing users with a `VLM_MODEL` env var or custom config are unaffected.

Fixes #44
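The compatibility claim above amounts to a simple precedence rule: the `VLM_MODEL` env var beats an explicit config value, which beats the built-in default. A minimal sketch of that rule; the helper `resolve_vlm_model` is illustrative, not the project's actual code:

```python
import os

DEFAULT_VLM_MODEL = "gemini-2.5-flash"  # new default from this PR


def resolve_vlm_model(config_value=None):
    """Illustrative precedence: VLM_MODEL env var > explicit config > built-in default."""
    env_value = os.environ.get("VLM_MODEL")
    if env_value:
        return env_value
    if config_value:
        return config_value
    return DEFAULT_VLM_MODEL
```

Under this rule, a user who pinned `VLM_MODEL=gemini-2.0-flash` (or set a model in their own config) keeps that value; only callers relying on the built-in default see the change.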