fix: image token estimation #16004
Conversation
Summary of Changes
Hello @jackwotherspoon, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request addresses a critical issue where the system was significantly overestimating token counts for images, leading to incorrect "Context window will overflow" errors. The changes introduce a more accurate, fixed token estimate of 3,000 tokens for images during fallback scenarios and ensure that model strings are correctly resolved for token-counting APIs, thereby improving the reliability and accuracy of token calculations for multimodal inputs.
Size Change: +503 B (0%). Total Size: 22.2 MB.
const mimeType = inlineData?.mimeType || fileData?.mimeType;

if (mimeType?.startsWith('image/')) {
  totalTokens += 3000;
nit: make this a named constant.
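A minimal sketch of what that suggestion could look like (the constant name is illustrative, not taken from the PR):

```ts
// Illustrative only: a named constant instead of the magic number 3000.
// 3,000 tokens covers a 4K image in Gemini 3 (2,240 tokens) plus a buffer.
const FALLBACK_IMAGE_TOKEN_ESTIMATE = 3000;

if (mimeType?.startsWith('image/')) {
  totalTokens += FALLBACK_IMAGE_TOKEN_ESTIMATE;
}
```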
Code Review
This pull request addresses a critical issue where image token estimation was significantly overestimated, leading to incorrect "Context window will overflow" errors. The changes resolve model aliases to concrete model names before calling the countTokens API and update the fallback token estimation for images to a fixed value of 3,000 tokens, enhancing the reliability of token estimation. A high-severity vulnerability was identified related to the handling of model names from user configuration, which allows for HTTP Header Injection and could be exploited by an attacker with access to the application's configuration. This vulnerability needs to be addressed.
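As a rough illustration of the alias-resolution change described above (not the PR's actual code: `resolveModelAlias` and the alias table are hypothetical, and the call assumes the `@google/genai` SDK):

```ts
import { GoogleGenAI } from '@google/genai';

// Hypothetical alias table; the real mapping lives in the CLI's configuration layer.
const MODEL_ALIASES: Record<string, string> = {
  'flash-latest': 'gemini-2.0-flash',
};

function resolveModelAlias(model: string): string {
  return MODEL_ALIASES[model] ?? model;
}

async function countTokensForModel(
  ai: GoogleGenAI,
  model: string,
  contents: string,
): Promise<number | undefined> {
  // Resolve the alias first so countTokens receives a concrete model name.
  const resolved = resolveModelAlias(model);
  const response = await ai.models.countTokens({ model: resolved, contents });
  return response.totalTokens;
}
```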
const fileData = 'fileData' in part ? part.fileData : undefined;
const mimeType = inlineData?.mimeType || fileData?.mimeType;

if (mimeType?.startsWith('image/')) {
we should also handle video.
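A hedged sketch of how the fallback might also cover video, as the reviewer suggests (the `video/` branch and its per-part value are assumptions, not part of this PR):

```ts
if (mimeType?.startsWith('image/')) {
  totalTokens += 3000;
} else if (mimeType?.startsWith('video/')) {
  // Placeholder: video token cost grows with duration, so a real fix would
  // likely need a duration-aware estimate rather than a flat per-part value.
  totalTokens += 3000;
}
```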
jacob314 left a comment
/patch preview
✅ Patch workflow(s) dispatched successfully!
📋 Details:
🔗 Track Progress:

🚀 Patch PR Created!
📋 Patch Details:
📝 Next Steps:
🔗 Track Progress:

🚀 Patch Release Started!
📋 Release Details:
⏳ Status: The patch release is now running. You'll receive another update when it completes.
🔗 Track Progress:

✅ Patch Release Complete!
📦 Release Details:
🎉 Status: Your patch has been successfully released and published to npm!
📝 What's Available:
🔗 Links:

Summary
Fallback token estimation vastly overestimates tokens for images (e.g., 2.7M
tokens for a 2.6MB image), causing false-positive "Context window will overflow"
errors.
This PR fixes the overestimation in two main ways:
Resolved model aliases to concrete model names before calling the countTokens API.
Updated the token estimation fallback heuristic to use a safe, fixed value of
3,000 tokens for images.
Based on: https://ai.google.dev/gemini-api/docs/vision#token_counting
This value safely covers the maximum actual cost of an Ultra High Resolution
(4K) image in Gemini 3 (2,240 tokens) plus a buffer, while remaining well
below the context window limit.
Details
When the CLI cannot reach the `countTokens` API (e.g., during rapid estimation
or an API failure), it falls back to a heuristic. Previously, this heuristic
used `JSON.stringify(part).length / 4` for all non-text parts.

The Problem:
For base64-encoded image data, this heuristic results in an estimate roughly
equal to the raw file size in bytes. A 2.6MB image was estimated at ~2.7 million
tokens, which exceeds the ~1 million token limit of most Gemini models,
triggering a blocking "Context window will overflow" warning.
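For context, a simplified sketch of the previous fallback behavior described above (not the exact source):

```ts
// Simplified sketch of the old heuristic: ~4 characters per token applied to
// the serialized part. For an inlineData part carrying base64 image bytes,
// JSON.stringify(part).length grows with the file, so the estimate ends up
// roughly tracking the raw file size in bytes.
function estimateNonTextPartTokens(part: object): number {
  return Math.ceil(JSON.stringify(part).length / 4);
}
```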
The Fix:
Updated `estimateTokenCountSync` in `packages/core/src/utils/tokenCalculation.ts`
to identify image parts via their MIME type and apply a fixed 3,000-token estimate.
Rationale: This value safely covers the maximum actual cost of an Ultra High
Resolution (4K) image in Gemini 3 (2,240 tokens) plus a buffer, while remaining
well below the context window limit.
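A minimal sketch of the MIME-based approach the fix describes (simplified: the function and constant names are illustrative, and the real `estimateTokenCountSync` handles more part types and edge cases):

```ts
import type { Part } from '@google/genai';

// Fixed fallback estimate for image parts, per the rationale above.
const IMAGE_FALLBACK_TOKENS = 3000;

function estimatePartTokensSync(part: Part): number {
  const mimeType = part.inlineData?.mimeType ?? part.fileData?.mimeType;
  if (mimeType?.startsWith('image/')) {
    return IMAGE_FALLBACK_TOKENS;
  }
  if (part.text) {
    // Plain text: keep the ~4 characters-per-token heuristic.
    return Math.ceil(part.text.length / 4);
  }
  // Other non-text parts fall back to the generic serialized-length heuristic.
  return Math.ceil(JSON.stringify(part).length / 4);
}
```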
Test plan
Added a test, `should use fixed estimate for images in fallback`, in
`packages/core/src/utils/tokenCalculation.test.ts`.

How to Validate
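As a rough way to exercise the behavior (not the PR's actual test), a vitest-style check against the simplified sketch above might look like this:

```ts
import { describe, expect, it } from 'vitest';

describe('fallback token estimation', () => {
  it('should use fixed estimate for images in fallback', () => {
    // Hypothetical image part; the base64 payload is a stand-in, not real data.
    const imagePart = {
      inlineData: { mimeType: 'image/png', data: 'iVBORw0KGgo=' },
    };
    // estimatePartTokensSync refers to the simplified sketch above, not the
    // CLI's actual estimateTokenCountSync API.
    expect(estimatePartTokensSync(imagePart)).toBe(3000);
  });
});
```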
Pre-Merge Checklist