
Conversation

@jackwotherspoon
Collaborator

Summary

Fallback token estimation vastly overestimates tokens for images (e.g., 2.7M
tokens for a 2.6MB image), causing false-positive "Context window will overflow"
errors.

[Screenshot: false-positive "Context window will overflow" warning]

This happens for two main reasons:

  1. The model string is not resolved properly, so the countTokens API receives a bad model string.
  2. The fallback estimation heuristic itself is flawed.

Updated the token estimation fallback heuristic to use a safe, fixed value of
3,000 tokens for images.

Based on: https://ai.google.dev/gemini-api/docs/vision#token_counting

This value safely covers the maximum actual cost of an Ultra High Resolution
(4K) image in Gemini 3 (2,240 tokens) plus a buffer, while remaining well
below the context window limit.

Details

When the CLI cannot reach the countTokens API (e.g., during rapid estimation
or API failure), it falls back to a heuristic. Previously, this heuristic
used JSON.stringify(part).length / 4 for all non-text parts.

The Problem:
For base64-encoded image data, this heuristic results in an estimate roughly
equal to the raw file size in bytes. A 2.6MB image was estimated at ~2.7 million
tokens, which exceeds the ~1 million token limit of most Gemini models,
triggering a blocking "Context window will overflow" warning.

The Fix:

  • Updated estimateTokenCountSync in packages/core/src/utils/tokenCalculation.ts
    to identify image parts via their MIME type.
  • Images now use a fixed fallback estimate of 3,000 tokens.

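A simplified, self-contained sketch of the fixed fallback is below. The real implementation lives in estimateTokenCountSync in packages/core/src/utils/tokenCalculation.ts; the constant name and Part shape here are assumptions for illustration.

```typescript
// Sketch of the fixed fallback; constant name is hypothetical.
const IMAGE_FALLBACK_TOKENS = 3000; // covers a 4K image (2,240 tokens) + buffer

interface Part {
  text?: string;
  inlineData?: { mimeType: string; data: string };
  fileData?: { mimeType: string; fileUri: string };
}

function estimateTokenCountSync(parts: Part[]): number {
  let totalTokens = 0;
  for (const part of parts) {
    if (part.text !== undefined) {
      // Text keeps the ~4 characters per token heuristic.
      totalTokens += Math.ceil(part.text.length / 4);
      continue;
    }
    const mimeType = part.inlineData?.mimeType ?? part.fileData?.mimeType;
    if (mimeType?.startsWith('image/')) {
      totalTokens += IMAGE_FALLBACK_TOKENS; // fixed estimate for images
    } else {
      // Other non-text parts keep the old JSON-size heuristic.
      totalTokens += Math.ceil(JSON.stringify(part).length / 4);
    }
  }
  return totalTokens;
}
```

With this sketch, a 4-character text part plus one image estimates to 3,001 tokens, matching the deterministic expectation in the updated mixed-content test.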

Test plan

  • Added unit test "should use fixed estimate for images in fallback" in packages/core/src/utils/tokenCalculation.test.ts.
  • Updated the existing mixed-content fallback test to use deterministic token expectations (3,001 tokens).

How to Validate

Pre-Merge Checklist

  • Updated relevant documentation and README (if needed)
  • Added/updated tests (if needed)
  • Noted breaking changes (if any)
  • Validated on required platforms/methods:
    • macOS
      • npm run
      • npx
      • Docker
      • Podman
      • Seatbelt
    • Windows
      • npm run
      • npx
      • Docker
    • Linux
      • npm run
      • npx
      • Docker

@jackwotherspoon jackwotherspoon requested a review from a team as a code owner January 6, 2026 17:56
@gemini-code-assist
Contributor

Summary of Changes

Hello @jackwotherspoon, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical issue where the system was significantly overestimating token counts for images, leading to incorrect "Context window will overflow" errors. The changes introduce a more accurate, fixed token estimation of 3,000 tokens for images during fallback scenarios and ensure that model strings are correctly resolved for token counting APIs, thereby improving the reliability and accuracy of token calculations for multimodal inputs.

Highlights

  • Fixes Image Token Overestimation: Addresses an issue where the fallback token estimation vastly overestimated tokens for images (e.g., 2.7M tokens for a 2.6MB image), leading to false-positive 'Context window will overflow' errors.
  • Corrected Model String Resolution: One root cause identified was that the model string was not being resolved properly, causing the token count API to receive incorrect model strings.
  • Improved Fallback Heuristic for Images: The previous fallback logic for non-text parts, which used JSON.stringify(part).length / 4, was flawed for images. This has been updated.
  • Fixed Image Token Value: The token estimation fallback heuristic now uses a safe, fixed value of 3,000 tokens specifically for images.
  • Rationale for Fixed Value: This 3,000 token value safely covers the maximum actual cost of an Ultra High Resolution (4K) image in Gemini 3 (2,240 tokens) plus a buffer, based on Google's official documentation.

@github-actions

github-actions bot commented Jan 6, 2026

Size Change: +503 B (0%)

Total Size: 22.2 MB

Filename Size Change
./bundle/gemini.js 22.2 MB +503 B (0%)
./bundle/sandbox-macos-permissive-closed.sb 1.03 kB 0 B
./bundle/sandbox-macos-permissive-open.sb 890 B 0 B
./bundle/sandbox-macos-permissive-proxied.sb 1.31 kB 0 B
./bundle/sandbox-macos-restrictive-closed.sb 3.29 kB 0 B
./bundle/sandbox-macos-restrictive-open.sb 3.36 kB 0 B
./bundle/sandbox-macos-restrictive-proxied.sb 3.56 kB 0 B

compressed-size-action

const mimeType = inlineData?.mimeType || fileData?.mimeType;

if (mimeType?.startsWith('image/')) {
  totalTokens += 3000;
Collaborator


nit: make this a named constant.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request addresses a critical issue where image token estimation was significantly overestimated, leading to incorrect "Context window will overflow" errors. The changes resolve model aliases to concrete model names before calling the countTokens API and update the fallback token estimation for images to a fixed value of 3,000 tokens, enhancing the reliability of token estimation. A high-severity vulnerability was identified related to the handling of model names from user configuration, which allows for HTTP Header Injection and could be exploited by an attacker with access to the application's configuration. This vulnerability needs to be addressed.

const fileData = 'fileData' in part ? part.fileData : undefined;
const mimeType = inlineData?.mimeType || fileData?.mimeType;

if (mimeType?.startsWith('image/')) {
Collaborator


We should also handle video.
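One hedged way to extend the check for this suggestion is sketched below. The MIME-prefix handling is straightforward, but the video token value here is purely an assumed placeholder (not from this PR): Gemini bills video per second of footage, so any fixed value is only a rough guard.

```typescript
// Hypothetical extension covering video parts; the video value is an
// assumption for illustration, not the merged implementation.
const IMAGE_FALLBACK_TOKENS = 3000;
const VIDEO_FALLBACK_TOKENS = 10_000; // assumed guard value, not from the PR

function mediaFallbackTokens(mimeType: string | undefined): number | undefined {
  if (mimeType?.startsWith('image/')) return IMAGE_FALLBACK_TOKENS;
  if (mimeType?.startsWith('video/')) return VIDEO_FALLBACK_TOKENS;
  return undefined; // caller falls back to the generic JSON-size heuristic
}
```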

Collaborator

@jacob314 jacob314 left a comment


We should handle video either in this PR or a fast follow. lgtm

@jackwotherspoon jackwotherspoon added this pull request to the merge queue Jan 6, 2026
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Jan 6, 2026
@jackwotherspoon jackwotherspoon added this pull request to the merge queue Jan 6, 2026
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Jan 6, 2026
@jackwotherspoon jackwotherspoon added this pull request to the merge queue Jan 6, 2026
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Jan 6, 2026
@jackwotherspoon jackwotherspoon added this pull request to the merge queue Jan 6, 2026
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Jan 6, 2026
@jackwotherspoon jackwotherspoon added this pull request to the merge queue Jan 6, 2026
Merged via the queue into main with commit c31f053 Jan 6, 2026
20 checks passed
@jackwotherspoon jackwotherspoon deleted the image-token-estimation branch January 6, 2026 21:47
@jackwotherspoon
Collaborator Author

/patch preview

@github-actions

github-actions bot commented Jan 6, 2026

Patch workflow(s) dispatched successfully!

📋 Details:

  • Channels: preview
  • Commit: c31f05356ae3cd9a51e55319ebe3a5ae41abc48c
  • Workflows Created: 1

🔗 Track Progress:

github-actions bot pushed a commit that referenced this pull request Jan 6, 2026
@github-actions

github-actions bot commented Jan 6, 2026

🚀 Patch PR Created!

📋 Patch Details:

📝 Next Steps:

  1. Review and approve the hotfix PR: #16027
  2. Once merged, the patch release will automatically trigger
  3. You'll receive updates here when the release completes

🔗 Track Progress:

@github-actions

github-actions bot commented Jan 6, 2026

🚀 Patch Release Started!

📋 Release Details:

  • Environment: prod
  • Channel: preview → publishing to npm tag preview
  • Version: v0.23.0-preview.4
  • Hotfix PR: Merged ✅
  • Release Branch: release/v0.23.0-preview.4-pr-16004

⏳ Status: The patch release is now running. You'll receive another update when it completes.

🔗 Track Progress:

@github-actions

github-actions bot commented Jan 6, 2026

Patch Release Complete!

📦 Release Details:

🎉 Status: Your patch has been successfully released and published to npm!

📝 What's Available:

🔗 Links:
