
Conversation

@roomote roomote bot commented Jul 28, 2025

Fixes #6290

Summary

This PR addresses an issue where the VS Code Language Model API did not correctly calculate the context size when using the "copilot - gpt-4.1" model. The API returned a token count of 4 for LanguageModelChatMessage objects, which appears to be a placeholder value rather than the actual token count.

Changes

  1. Modified internalCountTokens method: Added special handling when the token count equals 4 for LanguageModelChatMessage objects

    • Extracts the text content from the message
    • Recalculates the token count using the extracted string
  2. Updated type checking: Changed from instanceof vscode.LanguageModelChatMessage to a duck-typing approach to work with both real and mocked objects

  3. Added comprehensive test coverage:

    • Tests for normal token counting behavior
    • Tests for the special case when token count is 4
    • Tests for various edge cases and error scenarios
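The duck-typing change in item 2 might look roughly like the following sketch. This is an assumption based on the shape of the VS Code API, not the PR's exact code; the guard name and the checked properties (`role`, `content`) are illustrative:

```typescript
// Hypothetical duck-typed guard: instead of
// `value instanceof vscode.LanguageModelChatMessage`, check the object's
// shape so that mocked messages in tests also pass the check.
interface ChatMessageLike {
  role: unknown;
  content: unknown;
}

function isLanguageModelChatMessage(value: unknown): value is ChatMessageLike {
  return (
    typeof value === "object" &&
    value !== null &&
    "role" in value &&
    "content" in value
  );
}

// A plain string is not message-like; a test mock with role/content is.
console.log(isLanguageModelChatMessage("hello")); // false
console.log(isLanguageModelChatMessage({ role: 1, content: [] })); // true
```

The trade-off is that any object with `role` and `content` properties passes, but for token counting that is exactly the behavior the tests need.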

Implementation Details

As requested by @NaccOll, the solution:

  • Still uses the internalCountTokens method
  • Only when the text type is LanguageModelChatMessage AND tokenCount equals 4, converts the LanguageModelChatMessage to a string and recalculates via this.client.countTokens
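The fallback described above can be sketched as follows. All identifiers here (`CountingClient`, `extractText`, the message shape) are assumptions for illustration, not the actual names in `vscode-lm.ts`:

```typescript
// Sketch of the fallback: if the client reports exactly 4 tokens for a
// chat-message object, extract its text and re-count it as a plain string.
type MessagePart = { value?: string };
interface ChatMessageLike {
  role: number;
  content: string | MessagePart[];
}

interface CountingClient {
  countTokens(input: string | ChatMessageLike): Promise<number>;
}

function extractText(message: ChatMessageLike): string {
  if (typeof message.content === "string") return message.content;
  return message.content.map((part) => part.value ?? "").join("");
}

async function internalCountTokens(
  client: CountingClient,
  input: string | ChatMessageLike,
): Promise<number> {
  const count = await client.countTokens(input);
  // 4 appears to be a placeholder returned for message objects;
  // re-count using the extracted text instead.
  if (count === 4 && typeof input !== "string") {
    return client.countTokens(extractText(input));
  }
  return count;
}

// Mock client: returns the buggy placeholder 4 for objects, and a rough
// whitespace-based word count for strings.
const mockClient: CountingClient = {
  async countTokens(input) {
    return typeof input === "string" ? input.split(/\s+/).length : 4;
  },
};

internalCountTokens(mockClient, {
  role: 1,
  content: "one two three four five six",
}).then((n) => console.log(n)); // 6, not the placeholder 4
```

Note the `typeof input !== "string"` guard: a plain string that genuinely counts to 4 tokens is returned unchanged, so the fallback only fires for message objects.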

Testing

  • All existing tests pass
  • Added new test cases specifically for this behavior
  • Ran full test suite for the api/providers directory

This ensures that Roo Code continues to work correctly with VS Code's Language Model API and properly calculates context sizes.


Important

Fixes token count calculation in internalCountTokens for LanguageModelChatMessage when count is 4 by recalculating with string content.

  • Behavior:
    • Fixes token count calculation in internalCountTokens in vscode-lm.ts for LanguageModelChatMessage when count is 4 by recalculating with string content.
    • Updates type checking from instanceof to duck-typing for compatibility with real and mocked objects.
  • Testing:
    • Adds tests in vscode-lm.spec.ts for normal token counting, special case when token count is 4, and various edge cases.
  • Misc:
    • Ensures all existing tests pass and runs full test suite for api/providers directory.

This description was created by Ellipsis for ce29a53.

…ing content

- When VS Code LM API returns token count of 4 for LanguageModelChatMessage, convert to string and recalculate
- This addresses issue #6290 where context size was not being calculated correctly
- Added comprehensive test coverage for the new behavior
- Updated instanceof check to work with mocked objects in tests
@roomote roomote bot requested review from cte, jr and mrubens as code owners July 28, 2025 12:27
@dosubot dosubot bot added size:L This PR changes 100-499 lines, ignoring generated files. bug Something isn't working labels Jul 28, 2025
@hannesrudolph hannesrudolph added the Issue/PR - Triage New issue. Needs quick review to confirm validity and assign labels. label Jul 28, 2025
@daniel-lxs
Member

Closing since these are duplicates of #6424

@daniel-lxs daniel-lxs closed this Jul 30, 2025
@github-project-automation github-project-automation bot moved this from Triage to Done in Roo Code Roadmap Jul 30, 2025
@github-project-automation github-project-automation bot moved this from New to Done in Roo Code Roadmap Jul 30, 2025


Development

Successfully merging this pull request may close these issues.

VS Code LM API unable to get the correct context size
