
Conversation

@d-oit
Contributor

@d-oit d-oit commented Apr 20, 2025

Context

Added support for thinking metrics specifically for the Gemini API Provider across API and UI components.

Implementation

feat:

  • Introduced thoughtsTokenCount and thinkingBudget in API responses.
  • Updated GeminiHandler to handle thinking models and their configurations.
  • Enhanced ChatView, TaskHeader, and related components to display thinking metrics.
  • Added tests for thinking metrics in ChatView and TaskHeader.

fix:

  • gemini.ts: fixed this.options.modelMaxThinkingTokens being undefined
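The fix presumably guards against the missing option; here is a minimal sketch of that pattern, assuming a hypothetical options type and a made-up default value (the real gemini.ts code may differ):

```typescript
// Hypothetical sketch, not the actual gemini.ts code: guard against
// this.options.modelMaxThinkingTokens being undefined by falling back
// to an assumed default thinking budget.
interface HandlerOptions {
	modelMaxThinkingTokens?: number
}

const DEFAULT_THINKING_BUDGET = 8192 // assumed default, not taken from the PR

function resolveThinkingBudget(options: HandlerOptions): number {
	// Nullish coalescing keeps an explicit 0 intact while replacing
	// undefined/null with the default.
	return options.modelMaxThinkingTokens ?? DEFAULT_THINKING_BUDGET
}
```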

Screenshots

RooCodeGeminiThinkingCount

@cte @mrubens

  • ToDo: is a tooltip with translation necessary, or are other UI changes needed?

How to Test

  • Reference: https://colab.research.google.com/drive/17_JFakbGVuvJTbeNeXzan2xcr9l1rl0o

  • TODO Description

Get in Touch

discord: d_oit


Important

Add thinking metrics support to GeminiHandler and related components, including API and UI updates, and add tests for new functionality.

  • Behavior:
    • Add thoughtsTokenCount and thinkingBudget to API responses in gemini.ts.
    • Update GeminiHandler to handle thinking models and configurations.
    • Enhance ChatView and TaskHeader to display thinking metrics.
    • Add tests for thinking metrics in ChatView.test.tsx and TaskHeader.test.tsx.
  • Fixes:
    • Fix this.options.modelMaxThinkingTokens undefined issue in gemini.ts.
  • Mocks:
    • Add mocks for i18next and react-i18next in __mocks__ directory.
  • Tests:
    • Add tests for thinking metrics display in ChatView.test.tsx and TaskHeader.test.tsx.

This description was created by Ellipsis for 0989f01.

@changeset-bot

changeset-bot bot commented Apr 21, 2025

⚠️ No Changeset found

Latest commit: e12f268

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.

This PR includes no changesets

When changesets are added to this PR, you'll see the packages that this PR includes changesets for and the associated semver types


@hannesrudolph hannesrudolph moved this from New to PR [Pre Approval Review] in Roo Code Roadmap Apr 22, 2025
@d-oit d-oit marked this pull request as ready for review April 25, 2025 10:55
@d-oit d-oit requested review from cte and mrubens as code owners April 25, 2025 10:55
@dosubot dosubot bot added size:XXL This PR changes 1000+ lines, ignoring generated files. enhancement New feature or request labels Apr 25, 2025
// Create a wrapper component that forces expanded state
const ExpandedTaskHeader: React.FC<React.ComponentProps<typeof TaskHeader>> = (props) => {
// Override useState to force expanded state
React.useState = jest.fn(() => [true, jest.fn()]) as any

Avoid globally overriding React.useState to force the expanded state in ExpandedTaskHeader. This can affect other components and tests unpredictably. Consider passing an explicit prop (e.g. 'expanded') or using a more localized mock.

This comment was generated because it violated a code review rule: mrule_oAUXVfj5l9XxF01R.
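The prop-based alternative the review suggests could look like the sketch below; `defaultExpanded` is an assumed prop name, not part of the actual TaskHeader API, and the helper is deliberately kept React-free so it can be tested without touching React internals:

```typescript
// Hypothetical sketch: instead of patching React.useState globally,
// let the component take an explicit prop controlling its initial
// expanded state. `defaultExpanded` is an assumed prop name.
interface TaskHeaderProps {
	task: string
	defaultExpanded?: boolean
}

// Pure helper resolving the initial expanded state; this is the value
// the component would feed into its own useState call, and a test can
// exercise it directly without any global mocking.
function initialExpanded(props: TaskHeaderProps): boolean {
	return props.defaultExpanded ?? false
}
```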

SmartManoj pushed a commit to SmartManoj/Raa-Code that referenced this pull request May 6, 2025
@daniel-lxs
Member

Hey, does it make sense to add at least thoughtsTokenCount to other providers/models that we know also have a reasoning output?

Also, does this number represent the total thought tokens used for the task or for a specific message?
image

As far as I know, the thinking tokens are dynamic and change per response from the model; they are not accumulative like the context window is, so it might be confusing what this number means.

@d-oit
Contributor Author

d-oit commented May 17, 2025

Hey, does it make sense to add at least thoughtsTokenCount to other providers/models that we know also have a reasoning output?

Sure. Do you know of any that return the thought usage?

Also, does this number represent the total thought tokens used for the task or for a specific message?
image

As far as I know, the thinking tokens are dynamic and change per response from the model; they are not accumulative like the context window is, so it might be confusing what this number means.

I was also not sure about the UI/UX.

I only needed the text, like in the cookbook, to verify the price calculation.

The task price includes the price of the thoughts.

Edit: I need to find my screenshots. One of the price tables listed thought tokens separately.

Now it's not extra anymore 🤔

Remove it from the task header and only show the thought token count next to the API request.

If someone needs more info, then a second PR?

@daniel-lxs
Member

I only required the text, similar to what is found in a cookbook, to verify the price calculation.

The task price encompasses the cost of the thoughts.

In this context, it would be more logical to display the total amount of reasoning tokens used per task. As I understand it, the thought tokens are not capped at 7.2k per task; instead, there is a limit for each individual response.
Showing the total reasoning tokens used per task is sensible, but comparing that total to the thought budget, which applies to each individual response, does not make sense to me; maybe I'm misunderstanding something here.
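If a per-task number were shown, it would presumably be the running sum of the per-response counts; here is a sketch of that aggregation, where the usage shape is an assumption mirroring the PR's thoughtsTokenCount field, not the actual API type:

```typescript
// Hypothetical sketch: reasoning tokens are reported per response, so a
// per-task figure would be the sum across responses. Responses without
// a reasoning count (non-thinking models) contribute zero.
interface ResponseUsage {
	thoughtsTokenCount?: number
}

function totalThoughtTokens(responses: ResponseUsage[]): number {
	return responses.reduce((sum, r) => sum + (r.thoughtsTokenCount ?? 0), 0)
}
```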

@d-oit
Contributor Author

d-oit commented May 17, 2025

I only required the text, similar to what is found in a cookbook, to verify the price calculation.

The task price encompasses the cost of the thoughts.

In this context, it would be more logical to display the total amount of reasoning tokens used per task. As I understand it, the thought tokens are not capped at 7.2k per task; instead, there is a limit for each individual response.
Showing the total reasoning tokens used per task is sensible, but comparing that total to the thought budget, which applies to each individual response, does not make sense to me; maybe I'm misunderstanding something here.

I am confused. 

What do you suggest instead?
What changes are needed?

Let's only display the thought icon with the token count next to the API request, and nothing in the task header. Agreed?

@d-oit d-oit closed this May 18, 2025
@github-project-automation github-project-automation bot moved this from PR [Pre Approval Review] to Done in Roo Code Roadmap May 18, 2025
@hannesrudolph hannesrudolph moved this from New to Done in Roo Code Roadmap May 20, 2025