Performance Regression: Auto-Expanded Thinking #7999

@nabilfreeman

Description

App Version

3.28.2

API Provider

Anthropic

Model Used

Claude 4 Opus Max Thinking Tokens

Roo Code Task Links (Optional)

ReasoningBlock timer forces 1 Hz re-renders & CPU spikes (introduced in bbd3d98)

Problem
After bbd3d98 the chat view re-renders every second while the assistant is streaming, even if the reasoning block is never expanded. On long conversations this burns CPU, inflates paint/layout costs, and makes the UI feel janky; the longer the conversation, the worse it gets. With the CPU pegged, Roo applies changes more slowly and productivity grinds to a halt.

What changed

| File & line | Old behaviour | New behaviour | Cost |
| --- | --- | --- | --- |
| `ReasoningBlock.tsx#L28-L33` | No timer | `setInterval(tick, 1000)` inside the component | Triggers a state write → React commit once per second, dragging the whole markdown subtree along for the ride. |
| `ReasoningBlock.tsx#L53-L57` | Body was hidden unless the user expanded it (`isCollapsed`) | Markdown is now always mounted, even when the user never opens it | Unnecessary parsing/layout on every commit; for huge chains of thought this dominates the main thread. |
| `ChatRow.tsx#L1088-L1092` | `reasoningCollapsed` defaulted to `true` and could be toggled | Collapse flag removed; block is always rendered | Eliminates the cheapest escape hatch (not rendering at all). |
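To illustrate why the per-second state write is so costly, here is a framework-free sketch (hypothetical names, not the actual Roo Code source) of React's render propagation: a state update in a component re-renders it and its entire non-memoized subtree, so a 1 Hz tick inside `ReasoningBlock` re-runs the markdown render every second.

```typescript
// Hypothetical, simplified model of React's render propagation.
// A state write in a component re-renders it and all non-memoized children.

type Component = {
	name: string
	memoized: boolean // React.memo with unchanged props: subtree skipped
	children: Component[]
}

let renderCount = 0

// Re-render `node` and recurse into children; a memoized child whose props
// did not change (modeled here as "always unchanged") is skipped.
function render(node: Component, causedByParent = false): void {
	if (causedByParent && node.memoized) return // memo boundary
	renderCount++
	for (const child of node.children) render(child, true)
}

// Current shape: the 1 Hz timer's state lives in ReasoningBlock, so every
// tick re-renders the (expensive) markdown subtree as well.
const reasoningBlock: Component = {
	name: "ReasoningBlock",
	memoized: false,
	children: [
		{ name: "Markdown", memoized: false, children: [] },
		{ name: "ElapsedTime", memoized: false, children: [] },
	],
}

renderCount = 0
for (let tick = 0; tick < 60; tick++) render(reasoningBlock) // one minute
console.log(renderCount) // 180: all three components, every second

// Proposed shape: the tick's state lives only inside a memoized
// <ElapsedTime>, so each tick re-renders just the counter.
const elapsedOnly: Component = { name: "ElapsedTime", memoized: true, children: [] }
renderCount = 0
for (let tick = 0; tick < 60; tick++) render(elapsedOnly)
console.log(renderCount) // 60: only the counter repaints
```

The second run renders `elapsedOnly` even though it is memoized, because a component's own state update always re-renders it; memoization only guards against parent-initiated renders with unchanged props.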

How to reproduce

  1. Open any long conversation (>= 1000 tokens of reasoning)
  2. Submit a message
  3. Watch CPU usage spike

Proposed fixes

  • Lift the timer into a tiny `<ElapsedTime>` child wrapped in `React.memo` so only the counter repaints.
  • Re-apply the old 160-character debouncer when streaming partial chunks.
  • Collapse thinking by default again (probably not desired, given the recent change and the related issues I read).
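A minimal sketch of the 160-character debouncer idea (hypothetical names; the original helper is not shown here): buffer streamed chunks and only flush to state once at least 160 new characters have arrived, so React commits far less often while a reasoning chain streams in.

```typescript
// Hypothetical char-count debouncer: accumulate streamed text and only
// invoke `flush` (e.g. a React state setter) every `threshold` characters.
function createCharDebouncer(
	flush: (text: string) => void,
	threshold = 160,
): { push: (chunk: string) => void; finish: () => void } {
	let buffered = "" // full text received so far
	let flushedLength = 0 // buffered length at the time of the last flush

	return {
		push(chunk: string) {
			buffered += chunk
			if (buffered.length - flushedLength >= threshold) {
				flushedLength = buffered.length
				flush(buffered) // one state write instead of one per chunk
			}
		},
		// Always flush the remaining tail when the stream ends.
		finish() {
			if (buffered.length !== flushedLength) flush(buffered)
		},
	}
}

// Usage: 100 chunks of 10 chars each against a 160-char threshold.
let flushes = 0
const debouncer = createCharDebouncer(() => flushes++)
for (let i = 0; i < 100; i++) debouncer.push("x".repeat(10)) // 1000 chars total
debouncer.finish()
console.log(flushes) // 7: six threshold flushes (at 160..960 chars) + final tail
```

This trades a little latency in the hidden reasoning text for an order-of-magnitude drop in commit frequency; the elapsed-time counter would still tick independently inside its own memoized child.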

🔁 Steps to Reproduce

  1. Open any long conversation with a model that has very high thinking tokens
  2. Submit a message
  3. Watch CPU usage spike

💥 Outcome Summary

Expected Roo to stay snappy on long thinking workflows; instead it pins my M1 Max at 100% CPU whenever the model is thinking.

📄 Relevant Logs or Errors (Optional)

Metadata

Assignees: no one assigned

Labels: `Issue - Unassigned / Actionable` (clear and approved; available for contributors to pick up), `bug` (something isn't working)

Status: Done