Closed
20 changes: 13 additions & 7 deletions webview-ui/src/components/chat/ChatView.tsx
@@ -181,8 +181,8 @@ const ChatViewComponent: React.ForwardRefRenderFunction<ChatViewRef, ChatViewPro
 	const [showAnnouncementModal, setShowAnnouncementModal] = useState(false)
 	const everVisibleMessagesTsRef = useRef<LRUCache<number, boolean>>(
 		new LRUCache({
-			max: 100,
-			ttl: 1000 * 60 * 5,
+			max: 200,
+			ttl: 1000 * 60 * 10,
 		}),
 	)
 	const autoApproveTimeoutRef = useRef<NodeJS.Timeout | null>(null)

Review comment (Contributor Author), on `max: 200`:
Doubling the cache size and TTL partially reverses the memory optimizations from PR #6697. While this helps with scroll stability, have we considered whether there is a way to achieve the same result without increasing memory usage? Perhaps we could be smarter about which messages to cache based on scroll direction or user behavior patterns.
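The `everVisibleMessagesTsRef` cache above is an `lru-cache` instance; to make the effect of raising `max` from 100 to 200 concrete, here is a minimal Map-based LRU stand-in (a sketch only, not the actual `lru-cache` implementation — TTL expiry is omitted, and `TinyLRU` is a hypothetical name):

```typescript
// Minimal LRU sketch (Map-based stand-in for the lru-cache dependency) showing
// what raising `max` changes: eviction of the least-recent key starts later.
class TinyLRU<K, V> {
	private map = new Map<K, V>()
	constructor(private max: number) {}
	set(key: K, value: V): void {
		if (this.map.has(key)) this.map.delete(key) // re-insert to refresh recency
		this.map.set(key, value)
		if (this.map.size > this.max) {
			// Map iterates in insertion order, so the first key is least recent.
			const oldest = this.map.keys().next().value as K
			this.map.delete(oldest)
		}
	}
	has(key: K): boolean {
		return this.map.has(key)
	}
	get size(): number {
		return this.map.size
	}
}

// With max = 200, inserting 300 timestamps keeps only the most recent 200.
const cache = new TinyLRU<number, boolean>(200)
for (let ts = 0; ts < 300; ts++) cache.set(ts, true)
console.log(cache.size) // 200
console.log(cache.has(99)) // false (evicted)
console.log(cache.has(100)) // true
```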
@@ -897,11 +897,14 @@ const ChatViewComponent: React.ForwardRefRenderFunction<ChatViewRef, ChatViewPro
 	useMount(() => textAreaRef.current?.focus())

 	const visibleMessages = useMemo(() => {
+		// Use a more conservative approach for message filtering to prevent jumping
+		// Only apply the 500 message limit for very long conversations
 		const currentMessageCount = modifiedMessages.length
-		const startIndex = Math.max(0, currentMessageCount - 500)
-		const recentMessages = modifiedMessages.slice(startIndex)
+		const shouldLimitMessages = currentMessageCount > 1000
+		const startIndex = shouldLimitMessages ? Math.max(0, currentMessageCount - 750) : 0
+		const messagesToProcess = modifiedMessages.slice(startIndex)

-		const newVisibleMessages = recentMessages.filter((message: ClineMessage) => {
+		const newVisibleMessages = messagesToProcess.filter((message: ClineMessage) => {
 			if (everVisibleMessagesTsRef.current.has(message.ts)) {
 				const alwaysHiddenOnceProcessedAsk: ClineAsk[] = [
 					"api_req_failed",

Review comment (Contributor Author), on the new message-count logic:
These magic numbers (1000, 750, 200) could benefit from being defined as named constants, with comments explaining why these specific values were chosen. That would make the code more maintainable and the reasoning clearer.

Review comment (Contributor Author), on `shouldLimitMessages`:
This gradual scaling approach is good, but for medium-sized conversations (500-800 messages) we're now processing all messages, which could impact performance. Consider implementing a more gradual scaling.
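The two review comments above ask for named constants and a more gradual scaling policy. A sketch of how both suggestions might combine — the names, the soft tier, and its values are illustrative assumptions, not from the PR:

```typescript
// Hypothetical named constants for the thresholds flagged in review
// (the PR itself uses inline literals; the soft tier is an invented example).
const HARD_LIMIT_THRESHOLD = 1000 // above this, keep only a fixed window
const HARD_LIMIT_WINDOW = 750 // recent messages kept past the hard threshold
const SOFT_LIMIT_THRESHOLD = 500 // medium conversations get a larger window
const SOFT_LIMIT_WINDOW = 900 // recent messages kept past the soft threshold

// Returns the index of the first message to process for a conversation of
// `count` messages, scaling the window down in tiers instead of all at once.
function windowStart(count: number): number {
	if (count > HARD_LIMIT_THRESHOLD) return count - HARD_LIMIT_WINDOW
	if (count > SOFT_LIMIT_THRESHOLD) return Math.max(0, count - SOFT_LIMIT_WINDOW)
	return 0 // short conversations are processed in full
}

console.log(windowStart(1200)) // 450 (hard tier: keep last 750)
console.log(windowStart(950)) // 50 (soft tier: keep last 900)
console.log(windowStart(700)) // 0 (soft tier, but 700 - 900 clamps to 0)
console.log(windowStart(300)) // 0 (no limit)
```

Whether a soft tier actually pays off would depend on profiling the filter over 500-800 messages; the sketch only shows the shape of the policy.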
@@ -954,7 +957,10 @@ const ChatViewComponent: React.ForwardRefRenderFunction<ChatViewRef, ChatViewPro
 			return true
 		})

-		const viewportStart = Math.max(0, newVisibleMessages.length - 100)
+		// Track visible messages more conservatively to avoid cache thrashing
+		// Only track messages that are actually in the current viewport
+		const viewportWindow = Math.min(200, newVisibleMessages.length)
+		const viewportStart = Math.max(0, newVisibleMessages.length - viewportWindow)
 		newVisibleMessages
 			.slice(viewportStart)
 			.forEach((msg: ClineMessage) => everVisibleMessagesTsRef.current.set(msg.ts, true))
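The viewport-window arithmetic above can be exercised in isolation. This sketch uses a plain `Map` as a stand-in for the component's LRU cache and a fabricated message list, just to show which timestamps end up tracked:

```typescript
// Stand-ins: a plain Map instead of the LRUCache ref, and synthetic messages
// whose ts values are just their indices.
const everVisible = new Map<number, boolean>()
const newVisibleMessages = Array.from({ length: 500 }, (_, i) => ({ ts: i }))

// Only the trailing window (at most 200 entries) is marked as ever-visible,
// which bounds how many cache entries a long conversation can churn through.
const viewportWindow = Math.min(200, newVisibleMessages.length)
const viewportStart = Math.max(0, newVisibleMessages.length - viewportWindow)
newVisibleMessages
	.slice(viewportStart)
	.forEach((msg) => everVisible.set(msg.ts, true))

console.log(everVisible.size) // 200
console.log(everVisible.has(299)) // false (outside the trailing window)
console.log(everVisible.has(300)) // true
```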
@@ -1870,7 +1876,7 @@ const ChatViewComponent: React.ForwardRefRenderFunction<ChatViewRef, ChatViewPro
 					ref={virtuosoRef}
 					key={task.ts}
 					className="scrollable grow overflow-y-scroll mb-1"
-					increaseViewportBy={{ top: 3_000, bottom: 1000 }}
+					increaseViewportBy={{ top: 1500, bottom: 1500 }}
 					data={groupedMessages}
 					itemContent={itemContent}
 					atBottomStateChange={(isAtBottom: boolean) => {

Review comment (Contributor Author), on `increaseViewportBy`:
The symmetric viewport values might not be optimal, since users typically scroll up more than down when reviewing conversation history. The previous asymmetric values (3000px top, 1000px bottom) might have been intentional. Could we keep more buffer at the top, perhaps 2000px top and 1000px bottom?