Commit 434de5a

docs: Update summarization trigger details for intelligent context condensation feature

1 parent: 9de5de9

File tree: 1 file changed, +2 −3 lines

docs/features/experimental/intelligent-context-condensation.md

Lines changed: 2 additions & 3 deletions

```diff
@@ -9,14 +9,13 @@ The `autoCondenseContext` experimental feature proactively manages Roo Code's co
 
 ## How It Works
 
-When a conversation with Roo approaches its context window limit, older messages would typically be dropped to make space. The `autoCondenseContext` feature addresses this by automatically summarizing the conversation history using a Large Language Model (LLM) call. This summarization is triggered when the context window reaches 50% capacity.
+When a conversation with Roo approaches its context window limit, older messages would typically be dropped to make space. The `autoCondenseContext` feature addresses this by automatically summarizing the conversation history using a Large Language Model (LLM) call. This summarization is triggered when the context window is almost full.
 
 The goal is to shrink the token count of the conversation history while preserving essential information, preventing the context window from overflowing and avoiding the silent dropping of messages. This helps maintain a more coherent and complete conversation history for the LLM.
 
 **Key Points:**
-* **Summarization Trigger:** Occurs when the context window is 50% full.
+* **Summarization Trigger:** Occurs when the context window is almost full.
 * **Message Preservation:** All original messages are preserved when rewinding to old checkpoints. However, messages from before the most recent summary are not included in subsequent API calls to the LLM.
-* **Image Handling:** A function (`maybeRemoveImageBlocks`) is used to manage image blocks within messages during summarization, as the underlying summarization API may not directly support them in conversational format.
 
 **Disclaimer**: The LLM call used for summarization has an associated cost. Currently, this cost is not reflected in the usage/cost displayed in the Roo Code UI.
```
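To illustrate the mechanism the docs describe, here is a minimal TypeScript sketch of a near-full trigger followed by condensation. All names (`estimateTokens`, `shouldCondense`, `condense`), the 4-characters-per-token heuristic, and the 90% threshold are illustrative assumptions, not Roo Code's actual implementation:

```typescript
// Hypothetical sketch of context condensation; not Roo Code's real code.
type Message = { role: string; content: string };

// Rough token estimate using a common ~4 characters-per-token heuristic.
function estimateTokens(messages: Message[]): number {
  return messages.reduce((sum, m) => sum + Math.ceil(m.content.length / 4), 0);
}

// Trigger condensation when usage nears the context window limit
// (threshold of 0.9 is an assumed stand-in for "almost full").
function shouldCondense(
  messages: Message[],
  contextWindow: number,
  threshold = 0.9
): boolean {
  return estimateTokens(messages) >= contextWindow * threshold;
}

// Replace older messages with a single summary message, keeping the most
// recent ones verbatim. In practice the summary would come from an LLM call.
function condense(
  messages: Message[],
  summary: string,
  keepRecent = 2
): Message[] {
  const recent = messages.slice(-keepRecent);
  return [{ role: "assistant", content: summary }, ...recent];
}
```

Messages dropped by `condense` here would only leave the API payload, matching the docs' note that originals remain available when rewinding to old checkpoints.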
