cline_docs/bedrock/bedrock-cache-strategy-documentation.md
5 additions & 5 deletions
@@ -412,7 +412,7 @@ const config = {

 ### Example 4: Adding Messages (With Token Comparison)

-In this example, we'll demonstrate how the algorithm handles the case when new messages have a token count small enough that cachePoints should not be changed:
+In this example, we'll demonstrate how the algorithm handles the case when new messages have a token count small enough that cache points should not be changed:

 **Updated Input Configuration with Previous Cache Points:**

@@ -459,7 +459,7 @@ const config = {
 2. It calculates the token count of the new messages (210 tokens).
 3. It analyzes the token distribution between existing cache points and finds the smallest gap (260 tokens).
 4. It compares the token count of new messages (210) with the smallest gap (260).
-5. Since the new messages have less tokens than the smallest gap (210 <> 260), it decides not to re-allocate cachePoints
+5. Since the new messages have less tokens than the smallest gap (210 < 260), it decides not to re-allocate cache points
 6. All existing cache points are preserved, and no cache point is allocated for the new messages.

 **Output Cache Point Placements (Unchanged):**
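
Steps 2–5 above boil down to a single comparison: the token count of the new messages against the smallest gap between existing cache points. The following is a minimal TypeScript sketch of that check; the interface and names are invented for illustration and are not taken from the repository's actual implementation.

```typescript
// Hypothetical sketch of the keep-vs-reallocate check in steps 2-5 above.
// Types and names are illustrative, not the repository's actual code.
interface Message {
  role: "user" | "assistant";
  tokenCount: number;
}

function shouldReallocateCachePoints(
  newMessages: Message[],
  // Token totals of the spans between the existing cache points.
  gapsBetweenCachePoints: number[],
): boolean {
  const newTokens = newMessages.reduce((sum, m) => sum + m.tokenCount, 0); // 210 in Example 4
  const smallestGap = Math.min(...gapsBetweenCachePoints);                 // 260 in Example 4
  // Keep the existing cache points when the new messages are smaller than
  // the smallest existing gap (210 < 260 in Example 4 -> no reallocation).
  return newTokens >= smallestGap;
}
```

With Example 4's numbers this returns `false`, matching the unchanged placements described above.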
@@ -504,9 +504,9 @@ const config = {
 [Assistant]: Certainly! Supervised learning and unsupervised learning are two fundamental paradigms in machine learning with...
 ```

-**Note**: In this case, the algorithm determined that the new messages are the smallest portion of the message history in comparisson to existing cachePoints. Restructuring the cachePoints to make room to cache the new messages would be a net negative since it would not make use of 2 previously cached blocks, would have to re-write those 2 as a single cachePoint, and would write a new small cachePoint that would be chosen to be merged in the next round of messages.
+**Note**: In this case, the algorithm determined that the new messages are the smallest portion of the message history in comparison to existing cache points. Restructuring the cache points to make room to cache the new messages would be a net negative since it would not make use of 2 previously cached blocks, would have to re-write those 2 as a single cache point, and would write a new small cache point that would be chosen to be merged in the next round of messages.

-### Example 5: Adding Messages that reallocate cachePoints
+### Example 5: Adding Messages that reallocate cache points

 Now let's see what happens when we add messages with a larger token count:

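The Note changed in this hunk weighs cache reuse against cache writes. Below is a rough, purely illustrative accounting of that trade-off for this example; the cost model and field names are assumptions, not part of the documentation or the repository.

```typescript
// Rough accounting of the trade-off described in the Note above.
// Block counts come from the example; the cost model is illustrative only.
interface LayoutCost {
  reusedCachedBlocks: number; // previously cached blocks still read from cache
  newCacheWrites: number;     // cache points that must be (re)written
}

// Keep the existing cache points: both cached blocks are reused, nothing is rewritten.
const keepExisting: LayoutCost = { reusedCachedBlocks: 2, newCacheWrites: 0 };

// Restructure: the two cached blocks are merged into one new cache point, plus a
// small new cache point for the 210-token messages (likely merged away next turn).
const restructure: LayoutCost = { reusedCachedBlocks: 0, newCacheWrites: 2 };
```

Under any pricing where cache reads are cheaper than cache writes, keeping the current layout dominates restructuring for this turn, which is why the algorithm leaves the cache points untouched.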
@@ -608,7 +608,7 @@ const config = {

 ### Key Observations

-1.**Simple Initial Placement Logic**: The last user message in the range that meets the minimum token threshold is set as a cachePoint.
+1.**Simple Initial Placement Logic**: The last user message in the range that meets the minimum token threshold is set as a cache point.

 2.**User Message Boundary Requirement**: Cache points are placed exclusively after user messages, not after assistant messages. This ensures cache points are placed at natural conversation boundaries where the user has provided input.
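
Key Observations 1 and 2 together define where the initial cache point goes. The sketch below is one plausible TypeScript reading of that rule, with invented function and parameter names; it is a sketch under those assumptions, not the repository's code.

```typescript
// Hypothetical sketch of Key Observations 1 and 2: place the cache point after the
// last user message in the candidate range, once the tokens covered up to that
// message meet the minimum threshold. Names and types are illustrative only.
interface Message {
  role: "user" | "assistant";
  tokenCount: number;
}

function findInitialCachePointIndex(
  range: Message[],
  minTokensPerCachePoint: number,
): number | undefined {
  let covered = 0;
  let candidate: number | undefined;
  range.forEach((message, index) => {
    covered += message.tokenCount;
    // Cache points only go after user messages (Key Observation 2),
    // and only once enough tokens are covered to be worth caching.
    if (message.role === "user" && covered >= minTokensPerCachePoint) {
      candidate = index; // keep the *last* qualifying user message
    }
  });
  return candidate;
}
```

Under this reading, the returned index is where the single initial cache point is placed; the actual implementation may differ in detail.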