Commit c0b01ca

more flexible support for ARNs and hardening of ARN handling
1 parent 89faae9 commit c0b01ca

File tree: 6 files changed (+1016, −1616 lines)


cline_docs/bedrock-cache-strategy-documentation.md renamed to cline_docs/bedrock/bedrock-cache-strategy-documentation.md

Lines changed: 5 additions & 5 deletions
@@ -412,7 +412,7 @@ const config = {
 
 ### Example 4: Adding Messages (With Token Comparison)
 
-In this example, we'll demonstrate how the algorithm handles the case when new messages have a token count small enough that cachePoints should not be changed:
+In this example, we'll demonstrate how the algorithm handles the case when new messages have a token count small enough that cache points should not be changed:
 
 **Updated Input Configuration with Previous Cache Points:**
 
@@ -459,7 +459,7 @@ const config = {
 2. It calculates the token count of the new messages (210 tokens).
 3. It analyzes the token distribution between existing cache points and finds the smallest gap (260 tokens).
 4. It compares the token count of new messages (210) with the smallest gap (260).
-5. Since the new messages have less tokens than the smallest gap (210 <> 260), it decides not to re-allocate cachePoints
+5. Since the new messages have less tokens than the smallest gap (210 < 260), it decides not to re-allocate cache points
 6. All existing cache points are preserved, and no cache point is allocated for the new messages.
 
 **Output Cache Point Placements (Unchanged):**
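The comparison described in steps 2–5 of this hunk can be sketched in a few lines. This is a minimal illustrative sketch, not the actual cline implementation; the function name and parameter shapes are assumptions made for the example.

```typescript
// Illustrative sketch (hypothetical names, not the repository's API):
// decide whether adding new messages should trigger cache point re-allocation.
function shouldReallocateCachePoints(
	newMessagesTokens: number,
	// Token counts of the spans between consecutive existing cache points.
	gapTokenCounts: number[],
): boolean {
	// Find the smallest span covered between existing cache points.
	const smallestGap = Math.min(...gapTokenCounts);
	// Re-allocate only if the new messages outweigh the smallest existing gap;
	// otherwise keeping the previously cached blocks intact is the better trade-off.
	return newMessagesTokens > smallestGap;
}
```

With the numbers from the example, `shouldReallocateCachePoints(210, [260, 300, 450])` returns `false` (210 < 260), so all existing cache points are kept.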
@@ -504,9 +504,9 @@ const config = {
 [Assistant]: Certainly! Supervised learning and unsupervised learning are two fundamental paradigms in machine learning with...
 ```
 
-**Note**: In this case, the algorithm determined that the new messages are the smallest portion of the message history in comparisson to existing cachePoints. Restructuring the cachePoints to make room to cache the new messages would be a net negative since it would not make use of 2 previously cached blocks, would have to re-write those 2 as a single cachePoint, and would write a new small cachePoint that would be chosen to be merged in the next round of messages.
+**Note**: In this case, the algorithm determined that the new messages are the smallest portion of the message history in comparison to existing cache points. Restructuring the cache points to make room to cache the new messages would be a net negative since it would not make use of 2 previously cached blocks, would have to re-write those 2 as a single cache point, and would write a new small cache point that would be chosen to be merged in the next round of messages.
 
-### Example 5: Adding Messages that reallocate cachePoints
+### Example 5: Adding Messages that reallocate cache points
 
 Now let's see what happens when we add messages with a larger token count:
 
@@ -608,7 +608,7 @@ const config = {
 
 ### Key Observations
 
-1. **Simple Initial Placement Logic**: The last user message in the range that meets the minimum token threshold is set as a cachePoint.
+1. **Simple Initial Placement Logic**: The last user message in the range that meets the minimum token threshold is set as a cache point.
 
 2. **User Message Boundary Requirement**: Cache points are placed exclusively after user messages, not after assistant messages. This ensures cache points are placed at natural conversation boundaries where the user has provided input.
 
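The two placement rules in this hunk (last eligible user message, user-message boundaries only) can be sketched as a single scan. This is a hedged sketch under assumed names and a simplified per-message reading of the threshold, not the actual cline code.

```typescript
// Illustrative sketch (hypothetical types and names, not the repository's API).
interface Msg {
	role: "user" | "assistant";
	tokens: number;
}

// Return the index of the last user message in the range that meets the
// minimum token threshold, or -1 if none qualifies. Cache points are only
// ever placed after user messages, so assistant messages are skipped.
function lastEligibleUserMessageIndex(messages: Msg[], minTokens: number): number {
	for (let i = messages.length - 1; i >= 0; i--) {
		const m = messages[i];
		if (m.role === "user" && m.tokens >= minTokens) return i;
	}
	return -1; // no eligible placement in this range
}
```

For example, with messages of 100/200/50/80/120 tokens alternating user/assistant roles and a 100-token minimum, the scan stops at the final user message (index 4); raising the minimum to 150 yields `-1`, since no user message qualifies.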