
feat(cache): add CACHE_MIN_FREQUENCY to gate LRU admission#1121

Open
guitavano wants to merge 3 commits into main from feat/lru-cache-min-frequency

Conversation

@guitavano
Contributor

@guitavano guitavano commented Mar 18, 2026

Summary

  • Adds a CACHE_MIN_FREQUENCY env var that controls how many times a request must be fetched before it is admitted into the LRU cache
  • Protects the LRU from being polluted by one-off/cold requests that are unlikely to be repeated
  • Default is 3 — a key must be requested 3 times before being cached

How it works

When CACHE_MIN_FREQUENCY=N, each call to put() increments an in-memory counter for that cache key. The response is only stored in the LRU (and written to disk) on the N-th call. Before that, every request is still a MISS and fetched from origin normally.

The pending counters are kept in a secondary bounded LRUCache (capped at CACHE_MAX_ITEMS * 4) to prevent unbounded memory growth from unique one-off keys.
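
The flow above can be sketched roughly as follows. All identifiers here (`admitOnPut`, `pending`, `CAP`) are illustrative stand-ins rather than the PR's actual code, and a plain `Map` stands in for the bounded secondary `LRUCache` of pending counters:

```typescript
// Sketch of the frequency-gated admission described above. Names are
// hypothetical; a Map stands in for the PR's bounded counter LRUCache.
const CACHE_MIN_FREQUENCY = 3;
const CACHE_MAX_ITEMS = 2; // tiny value purely for illustration
const CAP = CACHE_MAX_ITEMS * 4; // bound on the pending-counter store

const pending = new Map<string, number>();

// Returns true once the key has been put often enough to enter the LRU.
function admitOnPut(cacheKey: string): boolean {
  const count = (pending.get(cacheKey) ?? 0) + 1;
  if (count < CACHE_MIN_FREQUENCY) {
    // Still warming up: record the count and treat this request as a MISS.
    if (!pending.has(cacheKey) && pending.size >= CAP) {
      // Evict the oldest counter (Map preserves insertion order) so the
      // counter store stays bounded under many unique one-off keys.
      pending.delete(pending.keys().next().value!);
    }
    pending.set(cacheKey, count);
    return false;
  }
  // Threshold reached: drop the counter and admit into the LRU.
  pending.delete(cacheKey);
  return true;
}
```

A `put()` wrapped by this gate would fetch from origin and return early on `false`, and only store to the LRU (and disk) on `true`.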

Configuration

| Variable | Default | Description |
| --- | --- | --- |
| CACHE_MIN_FREQUENCY | 3 | Minimum number of requests before a key is admitted to the LRU |

Test plan

  • Deploy with CACHE_MIN_FREQUENCY=3 (default) and confirm the first 2 requests are MISSes and the 3rd is cached (HIT on subsequent requests)
  • Deploy with CACHE_MIN_FREQUENCY=1 and confirm existing single-request caching behavior works
  • Confirm memory stays bounded when many unique keys are requested
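
The first two items in the plan can be sanity-checked in isolation with a small stand-in for the gating logic; `makeGate` is hypothetical and not the real `lrucache.ts` API:

```typescript
// Hypothetical stand-in for put() gating, parameterized by the threshold,
// so the N=3 and N=1 cases from the test plan can be exercised directly.
function makeGate(minFrequency: number) {
  const counts = new Map<string, number>();
  return (key: string): "MISS" | "CACHED" => {
    if (minFrequency <= 1) return "CACHED"; // existing single-request behavior
    const n = (counts.get(key) ?? 0) + 1;
    if (n < minFrequency) {
      counts.set(key, n); // warm-up request: still a MISS
      return "MISS";
    }
    counts.delete(key); // threshold met: admitted
    return "CACHED";
  };
}

// CACHE_MIN_FREQUENCY=3: first two requests miss, the third is admitted.
const gate3 = makeGate(3);
console.log(gate3("page"), gate3("page"), gate3("page")); // MISS MISS CACHED

// CACHE_MIN_FREQUENCY=1: first-hit caching, as before.
console.log(makeGate(1)("page")); // CACHED
```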

Summary by CodeRabbit

  • Refactor
    • Implemented frequency-aware admission for the LRU cache: new entries must meet a configurable access frequency before being promoted to the main cache, reducing memory growth from one-off keys and improving cache efficiency.

Introduces a frequency threshold before admitting a request into the LRU
cache. When CACHE_MIN_FREQUENCY=N (default 1, backward-compatible), a key
must be put N times before being stored in the LRU and written to disk.
This protects against one-off cold requests polluting the cache.

The pending counters are stored in a bounded LRUCache (4x CACHE_MAX_ITEMS)
to prevent unbounded memory growth from unique keys.

Made-with: Cursor
@github-actions
Contributor

Tagging Options

Should a new tag be published when this PR is merged?

  • 👍 for Patch 1.178.1 update
  • 🎉 for Minor 1.179.0 update
  • 🚀 for Major 2.0.0 update

@coderabbitai

coderabbitai bot commented Mar 18, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 38c58ba5-9060-4ce9-8936-b7ad58bd68f2

📥 Commits

Reviewing files that changed from the base of the PR and between 855015e and db53669.

📒 Files selected for processing (1)
  • runtime/caches/lrucache.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • runtime/caches/lrucache.ts

📝 Walkthrough

Walkthrough

Adds a frequency-tracking layer to the LRU cache: new keys are counted in a bounded frequency store and only admitted to the main cache after reaching CACHE_MIN_FREQUENCY (default 3).

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Cache admission gating: `runtime/caches/lrucache.ts` | Adds CACHE_MIN_FREQUENCY config and a bounded frequency-tracking store. put increments frequency; entries below the threshold are not admitted and their counts are retained until the threshold is met, after which items are promoted to the main LRU cache. |

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant FrequencyStore as Frequency Store
    participant MainCache as Main LRU Cache

    Client->>MainCache: put(key, value)
    MainCache-->>FrequencyStore: check frequency(key)
    alt frequency < CACHE_MIN_FREQUENCY
        FrequencyStore->>FrequencyStore: increment count(key)
        FrequencyStore-->>Client: do not cache yet
    else frequency >= CACHE_MIN_FREQUENCY
        FrequencyStore->>FrequencyStore: remove count(key)
        MainCache->>MainCache: store key,value
        MainCache-->>Client: cached
    end

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~12 minutes

Possibly related PRs

Suggested reviewers

  • hugo-ccabral

Poem

🐰 I counted keys by moonlight's plea,
Three hops and then they join the tree,
A bounded list keeps memory free,
The cache sings soft — come nest with me. 🌙

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, which is insufficient; the required threshold is 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |
✅ Passed checks (2 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title accurately summarizes the main change: adding a CACHE_MIN_FREQUENCY configuration to control LRU cache admission based on request frequency, which directly matches the core functionality introduced in the changeset. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Contributor

@cubic-dev-ai cubic-dev-ai bot left a comment


1 issue found across 1 file

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="runtime/caches/lrucache.ts">

<violation number="1" location="runtime/caches/lrucache.ts:103">
P1: Skip the admission counter for keys that are already in `fileCache`; otherwise stale-while-revalidate refreshes are dropped until the threshold is reached again.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (1)
runtime/caches/lrucache.ts (1)

103-110: Apply frequency gate only to first-time admissions, not refresh/overwrite puts.

Right now every put() is gated, which can suppress legitimate cache refreshes for keys already in the LRU.

Suggested refactor
-          if (CACHE_MIN_FREQUENCY > 1) {
+          if (CACHE_MIN_FREQUENCY > 1 && !fileCache.has(cacheKey)) {
             const count = (frequency.get(cacheKey) ?? 0) + 1;
             if (count < CACHE_MIN_FREQUENCY) {
               frequency.set(cacheKey, count);
               return;
             }
             frequency.delete(cacheKey);
           }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@runtime/caches/lrucache.ts` around lines 103 - 110, The frequency gate in
put() is being applied to every write, blocking legitimate refreshes; change the
logic in the put() method to only apply the CACHE_MIN_FREQUENCY gate for
first-time admissions by checking whether the key already exists in the LRU
(e.g., use this._map.has(cacheKey) or existing lookup) and skip the frequency
increment/gating for keys already present so refresh/overwrite puts proceed
normally; keep using the frequency Map for new keys (increment, early-return if
below CACHE_MIN_FREQUENCY, and delete the entry once admitted).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@runtime/caches/lrucache.ts`:
- Around line 26-30: The default CACHE_MIN_FREQUENCY is set to 3 which changes
admission behavior; change the default to "1" so cache admission remains
first-hit by default. Locate the constant CACHE_MIN_FREQUENCY in
runtime/caches/lrucache.ts and update the parseInt fallback from "3" to "1", and
adjust the surrounding comment to reflect that the default is 1 (opt-in higher
values via the CACHE_MIN_FREQUENCY env var). Ensure parsing/typing remains
unchanged (still using Deno.env.get and parseInt).
- Around line 49-53: The frequency LRU (frequency) holds warm-up counters but is
not cleared on explicit removals, so modify the cache's explicit delete/remove
method (e.g., the LRUCache.delete or LRUCache.prototype.delete implementation)
to also remove the key from frequency; call the corresponding removal API on
frequency (e.g., frequency.delete(key) or frequency.del(key) depending on the
LRUCache API) whenever a key is explicitly deleted so warm-up state is reset and
re-inserted keys don't inherit prior counts.

---

Nitpick comments:
In `@runtime/caches/lrucache.ts`:
- Around line 103-110: The frequency gate in put() is being applied to every
write, blocking legitimate refreshes; change the logic in the put() method to
only apply the CACHE_MIN_FREQUENCY gate for first-time admissions by checking
whether the key already exists in the LRU (e.g., use this._map.has(cacheKey) or
existing lookup) and skip the frequency increment/gating for keys already
present so refresh/overwrite puts proceed normally; keep using the frequency Map
for new keys (increment, early-return if below CACHE_MIN_FREQUENCY, and delete
the entry once admitted).

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 104d1f12-cced-4de6-b2a9-cb1e7e481f21

📥 Commits

Reviewing files that changed from the base of the PR and between f8d4b09 and 855015e.

📒 Files selected for processing (1)
  • runtime/caches/lrucache.ts

Comment on lines +26 to +30
// Minimum number of times a request must be made before it is admitted into the LRU cache.
// Protects against one-off/cold requests polluting the cache. Default is 3.
const CACHE_MIN_FREQUENCY = parseInt(
  Deno.env.get("CACHE_MIN_FREQUENCY") ?? "3",
);


⚠️ Potential issue | 🟠 Major

Default CACHE_MIN_FREQUENCY of 3 is a behavior-breaking default.

This changes cache admission from first-hit to third-hit globally. If backward compatibility is expected, default should remain 1 and higher values should be opt-in.

Proposed fix
 const CACHE_MIN_FREQUENCY = parseInt(
-  Deno.env.get("CACHE_MIN_FREQUENCY") ?? "3",
+  Deno.env.get("CACHE_MIN_FREQUENCY") ?? "1",
 );
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

Before:
// Minimum number of times a request must be made before it is admitted into the LRU cache.
// Protects against one-off/cold requests polluting the cache. Default is 3.
const CACHE_MIN_FREQUENCY = parseInt(
  Deno.env.get("CACHE_MIN_FREQUENCY") ?? "3",
);

After:
const CACHE_MIN_FREQUENCY = parseInt(
  Deno.env.get("CACHE_MIN_FREQUENCY") ?? "1",
);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@runtime/caches/lrucache.ts` around lines 26 - 30, The default
CACHE_MIN_FREQUENCY is set to 3 which changes admission behavior; change the
default to "1" so cache admission remains first-hit by default. Locate the
constant CACHE_MIN_FREQUENCY in runtime/caches/lrucache.ts and update the
parseInt fallback from "3" to "1", and adjust the surrounding comment to reflect
that the default is 1 (opt-in higher values via the CACHE_MIN_FREQUENCY env
var). Ensure parsing/typing remains unchanged (still using Deno.env.get and
parseInt).

Comment on lines +49 to +53
// Tracks how many times each key has been put before admission into the LRU.
// Bounded to avoid unbounded memory growth from unique one-off keys.
const frequency = new LRUCache<string, number>({
  max: CACHE_MAX_ITEMS * 4,
});


⚠️ Potential issue | 🟡 Minor

Clear warm-up counters on explicit delete() to keep invalidation semantics consistent.

frequency state is introduced, but explicit deletes currently do not clear it. A deleted key can retain prior warm-up progress and be admitted earlier than expected on subsequent puts.

Proposed fix
         delete: async (
           request: RequestInfo | URL,
           options?: CacheQueryOptions,
         ): Promise<boolean> => {
           const cacheKey = await requestURLSHA1(request);
           cacheInner.delete(cacheKey, options);
+          frequency.delete(cacheKey);
           return fileCache.delete(cacheKey);
         },
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

Before:
// Tracks how many times each key has been put before admission into the LRU.
// Bounded to avoid unbounded memory growth from unique one-off keys.
const frequency = new LRUCache<string, number>({
  max: CACHE_MAX_ITEMS * 4,
});

After:
        delete: async (
          request: RequestInfo | URL,
          options?: CacheQueryOptions,
        ): Promise<boolean> => {
          const cacheKey = await requestURLSHA1(request);
          cacheInner.delete(cacheKey, options);
          frequency.delete(cacheKey);
          return fileCache.delete(cacheKey);
        },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@runtime/caches/lrucache.ts` around lines 49 - 53, The frequency LRU
(frequency) holds warm-up counters but is not cleared on explicit removals, so
modify the cache's explicit delete/remove method (e.g., the LRUCache.delete or
LRUCache.prototype.delete implementation) to also remove the key from frequency;
call the corresponding removal API on frequency (e.g., frequency.delete(key) or
frequency.del(key) depending on the LRUCache API) whenever a key is explicitly
deleted so warm-up state is reset and re-inserted keys don't inherit prior
counts.
