fix(cache): correctness fixes and stale window bump #1124
Conversation
- Fix mkdirSync blocking the event loop (use async Deno.mkdir)
- Memoize LRU open() to prevent duplicate cache instances per name
- Bump STALE_TTL_PERIOD default from 30s to 1h for better SWR coverage
- Skip caching oversized entries (CACHE_MAX_ENTRY_SIZE, default 2MB)
- Evict stale entry when oversized write is rejected
- Add bypass reason dimension to loader_cache metric

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
📝 Walkthrough

Adds cache-size limits and OTEL metrics, converts filesystem cache dir creation to async, deduplicates concurrent LRU cache initialization, extends stale TTL behavior, and changes the background stale-while-revalidate flow to run inside singleFlight with a single bg revalidation metric emission.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant Loader
    participant Cache
    participant SingleFlight
    participant Handler
    participant Metrics
    Client->>Loader: request
    Loader->>Cache: match(cacheKey)
    alt cache hit and fresh
        Cache-->>Loader: cached response
        Loader->>Metrics: record hit, content_length, resolver_latency(json_parse)
        Loader-->>Client: return cached response
    else cache stale
        Cache-->>Loader: cached stale response
        Loader->>SingleFlight: acquire revalidation for cacheKey
        SingleFlight->>Handler: callHandlerAndCache()
        Handler-->>SingleFlight: new response
        SingleFlight->>Cache: write new entry (respect size limit)
        SingleFlight->>Metrics: record loader_bg_revalidation duration
        SingleFlight-->>Loader: revalidation complete
        Loader-->>Client: return cached stale response (immediate)
    else cache miss
        Loader->>Handler: callHandlerAndCache()
        Handler-->>Loader: response
        Loader->>Cache: attempt write (skip if > CACHE_MAX_ENTRY_SIZE)
        Loader->>Metrics: record bypass reason if not cached
        Loader-->>Client: return response
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@blocks/loader.ts`:
- Around line 158-160: The current initialization of CACHE_MAX_ENTRY_SIZE uses
parseInt(... ) || 2097152 which converts an explicit "0" env value into the
default; change it to treat "unset" and parse-failures separately: read the raw
value from Deno.env.get("CACHE_MAX_ENTRY_SIZE"), if raw is undefined use the
default 2097152; otherwise parse with parseInt (or Number.parseInt) and if the
result is NaN fall back to 2097152, but if the parsed value is 0 keep it as 0.
Update the logic around the CACHE_MAX_ENTRY_SIZE constant (and the parseInt
call) to implement this explicit undefined/NaN handling so "0" remains
distinguishable from "unset".
- Around line 312-316: The oversized-entry branch returns before the cache
eviction completes; change the un-awaited promise
cache.delete(request).catch(...) so the delete is awaited before returning.
Locate the oversized check that uses jsonStringEncoded and CACHE_MAX_ENTRY_SIZE
and update the branch around cache.delete(request) (referencing cache.delete and
logger.error) to await the deletion (e.g., use await or a try/catch around await
cache.delete(request)) and log any error, then return json only after the await
completes.
In `@runtime/caches/lrucache.ts`:
- Around line 64-65: The code currently uses fileCache.has(cacheKey) followed by
cacheInner.match(cacheKey), which probes without updating LRU recency; change
the read path to call cacheInner.get(cacheKey) when fileCache.has(cacheKey) (or
replace the has+match pair with a single get) so cache hits update recency.
Locate the check around fileCache.has(cacheKey) and replace the subsequent
cacheInner.match(cacheKey) call with cacheInner.get(cacheKey) (or use the get
result directly) to ensure hot entries are promoted in the LRU.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 33c3340a-334e-4fb9-974c-4605bf577985
📒 Files selected for processing (3)
- blocks/loader.ts
- runtime/caches/fileSystem.ts
- runtime/caches/lrucache.ts
```ts
const CACHE_MAX_ENTRY_SIZE = parseInt(
  Deno.env.get("CACHE_MAX_ENTRY_SIZE") ?? "2097152", // 2 MB
) || 2097152;
```
Keep CACHE_MAX_ENTRY_SIZE=0 distinguishable from "unset".
parseInt(...) || 2097152 turns an explicit 0 into the default, so this knob can't intentionally disable writes or force the oversized path in staging. Use an explicit parse-failure check instead.
♻️ Proposed fix

```diff
-const CACHE_MAX_ENTRY_SIZE = parseInt(
-  Deno.env.get("CACHE_MAX_ENTRY_SIZE") ?? "2097152", // 2 MB
-) || 2097152;
+const parsedCacheMaxEntrySize = Number.parseInt(
+  Deno.env.get("CACHE_MAX_ENTRY_SIZE") ?? "2097152",
+  10,
+);
+const CACHE_MAX_ENTRY_SIZE = Number.isNaN(parsedCacheMaxEntrySize)
+  ? 2097152
+  : parsedCacheMaxEntrySize;
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
const parsedCacheMaxEntrySize = Number.parseInt(
  Deno.env.get("CACHE_MAX_ENTRY_SIZE") ?? "2097152",
  10,
);
const CACHE_MAX_ENTRY_SIZE = Number.isNaN(parsedCacheMaxEntrySize)
  ? 2097152
  : parsedCacheMaxEntrySize;
```
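The suggested handling can also be factored into a small, testable helper — a sketch under the same 2 MB default (the `parseSizeEnv` name is illustrative, not the PR's actual code):

```typescript
// Parse a size limit from an env var, keeping an explicit "0"
// distinguishable from an unset or unparsable value.
function parseSizeEnv(raw: string | undefined, fallback: number): number {
  if (raw === undefined) return fallback; // unset -> default
  const parsed = Number.parseInt(raw, 10);
  return Number.isNaN(parsed) ? fallback : parsed; // "0" stays 0
}

// Real code would call it as:
// const CACHE_MAX_ENTRY_SIZE =
//   parseSizeEnv(Deno.env.get("CACHE_MAX_ENTRY_SIZE"), 2_097_152);
```

With this shape, `CACHE_MAX_ENTRY_SIZE=0` disables caching of all entries as intended, while a typo'd value still falls back to the default.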
```ts
if (jsonStringEncoded.length > CACHE_MAX_ENTRY_SIZE) {
  cache.delete(request).catch((error) =>
    logger.error(`loader error ${error}`)
  );
  return json;
}
```
Await the eviction in the oversized branch.
This path returns before the delete settles. In the stale-while-revalidate flow, another request can still hit the old stale entry even though this refresh already decided it must be removed.
🐛 Proposed fix

```diff
 if (jsonStringEncoded.length > CACHE_MAX_ENTRY_SIZE) {
-  cache.delete(request).catch((error) =>
+  await cache.delete(request).catch((error) =>
     logger.error(`loader error ${error}`)
   );
   return json;
 }
```

📝 Committable suggestion
}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
if (jsonStringEncoded.length > CACHE_MAX_ENTRY_SIZE) {
  await cache.delete(request).catch((error) =>
    logger.error(`loader error ${error}`)
  );
  return json;
}
```
```ts
if (fileCache.has(cacheKey)) {
  return cacheInner.match(cacheKey);
}
```
🧩 Analysis chain
🌐 Web query:
For npm:lru-cache@10.2.0, does `cache.has(key)` update recency / LRU ordering by default, or should `cache.get(key)` be used when read hits must keep entries hot?
💡 Result:
For npm:lru-cache@10.2.0, cache.has(key) does NOT update recency / LRU ordering by default. Use cache.get(key) when read hits must keep entries hot.
Citations:
- 1: https://www.npmjs.com/package/lru-cache/v/10.2.0
- 2: https://registry.npmjs.org/lru-cache
- 3: https://www.npmjs.com/package/lru-cache
- 4: https://www.npmjs.com/package/lru-cache/v/10.2.1
- 5: https://github.com/isaacs/node-lru-cache/blob/main/src/index.ts
- 6: https://www.npmjs.com/package/lru-cache?activeTab=versions
- 7: https://isaacs.github.io/node-lru-cache/
- 8: https://github.com/isaacs/node-lru-cache
- 9: https://www.jsdocs.io/package/lru-cache
- 10: https://www.cacheiql.com
Use get() here so cache hits stay recent.
match() is the read path for this index, but has() is a non-touching probe that does not update LRU recency. Hot entries will age out as if idle despite being actively read. Switching to get() keeps eviction aligned with actual traffic patterns.
♻️ Proposed fix

```diff
- if (fileCache.has(cacheKey)) {
+ if (fileCache.get(cacheKey) !== undefined) {
    return cacheInner.match(cacheKey);
  }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
if (fileCache.get(cacheKey) !== undefined) {
  return cacheInner.match(cacheKey);
}
```
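The recency distinction at stake can be illustrated with a toy Map-based LRU (a sketch, not the lru-cache API itself): `get` re-inserts the entry to mark it most recently used, while `has` is a pure probe that leaves ordering untouched — matching lru-cache@10's default behavior.

```typescript
// Minimal LRU built on Map's insertion order: the first key is the
// least recently used and is evicted when capacity is exceeded.
class TinyLru<K, V> {
  private map = new Map<K, V>();
  constructor(private maxSize: number) {}

  has(key: K): boolean {
    return this.map.has(key); // probe only: no recency update
  }

  get(key: K): V | undefined {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key)!;
    this.map.delete(key); // re-insert to promote to most recently used
    this.map.set(key, value);
    return value;
  }

  set(key: K, value: V): void {
    if (this.map.has(key)) {
      this.map.delete(key);
    } else if (this.map.size >= this.maxSize) {
      // evict least recently used (first key in insertion order)
      const oldest = this.map.keys().next().value as K;
      this.map.delete(oldest);
    }
    this.map.set(key, value);
  }
}
```

If the read path only calls `has`, an entry that is read constantly still sits at the cold end of the order and gets evicted first — exactly the failure mode the comment describes.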
Add LRU eviction counter, size/key gauges with disk fill warning, cache entry size histogram, bg revalidation duration histogram, json parse latency, bypass reason dimension, and X-Cache-Tier span attribute. Ships the observability foundation before feature PRs land. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
LRU eviction counter, size/key gauges, and disk fill warning now ship in the correctness+observability PR for earlier visibility. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Histograms, bgRevalidation timer, and cache entry size recording now ship in the correctness+observability PR for earlier visibility. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
4 issues found across 3 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="blocks/loader.ts">
<violation number="1" location="blocks/loader.ts:404">
P2: Record cached JSON parse time in a separate metric instead of `resolver_latency`, otherwise this histogram no longer represents end-to-end loader latency.</violation>
</file>
<file name="runtime/caches/common.ts">
<violation number="1" location="runtime/caches/common.ts:55">
P2: Don't delete `X-Cache-Tier` from the matched response inside the instrumentation wrapper; this changes the response payload on cache hits instead of only recording telemetry.</violation>
</file>
<file name="runtime/caches/lrucache.ts">
<violation number="1" location="runtime/caches/lrucache.ts:81">
P3: This warning path is unthrottled, so a cache that remains above the threshold will log on every gauge callback execution.</violation>
<violation number="2" location="runtime/caches/lrucache.ts:99">
P2: Tracking opened caches in a global map without any cleanup leaks cache instances and stale metric labels for every distinct cache name.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
```ts
if (OTEL_ENABLE_EXTRA_METRICS) {
  const parseStart = performance.now();
  const result = await matched.json();
  stats.latency.record(performance.now() - parseStart, {
```
P2: Record cached JSON parse time in a separate metric instead of resolver_latency, otherwise this histogram no longer represents end-to-end loader latency.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At blocks/loader.ts, line 404:
<comment>Record cached JSON parse time in a separate metric instead of `resolver_latency`, otherwise this histogram no longer represents end-to-end loader latency.</comment>
<file context>
@@ -355,13 +369,44 @@ const wrapLoader = (
+ if (OTEL_ENABLE_EXTRA_METRICS) {
+ const parseStart = performance.now();
+ const result = await matched.json();
+ stats.latency.record(performance.now() - parseStart, {
+ loader,
+ status: "json_parse",
</file context>
```ts
const tier = isMatch.headers.get("X-Cache-Tier");
if (tier) {
  span.setAttribute("cache_tier", parseInt(tier));
  isMatch.headers.delete("X-Cache-Tier");
```
P2: Don't delete X-Cache-Tier from the matched response inside the instrumentation wrapper; this changes the response payload on cache hits instead of only recording telemetry.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At runtime/caches/common.ts, line 55:
<comment>Don't delete `X-Cache-Tier` from the matched response inside the instrumentation wrapper; this changes the response payload on cache hits instead of only recording telemetry.</comment>
<file context>
@@ -46,6 +46,15 @@ export const withInstrumentation = (
+ const tier = isMatch.headers.get("X-Cache-Tier");
+ if (tier) {
+ span.setAttribute("cache_tier", parseInt(tier));
+ isMatch.headers.delete("X-Cache-Tier");
+ }
+ }
</file context>
```ts
if (existing) return existing;
const fileCache = new LRUCache(cacheOptions(cacheInner));
activeCaches.set(_cacheName, fileCache);
const cache = Promise.resolve({
```
P2: Tracking opened caches in a global map without any cleanup leaks cache instances and stale metric labels for every distinct cache name.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At runtime/caches/lrucache.ts, line 99:
<comment>Tracking opened caches in a global map without any cleanup leaks cache instances and stale metric labels for every distinct cache name.</comment>
<file context>
@@ -45,6 +96,7 @@ function createLruCacheStorage(cacheStorageInner: CacheStorage): CacheStorage {
const existing = openedCachesByName.get(_cacheName);
if (existing) return existing;
const fileCache = new LRUCache(cacheOptions(cacheInner));
+ activeCaches.set(_cacheName, fileCache);
const cache = Promise.resolve({
...baseCache,
</file context>
```ts
observer.observe(lru.calculatedSize, { cache: name });
const ratio = lru.calculatedSize / CACHE_MAX_SIZE;
if (ratio >= LRU_DISK_WARN_RATIO) {
  logger.warn(
```
P3: This warning path is unthrottled, so a cache that remains above the threshold will log on every gauge callback execution.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At runtime/caches/lrucache.ts, line 81:
<comment>This warning path is unthrottled, so a cache that remains above the threshold will log on every gauge callback execution.</comment>
<file context>
@@ -31,12 +39,55 @@ const cacheOptions = (cache: Cache) => (
+ observer.observe(lru.calculatedSize, { cache: name });
+ const ratio = lru.calculatedSize / CACHE_MAX_SIZE;
+ if (ratio >= LRU_DISK_WARN_RATIO) {
+ logger.warn(
+ `lru_cache: disk usage for cache "${name}" is at ` +
+ `${Math.round(lru.calculatedSize / 1024 / 1024)}MB / ` +
</file context>
Actionable comments posted: 2
🧹 Nitpick comments (1)
runtime/caches/lrucache.ts (1)
76-88: Remove logging side effect from metrics callback or implement warning cooldown.

ObservableGauge callbacks are invoked on each metric collection cycle (default 60-second intervals). Currently, this callback logs on every invocation whenever the cache ratio exceeds LRU_DISK_WARN_RATIO, causing excessive log spam under sustained high usage. Per the OpenTelemetry spec, callbacks should avoid side effects like logging. Either track threshold crossings to warn only on transitions, or implement a cooldown period between warnings.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@runtime/caches/lrucache.ts` around lines 76-88: the metrics callback registered in lruBytesGauge.addCallback currently logs on every collection when lru.calculatedSize/CACHE_MAX_SIZE exceeds LRU_DISK_WARN_RATIO. Avoid per-collection side effects by adding state to track warnings per cache (e.g., a module-level Map or Set such as lastWarnTimestamps or previouslyWarned, keyed by cache name) and only call logger.warn when either (a) the ratio crosses the threshold from below to above, or (b) the last warning timestamp for that cache is older than a configured cooldown (e.g., WARN_COOLDOWN_MS). Reference activeCaches, lru.calculatedSize, LRU_DISK_WARN_RATIO, and logger.warn when implementing the check, so subsequent callback invocations skip logging until the next transition or cooldown expiry.
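The cooldown approach can be sketched as a pure helper that the gauge callback consults before logging (the `WARN_COOLDOWN_MS` constant and `shouldWarn` name are hypothetical, not part of the PR):

```typescript
// Per-cache warning throttle: warn on a fresh threshold crossing, or
// after the cooldown has elapsed while still above the threshold.
const WARN_COOLDOWN_MS = 5 * 60 * 1000; // assumed cooldown
const lastWarnAt = new Map<string, number>();

function shouldWarn(
  name: string,
  ratio: number,
  threshold: number,
  now: number,
): boolean {
  if (ratio < threshold) {
    lastWarnAt.delete(name); // reset so the next crossing warns again
    return false;
  }
  const last = lastWarnAt.get(name);
  if (last !== undefined && now - last < WARN_COOLDOWN_MS) return false;
  lastWarnAt.set(name, now);
  return true;
}
```

The gauge callback would then call `shouldWarn(name, ratio, LRU_DISK_WARN_RATIO, Date.now())` and only invoke `logger.warn` when it returns true.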
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@runtime/caches/lrucache.ts`:
- Around line 82-86: The block constructing the long log string in
runtime/caches/lrucache.ts is improperly formatted and failing deno fmt; run
`deno fmt runtime/caches/lrucache.ts` (or reformat the template string) so the
multiline template string lines are wrapped/indented to satisfy the
formatter—locate the code that references lru.calculatedSize, CACHE_MAX_SIZE,
CACHE_MAX_AGE_S and the variable name `name` in the log call and apply the
formatter so CI passes.
- Around line 42-45: The dispose handler currently always deletes the backing
cache and increments lruEvictionCounter even for synchronous "set"/"delete"
reasons, which causes racey fire-and-forget deletes; update the dispose function
(dispose) to inspect the reason parameter and only perform
lruEvictionCounter.add(...) and await cache.delete(key) when reason is "evict"
or "expire" (skip performing delete for "set" and "delete" reasons) so
revalidation writes from cacheInner.set/put cannot be lost.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 81003eb7-febe-4d9c-9250-64de4f21fcd1
📒 Files selected for processing (3)
- blocks/loader.ts
- runtime/caches/common.ts
- runtime/caches/lrucache.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- blocks/loader.ts
```ts
dispose: async (_value: boolean, key: string, reason: string) => {
  lruEvictionCounter.add(1, { reason });
  await cache.delete(key);
},
```
🧩 Analysis chain
🌐 Web query:
For npm:lru-cache@10.2.0, is the `dispose(value, key, reason)` callback synchronous/awaited, and what `reason` values are emitted for overwrite (set), explicit delete, expiration, and eviction?
💡 Result:
For npm:lru-cache@10.2.0, the dispose(value, key, reason) callback is synchronous and not awaited. It is called before the item is fully removed from the cache. The reason values emitted are:
- overwrite (set): "set"
- explicit delete: "delete"
- expiration: "expire"
- eviction: "evict"
Citations:
- 1: https://isaacs.github.io/node-lru-cache/interfaces/LRUCache.OptionsBase.html
- 2: https://github.com/isaacs/node-lru-cache/blob/main/src/index.ts
- 3: https://www.npmjs.com/package/lru-cache/v/10.2.1
- 4: https://www.npmjs.com/package/lru-cache
- 5: https://www.npmjs.com/package/lru-cache/v/10.2.0
Only delete the backing cache for true evictions.
blocks/loader.ts:355-413 rewrites the same cache keys during revalidation, creating a race condition. When cacheInner.set(key, newValue) is called, the dispose callback is invoked synchronously with reason="set", but the await cache.delete(key) inside is not awaited by lru-cache. This creates a fire-and-forget delete that races against the subsequent cacheInner.put(). If the delete promise resolves after the put completes, it removes the freshly revalidated entry. Check the reason parameter and only delete for "evict" and "expire" reasons, skipping "set" and "delete" cases.
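A sketch of the reason-gated handler, using the lru-cache@10 reason strings confirmed by the web query above; the factory shape and dependency names are illustrative so the logic can be exercised with stubs rather than the real counter and cache:

```typescript
type DisposeReason = "set" | "delete" | "evict" | "expire";

// Factory so the handler can be unit-tested; real code would pass the
// OTEL eviction counter, the backing Cache, and the logger.
function makeDispose(deps: {
  countEviction: (reason: DisposeReason) => void;
  deleteBacking: (key: string) => Promise<void>;
  logError: (msg: string) => void;
}) {
  return (_value: unknown, key: string, reason: DisposeReason): void => {
    // "set" (overwrite) and "delete" (explicit removal) must not touch
    // the backing store: lru-cache fires dispose synchronously and never
    // awaits it, so the delete would race the fresh write.
    if (reason !== "evict" && reason !== "expire") return;
    deps.countEviction(reason);
    deps.deleteBacking(key).catch((e) => deps.logError(String(e)));
  };
}
```

Note the handler is deliberately non-async: since lru-cache ignores the returned promise anyway, the failure path is handled with an explicit `.catch` instead of `await`.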
```ts
`lru_cache: disk usage for cache "${name}" is at ` +
  `${Math.round(lru.calculatedSize / 1024 / 1024)}MB / ` +
  `${Math.round(CACHE_MAX_SIZE / 1024 / 1024)}MB (${Math.round(ratio * 100)}%). ` +
  `LRU is evicting aggressively. Consider increasing CACHE_MAX_SIZE or reducing CACHE_MAX_AGE_S.`,
);
```
deno fmt --check is already failing on this block.
CI is red for runtime/caches/lrucache.ts:84-86. Please run deno fmt runtime/caches/lrucache.ts before merge.
🧰 Tools
🪛 GitHub Actions: ci
[error] 84-86: deno fmt --check: Text differed by line endings (diff shown at runtime/caches/lrucache.ts:84-86).
Summary

- `Deno.mkdirSync` blocking the event loop → async `Deno.mkdir`
- Memoize `open()` to prevent duplicate cache instances per name
- `STALE_TTL_PERIOD` default from 30s → 1h for better stale-while-revalidate coverage on low-traffic sites
- Skip caching oversized entries (`CACHE_MAX_ENTRY_SIZE`, default 2MB) and evict stale entry when rejected
- Add bypass reason dimension to the `loader_cache` metric

Test plan

- Verify `STALE_TTL_PERIOD=3600000` in logs — stale content served for up to 1h

🤖 Generated with Claude Code
PR 1 of 5 — split from #1122. Merge order: 1 → 2 → 3 → 4 → 5
Summary by cubic
Fixes cache correctness issues and expands stale-while-revalidate to 1 hour by default to improve reliability, especially on low-traffic sites. Adds cache observability to track evictions, sizes, revalidation time, and parse costs.
Bug Fixes
New Features
Written for commit e3c9ecf. Summary will update on new commits.
Summary by CodeRabbit
Performance & Reliability
New Features
Refactor