70 changes: 67 additions & 3 deletions blocks/loader.ts
@@ -146,6 +146,16 @@ const stats = {
unit: "ms",
valueType: ValueType.DOUBLE,
}),
cacheEntrySize: meter.createHistogram("loader_cache_entry_size", {
description: "size of cached loader responses in bytes",
unit: "bytes",
valueType: ValueType.DOUBLE,
}),
bgRevalidation: meter.createHistogram("loader_bg_revalidation", {
description: "duration of background stale-while-revalidate calls",
unit: "ms",
valueType: ValueType.DOUBLE,
}),
};

let maybeCache: Cache | undefined;
@@ -155,6 +165,9 @@ caches?.open("loader")
.catch(() => maybeCache = undefined);

const MAX_AGE_S = parseInt(Deno.env.get("CACHE_MAX_AGE_S") ?? "60"); // 60 seconds
const CACHE_MAX_ENTRY_SIZE = parseInt(
Deno.env.get("CACHE_MAX_ENTRY_SIZE") ?? "2097152", // 2 MB
) || 2097152;
Comment on lines +168 to +170
⚠️ Potential issue | 🟡 Minor

Keep CACHE_MAX_ENTRY_SIZE=0 distinguishable from "unset".

parseInt(...) || 2097152 turns an explicit 0 into the default, so this knob can't intentionally disable writes or force the oversized path in staging. Use an explicit parse-failure check instead.

♻️ Proposed fix
-const CACHE_MAX_ENTRY_SIZE = parseInt(
-  Deno.env.get("CACHE_MAX_ENTRY_SIZE") ?? "2097152", // 2 MB
-) || 2097152;
+const parsedCacheMaxEntrySize = Number.parseInt(
+  Deno.env.get("CACHE_MAX_ENTRY_SIZE") ?? "2097152",
+  10,
+);
+const CACHE_MAX_ENTRY_SIZE = Number.isNaN(parsedCacheMaxEntrySize)
+  ? 2097152
+  : parsedCacheMaxEntrySize;
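The pitfall is easy to reproduce in isolation. A minimal sketch of the distinction (`parseEntrySize` is a hypothetical helper for illustration, not code from this PR):

```typescript
// Hypothetical helper illustrating the review's point; not part of the PR.
function parseEntrySize(raw: string | undefined, fallback = 2_097_152): number {
  if (raw === undefined) return fallback; // unset -> default
  const parsed = Number.parseInt(raw, 10);
  return Number.isNaN(parsed) ? fallback : parsed; // "0" stays 0
}

// The flagged pattern: `|| fallback` treats 0 as falsy and discards it.
const buggy = (raw: string) => Number.parseInt(raw, 10) || 2_097_152;

console.log(buggy("0"));                // 2097152 — explicit 0 is lost
console.log(parseEntrySize("0"));       // 0 — caching can be disabled
console.log(parseEntrySize(undefined)); // 2097152 — unset uses default
console.log(parseEntrySize("garbage")); // 2097152 — parse failure uses default
```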


// Reuse TextEncoder instance to avoid repeated instantiation
const textEncoder = new TextEncoder();
@@ -248,7 +261,14 @@ const wrapLoader = (
!shouldNotCache && ctx.vary?.push(cacheKeyValue);

status = "bypass";
stats.cache.add(1, { status, loader });
const bypassReason = isCacheNoStore
? "no-store"
: isCacheNoCache
? "no-cache"
: isCacheKeyNull
? "null-key"
: "disabled";
stats.cache.add(1, { status, loader, reason: bypassReason });

RequestContext?.signal?.throwIfAborted();
return await handler(props, req, ctx);
@@ -297,6 +317,19 @@ const wrapLoader = (
// Serialize and encode once on the main thread.
const jsonStringEncoded = textEncoder.encode(JSON.stringify(json));

// Skip caching oversized entries to protect disk and memory.
// Also evict any existing stale entry so it doesn't stay pinned forever.
if (jsonStringEncoded.length > CACHE_MAX_ENTRY_SIZE) {
cache.delete(request).catch((error) =>
logger.error(`loader error ${error}`)
);
return json;
Comment on lines +322 to +326
⚠️ Potential issue | 🟠 Major

Await the eviction in the oversized branch.

This path returns before the delete settles. In the stale-while-revalidate flow, another request can still hit the old stale entry even though this refresh already decided it must be removed.

🐛 Proposed fix
           if (jsonStringEncoded.length > CACHE_MAX_ENTRY_SIZE) {
-            cache.delete(request).catch((error) =>
+            await cache.delete(request).catch((error) =>
               logger.error(`loader error ${error}`)
             );
             return json;
           }
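The ordering hazard the comment describes can be sketched with a plain `Map` standing in for the cache (a toy model, not the PR's code):

```typescript
// Toy model: an un-awaited delete can settle after the caller returns,
// so a concurrent reader still sees the entry this refresh chose to evict.
const store = new Map<string, string>([["key", "stale"]]);

const slowDelete = (k: string): Promise<void> =>
  new Promise((resolve) =>
    setTimeout(() => {
      store.delete(k);
      resolve();
    }, 10)
  );

async function refreshFireAndForget(): Promise<string> {
  void slowDelete("key"); // mirrors the flagged un-awaited cache.delete
  return "fresh";         // returns while "stale" is still readable
}

await refreshFireAndForget();
console.log(store.has("key")); // true — the stale entry is still visible
```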

}

if (OTEL_ENABLE_EXTRA_METRICS) {
stats.cacheEntrySize.record(jsonStringEncoded.length, { loader });
}

const expires = new Date(Date.now() + (cacheMaxAge * 1e3))
.toUTCString();
const headerPairs: [string, string][] = [
@@ -336,13 +369,44 @@ const wrapLoader = (
status = "stale";
stats.cache.add(1, { status, loader });

bgFlights.do(request.url, callHandlerAndCache)
.catch((error) => logger.error(`loader error ${error}`));
// Timer lives inside the singleFlight fn so it records exactly once
// per revalidation, not once per concurrent waiter on the same key.
bgFlights.do(request.url, async () => {
const bgStart = performance.now();
try {
return await callHandlerAndCache();
} finally {
if (OTEL_ENABLE_EXTRA_METRICS) {
stats.bgRevalidation.record(
performance.now() - bgStart,
{ loader },
);
}
}
}).catch((error) => logger.error(`loader error ${error}`));
} else {
status = "hit";
stats.cache.add(1, { status, loader });
}

if (OTEL_ENABLE_EXTRA_METRICS) {
const cl = parseInt(
matched.headers.get("Content-Length") ?? "0",
);
if (cl > 0) {
stats.cacheEntrySize.record(cl, { loader, status });
}
}

if (OTEL_ENABLE_EXTRA_METRICS) {
const parseStart = performance.now();
const result = await matched.json();
stats.latency.record(performance.now() - parseStart, {

@cubic-dev-ai cubic-dev-ai bot Mar 19, 2026


P2: Record cached JSON parse time in a separate metric instead of resolver_latency, otherwise this histogram no longer represents end-to-end loader latency.


loader,
status: "json_parse",
});
return result;
}
return await matched.json();
};
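One way to follow the P2 suggestion of a dedicated parse-time metric, sketched with a stub histogram (the real code would presumably use `meter.createHistogram`, as this diff already does for `loader_cache_entry_size`; `loader_json_parse` is a hypothetical metric name):

```typescript
// Stub histogram standing in for meter.createHistogram("loader_json_parse", ...).
type Attrs = Record<string, string>;
const records: { value: number; attrs: Attrs }[] = [];
const jsonParseHistogram = {
  record: (value: number, attrs: Attrs) => records.push({ value, attrs }),
};

async function parseCached(matched: Response, loader: string) {
  const start = performance.now();
  const result = await matched.json();
  // Dedicated metric: cached-JSON parse time no longer pollutes the
  // end-to-end resolver_latency histogram.
  jsonParseHistogram.record(performance.now() - start, { loader });
  return result;
}

const res = new Response(JSON.stringify({ ok: true }));
console.log(await parseCached(res, "demo")); // { ok: true }
```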

9 changes: 9 additions & 0 deletions runtime/caches/common.ts
@@ -46,6 +46,15 @@ export const withInstrumentation = (
const result = getCacheStatus(isMatch);

span.setAttribute("cache_status", result);
if (isMatch) {
const cl = isMatch.headers.get("Content-Length");
if (cl) span.setAttribute("content_length", parseInt(cl));
const tier = isMatch.headers.get("X-Cache-Tier");
if (tier) {
span.setAttribute("cache_tier", parseInt(tier));
isMatch.headers.delete("X-Cache-Tier");

@cubic-dev-ai cubic-dev-ai bot Mar 19, 2026


P2: Don't delete X-Cache-Tier from the matched response inside the instrumentation wrapper; this changes the response payload on cache hits instead of only recording telemetry.

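The non-mutating alternative can be sketched as follows (`setAttr` stands in for `span.setAttribute`; a sketch of the idea, not the PR's actual wiring):

```typescript
// Record telemetry from response headers without mutating them, so cache
// hits return byte-identical payloads. Uses the WHATWG Headers class.
function recordCacheTelemetry(
  headers: Headers,
  setAttr: (key: string, value: number) => void,
): void {
  const cl = headers.get("Content-Length");
  if (cl) setAttr("content_length", Number.parseInt(cl, 10));
  const tier = headers.get("X-Cache-Tier");
  // Read-only: the header stays available for any downstream consumer.
  if (tier) setAttr("cache_tier", Number.parseInt(tier, 10));
}

const attrs: Record<string, number> = {};
const headers = new Headers({ "Content-Length": "1024", "X-Cache-Tier": "2" });
recordCacheTelemetry(headers, (k, v) => (attrs[k] = v));
console.log(attrs);                       // { content_length: 1024, cache_tier: 2 }
console.log(headers.get("X-Cache-Tier")); // "2" — still present
```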

}
}
cacheHit.add(1, {
result,
engine,
2 changes: 1 addition & 1 deletion runtime/caches/fileSystem.ts
@@ -106,7 +106,7 @@ function createFileSystemCache(): CacheStorage {
if (
FILE_SYSTEM_CACHE_DIRECTORY && !existsSync(FILE_SYSTEM_CACHE_DIRECTORY)
) {
await Deno.mkdirSync(FILE_SYSTEM_CACHE_DIRECTORY, { recursive: true });
await Deno.mkdir(FILE_SYSTEM_CACHE_DIRECTORY, { recursive: true });
}
isCacheInitialized = true;
} catch (err) {
71 changes: 64 additions & 7 deletions runtime/caches/lrucache.ts
@@ -1,11 +1,19 @@
import { LRUCache } from "npm:lru-cache@10.2.0";
import { ValueType } from "../../deps.ts";
import { logger } from "../../observability/otel/config.ts";
import { meter } from "../../observability/otel/metrics.ts";
import {
assertCanBeCached,
assertNoOptions,
baseCache,
createBaseCacheStorage,
} from "./utils.ts";

const lruEvictionCounter = meter.createCounter("lru_cache_eviction", {
unit: "1",
valueType: ValueType.DOUBLE,
});

// keep compatible with old variable name
const CACHE_MAX_SIZE = parseInt(
Deno.env.get("CACHE_MAX_SIZE") ?? Deno.env.get("MAX_CACHE_SIZE") ??
@@ -18,10 +26,11 @@ const CACHE_TTL_AUTOPURGE = Deno.env.get("CACHE_TTL_AUTOPURGE") === "true"; // c
const CACHE_TTL_RESOLUTION = parseInt(
Deno.env.get("CACHE_TTL_RESOLUTION") ?? "1000",
); // updates the lru cache timer every 1 second
// Additional time-to-live increment in milliseconds to extend the cache expiration beyond the response's Expires header.
// If not set, the cache will use only the expiration timestamp from response headers
// How long stale content remains serveable (and stays on disk) beyond its expires header.
// Default: 1 hour — long enough for low-traffic sites to keep serving cached content across
// quiet periods while background revalidation catches up.
const STALE_TTL_PERIOD = parseInt(
Deno.env.get("STALE_TTL_PERIOD") ?? "30000",
Deno.env.get("STALE_TTL_PERIOD") ?? "3600000", // 1h
);

const cacheOptions = (cache: Cache) => (
@@ -30,18 +39,65 @@
maxSize: CACHE_MAX_SIZE,
ttlAutopurge: CACHE_TTL_AUTOPURGE,
ttlResolution: CACHE_TTL_RESOLUTION,
dispose: async (_value: boolean, key: string) => {
dispose: async (_value: boolean, key: string, reason: string) => {
lruEvictionCounter.add(1, { reason });
await cache.delete(key);
},
Comment on lines +42 to 45

⚠️ Potential issue | 🟠 Major

🌐 Web query:

For npm:lru-cache@10.2.0, is the `dispose(value, key, reason)` callback synchronous or awaited, and what `reason` values are emitted for overwrite (set), explicit delete, expiration, and eviction?

💡 Result:

For npm:lru-cache@10.2.0, the `dispose(value, key, reason)` callback is synchronous and not awaited. It is called before the item is fully removed from the cache. The reason values emitted are: overwrite (set): "set"; explicit delete: "delete"; expiration: "expire"; eviction: "evict".
Only delete the backing cache for true evictions.

blocks/loader.ts:355-413 rewrites the same cache keys during revalidation, creating a race condition. When cacheInner.set(key, newValue) is called, the dispose callback is invoked synchronously with reason="set", but the await cache.delete(key) inside is not awaited by lru-cache. This creates a fire-and-forget delete that races against the subsequent cacheInner.put(). If the delete promise resolves after the put completes, it removes the freshly revalidated entry. Check the reason parameter and only delete for "evict" and "expire" reasons, skipping "set" and "delete" cases.

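Given the lru-cache@10 semantics confirmed above (synchronous dispose with reasons `"set" | "delete" | "evict" | "expire"`), the suggested guard can be sketched without the real library:

```typescript
// Only true evictions should remove the backing entry; "set" fires during
// revalidation overwrites, where a fire-and-forget delete races the new put.
type DisposeReason = "set" | "delete" | "evict" | "expire";

const backingDeletes: string[] = [];
const backingDelete = async (key: string): Promise<void> => {
  backingDeletes.push(key); // stands in for cache.delete(key)
};

const dispose = (key: string, reason: DisposeReason): void => {
  if (reason === "evict" || reason === "expire") {
    backingDelete(key).catch((error) => console.error(`loader error ${error}`));
  }
};

dispose("a", "set");    // overwrite during revalidation: backing entry kept
dispose("b", "delete"); // caller already removed the backing entry itself
dispose("c", "evict");  // size pressure: remove from backing cache
dispose("d", "expire"); // TTL: remove from backing cache
console.log(backingDeletes); // ["c", "d"]
```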

}
);

const lruSizeGauge = meter.createObservableGauge("lru_cache_keys", {
description: "number of keys in the LRU cache",
unit: "1",
valueType: ValueType.DOUBLE,
});

const lruBytesGauge = meter.createObservableGauge("lru_cache_bytes", {
description: "total bytes tracked by the LRU cache",
unit: "bytes",
valueType: ValueType.DOUBLE,
});

// deno-lint-ignore no-explicit-any
const activeCaches = new Map<string, LRUCache<string, any>>();

lruSizeGauge.addCallback((observer) => {
for (const [name, lru] of activeCaches) {
observer.observe(lru.size, { cache: name });
}
});

// Warn when LRU disk usage exceeds this fraction of CACHE_MAX_SIZE.
// At this point the LRU is evicting aggressively and disk is nearly full.
const LRU_DISK_WARN_RATIO = parseFloat(
Deno.env.get("LRU_DISK_WARN_RATIO") ?? "0.9",
);

lruBytesGauge.addCallback((observer) => {
for (const [name, lru] of activeCaches) {
observer.observe(lru.calculatedSize, { cache: name });
const ratio = lru.calculatedSize / CACHE_MAX_SIZE;
if (ratio >= LRU_DISK_WARN_RATIO) {
logger.warn(

@cubic-dev-ai cubic-dev-ai bot Mar 19, 2026


P3: This warning path is unthrottled, so a cache that remains above the threshold will log on every gauge callback execution.


`lru_cache: disk usage for cache "${name}" is at ` +
`${Math.round(lru.calculatedSize / 1024 / 1024)}MB / ` +
`${Math.round(CACHE_MAX_SIZE / 1024 / 1024)}MB (${Math.round(ratio * 100)}%). ` +
`LRU is evicting aggressively. Consider increasing CACHE_MAX_SIZE or reducing CACHE_MAX_AGE_S.`,
);
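A minimal throttle addressing the P3 note — log at most once per interval while the cache stays above the threshold (the interval constant and helper shape are assumptions, not from this PR):

```typescript
// Emit a warning at most once per WARN_INTERVAL_MS per cache name, even if
// the gauge callback keeps firing while the cache remains above threshold.
const WARN_INTERVAL_MS = 60_000;
const lastWarnAt = new Map<string, number>();

function warnThrottled(
  name: string,
  log: (msg: string) => void,
  msg: string,
  now: number = Date.now(),
): void {
  const last = lastWarnAt.get(name) ?? 0;
  if (now - last >= WARN_INTERVAL_MS) {
    lastWarnAt.set(name, now);
    log(msg);
  }
}

let count = 0;
const t0 = 1_000_000;
warnThrottled("loader", () => count++, "disk nearly full", t0);          // logs
warnThrottled("loader", () => count++, "disk nearly full", t0 + 1_000);  // suppressed
warnThrottled("loader", () => count++, "disk nearly full", t0 + 61_000); // logs again
console.log(count); // 2
```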
Comment on lines +82 to +86

⚠️ Potential issue | 🟡 Minor

deno fmt --check is already failing on this block.

CI is red for runtime/caches/lrucache.ts:84-86. Please run deno fmt runtime/caches/lrucache.ts before merge.

🪛 GitHub Actions: ci

[error] 84-86: deno fmt --check: Text differed by line endings (diff shown at runtime/caches/lrucache.ts:84-86).


}
}
});

function createLruCacheStorage(cacheStorageInner: CacheStorage): CacheStorage {
const openedCachesByName = new Map<string, Promise<Cache>>();
const caches = createBaseCacheStorage(
cacheStorageInner,
(_cacheName, cacheInner, requestURLSHA1) => {
const existing = openedCachesByName.get(_cacheName);
if (existing) return existing;
const fileCache = new LRUCache(cacheOptions(cacheInner));
return Promise.resolve({
activeCaches.set(_cacheName, fileCache);

@cubic-dev-ai cubic-dev-ai bot Mar 19, 2026


P2: Tracking opened caches in a global map without any cleanup leaks cache instances and stale metric labels for every distinct cache name.


const cache = Promise.resolve({
...baseCache,
delete: async (
request: RequestInfo | URL,
@@ -58,8 +114,7 @@ function createLruCacheStorage(cacheStorageInner: CacheStorage): CacheStorage {
assertNoOptions(options);
const cacheKey = await requestURLSHA1(request);
if (fileCache.has(cacheKey)) {
const result = cacheInner.match(cacheKey);
return result;
return cacheInner.match(cacheKey);
Comment on lines 116 to +117

⚠️ Potential issue | 🟠 Major

🌐 Web query:

For npm:lru-cache@10.2.0, does `cache.has(key)` update recency / LRU ordering by default, or should `cache.get(key)` be used when read hits must keep entries hot?

💡 Result:

For npm:lru-cache@10.2.0, `cache.has(key)` does NOT update recency / LRU ordering by default. Use `cache.get(key)` when read hits must keep entries hot.

Use get() here so cache hits stay recent.

match() is the read path for this index, but has() is a non-touching probe that does not update LRU recency. Hot entries will age out as if idle despite being actively read. Switching to get() keeps eviction aligned with actual traffic patterns.

♻️ Proposed fix
-          if (fileCache.has(cacheKey)) {
+          if (fileCache.get(cacheKey) !== undefined) {
             return cacheInner.match(cacheKey);
           }
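The recency point is easy to demonstrate with a toy LRU built on Map insertion order (a simplified model; the real lru-cache tracks recency internally):

```typescript
// Map iteration follows insertion order, so re-inserting on get() models
// "touching" an entry; has() leaves the order untouched.
class TinyLru<V> {
  private m = new Map<string, V>();
  constructor(private max: number) {}
  get(key: string): V | undefined {
    if (!this.m.has(key)) return undefined;
    const v = this.m.get(key)!;
    this.m.delete(key);
    this.m.set(key, v); // touch: now most-recently used
    return v;
  }
  has(key: string): boolean {
    return this.m.has(key); // probe only: recency unchanged
  }
  set(key: string, value: V): void {
    if (!this.m.has(key) && this.m.size >= this.max) {
      const oldest = this.m.keys().next().value as string;
      this.m.delete(oldest); // evict least-recently used
    }
    this.m.delete(key);
    this.m.set(key, value);
  }
}

const lru = new TinyLru<number>(2);
lru.set("a", 1);
lru.set("b", 2);
lru.has("a");    // non-touching probe: "a" is still least-recent
lru.set("c", 3); // evicts "a"
console.log(lru.get("a")); // undefined — the hot entry aged out anyway
```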

}
return undefined;
},
@@ -96,6 +151,8 @@ function createLruCacheStorage(cacheStorageInner: CacheStorage): CacheStorage {
return cacheInner.put(cacheKey, response);
},
});
openedCachesByName.set(_cacheName, cache);
return cache;
},
);
return caches;