21 changes: 20 additions & 1 deletion blocks/loader.ts
@@ -155,6 +155,9 @@ caches?.open("loader")
.catch(() => maybeCache = undefined);

const MAX_AGE_S = parseInt(Deno.env.get("CACHE_MAX_AGE_S") ?? "60"); // 60 seconds
const CACHE_MAX_ENTRY_SIZE = parseInt(
Deno.env.get("CACHE_MAX_ENTRY_SIZE") ?? "2097152", // 2 MB
) || 2097152;
Comment on lines +168 to +170
⚠️ Potential issue | 🟡 Minor

Keep CACHE_MAX_ENTRY_SIZE=0 distinguishable from "unset".

parseInt(...) || 2097152 turns an explicit 0 into the default, so this knob can't be set to 0 to intentionally disable cache writes or to force the oversized path in staging. Use an explicit parse-failure check instead.

♻️ Proposed fix
-const CACHE_MAX_ENTRY_SIZE = parseInt(
-  Deno.env.get("CACHE_MAX_ENTRY_SIZE") ?? "2097152", // 2 MB
-) || 2097152;
+const parsedCacheMaxEntrySize = Number.parseInt(
+  Deno.env.get("CACHE_MAX_ENTRY_SIZE") ?? "2097152",
+  10,
+);
+const CACHE_MAX_ENTRY_SIZE = Number.isNaN(parsedCacheMaxEntrySize)
+  ? 2097152
+  : parsedCacheMaxEntrySize;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@blocks/loader.ts` around lines 158 - 160, The current initialization of
CACHE_MAX_ENTRY_SIZE uses parseInt(... ) || 2097152 which converts an explicit
"0" env value into the default; change it to treat "unset" and parse-failures
separately: read the raw value from Deno.env.get("CACHE_MAX_ENTRY_SIZE"), if raw
is undefined use the default 2097152; otherwise parse with parseInt (or
Number.parseInt) and if the result is NaN fall back to 2097152, but if the
parsed value is 0 keep it as 0. Update the logic around the CACHE_MAX_ENTRY_SIZE
constant (and the parseInt call) to implement this explicit undefined/NaN
handling so "0" remains distinguishable from "unset".
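The distinction can be sketched as standalone helpers (the helper names are hypothetical, not part of the PR); the two strategies differ only for an explicit "0" or an unparseable value:

```typescript
const DEFAULT_MAX_ENTRY_SIZE = 2097152; // 2 MB

// Buggy variant from the diff: `|| default` coerces an explicit 0 to the default.
function parseSizeWithOr(raw: string | undefined): number {
  return parseInt(raw ?? String(DEFAULT_MAX_ENTRY_SIZE), 10) || DEFAULT_MAX_ENTRY_SIZE;
}

// Fixed variant: only NaN (unset or garbage) falls back; an explicit 0 survives.
function parseSizeExplicit(raw: string | undefined): number {
  const parsed = Number.parseInt(raw ?? String(DEFAULT_MAX_ENTRY_SIZE), 10);
  return Number.isNaN(parsed) ? DEFAULT_MAX_ENTRY_SIZE : parsed;
}
```

With `"0"` the first helper returns 2097152 while the second returns 0, which is exactly the observability knob the review wants to keep.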


// Reuse TextEncoder instance to avoid repeated instantiation
const textEncoder = new TextEncoder();
@@ -248,7 +251,14 @@ const wrapLoader = (
!shouldNotCache && ctx.vary?.push(cacheKeyValue);

status = "bypass";
stats.cache.add(1, { status, loader });
const bypassReason = isCacheNoStore
? "no-store"
: isCacheNoCache
? "no-cache"
: isCacheKeyNull
? "null-key"
: "disabled";
stats.cache.add(1, { status, loader, reason: bypassReason });

RequestContext?.signal?.throwIfAborted();
return await handler(props, req, ctx);
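The nested ternary maps the first matching bypass condition to a reason label. As a plain sketch (the flag names come from the diff; the helper itself is hypothetical):

```typescript
type BypassReason = "no-store" | "no-cache" | "null-key" | "disabled";

// First matching condition wins, mirroring the nested ternary in the diff.
function bypassReason(opts: {
  isCacheNoStore: boolean;
  isCacheNoCache: boolean;
  isCacheKeyNull: boolean;
}): BypassReason {
  if (opts.isCacheNoStore) return "no-store";
  if (opts.isCacheNoCache) return "no-cache";
  if (opts.isCacheKeyNull) return "null-key";
  return "disabled";
}
```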
@@ -297,6 +307,15 @@ const wrapLoader = (
// Serialize and encode once on the main thread.
const jsonStringEncoded = textEncoder.encode(JSON.stringify(json));

// Skip caching oversized entries to protect disk and memory.
// Also evict any existing stale entry so it doesn't stay pinned forever.
if (jsonStringEncoded.length > CACHE_MAX_ENTRY_SIZE) {
cache.delete(request).catch((error) =>
logger.error(`loader error ${error}`)
);
return json;
Comment on lines +322 to +326
⚠️ Potential issue | 🟠 Major

Await the eviction in the oversized branch.

This path returns before the delete settles. In the stale-while-revalidate flow, another request can still hit the old stale entry even though this refresh already decided it must be removed.

🐛 Proposed fix
           if (jsonStringEncoded.length > CACHE_MAX_ENTRY_SIZE) {
-            cache.delete(request).catch((error) =>
+            await cache.delete(request).catch((error) =>
               logger.error(`loader error ${error}`)
             );
             return json;
           }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@blocks/loader.ts` around lines 312 - 316, The oversized-entry branch returns
before the cache eviction completes; change the un-awaited promise
cache.delete(request).catch(...) so the delete is awaited before returning.
Locate the oversized check that uses jsonStringEncoded and CACHE_MAX_ENTRY_SIZE
and update the branch around cache.delete(request) (referencing cache.delete and
logger.error) to await the deletion (e.g., use await or a try/catch around await
cache.delete(request)) and log any error, then return json only after the await
completes.

}

const expires = new Date(Date.now() + (cacheMaxAge * 1e3))
.toUTCString();
const headerPairs: [string, string][] = [
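The race the reviewer describes can be sketched with an in-memory stand-in for the Cache API (all names hypothetical): a fire-and-forget delete returns before eviction settles, so a concurrent reader can still observe the stale entry, while an awaited delete cannot.

```typescript
const store = new Map<string, string>();

// Simulates cache.delete() settling asynchronously, on a later timer tick.
function slowDelete(key: string): Promise<void> {
  return new Promise((resolve) =>
    setTimeout(() => {
      store.delete(key);
      resolve();
    }, 10)
  );
}

// Fire-and-forget: returns immediately while the eviction is still pending.
function refreshFireAndForget(key: string): void {
  void slowDelete(key).catch(() => {});
}

// Awaited: returns only after the eviction has settled.
async function refreshAwaited(key: string): Promise<void> {
  await slowDelete(key).catch(() => {});
}

async function demo(): Promise<[boolean, boolean]> {
  store.set("a", "stale");
  refreshFireAndForget("a");
  const visibleAfterFireAndForget = store.has("a"); // stale entry still readable
  await new Promise((r) => setTimeout(r, 20)); // let the pending delete settle
  store.set("b", "stale");
  await refreshAwaited("b");
  const visibleAfterAwait = store.has("b"); // already evicted on return
  return [visibleAfterFireAndForget, visibleAfterAwait];
}
```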
2 changes: 1 addition & 1 deletion runtime/caches/fileSystem.ts
@@ -106,7 +106,7 @@ function createFileSystemCache(): CacheStorage {
if (
FILE_SYSTEM_CACHE_DIRECTORY && !existsSync(FILE_SYSTEM_CACHE_DIRECTORY)
) {
await Deno.mkdirSync(FILE_SYSTEM_CACHE_DIRECTORY, { recursive: true });
await Deno.mkdir(FILE_SYSTEM_CACHE_DIRECTORY, { recursive: true });
}
isCacheInitialized = true;
} catch (err) {
17 changes: 11 additions & 6 deletions runtime/caches/lrucache.ts
@@ -18,10 +18,11 @@ const CACHE_TTL_AUTOPURGE = Deno.env.get("CACHE_TTL_AUTOPURGE") === "true"; // c
const CACHE_TTL_RESOLUTION = parseInt(
Deno.env.get("CACHE_TTL_RESOLUTION") ?? "1000",
); // updates the lru cache timer every 1 second
// Additional time-to-live increment in milliseconds to extend the cache expiration beyond the response's Expires header.
// If not set, the cache will use only the expiration timestamp from response headers
// How long stale content remains serveable (and stays on disk) beyond its expires header.
// Default: 1 hour — long enough for low-traffic sites to keep serving cached content across
// quiet periods while background revalidation catches up.
const STALE_TTL_PERIOD = parseInt(
Deno.env.get("STALE_TTL_PERIOD") ?? "30000",
Deno.env.get("STALE_TTL_PERIOD") ?? "3600000", // 1h
);

const cacheOptions = (cache: Cache) => (
@@ -37,11 +37,14 @@
);

function createLruCacheStorage(cacheStorageInner: CacheStorage): CacheStorage {
const openedCachesByName = new Map<string, Promise<Cache>>();
const caches = createBaseCacheStorage(
cacheStorageInner,
(_cacheName, cacheInner, requestURLSHA1) => {
const existing = openedCachesByName.get(_cacheName);
if (existing) return existing;
const fileCache = new LRUCache(cacheOptions(cacheInner));
return Promise.resolve({
const cache = Promise.resolve({
...baseCache,
delete: async (
request: RequestInfo | URL,
@@ -58,8 +62,7 @@ function createLruCacheStorage(cacheStorageInner: CacheStorage): CacheStorage {
assertNoOptions(options);
const cacheKey = await requestURLSHA1(request);
if (fileCache.has(cacheKey)) {
const result = cacheInner.match(cacheKey);
return result;
return cacheInner.match(cacheKey);
Comment on lines 116 to +117
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

For npm:lru-cache@10.2.0, does cache.has(key) update recency / LRU ordering by default, or should cache.get(key) be used when read hits must keep entries hot?

💡 Result:

For npm:lru-cache@10.2.0, cache.has(key) does NOT update recency / LRU ordering by default. Use cache.get(key) when read hits must keep entries hot.

Use get() here so cache hits stay recent.

match() is the read path for this index, but has() is a non-touching probe that does not update LRU recency. Hot entries will age out as if idle despite being actively read. Switching to get() keeps eviction aligned with actual traffic patterns.

♻️ Proposed fix
-          if (fileCache.has(cacheKey)) {
+          if (fileCache.get(cacheKey) !== undefined) {
             return cacheInner.match(cacheKey);
           }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@runtime/caches/lrucache.ts` around lines 64 - 65, the code probes the LRU
index with fileCache.has(cacheKey) before calling cacheInner.match(cacheKey);
has() is a non-touching probe, so read hits never update recency and hot
entries age out as if idle. Replace the has() check with a call that promotes
the entry, e.g. test fileCache.get(cacheKey) !== undefined, and keep returning
cacheInner.match(cacheKey) on a hit so eviction order tracks actual traffic.

}
return undefined;
},
@@ -96,6 +99,8 @@ function createLruCacheStorage(cacheStorageInner: CacheStorage): CacheStorage {
return cacheInner.put(cacheKey, response);
},
});
openedCachesByName.set(_cacheName, cache);
return cache;
},
);
return caches;
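The recency point from the review can be demonstrated with a toy LRU (an illustration only, not the npm lru-cache implementation), using a Map's insertion order as the recency order: has() leaves that order untouched, while get() re-inserts the entry at the most-recent end.

```typescript
class ToyLru<K, V> {
  private map = new Map<K, V>();
  constructor(private capacity: number) {}

  has(key: K): boolean {
    return this.map.has(key); // non-touching probe: recency unchanged
  }

  get(key: K): V | undefined {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key)!;
    this.map.delete(key); // re-insert to mark as most recently used
    this.map.set(key, value);
    return value;
  }

  set(key: K, value: V): void {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.capacity) {
      // Evict the least recently used entry (first in insertion order).
      const oldest = this.map.keys().next().value as K;
      this.map.delete(oldest);
    }
  }
}
```

With capacity 2, inserting `a` then `b`, probing `a` with has() and adding `c` evicts `a`; doing the same with get() promotes `a`, so `b` is evicted instead — exactly the traffic-alignment the review asks for.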