Commit c3c622a

fix: address review round 4 — simplify parallel(), add Sentry cache spans

Review comments addressed:

#1: issue/list.ts FRESH_ALIASES — verified no conflict; only log/list.ts has `f: 'follow'`, and it doesn't use FRESH_ALIASES.
#2: cacheHeuristic 0.1 vs CLEANUP_PROBABILITY — different semantics, not shared (RFC heuristic vs. probabilistic cleanup trigger).
#3: Simplified parallel() — replaced the custom wrapper with p-limit's built-in .map() method (`cacheIO.map(items, fn)`) at all 3 call sites.
#4: evictExcessEntries/deleteExpiredEntries — run both in parallel via Promise.all(), since they operate on disjoint file sets.
#5: Added Sentry instrumentation — withCacheSpan() helper in telemetry.ts; cache.lookup and cache.store spans in response-cache.ts with URL attrs.
1 parent 2cee148 commit c3c622a

File tree

3 files changed: +72 -51 lines

AGENTS.md

Lines changed: 1 addition & 1 deletion
@@ -635,7 +635,7 @@ mock.module("./some-module", () => ({
 * **Sentry API: events require org+project, issues have legacy global endpoint**: Sentry API scoping: Events require org+project in URL path (`/projects/{org}/{project}/events/{id}/`). Issues use legacy global endpoint (`/api/0/issues/{id}/`) without org context. Traces need only org (`/organizations/{org}/trace/{traceId}/`). Two-step lookup for events: fetch issue → extract org/project from response → fetch event. Cross-project event search possible via Discover endpoint `/organizations/{org}/events/` with `query=id:{eventId}`.
 
 <!-- lore:019cb6ab-ab98-7a9c-a25f-e154a5adbbe1 -->
-* **Sentry CLI authenticated fetch architecture with response caching**: `createAuthenticatedFetch()` in `sentry-client.ts` wraps native `fetch()` with auth injection, 30s timeout, retry with backoff (max 2), 401 refresh, and HTTP span tracing. Response caching (src/lib/response-cache.ts) integrates BEFORE auth/retry via `http-cache-semantics` (RFC 7234) with filesystem storage at `~/.sentry/cache/responses/`. URL-based fallback TTL tiers: immutable (1hr), stable (5min), volatile (60s), no-cache (0). Only GET 2xx cached. `--fresh` flag and `SENTRY_NO_CACHE=1` bypass cache. Cache cleared on login/logout via `clearAuth()` → `clearResponseCache()`. Corrupted `CachePolicy` entries trigger cache miss with best-effort cleanup.
+* **Sentry CLI authenticated fetch architecture with response caching**: `createAuthenticatedFetch()` in `sentry-client.ts` wraps native `fetch()` with auth injection, 30s timeout, retry with backoff (max 2), 401 refresh, and HTTP span tracing. Response caching (src/lib/response-cache.ts) integrates BEFORE auth/retry via `http-cache-semantics` (RFC 7234) with filesystem storage at `~/.sentry/cache/responses/`. URL-based fallback TTL tiers: immutable (24hr), stable (5min), volatile (60s), no-cache (0). URL patterns stored as `Record<TtlTier, RegExp[]>`. Only GET 2xx cached. `--fresh` flag (with exported `FRESH_ALIASES` constant shared across all 15+ command files) and `SENTRY_NO_CACHE=1` bypass cache. Cache entries store `expiresAt` for O(1) cleanup. All cache I/O parallelized with concurrency limiter (max 8). Cache cleared on login/logout via `clearAuth()` → `clearResponseCache()`. Corrupted `CachePolicy` entries trigger cache miss with best-effort cleanup.
 
 <!-- lore:019c8c72-b871-7d5e-a1a4-5214359a5a77 -->
 * **Sentry CLI has two distribution channels with different runtimes**: Sentry CLI ships two ways: (1) Standalone binary via `Bun.build()` with `compile: true`. (2) npm package via esbuild producing CJS `dist/bin.cjs` for Node 22+, with Bun API polyfills from `script/node-polyfills.ts`. `Bun.$` has NO polyfill — use `execSync` instead. `require()` in ESM is safe (Bun native, esbuild resolves at bundle time).
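The URL-based fallback TTL tiers described in the caching bullet can be sketched as follows. This is an illustrative, dependency-free sketch: `TtlTier`, `TTL_SECONDS`, `TTL_TIER_PATTERNS`, `resolveFallbackTtl`, and the example regexes are hypothetical names and patterns, not the actual identifiers in `src/lib/response-cache.ts`.

```typescript
// Hypothetical sketch of URL-pattern-based fallback TTLs, checked only when
// the server response carries no cache headers. Tier names and durations
// mirror the note above; everything else is illustrative.
type TtlTier = "immutable" | "stable" | "volatile" | "no-cache";

const TTL_SECONDS: Record<TtlTier, number> = {
  immutable: 24 * 60 * 60, // 24hr
  stable: 5 * 60,          // 5min
  volatile: 60,            // 60s
  "no-cache": 0,
};

// URL patterns stored as Record<TtlTier, RegExp[]>, checked in declaration
// order so the longest-lived tier wins on overlap. Example patterns only.
const TTL_TIER_PATTERNS: Record<TtlTier, RegExp[]> = {
  immutable: [/\/events\/[0-9a-f]{32}\//], // a specific event never changes
  stable: [/\/projects\//],
  volatile: [/\/issues\//],
  "no-cache": [/\/auth\//],
};

function resolveFallbackTtl(url: string): number {
  for (const tier of Object.keys(TTL_TIER_PATTERNS) as TtlTier[]) {
    if (TTL_TIER_PATTERNS[tier].some((re) => re.test(url))) {
      return TTL_SECONDS[tier];
    }
  }
  return TTL_SECONDS.volatile; // conservative default for unmatched URLs
}
```

The default for unmatched URLs is a design choice: a short volatile TTL keeps unknown endpoints from serving stale data for long.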

src/lib/response-cache.ts

Lines changed: 52 additions & 50 deletions
@@ -28,6 +28,7 @@ import CachePolicy from "http-cache-semantics";
 import pLimit from "p-limit";
 
 import { getConfigDir } from "./db/index.js";
+import { withCacheSpan } from "./telemetry.js";
 
 // ---------------------------------------------------------------------------
 // TTL tiers — used as fallback when the server sends no cache headers
@@ -341,31 +342,37 @@ export async function getCachedResponse(
     return;
   }
 
-  const key = buildCacheKey(method, url);
-  const entry = await readCacheEntry(key);
-  if (!entry) {
-    return;
-  }
-
-  try {
-    const policy = CachePolicy.fromObject(entry.policy);
-    if (!isEntryFresh(policy, entry, requestHeaders, url)) {
-      return;
-    }
+  return await withCacheSpan(
+    "cache.lookup",
+    async () => {
+      const key = buildCacheKey(method, url);
+      const entry = await readCacheEntry(key);
+      if (!entry) {
+        return;
+      }
 
-    const responseHeaders = buildResponseHeaders(policy, entry);
-    return new Response(JSON.stringify(entry.body), {
-      status: entry.status,
-      headers: responseHeaders,
-    });
-  } catch {
-    // Corrupted or version-incompatible policy object — treat as cache miss.
-    // Best-effort cleanup of the broken entry.
-    unlink(cacheFilePath(key)).catch(() => {
-      // Ignored — fire-and-forget
-    });
-    return;
-  }
+      try {
+        const policy = CachePolicy.fromObject(entry.policy);
+        if (!isEntryFresh(policy, entry, requestHeaders, url)) {
+          return;
+        }
+
+        const responseHeaders = buildResponseHeaders(policy, entry);
+        return new Response(JSON.stringify(entry.body), {
+          status: entry.status,
+          headers: responseHeaders,
+        });
+      } catch {
+        // Corrupted or version-incompatible policy object — treat as cache miss.
+        // Best-effort cleanup of the broken entry.
+        unlink(cacheFilePath(key)).catch(() => {
+          // Ignored — fire-and-forget
+        });
+        return;
+      }
+    },
+    { "cache.url": url }
+  );
 }
 
 /**
@@ -424,7 +431,11 @@ export async function storeCachedResponse(
   }
 
   try {
-    await writeResponseToCache(method, url, requestHeaders, response);
+    await withCacheSpan(
+      "cache.store",
+      () => writeResponseToCache(method, url, requestHeaders, response),
+      { "cache.url": url }
+    );
   } catch {
     // Cache write failures are non-fatal — silently ignore
   }
@@ -498,17 +509,8 @@ export async function clearResponseCache(): Promise<void> {
 /** Concurrency limit for parallel cache file I/O operations */
 const CACHE_IO_CONCURRENCY = 8;
 
-/**
- * Run an async function over items with bounded concurrency.
- * Uses p-limit to prevent overwhelming the filesystem with simultaneous reads.
- */
-async function parallel<T>(
-  items: readonly T[],
-  fn: (item: T) => Promise<void>
-): Promise<void> {
-  const limit = pLimit(CACHE_IO_CONCURRENCY);
-  await Promise.all(items.map((item) => limit(() => fn(item))));
-}
+/** Shared concurrency limiter for all cache I/O — created once, reused across calls */
+const cacheIO = pLimit(CACHE_IO_CONCURRENCY);
 
 // ---------------------------------------------------------------------------
 // Cache cleanup
@@ -539,11 +541,11 @@ async function cleanupCache(): Promise<void> {
 
   const entries = await collectEntryMetadata(cacheDir, jsonFiles);
 
-  // Delete expired entries
-  await deleteExpiredEntries(cacheDir, entries);
-
-  // Enforce max entry count — evict oldest first
-  await evictExcessEntries(cacheDir, entries);
+  // Both operations are best-effort — run them in parallel without blocking
+  await Promise.all([
+    deleteExpiredEntries(cacheDir, entries),
+    evictExcessEntries(cacheDir, entries),
+  ]);
 }
 
 /** Metadata for a cache entry, used for cleanup decisions */
@@ -563,7 +565,7 @@ async function collectEntryMetadata(
   const entries: EntryMetadata[] = [];
   const now = Date.now();
 
-  await parallel(jsonFiles, async (file) => {
+  await cacheIO.map(jsonFiles, async (file) => {
     const filePath = join(cacheDir, file);
     try {
       const raw = await readFile(filePath, "utf-8");
@@ -591,11 +593,11 @@ async function deleteExpiredEntries(
   entries: EntryMetadata[]
 ): Promise<void> {
   const expired = entries.filter((e) => e.expired);
-  await parallel(expired, async (entry) => {
-    await unlink(join(cacheDir, entry.file)).catch(() => {
+  await cacheIO.map(expired, (entry) =>
+    unlink(join(cacheDir, entry.file)).catch(() => {
       // Best-effort: file may have been deleted by another process
-    });
-  });
+    })
+  );
 }
 
 /** Evict the oldest entries when over the max count */
@@ -610,9 +612,9 @@ async function evictExcessEntries(
 
   remaining.sort((a, b) => a.createdAt - b.createdAt);
   const toEvict = remaining.slice(0, remaining.length - MAX_CACHE_ENTRIES);
-  await parallel(toEvict, async (entry) => {
-    await unlink(join(cacheDir, entry.file)).catch(() => {
+  await cacheIO.map(toEvict, (entry) =>
+    unlink(join(cacheDir, entry.file)).catch(() => {
      // Best-effort eviction
-    });
-  });
+    })
+  );
 }
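The shared `cacheIO` limiter above relies on p-limit's `.map()` helper. As a rough illustration of what that buys over an unbounded `Promise.all`, here is a dependency-free sketch of the same bounded-concurrency pattern — `createLimiter` is a hypothetical stand-in, not p-limit's actual implementation:

```typescript
// Dependency-free sketch approximating pLimit(n) plus limit.map():
// at most `concurrency` callbacks are in flight at any moment.
function createLimiter(concurrency: number) {
  let active = 0;
  const waiters: (() => void)[] = [];

  const run = async <T>(fn: () => Promise<T>): Promise<T> => {
    // Wait for a free slot if we are at the concurrency ceiling.
    while (active >= concurrency) {
      await new Promise<void>((resolve) => waiters.push(resolve));
    }
    active++;
    try {
      return await fn();
    } finally {
      active--;
      waiters.shift()?.(); // wake one queued task, if any
    }
  };

  return {
    run,
    // Equivalent in spirit to p-limit's limit.map(items, fn): apply fn to
    // every item, preserving input order in the resolved array.
    map: <T, R>(items: readonly T[], fn: (item: T) => Promise<R>): Promise<R[]> =>
      Promise.all(items.map((item) => run(() => fn(item)))),
  };
}
```

Creating the limiter once at module scope (as the diff does with `cacheIO`) means the concurrency cap applies across all cleanup operations combined, not per call site.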

src/lib/telemetry.ts

Lines changed: 19 additions & 0 deletions
@@ -945,3 +945,22 @@ export function withFsSpan<T>(
 ): Promise<T> {
   return withTracing(operation, "file", fn);
 }
+
+/**
+ * Wrap a cache operation with a span for tracing.
+ *
+ * Creates a child span under the current active span to track
+ * response cache hit/miss/store operations.
+ *
+ * @param operation - Name of the operation (e.g., "cache.lookup", "cache.store")
+ * @param fn - The function that performs the cache operation
+ * @param attributes - Optional span attributes (e.g., url, cache.hit)
+ * @returns The result of the function
+ */
+export function withCacheSpan<T>(
+  operation: string,
+  fn: () => T | Promise<T>,
+  attributes?: Record<string, string | number | boolean>
+): Promise<T> {
+  return withTracing(operation, "cache", fn, attributes);
+}
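`withCacheSpan()` delegates to `withTracing()`, which is not part of this diff. As a hedged illustration of the wrapper pattern it follows — run a callback, time it, and attach an operation name, category, and attributes — here is a toy, dependency-free stand-in (`withSpan` and `recordedSpans` are hypothetical; the real `withTracing` creates a Sentry child span rather than a local record):

```typescript
// Toy stand-in for the span-wrapper pattern used by withCacheSpan().
// Records span-shaped data locally instead of emitting to Sentry.
interface SpanRecord {
  operation: string;
  category: string;
  attributes: Record<string, string | number | boolean>;
  durationMs: number;
  ok: boolean;
}

const recordedSpans: SpanRecord[] = [];

async function withSpan<T>(
  operation: string,
  category: string,
  fn: () => T | Promise<T>,
  attributes: Record<string, string | number | boolean> = {}
): Promise<T> {
  const start = Date.now();
  let ok = true;
  try {
    return await fn(); // handles both sync and async callbacks
  } catch (err) {
    ok = false; // mark the span failed, but let the error propagate
    throw err;
  } finally {
    recordedSpans.push({
      operation,
      category,
      attributes,
      durationMs: Date.now() - start,
      ok,
    });
  }
}
```

A cache lookup would then be wrapped as `withSpan("cache.lookup", "cache", () => readEntry(key), { "cache.url": url })`, mirroring the call sites added in response-cache.ts.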
