Merged
8 changes: 8 additions & 0 deletions frontend/package-lock.json

Some generated files are not rendered by default.

1 change: 1 addition & 0 deletions frontend/package.json
@@ -42,6 +42,7 @@
"@types/node": "^20",
"@types/react": "^18",
"@types/react-dom": "^18",
"@types/js-yaml": "^4.0.9",
"autoprefixer": "^10.0.1",
"eslint": "^8",
"eslint-config-next": "14.2.0",
277 changes: 277 additions & 0 deletions frontend/tests/e2e/perf/perf-budget.spec.ts
@@ -0,0 +1,277 @@
import { expect, test } from '@playwright/test';
import type { Browser } from '@playwright/test';
import fs from 'node:fs';
import path from 'node:path';
import yaml from 'js-yaml';

type WaitConfig = {
type: 'selector' | 'networkidle';
selector?: string;
timeout_ms?: number;
idle_ms?: number;
};

type MetricConfig = {
id: string;
aggregation: string;
threshold: number;
unit: string;
};

type JourneyConfig = {
id: string;
description?: string;
from_url: string;
to_url: string;
waits?: WaitConfig[];
metrics?: MetricConfig[];
};

type PerfBudgetConfig = {
run_count?: number;
journeys?: JourneyConfig[];
};

type JourneyMetricSummary = {
quantile: string;
value_ms: number;
threshold_ms?: number;
passed: boolean;
};

type JourneyResult = {
id: string;
description?: string;
runs: Array<{
navigation_duration_ms: number;
largest_contentful_paint_ms: number;
}>;
aggregated: {
navigation_duration?: JourneyMetricSummary;
largest_contentful_paint?: JourneyMetricSummary;
};
};

const percentile = (values: number[], percentileValue: number): number => {
if (!values.length) {
return 0;
}
const sorted = [...values].sort((a, b) => a - b);
const rank = (sorted.length - 1) * percentileValue;
const lowerIndex = Math.floor(rank);
const upperIndex = Math.ceil(rank);
if (lowerIndex === upperIndex) {
return sorted[lowerIndex];
}
const weight = rank - lowerIndex;
return sorted[lowerIndex] + (sorted[upperIndex] - sorted[lowerIndex]) * weight;
};
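The linear interpolation above can be sanity-checked in isolation. This is a standalone restatement of `percentile` for illustration, not part of the diff:

```typescript
// Standalone copy of the linear-interpolation percentile for illustration.
const percentile = (values: number[], percentileValue: number): number => {
  if (!values.length) {
    return 0; // empty input collapses to 0 rather than NaN
  }
  const sorted = [...values].sort((a, b) => a - b);
  const rank = (sorted.length - 1) * percentileValue;
  const lowerIndex = Math.floor(rank);
  const upperIndex = Math.ceil(rank);
  if (lowerIndex === upperIndex) {
    return sorted[lowerIndex];
  }
  // Interpolate between the two neighboring ranks.
  const weight = rank - lowerIndex;
  return sorted[lowerIndex] + (sorted[upperIndex] - sorted[lowerIndex]) * weight;
};
```

For example, the p50 of four samples falls halfway between the second and third sorted values.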

const loadPerfBudget = (): PerfBudgetConfig => {
const configPath = process.env.PERF_BUDGET_FILE
? path.resolve(process.cwd(), process.env.PERF_BUDGET_FILE)
: path.resolve(__dirname, '../../../../perf-budget.yml');

const raw = fs.readFileSync(configPath, 'utf8');
return yaml.load(raw) as PerfBudgetConfig;
};
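The `PerfBudgetConfig` shape above implies a `perf-budget.yml` along these lines (the journey id, URLs, and thresholds here are invented for illustration; only the field names come from the types):

```yaml
# Hypothetical perf-budget.yml matching the PerfBudgetConfig types above.
run_count: 5
journeys:
  - id: home-to-configurator
    description: Homepage to configurator
    from_url: http://localhost:3000/
    to_url: http://localhost:3000/configurator
    waits:
      - type: selector
        selector: '[data-testid="configurator-root"]'
        timeout_ms: 30000
    metrics:
      - id: navigation-duration
        aggregation: p90
        threshold: 2000
        unit: ms
      - id: largest-contentful-paint
        aggregation: p95
        threshold: 2500
        unit: ms
```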
Comment on lines +70 to +77 (Contributor)
🧹 Nitpick | 🔵 Trivial
🧩 Analysis chain

Add error handling for file operations and config validation.

The function lacks error handling for:

  • Missing or unreadable config files
  • YAML parsing errors
  • Invalid config structure after loading

Additionally, __dirname (used on line 73) may not be available in ES-module contexts or under certain TypeScript configurations.

Apply this diff to add error handling:

 const loadPerfBudget = (): PerfBudgetConfig => {
   const configPath = process.env.PERF_BUDGET_FILE
     ? path.resolve(process.cwd(), process.env.PERF_BUDGET_FILE)
     : path.resolve(__dirname, '../../../../perf-budget.yml');
 
+  if (!fs.existsSync(configPath)) {
+    throw new Error(`Performance budget config not found at: ${configPath}`);
+  }
+
+  try {
     const raw = fs.readFileSync(configPath, 'utf8');
-    return yaml.load(raw) as PerfBudgetConfig;
+    const config = yaml.load(raw) as PerfBudgetConfig;
+    
+    if (!config || typeof config !== 'object') {
+      throw new Error('Invalid config structure');
+    }
+    
+    return config;
+  } catch (error) {
+    throw new Error(`Failed to load perf budget config: ${error instanceof Error ? error.message : String(error)}`);
+  }
 };

Run the following script to verify if __dirname is available in the TypeScript/build configuration:


🏁 Script executed:

#!/bin/bash
# Check if the project uses ES modules and how __dirname is handled

# Check package.json for module type
echo "=== Checking package.json for module type ==="
fd -t f '^package\.json$' frontend | head -1 | xargs cat | jq -r '.type // "commonjs"'

# Check tsconfig.json for module settings
echo -e "\n=== Checking TypeScript module configuration ==="
fd -t f '^tsconfig.*\.json$' frontend | head -3 | xargs -I {} sh -c 'echo "File: {}" && cat {} | jq -r ".compilerOptions.module // \"not set\""'

# Search for other usages of __dirname in test files
echo -e "\n=== Other __dirname usages in test files ==="
rg -t ts '__dirname' frontend/tests/ -C 2



Optional: Add error handling and validation in loadPerfBudget.

The loader may throw on missing files or invalid YAML—wrapping readFileSync and yaml.load in a try/catch and validating the parsed object can provide clearer failure messages.
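As a hedged sketch of the __dirname concern: in an ES-module context, the module directory can be derived from import.meta.url instead. This keeps the original relative config path and env override; nothing here is confirmed by the PR itself:

```typescript
// Sketch: resolving the perf-budget path without __dirname (ESM context assumed).
import path from 'node:path';
import { fileURLToPath } from 'node:url';

// ESM equivalent of __dirname.
const moduleDir = path.dirname(fileURLToPath(import.meta.url));

// Same resolution order as the original loader: env override first,
// then the default location relative to this module.
const configPath = process.env.PERF_BUDGET_FILE
  ? path.resolve(process.cwd(), process.env.PERF_BUDGET_FILE)
  : path.resolve(moduleDir, '../../../../perf-budget.yml');
```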



const toQuantile = (aggregation: string): number | undefined => {
const match = /^p(\d{1,2})$/i.exec(aggregation.trim());
if (!match) {
return undefined;
}
const percentileNumber = Number(match[1]);
return percentileNumber / 100;
};

const ensureResultsDir = (): string => {
const dir = process.env.PERF_RESULTS_DIR
? path.resolve(process.cwd(), process.env.PERF_RESULTS_DIR)
: path.resolve(__dirname, '../../../../perf-results');
fs.mkdirSync(dir, { recursive: true });
return dir;
};

const writeJourneyResult = (dir: string, result: JourneyResult): void => {
const filePath = path.join(dir, `${result.id}.json`);
fs.writeFileSync(filePath, JSON.stringify(result, null, 2), 'utf8');
};

const attachJourneyResult = async (result: JourneyResult): Promise<void> => {
await test.info().attach(`${result.id}-metrics.json`, {
body: Buffer.from(JSON.stringify(result, null, 2)),
contentType: 'application/json',
});
};
Comment on lines +79 to +106 (Contributor)
🧹 Nitpick | 🔵 Trivial

Consider adding error handling to file write operations.

The writeJourneyResult function (lines 96-99) lacks error handling for file write failures. While not critical for test execution, it could cause silent failures in results persistence.

Apply this diff to add error handling:

 const writeJourneyResult = (dir: string, result: JourneyResult): void => {
   const filePath = path.join(dir, `${result.id}.json`);
-  fs.writeFileSync(filePath, JSON.stringify(result, null, 2), 'utf8');
+  try {
+    fs.writeFileSync(filePath, JSON.stringify(result, null, 2), 'utf8');
+  } catch (error) {
+    console.warn(`Failed to write journey result to ${filePath}:`, error);
+  }
 };


const collectNavigationMetrics = async (
browser: Browser,
fromUrl: string,
toUrl: string,
waits: WaitConfig[] | undefined,
): Promise<{ navigation: number; lcp: number }> => {
const context = await browser.newContext();

await context.addInitScript(() => {
(window as typeof window & {
__lcpEntries?: PerformanceEntry[];
__lcpObserver?: PerformanceObserver;
}).__lcpEntries = [];
});

const page = await context.newPage();
await page.goto(fromUrl, { waitUntil: 'networkidle' });

await page.evaluate(() => {
const globalWindow = window as typeof window & {
__lcpEntries?: PerformanceEntry[];
__lcpObserver?: PerformanceObserver;
};

globalWindow.__lcpEntries = [];
if (globalWindow.__lcpObserver) {
globalWindow.__lcpObserver.disconnect();
}
globalWindow.__lcpObserver = new PerformanceObserver((entryList) => {
const entries = entryList.getEntries();
globalWindow.__lcpEntries = [
...(globalWindow.__lcpEntries ?? []),
...entries,
];
});
globalWindow.__lcpObserver.observe({ type: 'largest-contentful-paint', buffered: true });

performance.clearResourceTimings();
performance.clearMeasures();
performance.clearMarks();
});

await page.goto(toUrl, { waitUntil: 'networkidle' });

if (waits) {
for (const wait of waits) {
if (wait.type === 'selector' && wait.selector) {
await page.waitForSelector(wait.selector, {
state: 'attached',
timeout: wait.timeout_ms ?? 30_000,
});
}
if (wait.type === 'networkidle') {
await page.waitForLoadState('networkidle', { timeout: wait.timeout_ms ?? 30_000 });
if (wait.idle_ms) {
await page.waitForTimeout(wait.idle_ms);
}
}
}
}

await page.waitForTimeout(500);
Comment on line +169 (Contributor)
🧹 Nitpick | 🔵 Trivial

Document or make the fixed delay configurable.

Line 169 introduces an arbitrary 500ms wait after processing configured waits. This magic number lacks explanation for why this specific duration is needed.

Consider either:

  1. Adding a comment explaining why this delay is necessary, or
  2. Making it configurable via the journey configuration, or
  3. Removing it if it's not essential
+  // Brief delay to ensure all performance entries are captured
   await page.waitForTimeout(500);
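One configurable variant (option 2 above) could look like the following; `settle_ms` is a hypothetical field, not something the PR defines:

```typescript
// Sketch: make the post-wait settle delay journey-configurable.
// `settle_ms` is a hypothetical addition to JourneyConfig; 500 ms stays the default.
type JourneySettle = { settle_ms?: number };

const settleDelayMs = (journey: JourneySettle): number => journey.settle_ms ?? 500;

// usage inside the journey loop:
//   await page.waitForTimeout(settleDelayMs(journey));
```

Note that `??` (rather than `||`) lets a journey opt out entirely with `settle_ms: 0`.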


const navigationDuration = await page.evaluate(() => {
const entries = performance.getEntriesByType('navigation');
const last = entries[entries.length - 1] as PerformanceNavigationTiming | undefined;
return last ? last.duration : Number.NaN;
});

const lcp = await page.evaluate(() => {
const globalWindow = window as typeof window & {
__lcpEntries?: PerformanceEntry[];
};
const entries = (globalWindow.__lcpEntries ?? []) as Array<PerformanceEntry & { renderTime?: number; loadTime?: number; startTime: number }>;
if (!entries.length) {
return Number.NaN;
}
return entries.reduce((acc, entry) => {
const candidate = entry.startTime ?? entry.renderTime ?? entry.loadTime ?? 0;
return Math.max(acc, candidate);
}, 0);
});

await context.close();

return { navigation: navigationDuration, lcp };
};
Comment on lines +108 to +194 (Contributor)
🧹 Nitpick | 🔵 Trivial

Add resource cleanup in error scenarios.

The function creates a browser context (line 114) but doesn't guarantee cleanup if an error occurs between context creation and the close call (line 191). This could lead to resource leaks in test failures.

Apply this diff to ensure cleanup:

 ): Promise<{ navigation: number; lcp: number }> => {
   const context = await browser.newContext();
+  
+  try {
+    await context.addInitScript(() => {
+      (window as typeof window & {
+        __lcpEntries?: PerformanceEntry[];
+        __lcpObserver?: PerformanceObserver;
+      }).__lcpEntries = [];
+    });

-  await context.addInitScript(() => {
-    (window as typeof window & {
-      __lcpEntries?: PerformanceEntry[];
-      __lcpObserver?: PerformanceObserver;
-    }).__lcpEntries = [];
-  });
-
-  const page = await context.newPage();
+    const page = await context.newPage();
   
   // ... rest of the function logic ...
   
-  await context.close();
-
-  return { navigation: navigationDuration, lcp };
+    return { navigation: navigationDuration, lcp };
+  } finally {
+    await context.close();
+  }
 };


const perfConfig = loadPerfBudget();
const runCount = perfConfig.run_count ?? 3;
const resultsDir = ensureResultsDir();

test.describe('performance budget journeys', () => {
if (!perfConfig.journeys || perfConfig.journeys.length === 0) {
test('no journeys defined', async () => {
test.skip(true, 'No journeys defined in perf budget');
});
return;
}

for (const journey of perfConfig.journeys) {
test(journey.id, async ({ browser }) => {
const runs: Array<{ navigation_duration_ms: number; largest_contentful_paint_ms: number }> = [];

for (let iteration = 0; iteration < runCount; iteration += 1) {
const metrics = await collectNavigationMetrics(browser, journey.from_url, journey.to_url, journey.waits);
runs.push({
navigation_duration_ms: metrics.navigation,
largest_contentful_paint_ms: metrics.lcp,
});
}

const navigationValues = runs.map((run) => run.navigation_duration_ms).filter((value) => Number.isFinite(value));
const lcpValues = runs.map((run) => run.largest_contentful_paint_ms).filter((value) => Number.isFinite(value));

const aggregated: JourneyResult['aggregated'] = {};

const metricConfigById = new Map<string, MetricConfig>();
for (const metric of journey.metrics ?? []) {
metricConfigById.set(metric.id, metric);
}

if (navigationValues.length) {
const metricConfig = metricConfigById.get('navigation-duration');
const quantile = metricConfig ? toQuantile(metricConfig.aggregation) : 0.9;
const value = percentile(navigationValues, quantile ?? 0.9);
aggregated.navigation_duration = {
quantile: metricConfig ? metricConfig.aggregation.toLowerCase() : 'p90',
value_ms: value,
threshold_ms: metricConfig?.threshold,
passed: metricConfig ? value <= metricConfig.threshold : true,
};
}
Comment on lines +230 to +240 (Contributor)
🧹 Nitpick | 🔵 Trivial

Simplify the split quantile fallback logic.

Lines 232-233 and 244-245 spread the fallback across two places: the ternary supplies a default when metricConfig is absent, and a later ?? supplies the same default when toQuantile fails to parse the aggregation. Folding the ?? into the ternary keeps a single fallback expression.

Apply this diff to simplify:

       if (navigationValues.length) {
         const metricConfig = metricConfigById.get('navigation-duration');
-        const quantile = metricConfig ? toQuantile(metricConfig.aggregation) : 0.9;
-        const value = percentile(navigationValues, quantile ?? 0.9);
+        const quantile = metricConfig ? toQuantile(metricConfig.aggregation) ?? 0.9 : 0.9;
+        const value = percentile(navigationValues, quantile);
         aggregated.navigation_duration = {
           quantile: metricConfig ? metricConfig.aggregation.toLowerCase() : 'p90',
           value_ms: value,
           threshold_ms: metricConfig?.threshold,
           passed: metricConfig ? value <= metricConfig.threshold : true,
         };
       }
 
       if (lcpValues.length) {
         const metricConfig = metricConfigById.get('largest-contentful-paint');
-        const quantile = metricConfig ? toQuantile(metricConfig.aggregation) : 0.95;
-        const value = percentile(lcpValues, quantile ?? 0.95);
+        const quantile = metricConfig ? toQuantile(metricConfig.aggregation) ?? 0.95 : 0.95;
+        const value = percentile(lcpValues, quantile);
         aggregated.largest_contentful_paint = {

Also applies to: 242-252
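The single-fallback form can be checked in isolation. `toQuantile` is restated here for illustration, and `quantileFor` is a hypothetical name for the suggested consolidation:

```typescript
// Standalone restatement of toQuantile: "p90" -> 0.9, non-matching input -> undefined.
const toQuantile = (aggregation: string): number | undefined => {
  const match = /^p(\d{1,2})$/i.exec(aggregation.trim());
  if (!match) {
    return undefined;
  }
  return Number(match[1]) / 100;
};

// Suggested consolidation: one fallback, applied both when the metric config
// is missing and when its aggregation string fails to parse.
const quantileFor = (aggregation: string | undefined, fallback: number): number =>
  aggregation ? toQuantile(aggregation) ?? fallback : fallback;
```

Note the `{1,2}` quantifier means `p100` would not parse and would fall back, which may or may not be intended.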



if (lcpValues.length) {
const metricConfig = metricConfigById.get('largest-contentful-paint');
const quantile = metricConfig ? toQuantile(metricConfig.aggregation) : 0.95;
const value = percentile(lcpValues, quantile ?? 0.95);
aggregated.largest_contentful_paint = {
quantile: metricConfig ? metricConfig.aggregation.toLowerCase() : 'p95',
value_ms: value,
threshold_ms: metricConfig?.threshold,
passed: metricConfig ? value <= metricConfig.threshold : true,
};
}
Comment on lines +242 to +252 (Contributor)
⚠️ Potential issue | 🔴 Critical

Critical: Property naming inconsistency with downstream script.

Line 246 uses largest_contentful_paint (with underscores) as the property name, but the publish metrics script at scripts/publish-perf-metrics.mjs line 94 expects largest-contentful-paint (with hyphens), causing the downstream script to receive undefined values.

Apply this diff to maintain consistency with the expected format:

       if (lcpValues.length) {
         const metricConfig = metricConfigById.get('largest-contentful-paint');
         const quantile = metricConfig ? toQuantile(metricConfig.aggregation) : 0.95;
         const value = percentile(lcpValues, quantile ?? 0.95);
-        aggregated.largest_contentful_paint = {
+        aggregated['largest-contentful-paint'] = {
           quantile: metricConfig ? metricConfig.aggregation.toLowerCase() : 'p95',
           value_ms: value,
           threshold_ms: metricConfig?.threshold,
           passed: metricConfig ? value <= metricConfig.threshold : true,
         };
       }

Also update the type definition to match:

 type JourneyResult = {
   id: string;
   description?: string;
   runs: Array<{
     navigation_duration_ms: number;
     largest_contentful_paint_ms: number;
   }>;
   aggregated: {
     navigation_duration?: JourneyMetricSummary;
-    largest_contentful_paint?: JourneyMetricSummary;
+    'largest-contentful-paint'?: JourneyMetricSummary;
   };
 };

And update line 270:

-      if (aggregated.largest_contentful_paint && aggregated.largest_contentful_paint.threshold_ms !== undefined) {
+      if (aggregated['largest-contentful-paint'] && aggregated['largest-contentful-paint'].threshold_ms !== undefined) {
-        expect(aggregated.largest_contentful_paint.value_ms, 'homepage to configurator LCP P95 should remain under budget').toBeLessThanOrEqual(
+        expect(aggregated['largest-contentful-paint'].value_ms, 'homepage to configurator LCP P95 should remain under budget').toBeLessThanOrEqual(
-          aggregated.largest_contentful_paint.threshold_ms,
+          aggregated['largest-contentful-paint'].threshold_ms,
         );
       }
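A minimal typing sketch for the hyphenated key (the summary values below are invented; the point is that hyphenated property names require quoted string-literal keys in TypeScript):

```typescript
// Sketch: aggregated summaries keyed to match the downstream script's expectations.
type JourneyMetricSummary = {
  quantile: string;
  value_ms: number;
  threshold_ms?: number;
  passed: boolean;
};

type Aggregated = {
  navigation_duration?: JourneyMetricSummary;
  'largest-contentful-paint'?: JourneyMetricSummary; // quoted: hyphens need string-literal keys
};

const aggregated: Aggregated = {};
aggregated['largest-contentful-paint'] = {
  quantile: 'p95',
  value_ms: 1200, // invented value for illustration
  threshold_ms: 2500,
  passed: true,
};
```

Reads must then use bracket access as well, e.g. `aggregated['largest-contentful-paint']?.value_ms`.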


const result: JourneyResult = {
id: journey.id,
description: journey.description,
runs,
aggregated,
};

writeJourneyResult(resultsDir, result);
await attachJourneyResult(result);

if (aggregated.navigation_duration && aggregated.navigation_duration.threshold_ms !== undefined) {
expect(aggregated.navigation_duration.value_ms, 'homepage to configurator navigation P90 should remain under budget').toBeLessThanOrEqual(
aggregated.navigation_duration.threshold_ms,
);
}

if (aggregated.largest_contentful_paint && aggregated.largest_contentful_paint.threshold_ms !== undefined) {
expect(aggregated.largest_contentful_paint.value_ms, 'homepage to configurator LCP P95 should remain under budget').toBeLessThanOrEqual(
aggregated.largest_contentful_paint.threshold_ms,
);
}
Comment on lines +264 to +274 (Contributor)
⚠️ Potential issue | 🟠 Major

Fix hardcoded assertion messages to reflect the actual journey.

Lines 265 and 271 contain hardcoded assertion messages referring to "homepage to configurator" but this test runs for all journeys in a loop. The messages should dynamically reflect the actual journey being tested.

Apply this diff to use dynamic messages:

       if (aggregated.navigation_duration && aggregated.navigation_duration.threshold_ms !== undefined) {
-        expect(aggregated.navigation_duration.value_ms, 'homepage to configurator navigation P90 should remain under budget').toBeLessThanOrEqual(
+        expect(aggregated.navigation_duration.value_ms, `${journey.id} navigation ${aggregated.navigation_duration.quantile.toUpperCase()} should remain under budget`).toBeLessThanOrEqual(
           aggregated.navigation_duration.threshold_ms,
         );
       }
 
       if (aggregated.largest_contentful_paint && aggregated.largest_contentful_paint.threshold_ms !== undefined) {
-        expect(aggregated.largest_contentful_paint.value_ms, 'homepage to configurator LCP P95 should remain under budget').toBeLessThanOrEqual(
+        expect(aggregated.largest_contentful_paint.value_ms, `${journey.id} LCP ${aggregated.largest_contentful_paint.quantile.toUpperCase()} should remain under budget`).toBeLessThanOrEqual(
           aggregated.largest_contentful_paint.threshold_ms,
         );
       }
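The suggested template literals can be sketched in isolation (journey and quantile values below are made up for illustration):

```typescript
// Hypothetical journey/quantile values; only the message construction matters.
const journey = { id: 'homepage-to-configurator' };
const quantile = 'p90';

// Interpolating the loop's journey id keeps the message accurate per journey.
const message = `${journey.id} navigation ${quantile.toUpperCase()} should remain under budget`;
console.log(message);
// homepage-to-configurator navigation P90 should remain under budget
```

The same pattern applies to the LCP assertion by swapping in that metric's quantile.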
Suggested change
if (aggregated.navigation_duration && aggregated.navigation_duration.threshold_ms !== undefined) {
expect(
aggregated.navigation_duration.value_ms,
`${journey.id} navigation ${aggregated.navigation_duration.quantile.toUpperCase()} should remain under budget`
).toBeLessThanOrEqual(
aggregated.navigation_duration.threshold_ms,
);
}
if (aggregated.largest_contentful_paint && aggregated.largest_contentful_paint.threshold_ms !== undefined) {
expect(
aggregated.largest_contentful_paint.value_ms,
`${journey.id} LCP ${aggregated.largest_contentful_paint.quantile.toUpperCase()} should remain under budget`
).toBeLessThanOrEqual(
aggregated.largest_contentful_paint.threshold_ms,
);
}
🤖 Prompt for AI Agents
In frontend/tests/e2e/perf/perf-budget.spec.ts around lines 264 to 274, the
assertion messages are hardcoded as "homepage to configurator" but this test
runs for multiple journeys; replace those hardcoded strings with a dynamic
message that includes the current journey identifier from the loop variable
(journey.id) so the expect messages read something like
`${journey.id} navigation P90 should remain under budget` and
`${journey.id} LCP P95 should remain under budget`; update both expect calls
to use the dynamic journey-based message.

});
}
});
1 change: 1 addition & 0 deletions mkdocs.yml
@@ -33,6 +33,7 @@ nav:
- Getting Started: getting-started.md
- Backend Development: backend-development.md
- Frontend Development: frontend-development.md
- Release Checklist: release-checklist.md
- Docker and Deployment: docker-deployment.md
- Architecture: ARCHITECTURE.md
- API Spec: API_SPEC.md
1 change: 1 addition & 0 deletions scripts/ci/__init__.py
@@ -0,0 +1 @@
"""CI helper scripts for observability automation."""