Conversation
Caution: Review failed. The pull request is closed.

Note: Other AI code review bot(s) detected. CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough: Frontend package.json updated to include the OpenTelemetry OTLP HTTP exporter dependency. A duplicate entry for the OpenTelemetry API package was also added, creating a redundant dependency declaration.
Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~3 minutes. Rationale: Single-file dependency update with straightforward additions. However, note the duplicate OpenTelemetry API entry.
Pre-merge checks and finishing touches: ❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)
📜 Recent review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (1)
💡 Codex Review
Here are some automated review suggestions for this pull request.
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you:
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting
```ts
return {
  traceId: randomId(16),
  spanId: randomId(8),
  name: event.name ?? 'web-vital',
```
Encode OTLP span IDs in base64, not hex
The OTLP/HTTP JSON schema treats `traceId` and `spanId` as protobuf bytes, which must be base64-encoded strings. The span objects built here use `randomBytes(...).toString('hex')`, producing hex-encoded IDs. Collectors will attempt to base64-decode these fields and either reject the payload or drop the spans because the decoded length won't be 16/8 bytes. As written, every exported web-vital span is invalid and telemetry never reaches the collector.
Actionable comments posted: 2
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (1)
`frontend/package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (5)
- `frontend/README.md` (1 hunks)
- `frontend/package.json` (1 hunks)
- `frontend/src/app/api/otel/webvital/route.ts` (1 hunks)
- `frontend/src/app/providers.tsx` (2 hunks)
- `frontend/src/app/web-vitals.ts` (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
frontend/src/app/api/otel/webvital/route.ts (1)
- frontend/next.config.mjs (1): `process` (10-17)

frontend/src/app/providers.tsx (1)
- frontend/src/app/web-vitals.ts (1): `initWebVitals` (181-191)
🪛 LanguageTool
frontend/README.md
[grammar] ~32-~32: Use correct spacing
Context: ...entions. ## Observability Configuration The frontend exports Web Vitals to an Op...
(QB_NEW_EN_OTHER_ERROR_IDS_5)
[grammar] ~34-~34: Use correct spacing
Context: ...onment variables before running the app: | Variable | Description | | --- | --- |...
(QB_NEW_EN_OTHER_ERROR_IDS_5)
[grammar] ~42-~42: Use correct spacing
Context: ...hat overrides the host/port variables. | You can also set OTEL_SERVICE_NAME to ...
(QB_NEW_EN_OTHER_ERROR_IDS_5)
```ts
export async function POST(request: Request) {
  let payload: IncomingPayload;
  try {
    payload = (await request.json()) as IncomingPayload;
  } catch (error) {
    return NextResponse.json(
      { error: 'Invalid JSON payload', details: `${error}` },
      { status: 400 },
    );
  }

  const events = Array.isArray(payload.events) ? payload.events : [];

  if (events.length === 0) {
    return NextResponse.json({ accepted: 0 });
  }

  const endpoint = resolveCollectorEndpoint();

  if (!endpoint) {
    return NextResponse.json(
      { error: 'Collector endpoint is not configured' },
      { status: 500 },
    );
  }

  const sentAt = payload.sentAt ? Date.parse(payload.sentAt) : Date.now();
  const fallbackTime = Number.isNaN(sentAt) ? Date.now() : sentAt;

  const spans = events.map((event) => createSpan(event, fallbackTime));

  const otlpPayload = {
    resourceSpans: [
      {
        resource: {
          attributes: [
            {
              key: 'service.name',
              value: {
                stringValue:
                  process.env.OTEL_SERVICE_NAME ?? 'paform-frontend',
              },
            },
          ],
        },
        scopeSpans: [
          {
            scope: DEFAULT_SCOPE,
            spans,
          },
        ],
      },
    ],
  };

  const response = await fetch(endpoint, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(otlpPayload),
  });

  if (!response.ok) {
    const text = await response.text();
    return NextResponse.json(
      {
        error: 'Failed to export web vitals',
        status: response.status,
        body: text,
      },
      { status: 502 },
    );
  }

  return NextResponse.json({ accepted: spans.length });
}
```
Route is shadowed by global /api/* rewrite.
`frontend/next.config.mjs` rewrites every `/api/:path*` request to the backend. That rewrite runs before filesystem routing, so this handler never executes; your client POSTs to `/api/otel/webvital` will be proxied to the backend (which likely has no matching endpoint). To make this route usable, update the rewrite config to exclude `/api/otel/webvital` (or narrow the rewrite scope) so the Next.js route handler can actually receive the telemetry.
🤖 Prompt for AI Agents
frontend/src/app/api/otel/webvital/route.ts lines 153-229: this Next.js route
handler is never reached because frontend/next.config.mjs rewrites /api/:path*
to the backend before filesystem routing; update the rewrite rules to exclude
the telemetry endpoint (e.g., add a rule that bypasses rewrites for
/api/otel/webvital or make the existing rewrite more specific) so the POST to
/api/otel/webvital is handled by this route, or alternatively change the route
path to one not under /api/* and adjust the client accordingly.
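The exclusion the prompt describes can be sketched as a narrowed rewrite. The backend URL and the negative-lookahead parameter pattern below are assumptions for illustration, not the repo's actual config:

```javascript
// Hypothetical sketch of frontend/next.config.mjs with a narrowed rewrite.
const nextConfig = {
  async rewrites() {
    return [
      {
        // Proxy /api/<path> to the backend only when <path> does not
        // start with "otel/", so /api/otel/webvital stays on the
        // Next.js filesystem router.
        source: '/api/:path((?!otel/).*)',
        destination: 'http://localhost:8000/api/:path',
      },
    ];
  },
};

export default nextConfig;
```

Next.js `source` patterns accept a regex inside the parameter parentheses, which is what makes the negative lookahead possible here.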
```ts
} catch (error) {
  if (process.env.NODE_ENV !== 'production') {
    console.warn('Failed to report web-vitals', error);
  }
  // Re-queue events so they are not lost.
  queue.unshift(...events);
}
```
Ensure failed batches are retried.
When the POST fails, events are re-queued but no new flush is scheduled. If no additional metrics arrive, those events sit in the queue forever and never get retried (e.g., user goes offline briefly and comes back without triggering new vitals). Please reschedule a flush after re-queuing so failures are retried even without new data.
```diff
 } catch (error) {
   if (process.env.NODE_ENV !== 'production') {
     console.warn('Failed to report web-vitals', error);
   }
   // Re-queue events so they are not lost.
   queue.unshift(...events);
+  scheduleFlush();
 }
}
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🤖 Prompt for AI Agents
In frontend/src/app/web-vitals.ts around lines 92 to 98, the catch block
re-queues failed events but does not schedule another flush, so failed batches
are never retried if no new metrics arrive; after queue.unshift(...events) call
a resend attempt by scheduling a flush (preferably by invoking the existing
scheduleFlush/flushSoon helper or, if none, call setTimeout(() => flushQueue(),
retryDelay)) so the queue is retried, and guard against creating duplicate
timers by checking existing scheduled timer/flag before scheduling.
Actionable comments posted: 5
♻️ Duplicate comments (1)
frontend/src/app/api/otel/webvital/route.ts (1)
153-229: Verify the route is not shadowed by the global `/api/*` rewrite. As flagged in the previous review, if `frontend/next.config.mjs` contains a rewrite rule that proxies all `/api/:path*` requests to the backend, this Next.js route handler will never execute. Client POSTs to `/api/otel/webvital` would be forwarded to the backend instead. Run the following script to check for conflicting rewrite rules:

```bash
#!/bin/bash
# Description: Check if next.config.mjs has a rewrite that would shadow this route.

# Search for rewrites in next.config.mjs
fd -t f 'next.config.mjs' frontend/ --exec cat {} \; | \
  rg -A 10 'rewrites|rewrite' || echo "No rewrite configuration found"
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (1)
`frontend/src/app/api/otel/webvital/route.ts` (1 hunks)
🧰 Additional context used
🔍 Remote MCP Sentry
Additional Context from Sentry Documentation
I pulled in Sentry’s official guidance on using OpenTelemetry APIs with Next.js to understand how custom spans should be integrated and to compare it against the PR’s manual OTLP‐HTTP exporter approach:
- Sentry captures all spans started via OpenTelemetry APIs out of the box. You can add extra instrumentations by passing an array of OpenTelemetry `Instrumentation` instances to the `openTelemetryInstrumentations` option in `Sentry.init`, e.g.

  ```js
  Sentry.init({
    // … your Sentry config …
    openTelemetryInstrumentations: [
      new GenericPoolInstrumentation(),
      /* …other instrumentations… */
    ],
  });
  ```
- You can also register additional span processors on Sentry's tracer provider via `openTelemetrySpanProcessors`:

  ```js
  Sentry.init({
    // … your Sentry config …
    openTelemetrySpanProcessors: [new MySpanProcessor()],
  });
  ```
This may be preferable to manually batching and exporting spans in some contexts.
- If needed, you can access the native OpenTelemetry tracer that Sentry uses and start spans directly via the OTel API:

  ```js
  const tracer = Sentry.getClient()?.tracer;
  tracer.startActiveSpan('my-op', (span) => { /* … */ });
  ```
These patterns offer an alternative to the PR’s handwritten OTLP span assembly and HTTP POST, and might simplify instrumentation or improve consistency with existing Sentry traces.
🔇 Additional comments (1)
frontend/src/app/api/otel/webvital/route.ts (1)
35-37: Previous base64 encoding issue has been fixed.The switch from hex to base64 encoding correctly addresses the previous review concern. OTLP/HTTP JSON requires trace and span IDs as base64-encoded strings representing the underlying bytes.
```ts
  return undefined;
}

const protocol = process.env.OTEL_COLLECTOR_PROTOCOL ?? 'http';
```
Consider defaulting to HTTPS or documenting the insecure default.
The default protocol 'http' is insecure for production environments. Consider either defaulting to 'https' or adding a comment warning that operators should explicitly set OTEL_COLLECTOR_PROTOCOL=https in production.
```diff
- const protocol = process.env.OTEL_COLLECTOR_PROTOCOL ?? 'http';
+ const protocol = process.env.OTEL_COLLECTOR_PROTOCOL ?? 'https';
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🤖 Prompt for AI Agents
In frontend/src/app/api/otel/webvital/route.ts around line 51, the
OTEL_COLLECTOR_PROTOCOL defaults to the insecure value 'http'; change the
default to 'https' or add an inline comment and/or config validation that warns
operators to explicitly set OTEL_COLLECTOR_PROTOCOL=https in production. Update
the code to default to 'https' (or, if keeping 'http', add a clear comment and
throw or log a warning when NODE_ENV is production and the env var is not set)
and ensure any related docs/config samples mention the secure default.
```ts
function createSpan(event: IncomingEvent, fallbackTime: number) {
  const duration =
    typeof event.detail?.duration === 'number' && event.detail.duration > 0
      ? event.detail.duration
      : 1;
  const startTime = toUnixNano(fallbackTime);
  const endTime = toUnixNano(fallbackTime + duration);
```
🧹 Nitpick | 🔵 Trivial
Consider documenting the duration fallback behavior.
When event.detail.duration is missing or zero, the span defaults to a 1ms duration. For point-in-time web vitals (e.g., LCP value), this produces an artificial span duration rather than a true time window. While spans require a duration, a brief inline comment would clarify this design choice.
```diff
 function createSpan(event: IncomingEvent, fallbackTime: number) {
+  // Spans require a duration; default to 1ms for point-in-time metrics
   const duration =
     typeof event.detail?.duration === 'number' && event.detail.duration > 0
       ? event.detail.duration
       : 1;
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🤖 Prompt for AI Agents
In frontend/src/app/api/otel/webvital/route.ts around lines 59 to 66, add a
brief inline comment immediately above the duration calculation documenting that
when event.detail.duration is missing or zero the code falls back to 1 (ms) to
satisfy span duration requirements; note this produces an artificial short span
for point-in-time vitals (e.g., LCP) rather than a true time window, and that
the fallback exists to avoid zero-length spans expected by tracing systems.
```ts
const response = await fetch(endpoint, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
  },
  body: JSON.stringify(otlpPayload),
});

if (!response.ok) {
  const text = await response.text();
  return NextResponse.json(
    {
      error: 'Failed to export web vitals',
      status: response.status,
      body: text,
    },
    { status: 502 },
  );
}
```
🧹 Nitpick | 🔵 Trivial
Consider retry logic for transient collector failures.
The handler returns a 502 error immediately if the collector responds with a non-OK status. For transient failures (5xx errors, network blips), a retry with exponential backoff could improve reliability and reduce data loss.
Example retry logic:
```ts
async function fetchWithRetry(
  url: string,
  options: RequestInit,
  maxRetries = 2,
  delayMs = 100,
) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      const response = await fetch(url, options);
      if (response.ok || attempt === maxRetries) {
        return response;
      }
      // Retry on 5xx errors only
      if (response.status >= 500) {
        await new Promise((resolve) =>
          setTimeout(resolve, delayMs * 2 ** attempt),
        );
        continue;
      }
      return response;
    } catch (error) {
      if (attempt === maxRetries) throw error;
      await new Promise((resolve) =>
        setTimeout(resolve, delayMs * 2 ** attempt),
      );
    }
  }
  throw new Error('Unreachable');
}

// Usage
const response = await fetchWithRetry(endpoint, {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(otlpPayload),
});
```

🤖 Prompt for AI Agents
In frontend/src/app/api/otel/webvital/route.ts around lines 208 to 226, replace
the direct fetch and immediate 502 return with retry logic: implement a
fetchWithRetry wrapper that retries on transient failures (network errors and
5xx responses) using exponential backoff (e.g., baseDelayMs and maxRetries,
multiply delay by 2^attempt and add small jitter), aborts retrying on
non-retryable responses (4xx), and returns the final Response; use this wrapper
for the POST to the collector, preserve reading response.text() and returning
NextResponse.json with status 502 only when the final response is non-ok after
retries (or when retries exhaust due to exceptions). Ensure sensible defaults
for maxRetries and delays and keep the same headers/body and error payload
shape.
```ts
const response = await fetch(endpoint, {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
  },
  body: JSON.stringify(otlpPayload),
});
```
🧹 Nitpick | 🔵 Trivial
Consider adding authentication headers for the collector.
Depending on your OTLP collector setup, the endpoint may require authentication (e.g., API keys, bearer tokens). The current implementation sends no Authorization header.
If authentication is required, read credentials from an environment variable and include them in the fetch:
```diff
+  const authHeader = process.env.OTEL_EXPORTER_OTLP_HEADERS;
   const response = await fetch(endpoint, {
     method: 'POST',
     headers: {
       'Content-Type': 'application/json',
+      ...(authHeader ? { Authorization: authHeader } : {}),
     },
     body: JSON.stringify(otlpPayload),
   });
```

Note: `OTEL_EXPORTER_OTLP_HEADERS` can be a comma-separated list of key=value pairs per the OpenTelemetry spec, so a more robust parser might be needed for multiple headers.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🤖 Prompt for AI Agents
In frontend/src/app/api/otel/webvital/route.ts around lines 208 to 214, the
fetch to the OTLP collector omits authentication headers; update the request to
read authorization/headers from environment (e.g., OTEL_EXPORTER_OTLP_HEADERS or
a specific token env var), parse the header string (support comma-separated
key=value pairs per the OpenTelemetry spec) and merge them into the fetch
headers (including an Authorization header when present) before sending the
body; ensure no secrets are hard-coded and handle missing env gracefully.
Add timeout to the collector fetch call.
The fetch to the OTLP collector has no explicit timeout. If the collector is slow or unresponsive, this handler will hang until the Next.js serverless timeout (typically 10s) or indefinitely in a Node.js environment. This blocks the request and delays error feedback.
Apply this diff to add a 5-second timeout:

```diff
+  const controller = new AbortController();
+  const timeoutId = setTimeout(() => controller.abort(), 5000);
+
   const response = await fetch(endpoint, {
     method: 'POST',
     headers: {
       'Content-Type': 'application/json',
     },
     body: JSON.stringify(otlpPayload),
+    signal: controller.signal,
-  });
+  }).finally(() => clearTimeout(timeoutId));
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🤖 Prompt for AI Agents
In frontend/src/app/api/otel/webvital/route.ts around lines 208 to 214, the
fetch to the OTLP collector lacks a timeout and can hang; add a 5-second timeout
using an AbortController: create an AbortController before calling fetch, pass
its signal into fetch options, start a setTimeout that calls controller.abort()
after 5000ms, and clear the timeout when fetch resolves or rejects; also ensure
the fetch error path treats an abort as a timeout (e.g., throw or return a
descriptive timeout error) and clean up the timer in finally.
Note: Docstrings generation - SUCCESS
Docstrings generation was requested by @shayancoin.
* #125 (comment)

The following files were modified:
* `frontend/src/app/api/otel/webvital/route.ts`
* `frontend/src/app/providers.tsx`
* `frontend/src/app/web-vitals.ts`
…i9` (#284)

Docstrings generation was requested by @shayancoin.
* #125 (comment)

The following files were modified:
* `frontend/src/app/api/otel/webvital/route.ts`
* `frontend/src/app/providers.tsx`
* `frontend/src/app/web-vitals.ts`

Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Co-authored-by: Shayan <shayan@coin.link>
Summary
Testing
https://chatgpt.com/codex/tasks/task_e_68f13623064c8330ab8866e41923312f
Summary by CodeRabbit