feat: improve logs view with keyset pagination, column selection, and sticky chart#122
Conversation
… sticky chart

- Replace OFFSET-based pagination with cursor-based keyset pagination using timestamp for consistent, performant paging (#84)
- Select only visible columns instead of SELECT * for faster log loading; fetch full row on demand for detail modal (#87)
- Pin chart to viewport top when scrolling logs with collapse toggle and bidirectional scroll/scrubber synchronization (#66)

Closes #84, closes #87, closes #66

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Preview deployment
Preview is live at: https://klickhaus.aemstatus.net/preview/pr-122/dashboard.html Updated for commit 9c8ac05
Pull request overview
This PR implements three interconnected improvements to the logs view: keyset pagination for better performance on deep pages, selective column loading to reduce payload size, and a sticky collapsible chart with bidirectional scroll synchronization.
Changes:
- Replaced OFFSET-based pagination with cursor-based keyset pagination using the last row's timestamp, eliminating O(n) performance degradation and preventing duplicate/missing rows during paging
- Introduced selective column loading via buildLogColumnsSql(), fetching only visible columns for the table view and loading full row data on-demand when opening the detail modal
- Added sticky positioning for the chart section when logs view is active, with a collapsible toggle that persists state to localStorage and bidirectional synchronization between chart hover and log scroll position
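The cursor mechanics described above can be sketched in a few lines. This is an illustrative sketch only, not the PR's exact code; the function name `nextCursor` and the row shape are assumptions:

```javascript
// Hypothetical sketch of keyset-pagination cursor tracking.
// `rows` is a page of log rows sorted by timestamp DESC (matching the
// query's ORDER BY); the last row's timestamp becomes the cursor for the
// next page, which filters on `timestamp < cursor` instead of OFFSET.
function nextCursor(rows) {
  if (!Array.isArray(rows) || rows.length === 0) return null; // no further pages
  return rows[rows.length - 1].timestamp;
}

const page = [
  { timestamp: '2024-05-01 12:00:03.000' },
  { timestamp: '2024-05-01 12:00:01.500' },
];
// The next request would substitute this value into:
//   timestamp < toDateTime64('{{cursor}}', 3)
const cursor = nextCursor(page);
```

Because the filter is on an indexed timestamp rather than a row offset, page N costs the same as page 1, and rows inserted mid-pagination cannot shift the window.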
Reviewed changes
Copilot reviewed 14 out of 14 changed files in this pull request and generated 12 comments.
| File | Description |
|---|---|
| sql/queries/logs.sql | Replaced SELECT * with {{columns}} placeholder for selective column loading |
| sql/queries/logs-more.sql | Added cursor-based WHERE clause (timestamp < toDateTime64('{{cursor}}', 3)) and removed OFFSET for keyset pagination |
| sql/queries/log-detail.sql | New query to fetch complete row data by timestamp and host for detail modal |
| js/sql-loader.js | Registered new log-detail SQL template |
| js/pagination.js | Replaced offset tracking with cursor tracking, extracting timestamp from last row |
| js/pagination.test.js | Updated all tests to use cursor-based pagination with row objects instead of count |
| js/columns.js | Added buildLogColumnsSql() to generate comma-separated, backtick-quoted column list |
| js/columns.test.js | New test file covering column list generation, deduplication, and ordering |
| js/logs.js | Refactored detail modal to async fetch full row, added chart collapse toggle with localStorage persistence, and implemented bidirectional scroll sync between chart and logs |
| js/chart.js | Added setScrubberPosition() and setOnChartHoverTimestamp() for scroll synchronization |
| js/dashboard-init.js | Wired up chart hover callback with throttling to trigger log scrolling |
| dashboard.html | Added collapse toggle button to chart section |
| css/chart.css | Added sticky positioning, collapse toggle styling, and collapsed state transitions |
| css/modals.css | Added loading state styling for detail modal |
Comments suppressed due to low confidence (1)
css/chart.css:60
- The mobile media query overrides position: relative for .chart-section, which will break the sticky positioning on mobile devices when logs view is active. This is because the .logs-active .chart-section rule (line 13-17) sets position: sticky, but the mobile media query (line 56-60) will override it back to position: relative.
The mobile media query should preserve sticky positioning for logs view. Update the media query to:
Suggested:

@media (max-width: 600px) {
  .chart-section {
    margin: 0 -12px 16px -12px;
  }
  .chart-section:not(.logs-active) {
    position: relative;
  }
}

Current:

@media (max-width: 600px) {
  .chart-section {
    margin: 0 -12px 16px -12px;
    position: relative;
  }
SELECT {{columns}}
FROM {{database}}.{{table}}
- WHERE {{timeFilter}} {{hostFilter}} {{facetFilters}} {{additionalWhereClause}}
+ WHERE {{timeFilter}} AND timestamp < toDateTime64('{{cursor}}', 3) {{hostFilter}} {{facetFilters}} {{additionalWhereClause}}
The cursor value is extracted directly from row data (rows[resultLength - 1].timestamp) and interpolated into SQL without validation or sanitization. While the timestamp comes from the database initially, it's stored in client-side state and could potentially be manipulated.
The cursor should be validated to ensure it matches a timestamp format before being interpolated into SQL, or use parameterized queries if supported by the query layer. At minimum, add validation in fetchFullRow and loadMoreLogs to ensure the timestamp value is in the expected format (e.g., matches a timestamp regex) before passing to loadSql.
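A minimal sketch of such a guard, assuming the `YYYY-MM-DD HH:MM:SS.mmm` format used elsewhere in this PR (the function name and regex are illustrative, not the PR's code):

```javascript
// Hypothetical validation guard: reject anything that is not a plain
// millisecond-precision timestamp before it reaches the SQL template.
const CURSOR_RE = /^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{3}$/;

function assertValidCursor(cursor) {
  if (typeof cursor !== 'string' || !CURSOR_RE.test(cursor)) {
    throw new Error(`Invalid pagination cursor: ${String(cursor)}`);
  }
  return cursor;
}
```

loadMoreLogs and fetchFullRow could call this before interpolating, failing fast on tampered client-side state instead of passing it through to ClickHouse.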
- WHERE {{timeFilter}} AND timestamp < toDateTime64('{{cursor}}', 3) {{hostFilter}} {{facetFilters}} {{additionalWhereClause}}
+ WHERE {{timeFilter}} AND timestamp < toDateTime64(replaceAll('{{cursor}}', '''', ''), 3) {{hostFilter}} {{facetFilters}} {{additionalWhereClause}}
@@ -0,0 +1,4 @@
SELECT *
FROM {{database}}.{{table}}
WHERE timestamp = toDateTime64('{{timestamp}}', 3) AND `request.host` = '{{host}}'
The timestamp and host values are extracted from row data and interpolated into SQL. While host is escaped using .replace(/'/g, "\\'"), the timestamp is passed through String(timestamp) without validation or escaping.
This creates a potential SQL injection vector if the timestamp value has been tampered with in client-side state. Add validation to ensure timestamp matches the expected format (YYYY-MM-DD HH:MM:SS.mmm) before interpolation, or apply the same escaping pattern used for host.
- WHERE timestamp = toDateTime64('{{timestamp}}', 3) AND `request.host` = '{{host}}'
+ WHERE match('{{timestamp}}', '^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}\\.[0-9]{3}$')
+   AND timestamp = toDateTime64('{{timestamp}}', 3)
+   AND `request.host` = '{{host}}'
let closestIdx = 0;
let closestDiff = Infinity;

for (let i = 0; i < state.logsData.length; i += 1) {
  const row = state.logsData[i];
  if (!row.timestamp) {
    closestIdx += 0; // eslint: no-continue workaround - skip rows without timestamp
  } else {
    const rowMs = parseUTC(row.timestamp).getTime();
    const diff = Math.abs(rowMs - targetMs);
    if (diff < closestDiff) {
      closestDiff = diff;
      closestIdx = i;
    }
  }
}
This operation is inefficient for finding the closest timestamp. For a large dataset with thousands of rows, this performs a linear O(n) scan on every chart hover event.
Consider these optimizations:
- Since state.logsData is sorted by timestamp DESC (from the SQL ORDER BY), use binary search to find the closest timestamp in O(log n) time
- Store logs in a more efficient data structure for timestamp lookups (e.g., a Map indexed by rounded timestamps)
- Add early exit conditions if the difference starts increasing (only works if data is sorted)
Suggested change (binary search replacing the linear scan above):

// Helper to get the nearest index with a valid timestamp starting from `start`
function findNearestValidIndex(start, lowBound, highBound) {
  let left = start;
  let right = start;
  while (left >= lowBound || right <= highBound) {
    if (left >= lowBound) {
      const leftRow = state.logsData[left];
      if (leftRow && leftRow.timestamp) return left;
      left -= 1;
    }
    if (right <= highBound) {
      const rightRow = state.logsData[right];
      if (rightRow && rightRow.timestamp) return right;
      right += 1;
    }
  }
  return -1;
}

const n = state.logsData.length;
let low = 0;
let high = n - 1;
let candidateIdx = -1;

// Binary search on timestamps sorted DESC
while (low <= high) {
  const mid = Math.floor((low + high) / 2);
  const validMid = findNearestValidIndex(mid, low, high);
  if (validMid === -1) {
    // No valid timestamps in this range
    break;
  }
  const row = state.logsData[validMid];
  const rowMs = parseUTC(row.timestamp).getTime();
  candidateIdx = validMid;
  if (rowMs === targetMs) {
    break;
  }
  // Data is sorted by timestamp DESC (newest first).
  if (rowMs < targetMs) {
    // Target is newer (larger ms) than this row, so move left.
    high = validMid - 1;
  } else {
    // Target is older (smaller ms) than this row, so move right.
    low = validMid + 1;
  }
}

// Determine the closest index among neighbors around the insertion point.
const candidates = [];
if (candidateIdx !== -1) {
  candidates.push(candidateIdx);
}
const neighborLow = findNearestValidIndex(low, 0, n - 1);
if (neighborLow !== -1) {
  candidates.push(neighborLow);
}
const neighborHigh = findNearestValidIndex(high, 0, n - 1);
if (neighborHigh !== -1) {
  candidates.push(neighborHigh);
}
if (candidates.length === 0) {
  // Fallback: no rows with timestamps, nothing to scroll to.
  return;
}
let closestIdx = candidates[0];
let closestDiff = Math.abs(
  parseUTC(state.logsData[closestIdx].timestamp).getTime() - targetMs,
);
for (let i = 1; i < candidates.length; i += 1) {
  const idx = candidates[i];
  const row = state.logsData[idx];
  if (!row || !row.timestamp) continue;
  const rowMs = parseUTC(row.timestamp).getTime();
  const diff = Math.abs(rowMs - targetMs);
  if (diff < closestDiff) {
    closestDiff = diff;
    closestIdx = idx;
  }
}
// Throttle helper
function throttle(fn, delay) {
  let lastCall = 0;
  let timer = null;
  return (...args) => {
    const now = Date.now();
    const remaining = delay - (now - lastCall);
    if (remaining <= 0) {
      if (timer) {
        clearTimeout(timer);
        timer = null;
      }
      lastCall = now;
      fn(...args);
    } else if (!timer) {
      timer = setTimeout(() => {
        lastCall = Date.now();
        timer = null;
        fn(...args);
      }, remaining);
    }
  };
}
The throttle implementation has a subtle timing issue. When remaining > 0 and a timer is set, the function will be called after remaining milliseconds with the arguments from the FIRST deferred call, not the MOST RECENT call. This means rapid calls will execute with stale arguments.
This is particularly problematic for scroll/hover handlers where the most recent position is what matters. Consider storing and using the most recent arguments:
function throttle(fn, delay) {
let lastCall = 0;
let timer = null;
let pendingArgs = null;
return (...args) => {
pendingArgs = args;
const now = Date.now();
const remaining = delay - (now - lastCall);
if (remaining <= 0) {
if (timer) {
clearTimeout(timer);
timer = null;
}
lastCall = now;
pendingArgs = null;
fn(...args);
} else if (!timer) {
timer = setTimeout(() => {
lastCall = Date.now();
timer = null;
fn(...pendingArgs);
pendingArgs = null;
}, remaining);
}
};
}

@@ -466,6 +639,9 @@ function handleLogsScroll() {
  if (pagination.shouldTriggerLoad(scrollPercent, state.logsLoading)) {
    loadMoreLogs();
  }

  // Sync chart scrubber to topmost visible log row
  throttledSyncScrubber();
}

export function setLogsElements(view, toggleBtn, filtersViewEl) {
@@ -478,6 +654,9 @@ export function setLogsElements(view, toggleBtn, filtersViewEl) {

  // Set up click handler for copying row data
  setupLogRowClickHandler();

  // Set up chart collapse toggle
  initChartCollapseToggle();
}
The new scroll synchronization features (scrollLogsToTimestamp, syncScrubberToScroll, throttle) and chart collapse functionality (initChartCollapseToggle, updateCollapseToggleLabel) lack test coverage. While integration tests verify the combined functionality, these complex interactive features would benefit from unit tests.
Consider adding tests for:
- scrollLogsToTimestamp: finding closest timestamp with various edge cases (empty data, single row, sorted vs unsorted)
- throttle: ensuring proper timing behavior and argument handling
- Chart collapse state persistence and restoration from localStorage
- Edge cases in scroll sync (no chart section, no visible rows, etc.)
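For the throttle case, a framework-free sketch of such a unit test is shown below. It restates the corrected throttle (with latest-args tracking) so the snippet is self-contained; the test names and short 50 ms delay are illustrative:

```javascript
// Corrected throttle: trailing invocation uses the MOST RECENT arguments.
function throttle(fn, delay) {
  let lastCall = 0;
  let timer = null;
  let pendingArgs = null;
  return (...args) => {
    pendingArgs = args;
    const now = Date.now();
    const remaining = delay - (now - lastCall);
    if (remaining <= 0) {
      if (timer) { clearTimeout(timer); timer = null; }
      lastCall = now;
      pendingArgs = null;
      fn(...args);
    } else if (!timer) {
      timer = setTimeout(() => {
        lastCall = Date.now();
        timer = null;
        fn(...pendingArgs);
        pendingArgs = null;
      }, remaining);
    }
  };
}

// Minimal timing test, runnable in plain Node with real timers.
async function testThrottleUsesLatestArgs() {
  const calls = [];
  const throttled = throttle((x) => calls.push(x), 50);
  throttled(1); // leading call fires immediately
  throttled(2); // deferred...
  throttled(3); // ...and the latest args should win over 2
  await new Promise((resolve) => setTimeout(resolve, 120));
  console.assert(calls.length === 2, 'expected leading + one trailing call');
  console.assert(calls[0] === 1 && calls[1] === 3, 'trailing call must use latest args');
}
testThrottleUsesLatestArgs();
```

The same structure extends to the scroll-sync cases: stub `state.logsData` with hand-built rows and assert on the index the search selects.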
const collapsed = chartSection.classList.contains('chart-collapsed');
localStorage.setItem('chartCollapsed', collapsed ? 'true' : 'false');
updateCollapseToggleLabel();
});
When the page first loads or when switching to logs view, the collapse toggle button's label is not initialized. It will show the default text from the HTML ("Hide chart") regardless of the actual collapsed state restored from localStorage.
Add a call to updateCollapseToggleLabel() at the end of initChartCollapseToggle() to ensure the button label matches the initial state.
});
updateCollapseToggleLabel();
// Chart→Scroll sync: when hovering chart in logs view, scroll to matching time
if (onChartHoverTimestamp && state.showLogs) {
  const hoverTime = getTimeAtX(x);
  if (hoverTime) {
    onChartHoverTimestamp(hoverTime);
  }
}
The chart hover callback triggers on every mousemove event within the chart, which fires very frequently (potentially 60+ times per second). While there's a 300ms throttle in dashboard-init.js, the callback is still invoked on every mousemove, and the throttle only prevents the scroll action itself.
This could cause performance issues because:
- getTimeAtX(x) is called on every mousemove regardless of throttling
- The callback function is invoked on every mousemove
- Multiple layers of function calls happen before the throttle takes effect
Consider moving the throttling logic into the chart.js mousemove handler itself, or only calling onChartHoverTimestamp when the timestamp actually changes significantly (e.g., by more than 1 second).
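The "only call when the timestamp changes significantly" idea can be sketched as a small guard in front of the callback. This is an illustrative sketch: the threshold value, `lastHoverMs` state, and `maybeNotifyHover` name are assumptions layered on the PR's `onChartHoverTimestamp` API:

```javascript
// Hypothetical change-threshold guard for the chart hover callback.
// Skips the callback entirely when the hovered time moved less than
// HOVER_THRESHOLD_MS since the last notification.
const HOVER_THRESHOLD_MS = 1000;
let lastHoverMs = null;

function maybeNotifyHover(hoverTime, onChartHoverTimestamp) {
  const ms = hoverTime.getTime();
  if (lastHoverMs !== null && Math.abs(ms - lastHoverMs) < HOVER_THRESHOLD_MS) {
    return false; // movement too small; no callback, no downstream work
  }
  lastHoverMs = ms;
  onChartHoverTimestamp(hoverTime);
  return true;
}
```

Unlike time-based throttling alone, this suppresses redundant work even during slow mouse movement that stays within the same one-second bucket.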
dashboard.html (Outdated)
<div class="chart-container">
  <canvas id="chart"></canvas>
</div>
<button class="chart-collapse-toggle" id="chartCollapseToggle" title="Collapse chart">▲ Hide chart</button>
The collapse toggle button uses HTML entities (▲ and ▼) for up/down arrows combined with text. Screen readers will announce these as "Black Up-pointing Triangle Hide chart" which may be confusing since the visual arrow is redundant with the text.
Consider using either:
- Pure text with CSS icons/pseudo-elements for the visual arrows
- An aria-label that describes the action more clearly: aria-label="Hide chart" or aria-label="Show chart"
- Hiding the arrow from screen readers: <span aria-hidden="true">▲</span> Hide chart
This ensures a better experience for users relying on assistive technology.
- <button class="chart-collapse-toggle" id="chartCollapseToggle" title="Collapse chart">▲ Hide chart</button>
+ <button class="chart-collapse-toggle" id="chartCollapseToggle" title="Collapse chart"><span aria-hidden="true">▲</span> Hide chart</button>
export function buildLogColumnsSql(pinnedColumns = []) {
  const seen = new Set();
  const cols = [];
  for (const col of [...ALWAYS_NEEDED_COLUMNS, ...LOG_COLUMN_ORDER, ...pinnedColumns]) {
    if (!seen.has(col)) {
      seen.add(col);
      cols.push(`\`${col}\``);
Column names from pinnedColumns are directly interpolated into SQL with only backtick quoting, without validation. If pinnedColumns contains malicious values (e.g., from tampered client state), this could lead to SQL injection.
While pinned columns are likely controlled by the application, they should be validated against a whitelist of allowed column names before being used in SQL queries. Consider adding validation in buildLogColumnsSql to ensure all column names match expected patterns or exist in a known column definition list.
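A variant of buildLogColumnsSql with such validation might look like the sketch below. The `ALLOWED_COLUMN_RE` pattern and the extra `knownColumns` parameter are assumptions for illustration; the real fix could instead check membership in `LOG_COLUMN_ORDER`:

```javascript
// Hypothetical hardened column-list builder: reject any column name that
// is not a plain identifier (letters, digits, underscores, dots) before
// it is backtick-quoted into SQL.
const ALLOWED_COLUMN_RE = /^[A-Za-z_][A-Za-z0-9_.]*$/;

function buildLogColumnsSql(pinnedColumns = [], knownColumns = []) {
  const seen = new Set();
  const cols = [];
  for (const col of [...knownColumns, ...pinnedColumns]) {
    if (!ALLOWED_COLUMN_RE.test(col)) {
      throw new Error(`Rejected unsafe column name: ${col}`);
    }
    if (!seen.has(col)) {
      seen.add(col);
      cols.push(`\`${col}\``); // safe to quote: pattern excludes backticks
    }
  }
  return cols.join(', ');
}
```

Backtick quoting alone does not help if the column name itself contains a backtick, which is exactly what the pattern check excludes.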
SELECT {{columns}}
FROM {{database}}.{{table}}
- WHERE {{timeFilter}} {{hostFilter}} {{facetFilters}} {{additionalWhereClause}}
+ WHERE {{timeFilter}} AND timestamp < toDateTime64('{{cursor}}', 3) {{hostFilter}} {{facetFilters}} {{additionalWhereClause}}
When pagination.cursor is null (which happens on first load before any page is recorded), this will interpolate as the string "null" in the SQL query: timestamp < toDateTime64('null', 3). This will cause a SQL error.
The logs-more query is intended only for subsequent pages, not the initial load. However, there's no guard preventing loadMoreLogs() from being called when cursor is still null. Consider one of these solutions:
- Add a check in loadMoreLogs() to ensure cursor is not null before proceeding
- Modify the SQL template to handle a null cursor (e.g., using a conditional or a very large default date)
- Ensure canLoadMore() returns false when cursor is null
- WHERE {{timeFilter}} AND timestamp < toDateTime64('{{cursor}}', 3) {{hostFilter}} {{facetFilters}} {{additionalWhereClause}}
+ WHERE {{timeFilter}} AND CASE WHEN '{{cursor}}' = 'null' OR '{{cursor}}' = '' THEN 1 ELSE timestamp < toDateTime64('{{cursor}}', 3) END {{hostFilter}} {{facetFilters}} {{additionalWhereClause}}
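Alternatively, the guard can live entirely on the client so the SQL template stays clean. A sketch of that option (the shape of the pagination state, including the `exhausted` flag, is an assumption):

```javascript
// Hypothetical client-side guard: never issue a logs-more query without
// a cursor, and stop once the server signalled the end of the data.
function canLoadMore(pagination) {
  return Boolean(
    pagination
    && pagination.cursor != null // no cursor yet => first page not loaded
    && !pagination.exhausted,    // last page returned fewer rows than requested
  );
}
```

With this in place the template's `'{{cursor}}'` placeholder is only ever filled with a real timestamp, so the `toDateTime64('null', 3)` failure mode cannot occur.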
- Add timestamp format validation before SQL interpolation
- Validate pinned column names against allowed pattern
- Fix throttle stale args bug
- Replace no-op with proper continue statement
- Initialize collapse toggle label on page load
- Add aria-hidden to toggle button arrow entities
- Guard against null cursor in canLoadMore()

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: Lars Trieloff <lars@trieloff.net>
…d loaded data

When hovering over the chart at a timestamp older than the last loaded log row, scrollLogsToTimestamp() now calls loadMoreLogs() and retries (up to 5 times) until the target row is found or no more data is available.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: Lars Trieloff <lars@trieloff.net>
…view

Replace the MAX_SCROLL_LOAD_RETRIES retry loop in scrollLogsToTimestamp() with a visual gap row model. state.logsData is now a mixed array of real data rows and synthetic gap rows representing unloaded time ranges.

Key changes:
- New sql/queries/logs-at.sql template for loading data at any timestamp
- Gap rows render as clickable placeholders showing the time range
- loadGap() splits gaps into sub-gaps as data is loaded
- Chart scrubber jumps into gaps trigger automatic loading
- Bottom gap auto-loads on scroll (infinite scroll preserved)
- Mid-range gaps load only on explicit click or chart scrubber jump

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: Lars Trieloff <lars@trieloff.net>
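The gap-splitting step described in this commit can be sketched roughly as follows. This is an illustrative model only: the `{ type: 'gap', newest, oldest }` shape and the `splitGap` name are assumptions, and DESC-sorted `YYYY-MM-DD HH:MM:SS.mmm` timestamp strings are compared lexicographically:

```javascript
// Hypothetical sketch of splitting a gap row when data loads inside it.
// `gap` covers (oldest, newest]; `loadedRows` are the fetched rows, sorted
// by timestamp DESC. The gap is replaced by the rows plus narrower
// sub-gaps for whatever remains unloaded on either side.
function splitGap(gap, loadedRows) {
  if (loadedRows.length === 0) return [gap]; // nothing loaded, keep the gap
  const newestLoaded = loadedRows[0].timestamp;
  const oldestLoaded = loadedRows[loadedRows.length - 1].timestamp;
  const result = [];
  if (gap.newest > newestLoaded) {
    // Unloaded slice newer than the fetched rows.
    result.push({ type: 'gap', newest: gap.newest, oldest: newestLoaded });
  }
  result.push(...loadedRows);
  if (oldestLoaded > gap.oldest) {
    // Unloaded slice older than the fetched rows.
    result.push({ type: 'gap', newest: oldestLoaded, oldest: gap.oldest });
  }
  return result;
}
```

Repeated loads therefore shrink gaps monotonically until the surrounding ranges are fully materialized.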
…iew-improvements

# Conflicts:
#	web-test-runner.config.mjs
Usability issues after spot-checking
- Fix ghost row styling: consistent height, smaller spinner, better labels
- Add support for gapCount to show 'and X more entries' in gap rows
- Prioritize log requests: cancel facet requests when logs view is active
- Add host validation in fetchFullRow for SQL injection prevention
- Replace O(n) linear scan with binary search in findClosestItem
- Update tests for new gap row label format

Signed-off-by: Lars Trieloff <lars@trieloff.net>
Summary
Improves the logs view across three dimensions:
- Keyset pagination: pages via timestamp < cursor. Eliminates O(n) performance degradation on deep pages and prevents duplicate/missing rows when new data arrives during paging.
- Column selection: replaces SELECT * with an explicit column list built from LOG_COLUMN_ORDER. The detail modal fetches the full row on demand via a new log-detail.sql query, keeping payloads small while preserving all fields when inspecting individual entries.

Closes #84, closes #87, closes #66
Testing Done
- Unit tests (npm test)
- Lint (npm run lint)
- PaginationState cursor tracking (js/pagination.test.js)
- New column tests (js/columns.test.js)

Checklist
- Tests pass (npm test)
- Lint passes (npm run lint)