
Commit 0172c59

craig[bot], angles-n-daemons, and kev-cao committed
147089: db-console: add client side hot range filtering by node r=angles-n-daemons a=angles-n-daemons

Loading the hot ranges page is a performance-intensive operation in large clusters, because a cluster fanout is required to gather the information. Despite having a node filter within the component, the filtering has historically been applied after the API has responded with the full cluster's information. This change moves the filtering of the nodes to the back end and cleans up how the filters are organized. It also merges the uses of the NodeRegionSelector so that it can be shared between the Databases and Hot Ranges pages.

Fixes: #143528
Epic: CRDB-43150
Release note (ui change): Moves the hot ranges node filter out of the primary filter container in the hot ranges page, and applies the filtering on the backend.

147439: ui: surface SQL commenter query tags in insights r=angles-n-daemons a=angles-n-daemons

This commit adds support for displaying SQL commenter query tags in the CockroachDB DB Console insights UI, addressing GitHub issue #146664.

Changes:
- Add query_tags to the statement insights API response
- Update TypeScript types to include a queryTags field
- Add a Query Tags column to the statement insights table (hidden by default)
- Display query tags on statement insight detail pages
- Add proper column titles and tooltips for query tags

The query tags were already being stored in the backend execution insights tables by PR #145435. This change surfaces that data in the frontend UI, allowing users to correlate query performance with application context provided via SQL commenter tags.

Fixes #146664
Release note (ui change): The DB Console insights page now displays SQL commenter query tags for statement executions. Query tags provide application context (such as application name, user ID, or feature flags) embedded in SQL comments using the sqlcommenter format. This information can help correlate slow query performance with specific application state. The Query Tags column is available in the statement insights table but hidden by default; it can be enabled via the Columns selector.

🤖 Generated with [Claude Code](https://claude.ai/code)

147447: ui: fix statement activity timepicker for sub-hour ranges r=angles-n-daemons a=angles-n-daemons

This commit fixes a bug where the statement activity time picker wasn't working correctly with time ranges of less than an hour when sql.stats.aggregation.interval is set to values smaller than 1h. The issue was that API calls were using toRoundedDateRange(), which rounds timestamps to hour boundaries, instead of toDateRange(), which uses the exact selected time range. This prevented sub-hour time ranges from working properly even when the aggregation interval supported them.

Changed the following components to use toDateRange():
- StatementsPage
- TransactionsPage
- TransactionDetails
- StatementDetails selectors
- IndexDetails API

Fixes #145430
Epic: None
Release note (bug fix): Fixed the statement activity page time picker to work correctly with time ranges of less than an hour when sql.stats.aggregation.interval is configured to sub-hour values. Previously, selecting a 10-minute window would query for a full hour of data instead of the precise selected range.

147635: roachtest: fail fixture roachtests on failed backups r=msbutler a=kev-cao

Previously, if a backup job failed during the fixture roachtest, the test would continue until the required number of backups completed; a failed backup was simply ignored. This commit teaches the fixture roachtest to detect when a backup job has failed and to fail the test with the corresponding error.

Epic: None
Release note: None

Co-authored-by: Brian Dillmann <[email protected]>
Co-authored-by: Kevin Cao <[email protected]>
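The sqlcommenter format mentioned above serializes tags as a trailing SQL comment of URL-encoded `key='value'` pairs. As an illustration only — this is a hypothetical sketch, not CockroachDB's parser, and the function name is invented — such tags could be extracted like this:

```typescript
// Sketch: extract sqlcommenter-style tags from a statement. sqlcommenter
// appends a comment like /*key='value',key2='value2'*/ where names and
// values are URL-encoded and values are single-quoted.
type QueryTag = { name: string; value: string };

function parseQueryTags(sql: string): QueryTag[] {
  const match = sql.match(/\/\*(.*?)\*\//);
  if (!match) return [];
  return match[1].split(",").map((pair): QueryTag => {
    const idx = pair.indexOf("=");
    const rawName = pair.slice(0, idx).trim();
    // Strip the surrounding single quotes from the value before decoding.
    const rawValue = pair.slice(idx + 1).trim().replace(/^'|'$/g, "");
    return {
      name: decodeURIComponent(rawName),
      value: decodeURIComponent(rawValue),
    };
  });
}

// Example: correlate a slow statement with its application route.
console.log(
  parseQueryTags("SELECT * FROM orders /*application='checkout',route='%2Fcart'*/"),
);
```

A Query Tags cell in the insights table would then just render these name/value pairs alongside the statement's execution statistics.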
5 parents: 810615e + 4ccbba6 + 56829d8 + a444987 + bdee6f0 · commit 0172c59

File tree

34 files changed: +964 −566 lines

pkg/cmd/roachtest/tests/backup_fixtures.go

Lines changed: 57 additions & 17 deletions
```diff
@@ -257,7 +257,7 @@ func (bd *backupDriver) monitorBackups(ctx context.Context) error {
 	const (
 		WaitingFirstFull = iota
 		RunningIncrementals
-		WaitingCompactions
+		WaitingCompletion
 		Done
 	)
 	state := WaitingFirstFull
@@ -267,30 +267,41 @@ func (bd *backupDriver) monitorBackups(ctx context.Context) error {
 		if err != nil {
 			return err
 		}
+		_, backupRunning, backupFailed, err := bd.backupJobStates(sql)
+		if err != nil {
+			return err
+		}
 		switch state {
 		case WaitingFirstFull:
 			var activeScheduleCount int
 			scheduleCountQuery := fmt.Sprintf(
 				`SELECT count(*) FROM [SHOW SCHEDULES] WHERE label='%s' AND schedule_status='ACTIVE'`, scheduleLabel,
 			)
 			sql.QueryRow(bd.t, scheduleCountQuery).Scan(&activeScheduleCount)
-			if activeScheduleCount < 2 {
+			if len(backupFailed) > 0 {
+				return errors.Newf("backup jobs failed while waiting first full: %v", backupFailed)
+			} else if activeScheduleCount < 2 {
 				bd.t.L().Printf(`First full backup still running`)
 			} else {
 				state = RunningIncrementals
 			}
 		case RunningIncrementals:
 			var backupCount int
+			// We track completed backups via SHOW BACKUP as opposed to SHOW JOBS in
+			// the case that a fixture runs for a long enough time that old backup
+			// jobs stop showing up in SHOW JOBS.
 			backupCountQuery := fmt.Sprintf(
 				`SELECT count(DISTINCT end_time) FROM [SHOW BACKUP FROM LATEST IN '%s']`, fixtureURI.String(),
 			)
 			sql.QueryRow(bd.t, backupCountQuery).Scan(&backupCount)
 			bd.t.L().Printf(`%d scheduled backups taken`, backupCount)

-			if bd.sp.fixture.CompactionThreshold > 0 {
+			if len(backupFailed) > 0 {
+				return errors.Newf("backup jobs failed while running incrementals: %v", backupFailed)
+			} else if bd.sp.fixture.CompactionThreshold > 0 {
 				bd.t.L().Printf("%d compaction jobs succeeded, %d running", len(compSuccess), len(compRunning))
 				if len(compFailed) > 0 {
-					return errors.Newf("compaction jobs failed: %v", compFailed)
+					return errors.Newf("compaction jobs failed while running incrementals: %v", compFailed)
 				}
 			}

@@ -299,15 +310,19 @@ func (bd *backupDriver) monitorBackups(ctx context.Context) error {
 				`PAUSE SCHEDULES WITH x AS (SHOW SCHEDULES) SELECT id FROM x WHERE label = '%s'`, scheduleLabel,
 			)
 			sql.Exec(bd.t, pauseSchedulesQuery)
-			if len(compRunning) > 0 {
-				state = WaitingCompactions
+			if len(compRunning) > 0 || len(backupRunning) > 0 {
+				state = WaitingCompletion
 			} else {
 				state = Done
 			}
 		}
-		case WaitingCompactions:
-			if len(compFailed) > 0 {
-				return errors.Newf("compaction jobs failed: %v", compFailed)
+		case WaitingCompletion:
+			if len(backupFailed) > 0 {
+				return errors.Newf("backup jobs failed while waiting completion: %v", backupFailed)
+			} else if len(compFailed) > 0 {
+				return errors.Newf("compaction jobs failed while waiting completion: %v", compFailed)
+			} else if len(backupRunning) > 0 {
+				bd.t.L().Printf("waiting for %d backup jobs to finish", len(backupRunning))
 			} else if len(compRunning) > 0 {
 				bd.t.L().Printf("waiting for %d compaction jobs to finish", len(compRunning))
 			} else {
@@ -332,20 +347,45 @@ func (bd *backupDriver) compactionJobStates(
 	if bd.sp.fixture.CompactionThreshold == 0 {
 		return nil, nil, nil, nil
 	}
-	compactionQuery := `SELECT job_id, status, error FROM [SHOW JOBS] WHERE job_type = 'BACKUP' AND
-		description ILIKE 'COMPACT BACKUPS%'`
-	rows := sql.Query(bd.t, compactionQuery)
+	s, r, f, err := bd.queryJobStates(
+		sql, "job_type = 'BACKUP' AND description ILIKE 'COMPACT BACKUPS%'",
+	)
+	return s, r, f, errors.Wrapf(err, "error querying compaction job states")
+}
+
+// backupJobStates returns the state of the backup jobs, returning
+// a partition of jobs that succeeded, are running, and failed.
+func (bd *backupDriver) backupJobStates(
+	sql *sqlutils.SQLRunner,
+) ([]jobMeta, []jobMeta, []jobMeta, error) {
+	s, r, f, err := bd.queryJobStates(
+		sql, "job_type = 'BACKUP' AND description ILIKE 'BACKUP %'",
+	)
+	return s, r, f, errors.Wrapf(err, "error querying backup job states")
+}
+
+// queryJobStates queries the job table and returns a partition of jobs that
+// succeeded, are running, and failed. The filter is applied to the query to
+// limit the jobs searched. If the filter is empty, all jobs are searched.
+func (bd *backupDriver) queryJobStates(
+	sql *sqlutils.SQLRunner, filter string,
+) ([]jobMeta, []jobMeta, []jobMeta, error) {
+	query := "SELECT job_id, status, error FROM [SHOW JOBS]"
+	if filter != "" {
+		query += fmt.Sprintf(" WHERE %s", filter)
+	}
+	rows := sql.Query(bd.t, query)
 	defer rows.Close()
-	var compJobs []jobMeta
+	var jobMetas []jobMeta
 	for rows.Next() {
 		var job jobMeta
 		if err := rows.Scan(&job.jobID, &job.state, &job.error); err != nil {
-			return nil, nil, nil, errors.Wrapf(err, "error scanning compaction job")
+			return nil, nil, nil, errors.Wrapf(err, "error scanning job")
 		}
-		compJobs = append(compJobs, job)
+		jobMetas = append(jobMetas, job)
 	}
 	var successes, running, failures []jobMeta
-	for _, job := range compJobs {
+	for _, job := range jobMetas {
 		switch job.state {
 		case jobs.StateSucceeded:
 			successes = append(successes, job)
@@ -354,7 +394,7 @@ func (bd *backupDriver) compactionJobStates(
 		case jobs.StateFailed:
 			failures = append(failures, job)
 		default:
-			bd.t.L().Printf(`unexpected compaction job %d in state %s`, job.jobID, job.state)
+			bd.t.L().Printf(`unexpected job %d in state %s`, job.jobID, job.state)
 		}
 	}
 	return successes, running, failures, nil
```
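The queryJobStates helper above partitions jobs into succeeded, running, and failed buckets. A minimal sketch of the same partitioning idea, written here in TypeScript with illustrative type and state names (not the roachtest's actual Go types):

```typescript
// Sketch of the partition performed by queryJobStates: split a job list
// into succeeded, running, and failed buckets (state names illustrative).
type JobMeta = { jobID: number; state: string; error: string };

function partitionJobs(jobs: JobMeta[]): {
  succeeded: JobMeta[];
  running: JobMeta[];
  failed: JobMeta[];
} {
  const succeeded: JobMeta[] = [];
  const running: JobMeta[] = [];
  const failed: JobMeta[] = [];
  for (const job of jobs) {
    switch (job.state) {
      case "succeeded":
        succeeded.push(job);
        break;
      case "running":
        running.push(job);
        break;
      case "failed":
        failed.push(job);
        break;
      default:
        // Like the roachtest, log unexpected states rather than failing.
        console.log(`unexpected job ${job.jobID} in state ${job.state}`);
    }
  }
  return { succeeded, running, failed };
}
```

With this change, any backup job landing in the failed bucket now fails the fixture test immediately instead of being silently ignored.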

pkg/ui/workspaces/cluster-ui/src/api/indexDetailsApi.ts

Lines changed: 2 additions & 2 deletions
```diff
@@ -20,7 +20,7 @@ import {

 import { INTERNAL_APP_NAME_PREFIX } from "../activeExecutions/activeStatementUtils";
 import { AggregateStatistics } from "../statementsTable";
-import { TimeScale, toRoundedDateRange } from "../timeScaleDropdown";
+import { TimeScale, toDateRange } from "../timeScaleDropdown";

 export type TableIndexStatsRequest =
   cockroach.server.serverpb.TableIndexStatsRequest;
@@ -77,7 +77,7 @@ export function StatementsListRequestFromDetails(
   ts: TimeScale,
 ): StatementsUsingIndexRequest {
   if (ts === null) return { table, index, database };
-  const [start, end] = toRoundedDateRange(ts);
+  const [start, end] = toDateRange(ts);
   return { table, index, database, start, end };
 }
```
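The rounding that toRoundedDateRange applies, and that this change avoids, can be illustrated with a hypothetical hour-rounding helper. The function below is a sketch, not the cluster-ui implementation:

```typescript
// Sketch: rounding a selected window outward to hour boundaries, which is
// why a sub-hour selection previously queried a full hour of data.
function roundToHourBoundaries(start: Date, end: Date): [Date, Date] {
  const floor = new Date(start);
  floor.setUTCMinutes(0, 0, 0); // round the start down to the hour
  const ceil = new Date(end);
  if (
    ceil.getUTCMinutes() !== 0 ||
    ceil.getUTCSeconds() !== 0 ||
    ceil.getUTCMilliseconds() !== 0
  ) {
    ceil.setUTCHours(ceil.getUTCHours() + 1, 0, 0, 0); // round the end up
  }
  return [floor, ceil];
}

// A 10-minute selection (10:20-10:30) becomes a full hour (10:00-11:00).
const [rs, re] = roundToHourBoundaries(
  new Date("2025-01-01T10:20:00Z"),
  new Date("2025-01-01T10:30:00Z"),
);
console.log((re.getTime() - rs.getTime()) / 60000); // → 60 minutes
```

Switching the affected pages to the exact (unrounded) range lets a 10-minute window query precisely 10 minutes of data when the aggregation interval supports it.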

pkg/ui/workspaces/cluster-ui/src/api/nodesApi.ts

Lines changed: 4 additions & 1 deletion
```diff
@@ -21,6 +21,7 @@ export const getNodes =
 };

 export type NodeStatus = {
+  id: NodeID;
   region: string;
   stores: StoreID[];
 };
@@ -48,7 +49,9 @@ export const useNodeStatuses = () => {
         .node_id as NodeID;
     });

-    nodeStatusByID[ns.desc.node_id as NodeID] = {
+    const id = ns.desc.node_id as NodeID;
+    nodeStatusByID[id] = {
+      id,
       region: getRegionFromLocality(ns.desc.locality),
       stores: ns.store_statuses?.map(s => s.desc.store_id as StoreID),
     };
```

pkg/ui/workspaces/cluster-ui/src/api/stmtInsightsApi.ts

Lines changed: 4 additions & 1 deletion
```diff
@@ -66,6 +66,7 @@ export type StmtInsightsResponseRow = {
   error_code: string;
   last_error_redactable: string;
   status: StatementStatus;
+  query_tags: Array<{ name: string; value: string }>;
 };

 const stmtColumns = `
@@ -97,7 +98,8 @@ plan_gist,
 cpu_sql_nanos,
 error_code,
 last_error_redactable,
-status
+status,
+query_tags
 `;

 const stmtInsightsOverviewQuery = (req?: StmtInsightsReq): string => {
@@ -241,6 +243,7 @@ export function formatStmtInsights(
       errorCode: row.error_code,
       errorMsg: row.last_error_redactable,
       status: row.status,
+      queryTags: row.query_tags || [],
     } as StmtInsightEvent;
   });
 }
```
Lines changed: 11 additions & 0 deletions
```diff
@@ -0,0 +1,11 @@
+// Copyright 2021 The Cockroach Authors.
+//
+// Use of this software is governed by the CockroachDB Software License
+// included in the /LICENSE file.
+
+export * from "./links/indexStatsLink";
+export * from "./liveDataPercent/liveDataPercent";
+export * from "./nodeSelector/nodeSelector";
+export * from "./tooltip";
+// Seems like there are duplicate exports with this file. We omit it here.
+// export * from "./tooltipMessages";
```
Lines changed: 50 additions & 0 deletions
```diff
@@ -0,0 +1,50 @@
+// Copyright 2025 The Cockroach Authors.
+//
+// Use of this software is governed by the CockroachDB Software License
+// included in the /LICENSE file.
+
+@import "src/core/index.module.scss";
+
+.selector {
+  min-width: 200px;
+}
+
+.regionsSection {
+  max-height: 40vh;
+  overflow-y: scroll;
+}
+
+.option {
+  padding: 2px 8px;
+  cursor: pointer;
+  input {
+    margin-right: 4px;
+  }
+
+  label {
+    cursor: pointer; /* Explicitly set cursor for labels */
+    margin-left: 4px;
+    display: inline-block;
+  }
+}
+
+.regionNodeOption {
+  padding-left: 24px;
+}
+.divider {
+  margin: 4px 0;
+}
+
+.applyBtnContainer {
+  padding: 4px 12px 16px 12px;
+}
+
+.applyBtn {
+  width: 100%;
+
+  div {
+    div {
+      justify-content: center;
+    }
+  }
+}
```
