154552: changefeedccl: improve parallel io metrics r=log-head,asg0451 a=KeithCh
**changefeedccl: improve parallel io metrics**
Rename the function parameter used to update the pending rows metric, and
the y-axis label for that metric, to be more accurate.
Release note(ops change): Fix changefeed.parallel_io_pending_rows metric
y-axis label to match the metric's definition.
Fixes: #147625
---
**changefeedccl: add parallel io workers metric**
Add a gauge metric to track the number of workers in ParallelIO.
Release note(ops change): Add metric changefeed.parallel_io_workers to
track the number of workers in ParallelIO.
Resolves: #147625
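A worker-count gauge of this kind is typically bumped when a worker starts and dropped when it exits. The sketch below illustrates the idea with stand-in types; `Gauge` and `runWorkers` are hypothetical names, not CockroachDB's `metric.Gauge` or ParallelIO APIs.

```go
// Illustrative sketch only: Gauge and runWorkers are stand-ins, not
// CockroachDB's actual metric or ParallelIO types.
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// Gauge is a minimal stand-in for a metrics gauge.
type Gauge struct{ v int64 }

func (g *Gauge) Inc()         { atomic.AddInt64(&g.v, 1) }
func (g *Gauge) Dec()         { atomic.AddInt64(&g.v, -1) }
func (g *Gauge) Value() int64 { return atomic.LoadInt64(&g.v) }

// runWorkers starts n workers, incrementing the gauge as each worker
// starts and decrementing it as each exits, then returns the gauge
// value after all workers have shut down.
func runWorkers(n int) int64 {
	var workers Gauge
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		workers.Inc() // worker starting
		go func() {
			defer wg.Done()
			defer workers.Dec() // worker exiting
			// ... the worker would perform parallel IO here ...
		}()
	}
	wg.Wait()
	return workers.Value()
}

func main() {
	fmt.Println("workers after shutdown:", runWorkers(4))
}
```

With this shape, the gauge reads the number of live workers at any sampling instant and returns to zero once the pool drains.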
154651: opt/bench: improve BenchmarkEndToEnd for INSERTs r=yuzefovich a=yuzefovich
In `BenchmarkEndToEnd` we have three bench cases with INSERT statements. Previously, we always used the same placeholder values, which forced us to do a TRUNCATE TABLE after _every_ iteration, and that TRUNCATE was included in the operation time. We recently saw a supposed regression on this benchmark only because the performance of TRUNCATE itself had regressed.
In my initial approach I tried simply stopping and starting the timer around the TRUNCATE, but that made the benchmark extremely long. (Timer operations require a stop-the-world pause, and since the time to perform the TRUNCATE was no longer included in the benchmark time, every single iteration seemed very short, so we'd do thousands of iterations with the default `benchtime=1s`, and truncating the table on each one stretched the benchmark to about a minute.)
To work around this issue, I refactored the three INSERT queries to generate slightly different arguments on each iteration so that we don't get PK duplicates, and then moved the TRUNCATE outside the benchmark loop (and also excluded it from the timer). Now these benchmark cases truly measure what they were supposed to.
Fixes: #154597.
Release note: None
154750: dbconsole: custom metrics update when units change r=dhartunian a=stevendanna
This fixes a long-standing bug in which changing the axis units fails to update the graph unless you also make some other change.
This PR was generated by Claude Code.
I asked it to write a test, and it produced something with enough mocks that I wasn't sure of its value.
I have manually tested this and have confirmed it does result in custom graphs being updated immediately when the axis units are changed.
Epic: none
Release note: None
154771: workflows: change name of provider r=rail a=rickystewart
Part of: DEVINFHD-1916
Co-authored-by: Keith Chow <[email protected]>
Co-authored-by: Yahor Yuzefovich <[email protected]>
Co-authored-by: Steven Danna <[email protected]>
Co-authored-by: Ricky Stewart <[email protected]>