Commit f8e37e8

craig[bot], BramGruneir, Dedej-Bergin, angles-n-daemons, and normanchenn committed
142826: sql: add latency metrics for historical queries r=BramGruneir a=BramGruneir

This commit adds 4 new latency metrics that separate out AOST queries:

* sql.exec.latency.consistent -- Latency of SQL statement execution of non-historical queries
* sql.exec.latency.historical -- Latency of SQL statement execution of historical queries
* sql.service.latency.consistent -- Latency of SQL request execution of non-historical queries
* sql.service.latency.historical -- Latency of SQL request execution of historical queries

This will help when optimizing workloads that mix historical and non-historical queries.

Fixes: #121507
Part of: https://cockroachlabs.atlassian.net/browse/CRDB-37293
Part of: https://cockroachlabs.atlassian.net/browse/TREQ-152
Part of: https://cockroachlabs.atlassian.net/browse/FEB-22

Release note (ops change): Added 4 new latency metrics for better query optimization: sql.service.latency.historical, sql.service.latency.consistent, sql.exec.latency.historical, and sql.exec.latency.consistent.

143598: roachtest: handle flaky node-postgres tests r=Dedej-Bergin a=Dedej-Bergin

Previously, we only handled the "pool size of 1" flaky test in the node-postgres roachtest. However, there is another flaky test, "events", which fails with "expected 0 to equal 20". This change updates the error handling to accept both known flaky tests while still failing on any other errors.

Fixes: #143047

Release note: none

144091: server: add information filtering to hot ranges endpoint r=angles-n-daemons a=angles-n-daemons

This change introduces two enhancements to the hot ranges endpoint: callers can omit table descriptors from the response, and they can set a per-node limit on the number of ranges returned.

Specifying `StatsOnly` on the hot ranges call makes it skip collecting table descriptors for the response, which means the call is not required to read from the keyspace. `PerNodeLimit` sets a local limit for the call, so that each node-local request includes only that many replicas (distinct from the global limit enforced today).

Fixes: #142595
Epic: CRDB-43150

Release note (general change): Allows API callers to request statistics only and to set a per-node limit for the hot ranges response.

144188: jsonpath: separate `silent` error and `strict` structural checks r=normanchenn a=normanchenn

The `jsonb_path_*` functions include a `silent` argument, and JSONPath queries have a `strict` mode. Previously, the implementation conflated the two, treating `silent=true` as equivalent to forcing lax mode (`strict=false`). However, `strict` mode primarily governs errors arising from structural issues, whereas `silent` controls whether runtime errors encountered during path evaluation cause the query to fail or are suppressed. This commit refactors the evaluation context (`jsonpathCtx`) to handle the two distinctly.

Epic: None

Release note: None

Co-authored-by: BramGruneir <[email protected]>
Co-authored-by: Bergin Dedej <[email protected]>
Co-authored-by: Brian Dillmann <[email protected]>
Co-authored-by: Norman Chen <[email protected]>
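For context on the historical/consistent split in 142826: a "historical" query is one run AS OF SYSTEM TIME, which CockroachDB answers from a snapshot at the requested timestamp. Below is a minimal sketch of issuing one from Go using database/sql with the lib/pq driver; the connection string and the table name `t` are illustrative, not taken from this commit:

package main

import (
	"database/sql"
	"fmt"

	_ "github.com/lib/pq" // assumes the lib/pq Postgres-wire driver
)

func main() {
	// Illustrative insecure local connection; adjust for your cluster.
	db, err := sql.Open("postgres",
		"postgresql://root@localhost:26257/defaultdb?sslmode=disable")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// A historical (AOST) query: it reads a consistent snapshot from the
	// recent past and would be counted under sql.exec.latency.historical
	// and sql.service.latency.historical. The table t is hypothetical.
	var n int
	err = db.QueryRow(
		"SELECT count(*) FROM t AS OF SYSTEM TIME follower_read_timestamp()",
	).Scan(&n)
	if err != nil {
		panic(err)
	}
	fmt.Println("rows as of follower read timestamp:", n)
}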
5 parents: f07cd76 + 5260861 + 55dd064 + 11c3665 + 95ce1de (commit f8e37e8)
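To make the silent/strict distinction in 144188 concrete, here is a hedged sketch assuming Postgres-compatible jsonb_path_query semantics: in strict mode a structural miss (a key that does not exist) is an error, and silent=true suppresses it so the query simply returns no rows.

package main

import (
	"database/sql"
	"fmt"

	_ "github.com/lib/pq" // assumes the lib/pq Postgres-wire driver
)

func main() {
	// Illustrative insecure local connection; adjust for your cluster.
	db, err := sql.Open("postgres",
		"postgresql://root@localhost:26257/defaultdb?sslmode=disable")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	var out sql.NullString

	// strict mode: a structural issue ($.b does not exist) is an error.
	err = db.QueryRow(
		`SELECT jsonb_path_query('{"a": 1}', 'strict $.b')::STRING`,
	).Scan(&out)
	fmt.Println("strict, not silent:", err) // a jsonpath error is expected

	// With silent=true (fourth argument) the error is suppressed and the
	// set-returning function yields no rows, so Scan sees sql.ErrNoRows.
	err = db.QueryRow(
		`SELECT jsonb_path_query('{"a": 1}', 'strict $.b', '{}', true)::STRING`,
	).Scan(&out)
	fmt.Println("strict, silent:", err)
}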

File tree

15 files changed: +526, -73 lines


docs/generated/http/full.md

Lines changed: 2 additions & 0 deletions
@@ -3599,6 +3599,8 @@ of ranges currently considered “hot” by the node(s).
 | page_token | [string](#cockroach.server.serverpb.HotRangesRequest-string) | | | [reserved](#support-status) |
 | tenant_id | [string](#cockroach.server.serverpb.HotRangesRequest-string) | | | [reserved](#support-status) |
 | nodes | [string](#cockroach.server.serverpb.HotRangesRequest-string) | repeated | | [reserved](#support-status) |
+| per_node_limit | [int32](#cockroach.server.serverpb.HotRangesRequest-int32) | | per_node_limit indicates the maximum number of hot ranges to return for each node. If left empty, the default is 128. | [reserved](#support-status) |
+| stats_only | [bool](#cockroach.server.serverpb.HotRangesRequest-bool) | | stats_only indicates whether to return only the stats for the hot ranges, without pulling descriptor information. | [reserved](#support-status) |
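One way to exercise the new fields end to end is to POST a JSON-encoded request to the gateway route. A minimal sketch, assuming HotRangesV2 is served at POST /_status/v2/hotranges and that authentication is handled out of band (real clusters require an admin session or certificate); the struct below only mirrors the documented field names, it is not the generated serverpb type:

package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

// hotRangesRequest mirrors the JSON shape of the two new
// HotRangesRequest fields documented above; field names are
// assumptions based on the generated docs.
type hotRangesRequest struct {
	Nodes        []string `json:"nodes,omitempty"`
	PerNodeLimit int32    `json:"per_node_limit,omitempty"`
	StatsOnly    bool     `json:"stats_only,omitempty"`
}

func main() {
	// Ask every node for at most 10 hot ranges and skip descriptor lookup.
	body, _ := json.Marshal(hotRangesRequest{PerNodeLimit: 10, StatsOnly: true})

	// Hypothetical local node; adjust host, TLS, and authentication
	// for a real deployment.
	resp, err := http.Post("http://localhost:8080/_status/v2/hotranges",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(out), "bytes of response")
}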

docs/generated/metrics/metrics.html

Lines changed: 8 additions & 0 deletions
@@ -1695,8 +1695,12 @@
 <tr><td>APPLICATION</td><td>sql.distsql.service.latency.internal</td><td>Latency of DistSQL request execution (internal queries)</td><td>SQL Internal Statements</td><td>HISTOGRAM</td><td>NANOSECONDS</td><td>AVG</td><td>NONE</td></tr>
 <tr><td>APPLICATION</td><td>sql.distsql.vec.openfds</td><td>Current number of open file descriptors used by vectorized external storage</td><td>Files</td><td>GAUGE</td><td>COUNT</td><td>AVG</td><td>NONE</td></tr>
 <tr><td>APPLICATION</td><td>sql.exec.latency</td><td>Latency of SQL statement execution</td><td>Latency</td><td>HISTOGRAM</td><td>NANOSECONDS</td><td>AVG</td><td>NONE</td></tr>
+<tr><td>APPLICATION</td><td>sql.exec.latency.consistent</td><td>Latency of SQL statement execution of non-historical queries</td><td>Latency</td><td>HISTOGRAM</td><td>NANOSECONDS</td><td>AVG</td><td>NONE</td></tr>
+<tr><td>APPLICATION</td><td>sql.exec.latency.consistent.internal</td><td>Latency of SQL statement execution of non-historical queries (internal queries)</td><td>SQL Internal Statements</td><td>HISTOGRAM</td><td>NANOSECONDS</td><td>AVG</td><td>NONE</td></tr>
 <tr><td>APPLICATION</td><td>sql.exec.latency.detail</td><td>Latency of SQL statement execution, by statement fingerprint</td><td>Latency</td><td>HISTOGRAM</td><td>NANOSECONDS</td><td>AVG</td><td>NONE</td></tr>
 <tr><td>APPLICATION</td><td>sql.exec.latency.detail.internal</td><td>Latency of SQL statement execution, by statement fingerprint (internal queries)</td><td>SQL Internal Statements</td><td>HISTOGRAM</td><td>NANOSECONDS</td><td>AVG</td><td>NONE</td></tr>
+<tr><td>APPLICATION</td><td>sql.exec.latency.historical</td><td>Latency of SQL statement execution of historical queries</td><td>Latency</td><td>HISTOGRAM</td><td>NANOSECONDS</td><td>AVG</td><td>NONE</td></tr>
+<tr><td>APPLICATION</td><td>sql.exec.latency.historical.internal</td><td>Latency of SQL statement execution of historical queries (internal queries)</td><td>SQL Internal Statements</td><td>HISTOGRAM</td><td>NANOSECONDS</td><td>AVG</td><td>NONE</td></tr>
 <tr><td>APPLICATION</td><td>sql.exec.latency.internal</td><td>Latency of SQL statement execution (internal queries)</td><td>SQL Internal Statements</td><td>HISTOGRAM</td><td>NANOSECONDS</td><td>AVG</td><td>NONE</td></tr>
 <tr><td>APPLICATION</td><td>sql.failure.count</td><td>Number of statements resulting in a planning or runtime error</td><td>SQL Statements</td><td>COUNTER</td><td>COUNT</td><td>AVG</td><td>NON_NEGATIVE_DERIVATIVE</td></tr>
 <tr><td>APPLICATION</td><td>sql.failure.count.internal</td><td>Number of statements resulting in a planning or runtime error (internal queries)</td><td>SQL Internal Statements</td><td>COUNTER</td><td>COUNT</td><td>AVG</td><td>NON_NEGATIVE_DERIVATIVE</td></tr>
@@ -1819,6 +1823,10 @@
 <tr><td>APPLICATION</td><td>sql.select.started.count</td><td>Number of SQL SELECT statements started</td><td>SQL Statements</td><td>COUNTER</td><td>COUNT</td><td>AVG</td><td>NON_NEGATIVE_DERIVATIVE</td></tr>
 <tr><td>APPLICATION</td><td>sql.select.started.count.internal</td><td>Number of SQL SELECT statements started (internal queries)</td><td>SQL Internal Statements</td><td>COUNTER</td><td>COUNT</td><td>AVG</td><td>NON_NEGATIVE_DERIVATIVE</td></tr>
 <tr><td>APPLICATION</td><td>sql.service.latency</td><td>Latency of SQL request execution</td><td>Latency</td><td>HISTOGRAM</td><td>NANOSECONDS</td><td>AVG</td><td>NONE</td></tr>
+<tr><td>APPLICATION</td><td>sql.service.latency.consistent</td><td>Latency of SQL request execution of non-historical queries</td><td>Latency</td><td>HISTOGRAM</td><td>NANOSECONDS</td><td>AVG</td><td>NONE</td></tr>
+<tr><td>APPLICATION</td><td>sql.service.latency.consistent.internal</td><td>Latency of SQL request execution of non-historical queries (internal queries)</td><td>SQL Internal Statements</td><td>HISTOGRAM</td><td>NANOSECONDS</td><td>AVG</td><td>NONE</td></tr>
+<tr><td>APPLICATION</td><td>sql.service.latency.historical</td><td>Latency of SQL request execution of historical queries</td><td>Latency</td><td>HISTOGRAM</td><td>NANOSECONDS</td><td>AVG</td><td>NONE</td></tr>
+<tr><td>APPLICATION</td><td>sql.service.latency.historical.internal</td><td>Latency of SQL request execution of historical queries (internal queries)</td><td>SQL Internal Statements</td><td>HISTOGRAM</td><td>NANOSECONDS</td><td>AVG</td><td>NONE</td></tr>
 <tr><td>APPLICATION</td><td>sql.service.latency.internal</td><td>Latency of SQL request execution (internal queries)</td><td>SQL Internal Statements</td><td>HISTOGRAM</td><td>NANOSECONDS</td><td>AVG</td><td>NONE</td></tr>
 <tr><td>APPLICATION</td><td>sql.statement_timeout.count</td><td>Count of statements that failed because they exceeded the statement timeout</td><td>SQL Statements</td><td>COUNTER</td><td>COUNT</td><td>AVG</td><td>NON_NEGATIVE_DERIVATIVE</td></tr>
 <tr><td>APPLICATION</td><td>sql.statement_timeout.count.internal</td><td>Count of statements that failed because they exceeded the statement timeout (internal queries)</td><td>SQL Internal Statements</td><td>COUNTER</td><td>COUNT</td><td>AVG</td><td>NON_NEGATIVE_DERIVATIVE</td></tr>
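Once a node is running, the new histograms can be spot-checked from the Prometheus endpoint. A minimal sketch, assuming the standard /_status/vars endpoint on an insecure local node (secure clusters need https and credentials); note that dots in metric names become underscores in the exposition format:

package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// CockroachDB serves Prometheus-format metrics at /_status/vars;
	// host and scheme here are illustrative.
	resp, err := http.Get("http://localhost:8080/_status/vars")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Print only the new historical/consistent latency series, e.g.
	// sql.exec.latency.historical -> sql_exec_latency_historical.
	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		line := sc.Text()
		if strings.Contains(line, "sql_exec_latency_historical") ||
			strings.Contains(line, "sql_exec_latency_consistent") ||
			strings.Contains(line, "sql_service_latency_historical") ||
			strings.Contains(line, "sql_service_latency_consistent") {
			fmt.Println(line)
		}
	}
}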

pkg/cmd/roachtest/tests/nodejs_postgres.go

Lines changed: 16 additions & 8 deletions
@@ -150,14 +150,22 @@ PGSSLCERT=$HOME/certs/client.%[1]s.crt PGSSLKEY=$HOME/certs/client.%[1]s.key PGS
 	rawResultsStr := result.Stdout + result.Stderr
 	t.L().Printf("Test Results: %s", rawResultsStr)
 	if err != nil {
-		// The one failing test is `pool size of 1` which
-		// fails because it does SELECT count(*) FROM pg_stat_activity which is
-		// not implemented in CRDB.
-		if strings.Contains(rawResultsStr, "1 failing") &&
-			// Failing tests are listed numerically, we only expect one.
-			// The one failing test should be "pool size of 1".
-			strings.Contains(rawResultsStr, "1) pool size of 1") {
-			err = nil
+		// Check for expected test failures. We allow:
+		// 1. One failing test that is "pool size of 1"
+		// 2. One failing test that is "events"
+		// 3. Two failing tests that are exactly "events" and "pool size of 1"
+		if strings.Contains(rawResultsStr, "1 failing") {
+			// Single test failure case
+			if strings.Contains(rawResultsStr, "1) pool size of 1") ||
+				strings.Contains(rawResultsStr, "1) events") {
+				err = nil
+			}
+		} else if strings.Contains(rawResultsStr, "2 failing") {
+			// Two test failures case - must be exactly events and pool size of 1
+			if strings.Contains(rawResultsStr, "1) events") &&
+				strings.Contains(rawResultsStr, "2) pool size of 1") {
+				err = nil
+			}
 		}
 		if err != nil {
 			t.Fatal(err)

pkg/server/serverpb/status.proto

Lines changed: 12 additions & 0 deletions
@@ -1387,6 +1387,18 @@ message HotRangesRequest {
     (gogoproto.customname) = "Nodes",
     (gogoproto.nullable) = true
   ];
+  // per_node_limit indicates the maximum number of hot ranges
+  // to return for each node. If left empty, the default is 128.
+  int32 per_node_limit = 6 [
+    (gogoproto.customname) = "PerNodeLimit",
+    (gogoproto.nullable) = true
+  ];
+  // stats_only indicates whether to return only the stats
+  // for the hot ranges, without pulling descriptor information.
+  bool stats_only = 7 [
+    (gogoproto.customname) = "StatsOnly",
+    (gogoproto.nullable) = true
+  ];
 }

 // HotRangesResponseV2 is a response payload returned by `HotRangesV2` service.

pkg/server/status.go

Lines changed: 22 additions & 6 deletions
@@ -7,6 +7,7 @@ package server

 import (
 	"bytes"
+	"cmp"
 	"context"
 	"crypto/ecdsa"
 	"crypto/rsa"
@@ -19,6 +20,7 @@ import (
 	"os/exec"
 	"reflect"
 	"regexp"
+	"slices"
 	"sort"
 	"strconv"
 	"strings"
@@ -2863,7 +2865,7 @@ func (t *statusServer) HotRangesV2(
 	}

 	ti, _ := t.sqlServer.tenantConnect.TenantInfo()
-	if ti.TenantID.IsSet() {
+	if ti.TenantID.IsSet() && !req.StatsOnly {
 		err = t.addDescriptorsToHotRanges(ctx, resp)
 		if err != nil {
 			return nil, err
@@ -2923,13 +2925,13 @@ func (s *systemStatusServer) HotRangesV2(
 		return nil, errors.New("cannot call 'local' mixed with other nodes")
 	}

-	resp, err := s.localHotRanges(ctx, tenantID, requestedNodeID)
+	resp, err := s.localHotRanges(tenantID, requestedNodeID, int(req.PerNodeLimit))
 	if err != nil {
 		return nil, err
 	}

-	// If operating as the system tenant, add descriptor data to the reposnse.
-	if !tenantID.IsSet() {
+	// If explicitly set as the system tenant, or unset, add descriptor data to the reposnse.
+	if !tenantID.IsSet() && !req.StatsOnly {
 		err = s.addDescriptorsToHotRanges(ctx, resp)
 		if err != nil {
 			return nil, err
@@ -2943,7 +2945,12 @@
 		requestedNodes = append(requestedNodes, requestedNodeID)
 	}

-	remoteRequest := serverpb.HotRangesRequest{Nodes: []string{"local"}, TenantID: req.TenantID}
+	remoteRequest := serverpb.HotRangesRequest{
+		Nodes:        []string{"local"},
+		TenantID:     req.TenantID,
+		PerNodeLimit: req.PerNodeLimit,
+		StatsOnly:    req.StatsOnly,
+	}
 	nodeFn := func(ctx context.Context, status serverpb.StatusClient, nodeID roachpb.NodeID) ([]*serverpb.HotRangesResponseV2_HotRange, error) {
 		nodeResp, err := status.HotRangesV2(ctx, &remoteRequest)
 		if err != nil {
@@ -2990,7 +2997,7 @@
 // Returns a HotRangesResponseV2 containing detailed information about each hot range,
 // or an error if the operation fails.
 func (s *systemStatusServer) localHotRanges(
-	ctx context.Context, tenantID roachpb.TenantID, requestedNodeID roachpb.NodeID,
+	tenantID roachpb.TenantID, requestedNodeID roachpb.NodeID, localLimit int,
 ) (*serverpb.HotRangesResponseV2, error) {
 	// Initialize response object
 	var resp serverpb.HotRangesResponseV2
@@ -3048,6 +3055,15 @@
 		return nil, err
 	}

+	// sort the slices by cpu
+	slices.SortFunc(resp.Ranges, func(a, b *serverpb.HotRangesResponseV2_HotRange) int {
+		return cmp.Compare(a.CPUTimePerSecond, b.CPUTimePerSecond)
+	})
+	// truncate the response if localLimit is set
+	if localLimit != 0 && localLimit < len(resp.Ranges) {
+		resp.Ranges = resp.Ranges[:localLimit]
+	}
+
 	return &resp, nil
 }
pkg/server/status_test.go

Lines changed: 99 additions & 0 deletions
@@ -8,6 +8,7 @@ package server
 import (
 	"context"
 	"encoding/json"
+	"fmt"
 	"os"
 	"slices"
 	"sync"
@@ -820,3 +821,101 @@ func TestHotRangesByNode(t *testing.T) {
 		require.Error(t, err, "cannot call 'local' mixed with other nodes")
 	})
 }
+
+func TestHotRangesStatsOnly(t *testing.T) {
+	defer leaktest.AfterTest(t)()
+	sc := log.ScopeWithoutShowLogs(t)
+	defer sc.Close(t)
+
+	ctx := context.Background()
+
+	s := serverutils.StartServerOnly(t, base.TestServerArgs{
+		DefaultTestTenant: base.TestControlsTenantsExplicitly,
+		StoreSpecs: []base.StoreSpec{
+			base.DefaultTestStoreSpec,
+			base.DefaultTestStoreSpec,
+			base.DefaultTestStoreSpec,
+		},
+		Knobs: base.TestingKnobs{
+			Store: &kvserver.StoreTestingKnobs{
+				ReplicaPlannerKnobs: plan.ReplicaPlannerTestingKnobs{
+					DisableReplicaRebalancing: true,
+				},
+			},
+		},
+	})
+	defer s.Stopper().Stop(ctx)
+
+	for _, test := range []struct {
+		statsOnly      bool
+		hasDescriptors bool
+	}{
+		{true, false},
+		{false, true},
+	} {
+		t.Run(fmt.Sprintf("statsOnly=%t hasDescriptors %t", test.statsOnly, test.hasDescriptors), func(t *testing.T) {
+			testutils.SucceedsSoon(t, func() error {
+				ss := s.StatusServer().(*systemStatusServer)
+				resp, err := ss.HotRangesV2(ctx, &serverpb.HotRangesRequest{NodeID: "local", StatsOnly: test.statsOnly})
+				if err != nil {
+					return err
+				}
+
+				if len(resp.Ranges) == 0 {
+					return errors.New("waiting for hot ranges to be collected")
+				}
+
+				hasDescriptors := false
+				for _, r := range resp.Ranges {
+					allDescriptors := append(r.Databases, append(r.Tables, r.Indexes...)...)
+					if len(allDescriptors) > 0 {
+						hasDescriptors = true
+					}
+				}
+
+				require.Equal(t, test.hasDescriptors, hasDescriptors)
+				return nil
+			})
+		})
+	}
+}
+
+func TestHotRangesNodeLimit(t *testing.T) {
+	defer leaktest.AfterTest(t)()
+	sc := log.ScopeWithoutShowLogs(t)
+	defer sc.Close(t)
+
+	ctx := context.Background()
+
+	s := serverutils.StartServerOnly(t, base.TestServerArgs{
+		DefaultTestTenant: base.TestControlsTenantsExplicitly,
+		StoreSpecs: []base.StoreSpec{
+			base.DefaultTestStoreSpec,
+			base.DefaultTestStoreSpec,
+			base.DefaultTestStoreSpec,
+		},
+		Knobs: base.TestingKnobs{
+			Store: &kvserver.StoreTestingKnobs{
+				ReplicaPlannerKnobs: plan.ReplicaPlannerTestingKnobs{
+					DisableReplicaRebalancing: true,
+				},
+			},
+		},
+	})
+	defer s.Stopper().Stop(ctx)
+
+	testutils.SucceedsSoon(t, func() error {
+		ss := s.StatusServer().(*systemStatusServer)
+		resp, err := ss.HotRangesV2(ctx, &serverpb.HotRangesRequest{NodeID: "local", PerNodeLimit: 5})
+		if err != nil {
+			return err
+		}
+
+		if len(resp.Ranges) == 0 {
+			return errors.New("waiting for hot ranges to be collected")
+		}
+
+		require.Equal(t, 5, len(resp.Ranges))
+		return nil
+	})
+}

pkg/sql/conn_executor.go

Lines changed: 25 additions & 0 deletions
@@ -557,6 +557,7 @@ func makeMetrics(internal bool, sv *settings.Values) Metrics {
 		SQLOptPlanCacheMisses:     metric.NewCounter(getMetricMeta(MetaSQLOptPlanCacheMisses, internal)),
 		StatementFingerprintCount: metric.NewUniqueCounter(getMetricMeta(MetaUniqueStatementCount, internal)),
 		SQLExecLatencyDetail:      sqlExecLatencyDetail,
+
 		// TODO(mrtracy): See HistogramWindowInterval in server/config.go for the 6x factor.
 		DistSQLExecLatency: metric.NewHistogram(metric.HistogramOptions{
 			Mode: metric.HistogramModePreferHdrLatency,
@@ -570,6 +571,18 @@
 			Duration: 6 * metricsSampleInterval,
 			BucketConfig: metric.IOLatencyBuckets,
 		}),
+		SQLExecLatencyConsistent: metric.NewHistogram(metric.HistogramOptions{
+			Mode: metric.HistogramModePreferHdrLatency,
+			Metadata: getMetricMeta(MetaSQLExecLatencyConsistent, internal),
+			Duration: 6 * metricsSampleInterval,
+			BucketConfig: metric.IOLatencyBuckets,
+		}),
+		SQLExecLatencyHistorical: metric.NewHistogram(metric.HistogramOptions{
+			Mode: metric.HistogramModePreferHdrLatency,
+			Metadata: getMetricMeta(MetaSQLExecLatencyHistorical, internal),
+			Duration: 6 * metricsSampleInterval,
+			BucketConfig: metric.IOLatencyBuckets,
+		}),
 		DistSQLServiceLatency: metric.NewHistogram(metric.HistogramOptions{
 			Mode: metric.HistogramModePreferHdrLatency,
 			Metadata: getMetricMeta(MetaDistSQLServiceLatency, internal),
@@ -582,6 +595,18 @@
 			Duration: 6 * metricsSampleInterval,
 			BucketConfig: metric.IOLatencyBuckets,
 		}),
+		SQLServiceLatencyConsistent: metric.NewHistogram(metric.HistogramOptions{
+			Mode: metric.HistogramModePreferHdrLatency,
+			Metadata: getMetricMeta(MetaSQLServiceLatencyConsistent, internal),
+			Duration: 6 * metricsSampleInterval,
+			BucketConfig: metric.IOLatencyBuckets,
+		}),
+		SQLServiceLatencyHistorical: metric.NewHistogram(metric.HistogramOptions{
+			Mode: metric.HistogramModePreferHdrLatency,
+			Metadata: getMetricMeta(MetaSQLServiceLatencyHistorical, internal),
+			Duration: 6 * metricsSampleInterval,
+			BucketConfig: metric.IOLatencyBuckets,
+		}),
 		SQLTxnLatency: metric.NewHistogram(metric.HistogramOptions{
 			Mode: metric.HistogramModePreferHdrLatency,
 			Metadata: getMetricMeta(MetaSQLTxnLatency, internal),
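This diff only registers the new histograms; the recording site is not shown in the commit. As a rough, hypothetical sketch of the routing the metric names imply (types simplified, not the actual conn_executor code path, and it assumes the combined histogram keeps recording every statement), each statement would feed the combined series plus exactly one of the new ones, keyed on whether it ran AS OF SYSTEM TIME:

package main

import (
	"fmt"
	"time"
)

// latencyRecorder stands in for the subset of sql.Metrics used here; the
// field names echo the new histograms, but the real types are CockroachDB
// metric histograms, not slices.
type latencyRecorder struct {
	execLatencyNanos           []int64 // sql.exec.latency
	execLatencyHistoricalNanos []int64 // sql.exec.latency.historical
	execLatencyConsistentNanos []int64 // sql.exec.latency.consistent
}

// record illustrates the implied split: the combined series sees every
// statement, and exactly one of historical/consistent sees it too.
func (r *latencyRecorder) record(isHistorical bool, elapsed time.Duration) {
	ns := elapsed.Nanoseconds()
	r.execLatencyNanos = append(r.execLatencyNanos, ns)
	if isHistorical {
		r.execLatencyHistoricalNanos = append(r.execLatencyHistoricalNanos, ns)
	} else {
		r.execLatencyConsistentNanos = append(r.execLatencyConsistentNanos, ns)
	}
}

func main() {
	var r latencyRecorder
	r.record(true, 4*time.Millisecond)  // e.g. an AOST follower read
	r.record(false, 9*time.Millisecond) // a regular consistent read
	fmt.Println(len(r.execLatencyHistoricalNanos), len(r.execLatencyConsistentNanos))
}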

pkg/sql/exec_util.go

Lines changed: 24 additions & 0 deletions
@@ -767,6 +767,18 @@ var (
 		Measurement: "Latency",
 		Unit:        metric.Unit_NANOSECONDS,
 	}
+	MetaSQLExecLatencyConsistent = metric.Metadata{
+		Name:        "sql.exec.latency.consistent",
+		Help:        "Latency of SQL statement execution of non-historical queries",
+		Measurement: "Latency",
+		Unit:        metric.Unit_NANOSECONDS,
+	}
+	MetaSQLExecLatencyHistorical = metric.Metadata{
+		Name:        "sql.exec.latency.historical",
+		Help:        "Latency of SQL statement execution of historical queries",
+		Measurement: "Latency",
+		Unit:        metric.Unit_NANOSECONDS,
+	}
 	MetaSQLExecLatencyDetail = metric.Metadata{
 		Name:        "sql.exec.latency.detail",
 		Help:        "Latency of SQL statement execution, by statement fingerprint",
@@ -780,6 +792,18 @@ var (
 		Measurement: "Latency",
 		Unit:        metric.Unit_NANOSECONDS,
 	}
+	MetaSQLServiceLatencyConsistent = metric.Metadata{
+		Name:        "sql.service.latency.consistent",
+		Help:        "Latency of SQL request execution of non-historical queries",
+		Measurement: "Latency",
+		Unit:        metric.Unit_NANOSECONDS,
+	}
+	MetaSQLServiceLatencyHistorical = metric.Metadata{
+		Name:        "sql.service.latency.historical",
+		Help:        "Latency of SQL request execution of historical queries",
+		Measurement: "Latency",
+		Unit:        metric.Unit_NANOSECONDS,
+	}
 	MetaSQLOptPlanCacheHits = metric.Metadata{
 		Name:        "sql.optimizer.plan_cache.hits",
 		Help:        "Number of non-prepared statements for which a cached plan was used",
