
Commit 072f468

Committed by craig[bot]; co-authored by yuzefovich, mgartner, kev-cao, and dhartunian.
150866: sql: harden logging logic in an error case r=yuzefovich a=yuzefovich

We just saw a sentry report where we hit a nil pointer with `planner.curPlan.instrumentationHelper` being nil when trying to log a statement. This happened after we hit an error in `handleAOST`, so we short-circuited the "dispatch to execution engine" part, meaning that the plan was left uninitialized. Previously, in eac1971, we attempted to handle such scenarios by tracking a boolean indicating whether the dispatch had occurred, but in this case we marked the boolean "true" (meaning it had occurred), yet we short-circuited due to an error. This commit refactors how we ensure that the plan is initialized by simply checking whether the corresponding fields are set. In turn, this exposed the fact that we previously didn't unset `planner.curPlan` when resetting the planner, which is now fixed.

I spent some time trying to come up with a repro but didn't succeed, so there is no regression test.

Fixes: #150717.

Release note: None

151366: sql: improve stmt bundles a bit r=yuzefovich a=yuzefovich

This commit addresses two minor issues around stmt bundles:
- Previously, we would always include the value of the `direct_columnar_scans_enabled` session variable even when it wasn't changed from the default. This was a bug where we overrode the value of the `skip` boolean based on whether the crdb_test build tag was set.
- It also hardens the `debug sb recreate` command to ignore `cluster.preserve_downgrade_option` if it's set - it has no effect on the execution, yet it can break recreation on a new enough binary.

Epic: None

Release note: None

151429: allkeywords: replace switch with map in GetKeywordID r=yuzefovich a=yuzefovich

A long time ago, in 74f630c, we switched the implementation of keyword ID lookup in the lexer from a map to a large generated switch. At that time the switch was more performant, but with the new swiss table implementation of hash maps in Go I was curious to see whether the map approach is now faster, and it turns out to be so, so this commit switches back to the map lookup.

```
name                   old time/op    new time/op    delta
Parse/simple-8           12.4µs ± 0%    12.1µs ± 0%  -2.05%  (p=0.000 n=8+9)
Parse/string-8           18.2µs ± 0%    18.0µs ± 0%  -1.24%  (p=0.000 n=10+9)
Parse/tpcc-delivery-8    25.0µs ± 0%    24.5µs ± 0%  -1.73%  (p=0.000 n=10+10)
Parse/account-8          32.9µs ± 0%    32.2µs ± 0%  -2.14%  (p=0.000 n=10+10)

name                   old alloc/op   new alloc/op   delta
Parse/simple-8           3.56kB ± 0%    3.56kB ± 0%    ~     (all equal)
Parse/string-8           3.83kB ± 0%    3.83kB ± 0%    ~     (all equal)
Parse/tpcc-delivery-8    6.14kB ± 0%    6.14kB ± 0%    ~     (all equal)
Parse/account-8          11.2kB ± 0%    11.2kB ± 0%    ~     (all equal)

name                   old allocs/op  new allocs/op  delta
Parse/simple-8             22.0 ± 0%      22.0 ± 0%    ~     (all equal)
Parse/string-8             26.0 ± 0%      26.0 ± 0%    ~     (all equal)
Parse/tpcc-delivery-8      41.0 ± 0%      41.0 ± 0%    ~     (all equal)
Parse/account-8            75.0 ± 0%      75.0 ± 0%    ~     (all equal)
```

Epic: None

Release note: None

151504: opt: fix incorrect filter elimination in join reordering r=mgartner a=mgartner

A bug in join reordering which incorrectly eliminated filters has been fixed. The join reordering logic could sometimes eliminate filters of the form `a=a` when constructing new inner joins. It incorrectly assumed these filters to be redundant because a column is always equivalent to itself. However, `a=a` is a null-rejecting filter, so it can only be omitted if `a` is guaranteed to be non-null by something else, like another filter or a `NOT NULL` constraint.

There is no release note because, as far as we know, this bug is not possible to hit in production. The only reproduction we have requires optimization rules to be disabled, which can only happen in tests.

Fixes #146507

Release note: None

151578: rowexec: fix JoinReader benchmarks r=yuzefovich a=yuzefovich

We recently added an assertion that a lookup join with the "equality columns are key" property never fetches more than one lookup row for each input row. A couple of existing benchmarks violated this property, so this commit fixes the issue. Namely, previously we set the property to indicate that we want cross-range parallelism, but now we have a separate boolean on the processor spec for that, which makes the fix simple.

Fixes: #151296.
Fixes: #151299.

Release note: None

151608: backup: refactor ValidateEndTimeAndTruncate r=jeffswenson a=kev-cao

This patch refactors `ValidateEndTimeAndTruncate` to zip together the slices of URIs, manifests, and locality information into a single slice. This allows for easier manipulation of the slices instead of treating them as separate identities.

Epic: None

Release note: None

151667: sqlstats: fix insights benchmark bugs r=kyle-a-wong a=dhartunian

- The transactions slice length was incorrectly initialized.
- Transactions needed to set a UUID in order to be observed by the `observeTransaction` function and actually trigger the code being benchmarked.
- Indexing into the `transactions` slice in the benchmark incorrectly used `idx` instead of `i`, grabbing empty objects.

Resolves: #151300

Epic: None

Release note: None

Co-authored-by: Yahor Yuzefovich <[email protected]>
Co-authored-by: Marcus Gartner <[email protected]>
Co-authored-by: Kevin Cao <[email protected]>
Co-authored-by: David Hartunian <[email protected]>
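The switch-to-map change in 151429 can be sketched as follows. This is a minimal illustration, not the generated CockroachDB code: the keyword IDs here are invented, the real table is much larger, and details such as case normalization in the actual `GetKeywordID` may differ.

```go
package main

import (
	"fmt"
	"strings"
)

// keywordIDs maps lower-cased SQL keywords to token IDs. A single map
// lookup replaces a large generated switch statement; with Go's swiss
// table map implementation this is now the faster option.
var keywordIDs = map[string]int32{
	"select": 1,
	"from":   2,
	"where":  3,
}

const unknownKeywordID int32 = -1

// GetKeywordID returns the token ID for a keyword, or unknownKeywordID
// if the input is not a keyword.
func GetKeywordID(kw string) int32 {
	if id, ok := keywordIDs[strings.ToLower(kw)]; ok {
		return id
	}
	return unknownKeywordID
}

func main() {
	fmt.Println(GetKeywordID("SELECT")) // 1
	fmt.Println(GetKeywordID("foo"))    // -1
}
```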
8 parents: 1522360 + f3afe9f + 51a6a4c + 5bdc026 + 125c97f + f69d669 + 195026f + d9fbcc3

File tree

14 files changed

+231
-110
lines changed


cpu.out (11.6 KB) - Binary file not shown.

pkg/backup/backupinfo/manifest_handling.go

Lines changed: 107 additions & 63 deletions
```diff
@@ -875,11 +875,69 @@ func GetLocalityInfo(
 	return info, nil
 }
 
+type BackupTreeEntry struct {
+	// The URI to the backup manifest.
+	Uri string
+	// The contents of the backup Manifest.
+	Manifest backuppb.BackupManifest
+	// The locality information for the backup.
+	LocalityInfo jobspb.RestoreDetails_BackupLocalityInfo
+}
+
+// zipBackupTreeEntries zips the default URIs, main backup manifests, and
+// locality information into a slice of backupChainEntry structs. It assumes
+// that the passed in slices are of the same length and that each entry at the
+// same index in each slice corresponds to the same backup.
+func zipBackupTreeEntries(
+	defaultURIs []string,
+	mainBackupManifests []backuppb.BackupManifest,
+	localityInfo []jobspb.RestoreDetails_BackupLocalityInfo,
+) ([]BackupTreeEntry, error) {
+	if len(defaultURIs) != len(mainBackupManifests) || len(defaultURIs) != len(localityInfo) {
+		return nil, errors.AssertionFailedf(
+			"length mismatch: defaultURIs %d, mainBackupManifests %d, localityInfo %d",
+			len(defaultURIs), len(mainBackupManifests), len(localityInfo),
+		)
+	}
+
+	entries := make([]BackupTreeEntry, len(mainBackupManifests))
+	for i := range mainBackupManifests {
+		entries[i] = BackupTreeEntry{
+			Uri:          defaultURIs[i],
+			Manifest:     mainBackupManifests[i],
+			LocalityInfo: localityInfo[i],
+		}
+	}
+
+	return entries, nil
+}
+
+// unzipBackupTreeEntries unzips a slice of backupChainEntry structs into
+// slices of their constituent parts: default URIs, main backup manifests, and
+// locality info and returns the individual slices.
+func unzipBackupTreeEntries(
+	entries []BackupTreeEntry,
+) ([]string, []backuppb.BackupManifest, []jobspb.RestoreDetails_BackupLocalityInfo) {
+	defaultURIs := make([]string, len(entries))
+	mainBackupManifests := make([]backuppb.BackupManifest, len(entries))
+	localityInfo := make([]jobspb.RestoreDetails_BackupLocalityInfo, len(entries))
+	for i := range entries {
+		defaultURIs[i] = entries[i].Uri
+		mainBackupManifests[i] = entries[i].Manifest
+		localityInfo[i] = entries[i].LocalityInfo
+	}
+	return defaultURIs, mainBackupManifests, localityInfo
+}
+
 // ValidateEndTimeAndTruncate checks that the requested target time, if
 // specified, is valid for the list of incremental backups resolved, truncating
 // the results to the backup that contains the target time.
 // The method also performs additional sanity checks to ensure the backups cover
 // the requested time.
+//
+// TODO (kev-cao): Refactor this function to accept/return the BackupTreeEntry
+// type and update surrounding caller logic to use it. This will allow us to
+// refactor out the zipping and unzipping logic.
 func ValidateEndTimeAndTruncate(
 	defaultURIs []string,
 	mainBackupManifests []backuppb.BackupManifest,
@@ -888,71 +946,70 @@ func ValidateEndTimeAndTruncate(
 	includeSkipped bool,
 	includeCompacted bool,
 ) ([]string, []backuppb.BackupManifest, []jobspb.RestoreDetails_BackupLocalityInfo, error) {
+	backupEntries, err := zipBackupTreeEntries(defaultURIs, mainBackupManifests, localityInfo)
+	if err != nil {
+		return nil, nil, nil, err
+	}
+
 	if !includeCompacted {
-		defaultURIs, mainBackupManifests, localityInfo = skipCompactedBackups(
-			defaultURIs, mainBackupManifests, localityInfo,
-		)
+		backupEntries = skipCompactedBackups(backupEntries)
 	}
 
 	if endTime.IsEmpty() {
 		if includeSkipped {
-			return defaultURIs, mainBackupManifests, localityInfo, nil
+			uris, manifests, locality := unzipBackupTreeEntries(backupEntries)
+			return uris, manifests, locality, nil
 		}
-		uris, manifests, locality, err := ElideSkippedLayers(defaultURIs, mainBackupManifests, localityInfo)
-		if err != nil {
-			return nil, nil, nil, err
-		}
-		if err := validateContinuity(manifests); err != nil {
+		backupEntries = elideSkippedLayers(backupEntries)
+		if err := validateContinuity(backupEntries); err != nil {
 			return nil, nil, nil, err
 		}
+		uris, manifests, locality := unzipBackupTreeEntries(backupEntries)
 		return uris, manifests, locality, nil
 	}
-	for i, b := range mainBackupManifests {
+	for i, b := range backupEntries {
 		// Find the backup that covers the requested time.
-		if !(b.StartTime.Less(endTime) && endTime.LessEq(b.EndTime)) {
+		if !(b.Manifest.StartTime.Less(endTime) && endTime.LessEq(b.Manifest.EndTime)) {
 			continue
 		}
 
 		// Ensure that the backup actually has revision history.
-		if !endTime.Equal(b.EndTime) {
-			if b.MVCCFilter != backuppb.MVCCFilter_All {
+		if !endTime.Equal(b.Manifest.EndTime) {
+			if b.Manifest.MVCCFilter != backuppb.MVCCFilter_All {
 				const errPrefix = "invalid RESTORE timestamp: restoring to arbitrary time requires that BACKUP for requested time be created with 'revision_history' option."
 				if i == 0 {
 					return nil, nil, nil, errors.Errorf(
 						errPrefix+" nearest backup time is %s",
-						timeutil.Unix(0, b.EndTime.WallTime).UTC(),
+						timeutil.Unix(0, b.Manifest.EndTime.WallTime).UTC(),
 					)
 				}
 				return nil, nil, nil, errors.Errorf(
 					errPrefix+" nearest BACKUP times are %s or %s",
 					timeutil.Unix(0, mainBackupManifests[i-1].EndTime.WallTime).UTC(),
-					timeutil.Unix(0, b.EndTime.WallTime).UTC(),
+					timeutil.Unix(0, b.Manifest.EndTime.WallTime).UTC(),
 				)
 			}
 			// Ensure that the revision history actually covers the requested time -
 			// while the BACKUP's start and end might contain the requested time for
 			// example if start time is 0 (full backup), the revision history was
 			// only captured since the GC window. Note that the RevisionStartTime is
 			// the latest for ranges backed up.
-			if endTime.LessEq(b.RevisionStartTime) {
+			if endTime.LessEq(b.Manifest.RevisionStartTime) {
 				return nil, nil, nil, errors.Errorf(
 					"invalid RESTORE timestamp: BACKUP for requested time only has revision history"+
-						" from %v", timeutil.Unix(0, b.RevisionStartTime.WallTime).UTC(),
+						" from %v", timeutil.Unix(0, b.Manifest.RevisionStartTime.WallTime).UTC(),
 				)
 			}
 		}
 		if includeSkipped {
-			return defaultURIs[:i+1], mainBackupManifests[:i+1], localityInfo[:i+1], nil
-		}
-		uris, manifests, locality, err := ElideSkippedLayers(
-			defaultURIs[:i+1], mainBackupManifests[:i+1], localityInfo[:i+1],
-		)
-		if err != nil {
-			return nil, nil, nil, err
+			uris, manifests, locality := unzipBackupTreeEntries(backupEntries[:i+1])
+			return uris, manifests, locality, nil
 		}
-		if err := validateContinuity(manifests); err != nil {
+		backupEntries = elideSkippedLayers(backupEntries[:i+1])
+		if err := validateContinuity(backupEntries); err != nil {
 			return nil, nil, nil, err
 		}
+		uris, manifests, locality := unzipBackupTreeEntries(backupEntries)
 		return uris, manifests, locality, nil
 	}
 
@@ -962,68 +1019,58 @@ func ValidateEndTimeAndTruncate(
 }
 
 // skipCompactedBackups removes any compacted backups from the list of
-// backups and returns the updated list of URIs, manifests, and locality info.
+// backups and returns the updated list backup entries.
 //
 // NOTE: This function modifies the underlying memory of the slices passed in.
-func skipCompactedBackups(
-	defaultURIs []string,
-	manifests []backuppb.BackupManifest,
-	localityInfo []jobspb.RestoreDetails_BackupLocalityInfo,
-) ([]string, []backuppb.BackupManifest, []jobspb.RestoreDetails_BackupLocalityInfo) {
-	for i := len(manifests) - 1; i >= 0; i-- {
-		if manifests[i].IsCompacted {
-			defaultURIs = slices.Delete(defaultURIs, i, i+1)
-			manifests = slices.Delete(manifests, i, i+1)
-			localityInfo = slices.Delete(localityInfo, i, i+1)
+func skipCompactedBackups(backupEntries []BackupTreeEntry) []BackupTreeEntry {
+	for i := len(backupEntries) - 1; i >= 0; i-- {
+		if backupEntries[i].Manifest.IsCompacted {
+			backupEntries = slices.Delete(backupEntries, i, i+1)
 		}
 	}
-	return defaultURIs, manifests, localityInfo
+	return backupEntries
 }
 
 // validateContinuity checks that the backups are continuous.
-func validateContinuity(manifests []backuppb.BackupManifest) error {
-	if len(manifests) == 0 {
+func validateContinuity(backupEntries []BackupTreeEntry) error {
+	if len(backupEntries) == 0 {
 		return errors.AssertionFailedf("an empty chain of backups cannot cover an end time")
 	}
-	for i := range len(manifests) - 1 {
-		if !manifests[i].EndTime.Equal(manifests[i+1].StartTime) {
+	for i := range len(backupEntries) - 1 {
+		if !backupEntries[i].Manifest.EndTime.Equal(backupEntries[i+1].Manifest.StartTime) {
 			return errors.AssertionFailedf(
 				"backups are not continuous: %dth backup ends at %+v, %dth backup starts at %+v",
-				i, manifests[i].EndTime,
-				i+1, manifests[i+1].StartTime,
+				i, backupEntries[i].Manifest.EndTime,
+				i+1, backupEntries[i+1].Manifest.StartTime,
			)
 		}
 	}
 	return nil
 }
 
-// ElideSkippedLayers removes backups that are skipped in the backup chain and
+// elideSkippedLayers removes backups that are skipped in the backup chain and
 // ensures only backups that will be used in the restore are returned.
 //
 // Note: This assumes that the provided backups are sorted in increasing order
 // by end time, and then sorted in increasing order by start time to break ties.
-func ElideSkippedLayers(
-	uris []string, backups []backuppb.BackupManifest, loc []jobspb.RestoreDetails_BackupLocalityInfo,
-) ([]string, []backuppb.BackupManifest, []jobspb.RestoreDetails_BackupLocalityInfo, error) {
-	uris, backups, loc = elideDuplicateEndTimes(uris, backups, loc)
-	i := len(backups) - 1
+func elideSkippedLayers(backupEntries []BackupTreeEntry) []BackupTreeEntry {
+	backupEntries = elideDuplicateEndTimes(backupEntries)
+	i := len(backupEntries) - 1
 	for i > 0 {
 		// Find j such that backups[j] is parent of backups[i].
 		j := i - 1
-		for j >= 0 && !backups[i].StartTime.Equal(backups[j].EndTime) {
+		for j >= 0 && !backupEntries[i].Manifest.StartTime.Equal(backupEntries[j].Manifest.EndTime) {
 			j--
 		}
 		// If there are backups between i and j, remove them.
 		// If j is less than 0, then no parent was found so nothing to skip.
 		if j != i-1 && j >= 0 {
-			uris = slices.Delete(uris, j+1, i)
-			backups = slices.Delete(backups, j+1, i)
-			loc = slices.Delete(loc, j+1, i)
+			backupEntries = slices.Delete(backupEntries, j+1, i)
 		}
 		// Move up to check the chain from j now.
 		i = j
 	}
-	return uris, backups, loc, nil
+	return backupEntries
 }
 
 // elideDuplicateEndTimes ensures that backups in a list of backups are
@@ -1034,24 +1081,21 @@ func ElideSkippedLayers(
 // by end time, and then sorted in increasing order by start time to break ties.
 // This is the case for backups being returned by storage clients due to us
 // encoding backup paths in a way that ensures this order.
-func elideDuplicateEndTimes(
-	uris []string, backups []backuppb.BackupManifest, loc []jobspb.RestoreDetails_BackupLocalityInfo,
-) ([]string, []backuppb.BackupManifest, []jobspb.RestoreDetails_BackupLocalityInfo) {
-	for i := range len(backups) - 1 {
+func elideDuplicateEndTimes(backupEntries []BackupTreeEntry) []BackupTreeEntry {
+	for i := range len(backupEntries) - 1 {
 		j := i + 1
 		// Find j such that backups[j] no longer shares the same end time as
 		// backups[i].
-		for j < len(backups) && backups[i].EndTime.Equal(backups[j].EndTime) {
+		for j < len(backupEntries) &&
+			backupEntries[i].Manifest.EndTime.Equal(backupEntries[j].Manifest.EndTime) {
 			j++
 		}
 		// If there exists backups between i and j, remove them.
 		if j > i+1 {
-			uris = slices.Delete(uris, i+1, j)
-			backups = slices.Delete(backups, i+1, j)
-			loc = slices.Delete(loc, i+1, j)
+			backupEntries = slices.Delete(backupEntries, i+1, j)
 		}
 	}
-	return uris, backups, loc
+	return backupEntries
 }
 
 // GetBackupIndexAtTime returns the index of the latest backup in
```
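The zip/unzip pattern in the diff above can be sketched in isolation. This is a simplified stand-in: `entry` plays the role of `BackupTreeEntry`, with two fields instead of the real URI/manifest/locality triple, so that truncating or deleting touches one slice rather than three parallel ones.

```go
package main

import "fmt"

// entry mirrors the shape of BackupTreeEntry with simplified field
// types: one logical backup per element instead of parallel slices.
type entry struct {
	URI     string
	EndTime int64
}

// zip combines parallel slices into one slice of entries. Like
// zipBackupTreeEntries, it refuses mismatched lengths.
func zip(uris []string, endTimes []int64) ([]entry, error) {
	if len(uris) != len(endTimes) {
		return nil, fmt.Errorf("length mismatch: %d vs %d", len(uris), len(endTimes))
	}
	entries := make([]entry, len(uris))
	for i := range uris {
		entries[i] = entry{URI: uris[i], EndTime: endTimes[i]}
	}
	return entries, nil
}

// unzip splits entries back into their constituent slices.
func unzip(entries []entry) ([]string, []int64) {
	uris := make([]string, len(entries))
	endTimes := make([]int64, len(entries))
	for i, e := range entries {
		uris[i] = e.URI
		endTimes[i] = e.EndTime
	}
	return uris, endTimes
}

func main() {
	entries, err := zip([]string{"a", "b"}, []int64{1, 2})
	if err != nil {
		panic(err)
	}
	// Truncating one element touches a single slice instead of three.
	entries = entries[:1]
	uris, endTimes := unzip(entries)
	fmt.Println(uris, endTimes) // [a] [1]
}
```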

pkg/cli/statement_bundle.go

Lines changed: 6 additions & 0 deletions
```diff
@@ -179,6 +179,12 @@ func runBundleRecreate(cmd *cobra.Command, args []string) (resErr error) {
 	initStmts := strings.Split(strings.Join(lines, "\n"), "SET CLUSTER SETTING")
 	for i := 1; i < len(initStmts); i++ {
 		initStmts[i] = "SET CLUSTER SETTING " + initStmts[i]
+		if strings.Contains(initStmts[i], "cluster.preserve_downgrade_option") {
+			// This cluster setting can prevent the bundle from being
+			// recreated on a new enough binary, so we'll skip it.
+			initStmts = append(initStmts[:i], initStmts[i+1:]...)
+			i--
+		}
 	}
 	// All stmts before the first SET CLUSTER SETTING are SET stmts. We need
 	// to handle 'SET database = ' stmt separately if found - the target
```
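The delete-and-step-back pattern added above (remove element `i` in place, then decrement `i` so the element shifted into that position is still examined) can be sketched on its own. The function and variable names here are illustrative, not from the CockroachDB code; only the setting name is taken from the diff.

```go
package main

import (
	"fmt"
	"strings"
)

// dropMatching removes, in place, every statement that mentions the
// given substring. After deleting index i, it decrements i so the
// element shifted into position i is not skipped on the next pass.
func dropMatching(stmts []string, substr string) []string {
	for i := 0; i < len(stmts); i++ {
		if strings.Contains(stmts[i], substr) {
			stmts = append(stmts[:i], stmts[i+1:]...)
			i--
		}
	}
	return stmts
}

func main() {
	stmts := []string{
		"SET CLUSTER SETTING a = '1'",
		"SET CLUSTER SETTING cluster.preserve_downgrade_option = '24.1'",
		"SET CLUSTER SETTING b = '2'",
	}
	stmts = dropMatching(stmts, "cluster.preserve_downgrade_option")
	fmt.Println(len(stmts)) // 2
}
```

Without the `i--`, two matching statements in a row would leave the second one behind, because deletion shifts it into the slot the loop has already passed.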

pkg/sql/conn_executor_exec.go

Lines changed: 10 additions & 20 deletions
```diff
@@ -694,11 +694,6 @@ func (ex *connExecutor) execStmtInOpenState(
 		_ = p.FormatAstAsRedactableString(stmt.AST, p.extendedEvalCtx.Annotations)
 	}
 
-	// This flag informs logging decisions.
-	// Some statements are not dispatched to the execution engine and need
-	// some special plan initialization for logging.
-	dispatchToExecEngine := false
-
 	var logErr error
 	defer func() {
 		// Do not log if this is an eventTxnCommittedDueToDDL event. In that case,
@@ -708,10 +703,12 @@ func (ex *connExecutor) execStmtInOpenState(
 			return
 		}
 
-		// If we did not dispatch to the execution engine, we need to initialize
-		// the plan here.
-		if !dispatchToExecEngine {
+		if p.curPlan.stmt == nil || p.curPlan.instrumentation == nil {
+			// We short-circuited before we could initialize some fields that
+			// are needed for logging, so do that here.
 			p.curPlan.init(&p.stmt, &p.instrumentation)
+		}
+		if logErr == nil {
 			if p, ok := retPayload.(payloadWithError); ok {
 				logErr = p.errorCause()
 			}
@@ -932,8 +929,6 @@ func (ex *connExecutor) execStmtInOpenState(
 		return nil, nil, nil
 	}
 
-	dispatchToExecEngine = true
-
 	// Check if we need to auto-commit the transaction due to DDL.
 	if ev, payload := ex.maybeAutoCommitBeforeDDL(ctx, ast); ev != nil {
 		return ev, payload, nil
@@ -1707,11 +1702,6 @@ func (ex *connExecutor) execStmtInOpenStateWithPausablePortal(
 		_ = p.FormatAstAsRedactableString(vars.stmt.AST, p.extendedEvalCtx.Annotations)
 	}
 
-	// This flag informs logging decisions.
-	// Some statements are not dispatched to the execution engine and need
-	// some special plan initialization for logging.
-	dispatchToExecEngine := false
-
 	defer processCleanupFunc(func() {
 		// Do not log if this is an eventTxnCommittedDueToDDL event. In that case,
 		// the transaction is committed, and the current statement is executed
@@ -1720,10 +1710,12 @@ func (ex *connExecutor) execStmtInOpenStateWithPausablePortal(
 			return
 		}
 
-		// If we did not dispatch to the execution engine, we need to initialize
-		// the plan here.
-		if !dispatchToExecEngine {
+		if p.curPlan.stmt == nil || p.curPlan.instrumentation == nil {
+			// We short-circuited before we could initialize some fields that
+			// are needed for logging, so do that here.
 			p.curPlan.init(&p.stmt, &p.instrumentation)
+		}
+		if vars.logErr == nil {
 			if p, ok := retPayload.(payloadWithError); ok {
 				vars.logErr = p.errorCause()
 			}
@@ -1977,8 +1969,6 @@ func (ex *connExecutor) execStmtInOpenStateWithPausablePortal(
 		return nil, nil, nil
 	}
 
-	dispatchToExecEngine = true
-
 	// Check if we need to auto-commit the transaction due to DDL.
 	if ev, payload := ex.maybeAutoCommitBeforeDDL(ctx, vars.ast); ev != nil {
 		return ev, payload, nil
```
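The fix above replaces a manually maintained "did we dispatch" flag with a direct check of the fields that logging actually needs. A minimal sketch of that guard pattern, with a hypothetical `plan` type standing in for `planner.curPlan` (field names and types are simplified, not the real ones):

```go
package main

import "fmt"

type stmt struct{ sql string }
type instrumentation struct{}

// plan stands in for planner.curPlan: logging needs both fields set.
type plan struct {
	stmt            *stmt
	instrumentation *instrumentation
}

// initIfNeeded initializes the plan only when a short-circuited code
// path (e.g. an early error before dispatch) left its fields unset.
// Checking the fields directly cannot go stale the way a separately
// maintained boolean flag can.
func (p *plan) initIfNeeded(s *stmt, instr *instrumentation) {
	if p.stmt == nil || p.instrumentation == nil {
		p.stmt = s
		p.instrumentation = instr
	}
}

func main() {
	var p plan
	// Simulate an error before dispatch: nothing initialized the plan.
	p.initIfNeeded(&stmt{sql: "SELECT 1"}, &instrumentation{})
	fmt.Println(p.stmt != nil && p.instrumentation != nil) // true
}
```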

pkg/sql/explain_bundle.go

Lines changed: 3 additions & 1 deletion
```diff
@@ -1132,7 +1132,9 @@ func (c *stmtEnvCollector) PrintSessionSettings(w io.Writer, sv *settings.Values
 		case "direct_columnar_scans_enabled":
 			// In test builds we might randomize some setting defaults, so
 			// we need to ignore them to make the tests deterministic.
-			skip = buildutil.CrdbTestBuild
+			if buildutil.CrdbTestBuild {
+				skip = true
+			}
 		case "role":
 			// If a role is set, we comment it out in env.sql. Otherwise, running
 			// 'debug sb recreate' will fail with a non-existent user/role error.
```
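The one-line bug fixed here is an unconditional assignment clobbering a value computed earlier: `skip = buildutil.CrdbTestBuild` sets `skip` to false on non-test builds even when earlier logic had already decided to skip the setting. A sketch of the difference, with a hypothetical `changedFromDefault` check standing in for the surrounding logic:

```go
package main

import "fmt"

const crdbTestBuild = false // stand-in for buildutil.CrdbTestBuild

// shouldSkip decides whether a session setting is omitted from the
// bundle. Earlier logic (modeled by changedFromDefault) skips settings
// still at their default value.
func shouldSkip(changedFromDefault bool) bool {
	skip := !changedFromDefault
	// Buggy version: `skip = crdbTestBuild` overwrote the value above,
	// re-including unchanged settings on non-test builds.
	// Fixed version: only ever raise skip, never lower it.
	if crdbTestBuild {
		skip = true
	}
	return skip
}

func main() {
	fmt.Println(shouldSkip(false)) // true: unchanged setting stays skipped
	fmt.Println(shouldSkip(true))  // false: changed setting is printed
}
```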
