155540: kvserver: add split trigger support to TestReplicaLifecycleDataDriven r=pav-kv a=arulajmani
This patch adds a few directives to construct and evaluate split triggers.
We then use these directives to demonstrate that leases are correctly copied over from the LHS to the RHS. The interesting case is leader leases: if the LHS has a leader lease, the RHS gets an expiration-based lease. For the other two lease types, the RHS's lease type stays the same.
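As a quick sketch of that rule only (the `LeaseType` values and `rhsLeaseType` helper below are hypothetical stand-ins, not the actual kvserver types):

```go
// Hypothetical stand-in for the lease types involved in the split.
type LeaseType int

const (
	LeaseExpiration LeaseType = iota
	LeaseEpoch
	LeaseLeader
)

// rhsLeaseType returns the lease type the RHS starts with when the
// split trigger copies the LHS lease: a leader lease on the LHS turns
// into an expiration-based lease on the RHS, while the other two
// lease types carry over unchanged.
func rhsLeaseType(lhs LeaseType) LeaseType {
	if lhs == LeaseLeader {
		return LeaseExpiration
	}
	return lhs
}
```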
Epic: none
Release note: None
155984: isession: add savepoint support r=jeffswenson a=jeffswenson
This adds savepoint support to the internal session. LDR needs savepoints for performance reasons: they allow it to handle individual LWW losers without aborting the entire replication batch.
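Roughly, the pattern looks like the sketch below; the `session` interface and its method names are hypothetical stand-ins for the internal session API, not its actual signatures:

```go
package ldr

import (
	"context"
	"fmt"
)

// session is a hypothetical stand-in for the internal session API.
type session interface {
	Savepoint(ctx context.Context, name string) error
	RollbackToSavepoint(ctx context.Context, name string) error
	ReleaseSavepoint(ctx context.Context, name string) error
	Exec(ctx context.Context, stmt string) error
}

// applyBatch applies every row of a replication batch in one
// transaction, wrapping each row in its own savepoint so that an
// individual LWW loser only rolls back its own write instead of
// aborting the whole batch.
func applyBatch(ctx context.Context, s session, rowStmts []string) error {
	for i, stmt := range rowStmts {
		sp := fmt.Sprintf("lww_row_%d", i)
		if err := s.Savepoint(ctx, sp); err != nil {
			return err
		}
		if err := s.Exec(ctx, stmt); err != nil {
			// Treat the failure as this row losing LWW: discard just
			// this row's write and move on to the next one.
			if rbErr := s.RollbackToSavepoint(ctx, sp); rbErr != nil {
				return rbErr
			}
			continue
		}
		if err := s.ReleaseSavepoint(ctx, sp); err != nil {
			return err
		}
	}
	return nil
}
```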
Release note: none
Epic: CRDB-48647
156031: kvstorage: check RangeTombstone on loading replicas r=arulajmani a=pav-kv
The last commit adds load-time checking of the invariant that a `RaftReplicaID` cannot survive if there is a `RangeTombstone` with a higher `NextReplicaID`. For context, the relevant invariants are:
- A replica exists iff `RaftReplicaID` exists
- If a replica exists then `ReplicaID` >= `RangeTombstone.NextReplicaID`
NB: it follows that there can be a stale/no-op `RangeTombstone` with a non-zero `NextReplicaID`, and with the current state of the code this is possible: we don't always remove a `RangeTombstone` when it becomes stale, and we don't use the existence of a `RangeTombstone` as proof of a replica's non-existence. The `RaftReplicaID` key is the source of truth to that extent.
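A minimal sketch of the new assertion, with simplified stand-in types rather than the actual kvstorage code:

```go
// loadedRange is a simplified stand-in for the state loaded per range.
type loadedRange struct {
	hasReplicaID  bool  // the RaftReplicaID key exists
	replicaID     int64 // valid only if hasReplicaID
	nextReplicaID int64 // RangeTombstone.NextReplicaID, zero if absent
}

// checkTombstoneInvariant reports whether the loaded state satisfies:
// if a replica exists (RaftReplicaID present), then its ReplicaID is
// >= RangeTombstone.NextReplicaID. A stale RangeTombstone without a
// RaftReplicaID is fine; the RaftReplicaID key is the source of truth
// for the replica's existence.
func checkTombstoneInvariant(r loadedRange) bool {
	if !r.hasReplicaID {
		return true
	}
	return r.replicaID >= r.nextReplicaID
}
```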
A bunch of preparatory commits refactor the `IterateIDPrefixKeys` helper (used at load-time to discover all `RangeID`s on a store) to support visiting more than one type of key at a time. Previously, we were doing two passes over the `RangeID`-local space (and fetching a third kind of key would mean a third pass). Now, the keyspace is scanned once.
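A rough sketch of that single-pass shape, with hypothetical key/state types rather than the real `IterateIDPrefixKeys` signature:

```go
// idKey is a hypothetical decoded RangeID-local key.
type idKey struct {
	rangeID  int64
	kind     string // e.g. "RaftReplicaID" or "RangeTombstone"
	rawValue []byte
}

// rangeIDState collects the keys of interest for one RangeID.
type rangeIDState struct {
	replicaID []byte // nil if the RaftReplicaID key is absent
	tombstone []byte // nil if the RangeTombstone key is absent
}

// scanRangeIDKeys assumes keys are sorted by rangeID, so a single
// sweep over the RangeID-local space is enough to assemble the
// per-range state and hand it to visit, instead of one pass per key
// type.
func scanRangeIDKeys(keys []idKey, visit func(rangeID int64, s rangeIDState) error) error {
	for i := 0; i < len(keys); {
		id := keys[i].rangeID
		var s rangeIDState
		for ; i < len(keys) && keys[i].rangeID == id; i++ {
			switch keys[i].kind {
			case "RaftReplicaID":
				s.replicaID = keys[i].rawValue
			case "RangeTombstone":
				s.tombstone = keys[i].rawValue
			}
		}
		if err := visit(id, s); err != nil {
			return err
		}
	}
	return nil
}
```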
Other benefits of the change:
- Loading `RangeTombstone` might be useful if we want to [migrate](https://cockroachlabs.slack.com/archives/C02KHQMF2US/p1761237444633689) `RaftReplicaID/RangeTombstone` into one key in state machine engine.
- With more integration, we may eliminate one more replica-loading [pass](https://github.com/cockroachdb/cockroach/blob/4183e88d8fdffdd75146d4bdf156d52f819bbb93/pkg/kv/kvserver/store.go#L2329-L2341) in `Store.Start()` and unify the various "loaded replica" structs and assertions. We just need to read a few more keys while visiting each `RangeID`.
- Reducing the total number of passes will become beneficial with separated storage engines: we "win back" the capacity for doing an extra pass in a second engine.
Epic: CRDB-55218
156119: revert "sql: store bundle when TestStreamerTightBudget fails" r=yuzefovich a=yuzefovich
This reverts commit 48f11d2.
The test hasn't failed in about a year and a half, and the captured statement bundle didn't actually give any more insight into why it rarely failed.
Informs: #119675.
Release note: None
156374: sql: sort rulesForRelease versions in descending order r=celiala a=celiala
This PR fixes incorrect ordering in the `rulesForReleases` array:
- Issue detected by [claude-code-pr-review](https://github.com/cockroachdb/cockroach/actions/runs/18817950946)
- I verified via the [original PR](#97213) that the array should be sorted in descending order.
## Background
The `rulesForReleases` array in `pkg/sql/schemachanger/scplan/plan.go` stores schema changer rule registries for different cluster versions. The array is documented to be in descending order (newest version first), and the `GetRulesRegistryForRelease()` function depends on this ordering to correctly match cluster versions with their corresponding rule sets.
## Bug
The array had V25_2 and V25_3 in ascending order instead of descending:
**Before (incorrect):**
```go
var rulesForReleases = []rulesForRelease{
	{activeVersion: clusterversion.Latest, rulesRegistry: current.GetRegistry()},
	{activeVersion: clusterversion.V25_2, rulesRegistry: release_25_2.GetRegistry()},
	{activeVersion: clusterversion.V25_3, rulesRegistry: release_25_3.GetRegistry()},
}
```
**After (correct):**
```go
var rulesForReleases = []rulesForRelease{
	// NB: sort versions in descending order, i.e. newest supported version first.
	{activeVersion: clusterversion.Latest, rulesRegistry: current.GetRegistry()},
	{activeVersion: clusterversion.V25_3, rulesRegistry: release_25_3.GetRegistry()},
	{activeVersion: clusterversion.V25_2, rulesRegistry: release_25_2.GetRegistry()},
}
```
## Impact
`GetRulesRegistryForRelease()` iterates through the array in order and returns the first entry where `activeVersion.IsActive()` is true. With the incorrect ascending order, a cluster running version 25.3 would incorrectly get the 25.2 rules instead of the 25.3 rules.
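A minimal sketch of that lookup shape (simplified stand-in types, not the actual scplan declarations):

```go
// version and registry are simplified stand-ins for the scplan types.
type version int

type registry struct{ name string }

type releaseRules struct {
	activeVersion version
	rulesRegistry *registry
}

// registryForActiveVersion returns the registry of the first entry
// whose version is active on the cluster, so the slice must be sorted
// newest-first. In the buggy ascending order, a 25.3 cluster matched
// the V25_2 entry before ever reaching the V25_3 one.
func registryForActiveVersion(active version, entries []releaseRules) *registry {
	for _, e := range entries {
		if active >= e.activeVersion { // stand-in for activeVersion.IsActive()
			return e.rulesRegistry
		}
	}
	return nil
}
```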
## Changes
- Fixed ordering in `pkg/sql/schemachanger/scplan/plan.go`
- Updated test expectations in `pkg/cli/testdata/declarative-rules/invalid_version`
- Regenerated test output in `pkg/cli/testdata/declarative-rules/deprules`
Epic: None
Release note (bug fix): Fixed the `rulesForReleases` ordering so that cluster versions are correctly matched with their schema changer rule sets.
Co-authored-by: Arul Ajmani <[email protected]>
Co-authored-by: Jeff Swenson <[email protected]>
Co-authored-by: Pavel Kalinnikov <[email protected]>
Co-authored-by: Yahor Yuzefovich <[email protected]>
Co-authored-by: Celia La <[email protected]>