
Commit 468ff96

Merge branch '2025-10-01-doc-13854-add-read-from-standby' of github.com:cockroachdb/docs into 2025-10-01-doc-13854-add-read-from-standby
2 parents 1e90a6f + bd04a1e commit 468ff96

23 files changed: +191 −40 lines

src/current/_data/redirects.yml

Lines changed: 4 additions & 0 deletions
@@ -521,6 +521,10 @@
   sources: ['admin-ui-storage-dashboard.md']
   versions: ['v20.2', 'v21.1']
+- destination: ui-top-ranges-page.md
+  sources: ['ui-hot-ranges-page.md']
+  versions: ['v25.4']
 - destination: use-a-local-file-server-for-bulk-operations.md
   sources: ['create-a-file-server.md']
   versions: ['v20.2', 'v21.1']

src/current/_data/releases.yml

Lines changed: 28 additions & 1 deletion
@@ -9533,4 +9533,31 @@
   docker_arm_experimental: false
   docker_arm_limited_access: false
   source: true
-  previous_release: v25.4.0-alpha.1
+  previous_release: v25.4.0-alpha.1
+
+- release_name: v25.4.0-beta.1
+  major_version: v25.4
+  release_date: '2025-10-01'
+  release_type: Testing
+  go_version: go1.23.12
+  sha: 79755b7c45a9fb7faf07098b721f2ec696cbdcd1
+  has_sql_only: true
+  has_sha256sum: true
+  mac:
+    mac_arm: true
+    mac_arm_experimental: true
+    mac_arm_limited_access: false
+  windows: true
+  linux:
+    linux_arm: true
+    linux_arm_experimental: false
+    linux_arm_limited_access: false
+    linux_intel_fips: true
+    linux_arm_fips: false
+  docker:
+    docker_image: cockroachdb/cockroach-unstable
+    docker_arm: true
+    docker_arm_experimental: false
+    docker_arm_limited_access: false
+  source: true
+  previous_release: v25.4.0-alpha.2

src/current/_includes/releases/v25.3/v25.3.2.md

Lines changed: 4 additions & 0 deletions
@@ -4,6 +4,10 @@ Release Date: September 22, 2025

 {% include releases/new-release-downloads-docker-image.md release=include.release %}

+<h3 id="v25-3-2-general-changes">General changes</h3>
+
+- Introduced support for running CockroachDB on s390x. Production binaries are delivered via IBM Passport Advantage.
+
 <h3 id="v25-3-2-sql-language-changes">SQL language changes</h3>

 - Added a new session variable, `disable_optimizer_rules`, which allows users to provide a comma-separated list of optimizer rules to disable during query optimization. This allows users to avoid rules that are known to create a suboptimal query plan for specific queries. [#152350][#152350]
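As a quick sketch of the `disable_optimizer_rules` session variable described above: it takes a comma-separated list of optimizer rule names. The rule names below are illustrative placeholders, not names confirmed by this release note.

```sql
-- Disable two optimizer rules for the current session only.
-- 'RuleNameA,RuleNameB' are placeholder names; use actual rule names
-- from the optimizer's rule set for your query.
SET disable_optimizer_rules = 'RuleNameA,RuleNameB';

-- Restore the default (no rules disabled).
RESET disable_optimizer_rules;
```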
Lines changed: 56 additions & 0 deletions
@@ -0,0 +1,56 @@
+## v25.4.0-beta.1
+
+Release Date: October 1, 2025
+
+{% include releases/new-release-downloads-docker-image.md release=include.release %}
+
+<h3 id="v25-4-0-beta-1-sql-language-changes">SQL language changes</h3>
+
+- The logical cluster now uses an external connection and automatically updates its configuration when that connection changes. [#149261][#149261]
+- Added `num_txn_retries` and `num_txn_auto_retries` columns to the `crdb_internal.{cluster,node}_queries` virtual tables and to the output of `SHOW QUERIES`. When not `NULL`, these columns contain the same information as the `num_retries` and `num_auto_retries` columns of the `crdb_internal.{cluster,node}_transactions` virtual tables for the transaction in which the active query is executing. [#149503][#149503]
+- Tables with vector indexes are no longer taken offline while a vector index builds. [#151074][#151074]
+- Introduced the unimplemented `SHOW INSPECT ERRORS` statement. [#151674][#151674]
+- Added a built-in function, `crdb_internal.request_transaction_bundle`, that allows users to request a transaction diagnostics bundle for a specified transaction fingerprint ID. [#153608][#153608]
+- Implemented the `pg_get_function_arg_default` built-in function. This also causes the `parameter_default` column of `information_schema.parameters` to be populated correctly. [#153625][#153625]
+
+<h3 id="v25-4-0-beta-1-operational-changes">Operational changes</h3>
+
+- Removed the `bulkio.backup.deprecated_full_backup_with_subdir.enabled` cluster setting; backups now fail if it is set to `true`. [#153628][#153628]
+- Raised the storage engine's block cache size to 256 MiB. Note that production systems should always configure this setting explicitly. [#153739][#153739]
+- Deprecated the bespoke restore and import event logs. Deployments that rely on those logs should use the status change event log, which now includes the SQL user that owns the job. [#153889][#153889]
+- The `incremental_location` option is deprecated and will be removed in a future release. It was added so that customers could define different TTL policies for incremental versus full backups; this remains possible because incremental backups are stored by default in a directory distinct from full backups (`{collection_root}/incrementals`). [#153890][#153890]
+
+<h3 id="v25-4-0-beta-1-db-console-changes">DB Console changes</h3>
+
+- In the DB Console, the **Active Executions** table on the Statements and Transactions pages now includes a new **Isolation Level** column. The Sessions page also includes a new **Default Isolation Level** column. [#153617][#153617]
+
+<h3 id="v25-4-0-beta-1-bug-fixes">Bug fixes</h3>
+
+- Fixed a bug where a CockroachDB node could crash when executing `DO` statements that contain user-defined types (possibly non-existent) in a non-default configuration. [#151849][#151849]
+- Fixed a deadlock in `DROP COLUMN CASCADE` operations when dropping columns referenced by `STORED` computed columns. [#153683][#153683]
+- Fixed a bug where `ALTER POLICY` incorrectly dropped dependency tracking for functions, sequences, or types in policy expressions. [#153787][#153787]
+- Fixed a bug where the pgwire `RowDescription` message was not shown for `EXECUTE` statements that were themselves prepared using the pgwire `Parse` command. [#153905][#153905]
+- Fixed a runtime error that could occur if a new secondary index had a name collision with a primary index. [#153986][#153986]
+
+<h3 id="v25-4-0-beta-1-miscellaneous">Miscellaneous</h3>
+
+- Fixed a bug where the presence of duplicate temporary tables in a backup caused the restore to fail with a `restoring table desc and namespace entries: table already exists` error. Informs: #153722 [#153724][#153724]
+
+[#149261]: https://github.com/cockroachdb/cockroach/pull/149261
+[#149503]: https://github.com/cockroachdb/cockroach/pull/149503
+[#151074]: https://github.com/cockroachdb/cockroach/pull/151074
+[#151674]: https://github.com/cockroachdb/cockroach/pull/151674
+[#151849]: https://github.com/cockroachdb/cockroach/pull/151849
+[#153608]: https://github.com/cockroachdb/cockroach/pull/153608
+[#153617]: https://github.com/cockroachdb/cockroach/pull/153617
+[#153625]: https://github.com/cockroachdb/cockroach/pull/153625
+[#153628]: https://github.com/cockroachdb/cockroach/pull/153628
+[#153683]: https://github.com/cockroachdb/cockroach/pull/153683
+[#153724]: https://github.com/cockroachdb/cockroach/pull/153724
+[#153739]: https://github.com/cockroachdb/cockroach/pull/153739
+[#153787]: https://github.com/cockroachdb/cockroach/pull/153787
+[#153889]: https://github.com/cockroachdb/cockroach/pull/153889
+[#153890]: https://github.com/cockroachdb/cockroach/pull/153890
+[#153905]: https://github.com/cockroachdb/cockroach/pull/153905
+[#153986]: https://github.com/cockroachdb/cockroach/pull/153986
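As a sketch of the new `SHOW QUERIES` retry columns from [#149503][#149503] above: the table and column names come from the release note, while the query shape and filter are illustrative.

```sql
-- Inspect transaction retry counts for currently executing queries.
-- num_txn_retries / num_txn_auto_retries are NULL when the
-- information is unavailable for a query.
SELECT query_id, query, num_txn_retries, num_txn_auto_retries
FROM crdb_internal.cluster_queries
WHERE num_txn_retries IS NOT NULL;
```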

src/current/_includes/v24.1/essential-alerts.md

Lines changed: 2 additions & 2 deletions
@@ -318,9 +318,9 @@ Send an alert when the number of ranges with replication below the replication f

 - Refer to [Replication issues]({% link {{ page.version.version }}/cluster-setup-troubleshooting.md %}#replication-issues).

-### Requests stuck in raft
+### Requests stuck in Raft

-Send an alert when requests are taking a very long time in replication. An (evaluated) request has to pass through the replication layer, notably the quota pool and raft. If it fails to do so within a highly permissive duration, the gauge is incremented (and decremented again once the request is either applied or returns an error). A nonzero value indicates range or replica unavailability, and should be investigated.
+Send an alert when requests are taking a very long time in replication. An (evaluated) request has to pass through the replication layer, notably the quota pool and raft. If it fails to do so within a highly permissive duration, the gauge is incremented (and decremented again once the request is either applied or returns an error). A nonzero value indicates range or replica unavailability, and should be investigated. This can also be a symptom of a [leader-leaseholder split]({% link {{ page.version.version }}/architecture/replication-layer.md %}#leader-leaseholder-splits).

 **Metric**
 <br>`requests.slow.raft`

src/current/_includes/v24.3/essential-alerts.md

Lines changed: 2 additions & 2 deletions
@@ -318,9 +318,9 @@ Send an alert when the number of ranges with replication below the replication f

 - Refer to [Replication issues]({% link {{ page.version.version }}/cluster-setup-troubleshooting.md %}#replication-issues).

-### Requests stuck in raft
+### Requests stuck in Raft

-Send an alert when requests are taking a very long time in replication. An (evaluated) request has to pass through the replication layer, notably the quota pool and raft. If it fails to do so within a highly permissive duration, the gauge is incremented (and decremented again once the request is either applied or returns an error). A nonzero value indicates range or replica unavailability, and should be investigated.
+Send an alert when requests are taking a very long time in replication. An (evaluated) request has to pass through the replication layer, notably the quota pool and raft. If it fails to do so within a highly permissive duration, the gauge is incremented (and decremented again once the request is either applied or returns an error). A nonzero value indicates range or replica unavailability, and should be investigated. This can also be a symptom of a [leader-leaseholder split]({% link {{ page.version.version }}/architecture/replication-layer.md %}#leader-leaseholder-splits).

 **Metric**
 <br>`requests.slow.raft`

src/current/_includes/v25.4/sidebar-data/self-hosted-deployments.json

Lines changed: 2 additions & 2 deletions
@@ -639,9 +639,9 @@
       ]
     },
     {
-      "title": "Hot Ranges Page",
+      "title": "Top Ranges Page",
       "urls": [
-        "/${VERSION}/ui-hot-ranges-page.html"
+        "/${VERSION}/ui-top-ranges-page.html"
       ]
     },
     {

src/current/v24.1/architecture/replication-layer.md

Lines changed: 9 additions & 0 deletions
@@ -146,6 +146,15 @@ A table's meta and system ranges (detailed in the [distribution layer]({% link {

 However, unlike table data, system ranges cannot use epoch-based leases because that would create a circular dependency: system ranges are already being used to implement epoch-based leases for table data. Therefore, system ranges use expiration-based leases instead. Expiration-based leases expire at a particular timestamp (typically after a few seconds). However, as long as a node continues proposing Raft commands, it continues to extend the expiration of its leases. If it doesn't, the next node containing a replica of the range that tries to read from or write to the range will become the leaseholder.

+#### Leader-leaseholder splits
+
+[Epoch-based leases](#epoch-based-leases-table-data) are vulnerable to _leader-leaseholder splits_. These can occur when a leaseholder's Raft log has fallen behind other replicas in its group and it cannot acquire Raft leadership. Coupled with a [network partition]({% link {{ page.version.version }}/cluster-setup-troubleshooting.md %}#network-partition), this split can cause permanent unavailability of the range if (1) the stale leaseholder continues heartbeating the [liveness range](#epoch-based-leases-table-data) to hold its lease but (2) cannot reach the leader to propose writes.
+
+Symptoms of leader-leaseholder splits include a [stalled Raft log]({% link {{ page.version.version }}/monitoring-and-alerting.md %}#requests-stuck-in-raft) on the leaseholder and [increased disk usage]({% link {{ page.version.version }}/cluster-setup-troubleshooting.md %}#disks-filling-up) on follower replicas buffering pending Raft entries. Remediations include:
+
+- Restarting the affected nodes.
+- Fixing the network partition (or slow networking) between nodes.
+
 #### How leases are transferred from a dead node

 When the cluster needs to access a range on a leaseholder node that is dead, that range's lease must be transferred to a healthy node. This process is as follows:

src/current/v24.1/cluster-setup-troubleshooting.md

Lines changed: 2 additions & 0 deletions
@@ -387,6 +387,8 @@ Like any database system, if you run out of disk space the system will no longer
 - [Why is disk usage increasing despite lack of writes?]({% link {{ page.version.version }}/operational-faqs.md %}#why-is-disk-usage-increasing-despite-lack-of-writes)
 - [Can I reduce or disable the storage of timeseries data?]({% link {{ page.version.version }}/operational-faqs.md %}#can-i-reduce-or-disable-the-storage-of-time-series-data)

+In rare cases, disk usage can increase on nodes with [Raft followers]({% link {{ page.version.version }}/architecture/replication-layer.md %}#raft) due to a [leader-leaseholder split]({% link {{ page.version.version }}/architecture/replication-layer.md %}#leader-leaseholder-splits).
+
 ###### Automatic ballast files

 CockroachDB automatically creates an emergency ballast file at [node startup]({% link {{ page.version.version }}/cockroach-start.md %}). This feature is **on** by default. Note that the [`cockroach debug ballast`]({% link {{ page.version.version }}/cockroach-debug-ballast.md %}) command is still available but deprecated.

src/current/v24.1/monitoring-and-alerting.md

Lines changed: 1 addition & 1 deletion
@@ -1205,7 +1205,7 @@ Currently, not all events listed have corresponding alert rule definitions avail

 #### Requests stuck in Raft

-- **Rule:** Send an alert when requests are taking a very long time in replication.
+- **Rule:** Send an alert when requests are taking a very long time in replication. This can be a symptom of a [leader-leaseholder split]({% link {{ page.version.version }}/architecture/replication-layer.md %}#leader-leaseholder-splits).

 - **How to detect:** Calculate this using the `requests_slow_raft` metric in the node's `_status/vars` output.
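For deployments that scrape the `_status/vars` endpoint with Prometheus, the alert rule above could be expressed along these lines. This is a sketch: the 5-minute window and severity label are assumptions, not official CockroachDB guidance.

```yaml
# Illustrative Prometheus alerting rule for the requests_slow_raft gauge.
# The "for: 5m" window and threshold of 0 are assumptions; tune for your
# environment.
groups:
  - name: cockroachdb-replication
    rules:
      - alert: RequestsStuckInRaft
        expr: requests_slow_raft > 0
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Requests stuck in Raft on {{ $labels.instance }}"
          description: >
            A nonzero requests.slow.raft value indicates range or replica
            unavailability, possibly a leader-leaseholder split.
```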
