
Commit b5e2b27

Merge branch 'main' into products.yml-clarification
2 parents: e07a838 + 775e231


42 files changed: +1672 -687 lines

src/current/_data/v25.4/metrics/available-metrics-in-metrics-list.csv

Lines changed: 8 additions & 0 deletions
@@ -483,3 +483,11 @@ rebalancing.range.rebalances
 rebalancing.replicas.cpunanospersecond
 rebalancing.replicas.queriespersecond
 rebalancing.state.imbalanced_overfull_options_exhausted
+sql.routine.delete.count
+sql.routine.delete.started.count
+sql.routine.insert.count
+sql.routine.insert.started.count
+sql.routine.select.count
+sql.routine.select.started.count
+sql.routine.update.count
+sql.routine.update.started.count

src/current/_data/v25.4/metrics/metrics.yaml

Lines changed: 1026 additions & 604 deletions
Large diff not rendered.

src/current/_includes/releases/v25.4/v25.4.0-beta.2.md

Lines changed: 12 additions & 12 deletions
@@ -13,18 +13,18 @@ Release Date: October 10, 2025
 
 - Added the `SHOW INSPECT ERRORS` command. This command can be used to view issues that are identified by running the `INSPECT` command to validate tables and indexes. [#154337][#154337]
 - Added the `sql.catalog.allow_leased_descriptors.enabled` cluster setting, which is false by default. When set to true, queries that access the `pg_catalog` or `information_schema` can use cached leased descriptors to populate the data in those tables, with the tradeoff that some of the data could be stale. [#154491][#154491]
-- We now support index acceleration for a subset of jsonb_path_exists filters. Given the `jsonb_path_exists(json_obj, json_path_expression)`, we only support inverted index for json_path_expression that matches one of the following patterns:
-    - The json_path_expression must NOT be in STRICT mode.
-    - keychain mode: $.[key|wildcard].[key|wildcard]...
-        - For this mode, we will generate a prefix span for the inverted expression.
-    - filter with end value mode, with equality check: $.[key|wildcard]? (@.[key|wildcard].[key|wildcard]... == [string|number|null|boolean])
-        - For this mode, since the end value is fixed, we will generate a single value span.
-    - Specifically, we don't support the following edge case:
-        - $
-        - $[*]
-        - $.a.b.c == 12 or $.a.b.c > 12 or $.a.b.c < 12 (operation expression)
-        - $.a.b ? (@.a > 10) (filter, with inequality check)
-    - Note that the cases we support is to use `jsonb_path_exists` in filters, as in, when they are used in the WHERE clause. [#154631][#154631]
+- CockroachDB now supports index acceleration for certain `jsonb_path_exists` filters used in `WHERE` clauses. Given `jsonb_path_exists(json_obj, json_path_expression)`, an inverted index is supported only when `json_path_expression` matches one of the following patterns:
+    - The `json_path_expression` must **not** be in `strict` mode.
+    - Keychain mode: `$.[key|wildcard].[key|wildcard]...`
+        - In this mode, a prefix span is generated for the inverted expression.
+    - Filter with end value mode (equality check): `$.[key|wildcard]? (@.[key|wildcard].[key|wildcard]... == [string|number|null|boolean])`
+        - In this mode, since the end value is fixed, a single value span is generated.
+    - The following edge cases are **not** supported:
+        - `$`
+        - `$[*]`
+        - `$.a.b.c == 12`, `$.a.b.c > 12`, or `$.a.b.c < 12` (operation expressions)
+        - `$.a.b ? (@.a > 10)` (filter with inequality check)
+    [#154631][#154631]
 - The optimizer can now use table statistics that merge the latest full statistic with all newer partial statistics, including those over arbitrary constraints over a single span. [#154755][#154755]
 
 <h3 id="v25-4-0-beta-2-operational-changes">Operational changes</h3>
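
To make the patterns in the rewritten release note above concrete, here is a minimal SQL sketch. The `docs` table, `payload` column, and index name are hypothetical illustrations, not part of the commit, and exact query plans depend on the CockroachDB version.

```sql
-- Hypothetical schema with an inverted index on a JSONB column.
CREATE TABLE docs (
    id INT PRIMARY KEY,
    payload JSONB,
    INVERTED INDEX payload_idx (payload)
);

-- Keychain mode ($.key.key...): expected to generate a prefix span
-- over the inverted index.
SELECT id FROM docs WHERE jsonb_path_exists(payload, '$.user.name');

-- Filter with a fixed end value (equality check): expected to generate
-- a single value span.
SELECT id FROM docs WHERE jsonb_path_exists(payload, '$.user ? (@.name == "alice")');

-- Per the note, these shapes are NOT accelerated: strict mode, bare $,
-- $[*], operation expressions such as $.a.b.c > 12, and filters with
-- inequality checks such as $.a.b ? (@.a > 10).
```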

src/current/_includes/v25.3/essential-metrics.md

Lines changed: 4 additions & 2 deletions
@@ -1,15 +1,15 @@
 {% assign version = page.version.version | replace: ".", "" %}
 {% comment %}DEBUG: {{ version }}{% endcomment %}
 
-These essential CockroachDB metrics let you monitor your CockroachDB {{ site.data.products.core }} cluster. Use them to build custom dashboards with the following tools:
-
 {% comment %} STEP 1. Assign variables specific to deployment {% endcomment %}
 {% if include.deployment == 'self-hosted' %}
 {% assign metrics_datadog = site.data[version].metrics.datadog-cockroachdb %}
 {% assign datadog_link = "https://docs.datadoghq.com/integrations/cockroachdb/?tab=host#metrics" %}
 {% assign datadog_prefix = "cockroachdb" %}
 {% assign category_order = "HARDWARE,STORAGE,OVERLOAD,NETWORKING,DISTRIBUTED,REPLICATION,SQL,CHANGEFEEDS,TTL,UNSET," %}
 
+These essential CockroachDB metrics let you monitor your CockroachDB {{ site.data.products.core }} cluster. Use them to build custom dashboards with the following tools:
+
 - [Grafana]({% link {{ page.version.version }}/monitor-cockroachdb-with-prometheus.md %}#step-5-visualize-metrics-in-grafana)
 - [Datadog Integration]({% link {{ page.version.version }}/datadog.md %}): The [**Datadog Integration Metric Name**]({{ datadog_link }}) column lists the corresponding Datadog metric which requires the `{{ datadog_prefix }}.` prefix.
 
@@ -20,6 +20,8 @@ These essential CockroachDB metrics let you monitor your CockroachDB {{ site.dat
 {% comment %} Removed NETWORKING category for advanced deployment {% endcomment %}
 {% assign category_order = "HARDWARE,STORAGE,OVERLOAD,DISTRIBUTED,REPLICATION,SQL,CHANGEFEEDS,TTL,UNSET," %}
 
+These essential CockroachDB metrics let you monitor your CockroachDB {{ site.data.products.advanced }} cluster. Use them to build custom dashboards with the following tools:
+
 - [Datadog integration]({% link cockroachcloud/tools-page.md %}#monitor-cockroachdb-cloud-with-datadog) - The [**Datadog Integration Metric Name**]({{ datadog_link }}) column lists the corresponding Datadog metric which requires the `{{ datadog_prefix }}` prefix.
 - [Metrics export]({% link cockroachcloud/export-metrics-advanced.md %})

src/current/_includes/v25.3/known-limitations/vector-limitations.md

Lines changed: 0 additions & 1 deletion
@@ -1,5 +1,4 @@
 - {% include {{ page.version.version }}/sql/vector-batch-inserts.md %}
-- Creating a vector index through a backfill disables mutations ([`INSERT`]({% link {{ page.version.version }}/insert.md %}), [`UPSERT`]({% link {{ page.version.version }}/upsert.md %}), [`UPDATE`]({% link {{ page.version.version }}/update.md %}), [`DELETE`]({% link {{ page.version.version }}/delete.md %})) on the table. [#144443](https://github.com/cockroachdb/cockroach/issues/144443)
 - `IMPORT INTO` is not supported on tables with vector indexes. You can import the vectors first and create the index after import is complete. [#145227](https://github.com/cockroachdb/cockroach/issues/145227)
 - The distance functions `vector_l1_ops`, `bit_hamming_ops`, and `bit_jaccard_ops` are not implemented. [#147839](https://github.com/cockroachdb/cockroach/issues/147839)
 - Index acceleration with filters is only supported if the filters match prefix columns. [#146145](https://github.com/cockroachdb/cockroach/issues/146145)

src/current/_includes/v25.4/essential-metrics.md

Lines changed: 9 additions & 5 deletions
@@ -1,14 +1,14 @@
 {% assign version = page.version.version | replace: ".", "" %}
 {% comment %}DEBUG: {{ version }}{% endcomment %}
 
-These essential CockroachDB metrics let you monitor your CockroachDB {{ site.data.products.core }} cluster. Use them to build custom dashboards with the following tools:
-
 {% comment %} STEP 1. Assign variables specific to deployment {% endcomment %}
 {% if include.deployment == 'self-hosted' %}
 {% assign metrics_datadog = site.data[version].metrics.datadog-cockroachdb %}
 {% assign datadog_link = "https://docs.datadoghq.com/integrations/cockroachdb/?tab=host#metrics" %}
 {% assign datadog_prefix = "cockroachdb" %}
-{% assign category_order = "HARDWARE,STORAGE,OVERLOAD,NETWORKING,DISTRIBUTED,REPLICATION,SQL,CHANGEFEEDS,TTL,UNSET," %}
+{% assign category_order = "HARDWARE,STORAGE,OVERLOAD,NETWORKING,DISTRIBUTED,REPLICATION,SQL,CHANGEFEEDS,TTL,CROSS_CLUSTER_REPLICATION,LOGICAL_DATA_REPLICATION,UNSET," %}
+
+These essential CockroachDB metrics let you monitor your CockroachDB {{ site.data.products.core }} cluster. Use them to build custom dashboards with the following tools:
 
 - [Grafana]({% link {{ page.version.version }}/monitor-cockroachdb-with-prometheus.md %}#step-5-visualize-metrics-in-grafana)
 - [Datadog Integration]({% link {{ page.version.version }}/datadog.md %}): The [**Datadog Integration Metric Name**]({{ datadog_link }}) column lists the corresponding Datadog metric which requires the `{{ datadog_prefix }}.` prefix.
@@ -18,7 +18,9 @@ These essential CockroachDB metrics let you monitor your CockroachDB {{ site.dat
 {% assign datadog_link = "https://docs.datadoghq.com/integrations/cockroach-cloud/#metrics" %}
 {% assign datadog_prefix = "crdb_dedicated" %}
 {% comment %} Removed NETWORKING category for advanced deployment {% endcomment %}
-{% assign category_order = "HARDWARE,STORAGE,OVERLOAD,DISTRIBUTED,REPLICATION,SQL,CHANGEFEEDS,TTL,UNSET," %}
+{% assign category_order = "HARDWARE,STORAGE,OVERLOAD,DISTRIBUTED,REPLICATION,SQL,CHANGEFEEDS,TTL,CROSS_CLUSTER_REPLICATION,LOGICAL_DATA_REPLICATION,UNSET," %}
+
+These essential CockroachDB metrics let you monitor your CockroachDB {{ site.data.products.advanced }} cluster. Use them to build custom dashboards with the following tools:
 
 - [Datadog integration]({% link cockroachcloud/tools-page.md %}#monitor-cockroachdb-cloud-with-datadog) - The [**Datadog Integration Metric Name**]({{ datadog_link }}) column lists the corresponding Datadog metric which requires the `{{ datadog_prefix }}` prefix.
 - [Metrics export]({% link cockroachcloud/export-metrics-advanced.md %})
@@ -56,7 +58,7 @@ The **Usage** column explains why each metric is important to visualize and how
 
 {% comment %} Order categories, NOTE: new categories may break this order, however all relevant categories will be displayed though not in the desired order{% endcomment %}
 {% comment %}DEBUG: category_names_string = {{ category_names_string }}{% endcomment %}
-{% assign category_names_string_ordered = category_names_string | replace: "CHANGEFEEDS,DISTRIBUTED,NETWORKING,SQL,TTL,UNSET,HARDWARE,OVERLOAD,REPLICATION,STORAGE,", category_order %}
+{% assign category_names_string_ordered = category_names_string | replace: "CHANGEFEEDS,CROSS_CLUSTER_REPLICATION,DISTRIBUTED,LOGICAL_DATA_REPLICATION,NETWORKING,SQL,TTL,UNSET,HARDWARE,OVERLOAD,STORAGE,", category_order %}
 {% comment %}DEBUG: category_names_string_ordered = {{ category_names_string_ordered }}{% endcomment %}
 {% assign category_names_array = category_names_string_ordered | split: "," %}
 
@@ -90,6 +92,8 @@
 {% elsif category_name == "REPLICATION" %}{% assign category_display_name = "KV Replication" %}
 {% elsif category_name == "CHANGEFEEDS" %}{% assign category_display_name = "Changefeeds" %}
 {% elsif category_name == "TTL" %}{% assign category_display_name = "Row-level TTL" %}
+{% elsif category_name == "CROSS_CLUSTER_REPLICATION" %}{% assign category_display_name = "Physical Replication" %}
+{% elsif category_name == "LOGICAL_DATA_REPLICATION" %}{% assign category_display_name = "Logical Replication" %}
 {% else %}{% assign category_display_name = category_name %}{% comment %} For example, SQL {% endcomment %}
 {% endif %}

Lines changed: 6 additions & 1 deletion
@@ -1,4 +1,9 @@
+{% if page.name != "known-limitations.md" # New limitations in v25.4 %}
+- When using the `infer_rbr_region_col_using_constraint` option, inserting rows with `DEFAULT` for the region column uses the database's primary region instead of inferring the region from the parent table via foreign-key constraint. [#150783](https://github.com/cockroachdb/cockroach/issues/150783)
+{% endif %}
 - When columns are [indexed]({% link {{ page.version.version }}/indexes.md %}), a subset of data from the indexed columns may appear in [meta ranges]({% link {{ page.version.version }}/architecture/distribution-layer.md %}#meta-ranges) or other system tables. CockroachDB synchronizes these system ranges and system tables across nodes. This synchronization does not respect any multi-region settings applied via either the [multi-region SQL statements]({% link {{ page.version.version }}/multiregion-overview.md %}), or the low-level [zone configs]({% link {{ page.version.version }}/configure-replication-zones.md %}) mechanism.
 - [Zone configs]({% link {{ page.version.version }}/configure-replication-zones.md %}) can be used for data placement but these features were historically built for performance, not for domiciling. The replication system's top priority is to prevent the loss of data and it may override the zone configurations if necessary to ensure data durability. For more information, see [Replication Controls]({% link {{ page.version.version }}/configure-replication-zones.md %}#types-of-constraints).
 - If your [log files]({% link {{ page.version.version }}/logging-overview.md %}) are kept in the region where they were generated, there is some cross-region leakage (like the system tables described previously), but the majority of user data that makes it into the logs is going to be homed in that region. If that's not strong enough, you can use the [log redaction functionality]({% link {{ page.version.version }}/configure-logs.md %}#redact-logs) to strip all raw data from the logs. You can also limit your log retention entirely.
-- If you start a node with a [`--locality`]({% link {{ page.version.version }}/cockroach-start.md %}#locality) flag that says the node is in region _A_, but the node is actually running in some region _B_, data domiciling based on the inferred node placement will not work. A CockroachDB node only knows its locality based on the text supplied to the `--locality` flag; it can not ensure that it is actually running in that physical location.
+- If you start a node with a [`--locality`]({% link {{ page.version.version }}/cockroach-start.md %}#locality) flag that says the node is in region _A_, but the node is actually running in some region _B_, data domiciling based on the inferred node placement will not work. A CockroachDB node only knows its locality based on the text supplied to the `--locality` flag; it can not ensure that it is actually running in that physical location.
+- {% include {{page.version.version}}/known-limitations/secondary-regions-with-regional-by-row-tables.md %}
+- {% include {{ page.version.version }}/known-limitations/enforce-home-region-limitations.md %}
Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
+`LIKE` queries with an `ESCAPE` clause cannot use index acceleration, which can result in significantly slower performance compared to standard `LIKE` queries. [#30192](https://github.com/cockroachdb/cockroach/issues/30192)
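
As a hedged illustration of the new limitation above (the `files` table and its index are hypothetical, not part of the commit):

```sql
-- Hypothetical table with a secondary index on the filtered column.
CREATE TABLE files (id INT PRIMARY KEY, path STRING, INDEX (path));

-- A plain prefix LIKE can typically be index-accelerated.
SELECT id FROM files WHERE path LIKE 'reports/%';

-- Adding an ESCAPE clause prevents index acceleration, so this query is
-- likely to fall back to a full scan plus a filter (issue #30192).
SELECT id FROM files WHERE path LIKE 'reports!_%' ESCAPE '!';
```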
Lines changed: 2 additions & 0 deletions
@@ -0,0 +1,2 @@
+- The `ltree2text` function produces incorrect results by wrapping the output in single quotes. For example, `ltree2text('foo.bar.baz'::LTREE)` returns `'foo.bar.baz'` instead of `foo.bar.baz`. [#156479](https://github.com/cockroachdb/cockroach/issues/156479)
+- The `LTREE` `<@` operator produces incorrect results when using an index. The optimizer creates an incorrect index constraint span for `LTREE` `<@` queries. [#156478](https://github.com/cockroachdb/cockroach/issues/156478)

src/current/_includes/v25.4/known-limitations/read-committed-limitations.md

Lines changed: 3 additions & 0 deletions
@@ -1,3 +1,6 @@
+{% if page.name != "known-limitations.md" # New limitations in v25.4 %}
+- Mixed-isolation-level workloads must enable foreign-key check locking for `SERIALIZABLE` transactions to avoid race conditions. [#151663](https://github.com/cockroachdb/cockroach/issues/151663#issuecomment-3222083180)
+{% endif %}
 - Schema changes (e.g., [`CREATE TABLE`]({% link {{ page.version.version }}/create-table.md %}), [`CREATE SCHEMA`]({% link {{ page.version.version }}/create-schema.md %}), [`CREATE INDEX`]({% link {{ page.version.version }}/create-index.md %})) cannot be performed within explicit `READ COMMITTED` transactions when the [`autocommit_before_ddl` session setting]({% link {{page.version.version}}/set-vars.md %}#autocommit-before-ddl) is set to `off`, and will cause transactions to abort. As a workaround, [set the transaction's isolation level]({% link {{ page.version.version }}/read-committed.md %}#set-the-current-transaction-to-read-committed) to `SERIALIZABLE`. [#114778](https://github.com/cockroachdb/cockroach/issues/114778)
 - Multi-column-family checks during updates are not supported under `READ COMMITTED` isolation. [#112488](https://github.com/cockroachdb/cockroach/issues/112488)
 - Because locks acquired by [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) checks, [`SELECT FOR UPDATE`]({% link {{ page.version.version }}/select-for-update.md %}), and [`SELECT FOR SHARE`]({% link {{ page.version.version }}/select-for-update.md %}) are fully replicated under `READ COMMITTED` isolation, some queries experience a delay for Raft replication.
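
A minimal sketch of the mixed-isolation workaround the first new limitation refers to, with hypothetical `users`/`orders` tables; the `enable_implicit_fk_locking_for_serializable` session setting is an assumption based on the CockroachDB foreign-key locking docs and should be verified against your version.

```sql
-- Hypothetical parent/child tables.
CREATE TABLE users  (id INT PRIMARY KEY);
CREATE TABLE orders (id INT PRIMARY KEY, user_id INT REFERENCES users (id));
INSERT INTO users VALUES (42);

-- Assumed setting name (verify for your version): makes foreign-key checks
-- in SERIALIZABLE transactions take locks, so a concurrent READ COMMITTED
-- transaction cannot remove the parent row while the check is in flight.
SET enable_implicit_fk_locking_for_serializable = on;

BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
INSERT INTO orders (id, user_id) VALUES (1, 42);  -- FK check on users.id = 42 takes a lock
COMMIT;
```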
