src/current/_includes/releases/v25.4/v25.4.0-beta.2.md (12 additions, 12 deletions)

```diff
@@ -13,18 +13,18 @@ Release Date: October 10, 2025
 - Added the `SHOW INSPECT ERRORS` command. This command can be used to view issues that are identified by running the `INSPECT` command to validate tables and indexes. [#154337][#154337]
 - Added the `sql.catalog.allow_leased_descriptors.enabled` cluster setting, which is false by default. When set to true, queries that access the `pg_catalog` or `information_schema` can use cached leased descriptors to populate the data in those tables, with the tradeoff that some of the data could be stale. [#154491][#154491]
-We now support index acceleration for a subset of jsonb_path_exists filters. Given the `jsonb_path_exists(json_obj, json_path_expression)`, we only support inverted index for json_path_expression that matches one of the following patterns:
-  - The json_path_expression must NOT be in STRICT mode.
-  - For this mode, we will generate a prefix span for the inverted expression.
-  filter with end value mode, with equality check: $.[key|wildcard]? (@.[key|wildcard].[key|wildcard]... == [string|number|null|boolean])
-  For this mode, since the end value is fixed, we will generate a single value span.
-  Specifically, we don't support the following edge case:
-  $
-  $[*]
-  $.a.b.c == 12 or $.a.b.c > 12 or $.a.b.c < 12 (operation expression)
-  $.a.b ? (@.a > 10) (filter, with inequality check)
-  Note that the cases we support is to use `jsonb_path_exists` in filters, as in, when they are used in the WHERE clause. [#154631][#154631]
+- CockroachDB now supports index acceleration for certain `jsonb_path_exists` filters used in `WHERE` clauses. Given `jsonb_path_exists(json_obj, json_path_expression)`, an inverted index is supported only when `json_path_expression` matches one of the following patterns:
+  - The `json_path_expression` must **not** be in `strict` mode.
+  - `$.a.b ? (@.a > 10)` (filter with inequality check)
+  [#154631][#154631]
 - The optimizer can now use table statistics that merge the latest full statistic with all newer partial statistics, including those over arbitrary constraints over a single span. [#154755][#154755]
```
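To make the accelerated shape concrete, here is a minimal sketch of a query that should match the pattern the note describes (a lax-mode path used as a `WHERE` filter, with an equality check against a fixed end value). The table `docs`, the `payload` column, and the index name are hypothetical, and whether the inverted index is actually chosen depends on the optimizer:

```sql
-- Hypothetical schema: a JSONB column with an inverted index.
CREATE TABLE docs (
    id INT PRIMARY KEY,
    payload JSONB,
    INVERTED INDEX payload_idx (payload)
);

-- jsonb_path_exists in a WHERE clause, lax mode (the default), with an
-- equality check against a fixed value: the shape described as eligible
-- for inverted-index acceleration.
SELECT id
FROM docs
WHERE jsonb_path_exists(payload, '$.a ? (@.b.c == 12)');

-- EXPLAIN can be used to confirm whether the plan scans payload_idx.
EXPLAIN SELECT id
FROM docs
WHERE jsonb_path_exists(payload, '$.a ? (@.b.c == 12)');
```

For contrast, the note's unsupported examples include operation expressions such as `$.a.b.c > 12` and inequality filters such as `$.a.b ? (@.a > 10)`.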
src/current/_includes/v25.3/essential-metrics.md (4 additions, 2 deletions)

```diff
@@ -1,15 +1,15 @@
 {% assign version = page.version.version | replace: ".", "" %}
 {% comment %}DEBUG: {{ version }}{% endcomment %}
-These essential CockroachDB metrics let you monitor your CockroachDB {{ site.data.products.core }} cluster. Use them to build custom dashboards with the following tools:
-
 {% comment %} STEP 1. Assign variables specific to deployment {% endcomment %}
+These essential CockroachDB metrics let you monitor your CockroachDB {{ site.data.products.core }} cluster. Use them to build custom dashboards with the following tools:
 - [Grafana]({% link {{ page.version.version }}/monitor-cockroachdb-with-prometheus.md %}#step-5-visualize-metrics-in-grafana)
 - [Datadog Integration]({% link {{ page.version.version }}/datadog.md %}): The [**Datadog Integration Metric Name**]({{ datadog_link }}) column lists the corresponding Datadog metric which requires the `{{ datadog_prefix }}.` prefix.
@@ -20,6 +20,8 @@ These essential CockroachDB metrics let you monitor your CockroachDB {{ site.dat
+These essential CockroachDB metrics let you monitor your CockroachDB {{ site.data.products.advanced }} cluster. Use them to build custom dashboards with the following tools:
 - [Datadog integration]({% link cockroachcloud/tools-page.md %}#monitor-cockroachdb-cloud-with-datadog) - The [**Datadog Integration Metric Name**]({{ datadog_link }}) column lists the corresponding Datadog metric which requires the `{{ datadog_prefix }}` prefix.
 - [Metrics export]({% link cockroachcloud/export-metrics-advanced.md %})
```
src/current/_includes/v25.3/known-limitations/vector-limitations.md (0 additions, 1 deletion)

```diff
@@ -1,5 +1,4 @@
 - {% include {{ page.version.version }}/sql/vector-batch-inserts.md %}
-- Creating a vector index through a backfill disables mutations ([`INSERT`]({% link {{ page.version.version }}/insert.md %}), [`UPSERT`]({% link {{ page.version.version }}/upsert.md %}), [`UPDATE`]({% link {{ page.version.version }}/update.md %}), [`DELETE`]({% link {{ page.version.version }}/delete.md %})) on the table. [#144443](https://github.com/cockroachdb/cockroach/issues/144443)
 - `IMPORT INTO` is not supported on tables with vector indexes. You can import the vectors first and create the index after import is complete. [#145227](https://github.com/cockroachdb/cockroach/issues/145227)
 - The distance functions `vector_l1_ops`, `bit_hamming_ops`, and `bit_jaccard_ops` are not implemented. [#147839](https://github.com/cockroachdb/cockroach/issues/147839)
 - Index acceleration with filters is only supported if the filters match prefix columns. [#146145](https://github.com/cockroachdb/cockroach/issues/146145)
```
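Because `IMPORT INTO` is rejected on tables that already have a vector index, the workaround in the limitation above is an ordering constraint: load the vectors first, then build the index. A minimal sketch of that sequence, with a made-up `items` table, a three-dimensional `VECTOR` column, and a placeholder `nodelocal` CSV path:

```sql
-- Hypothetical table with a vector column; no vector index yet.
CREATE TABLE items (
    id INT PRIMARY KEY,
    embedding VECTOR(3)
);

-- 1. Import the vectors while the table has no vector index.
IMPORT INTO items (id, embedding)
    CSV DATA ('nodelocal://1/items.csv');

-- 2. Create the vector index only after the import completes.
CREATE VECTOR INDEX ON items (embedding);
```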
src/current/_includes/v25.4/essential-metrics.md (9 additions, 5 deletions)

```diff
@@ -1,14 +1,14 @@
 {% assign version = page.version.version | replace: ".", "" %}
 {% comment %}DEBUG: {{ version }}{% endcomment %}
-These essential CockroachDB metrics let you monitor your CockroachDB {{ site.data.products.core }} cluster. Use them to build custom dashboards with the following tools:
-
 {% comment %} STEP 1. Assign variables specific to deployment {% endcomment %}
 These essential CockroachDB metrics let you monitor your CockroachDB {{ site.data.products.core }} cluster. Use them to build custom dashboards with the following tools:
 - [Grafana]({% link {{ page.version.version }}/monitor-cockroachdb-with-prometheus.md %}#step-5-visualize-metrics-in-grafana)
 - [Datadog Integration]({% link {{ page.version.version }}/datadog.md %}): The [**Datadog Integration Metric Name**]({{ datadog_link }}) column lists the corresponding Datadog metric which requires the `{{ datadog_prefix }}.` prefix.
@@ -18,7 +18,9 @@ These essential CockroachDB metrics let you monitor your CockroachDB {{ site.dat
 These essential CockroachDB metrics let you monitor your CockroachDB {{ site.data.products.advanced }} cluster. Use them to build custom dashboards with the following tools:
 - [Datadog integration]({% link cockroachcloud/tools-page.md %}#monitor-cockroachdb-cloud-with-datadog) - The [**Datadog Integration Metric Name**]({{ datadog_link }}) column lists the corresponding Datadog metric which requires the `{{ datadog_prefix }}` prefix.
 - [Metrics export]({% link cockroachcloud/export-metrics-advanced.md %})
@@ -56,7 +58,7 @@ The **Usage** column explains why each metric is important to visualize and how
 {% comment %} Order categories, NOTE: new categories may break this order, however all relevant categories will be displayed though not in the desired order{% endcomment %}
```
```diff
+{% if page.name != "known-limitations.md" # New limitations in v25.4 %}
+- When using the `infer_rbr_region_col_using_constraint` option, inserting rows with `DEFAULT` for the region column uses the database's primary region instead of inferring the region from the parent table via foreign-key constraint. [#150783](https://github.com/cockroachdb/cockroach/issues/150783)
+{% endif %}
 - When columns are [indexed]({% link {{ page.version.version }}/indexes.md %}), a subset of data from the indexed columns may appear in [meta ranges]({% link {{ page.version.version }}/architecture/distribution-layer.md %}#meta-ranges) or other system tables. CockroachDB synchronizes these system ranges and system tables across nodes. This synchronization does not respect any multi-region settings applied via either the [multi-region SQL statements]({% link {{ page.version.version }}/multiregion-overview.md %}), or the low-level [zone configs]({% link {{ page.version.version }}/configure-replication-zones.md %}) mechanism.
 - [Zone configs]({% link {{ page.version.version }}/configure-replication-zones.md %}) can be used for data placement but these features were historically built for performance, not for domiciling. The replication system's top priority is to prevent the loss of data and it may override the zone configurations if necessary to ensure data durability. For more information, see [Replication Controls]({% link {{ page.version.version }}/configure-replication-zones.md %}#types-of-constraints).
 - If your [log files]({% link {{ page.version.version }}/logging-overview.md %}) are kept in the region where they were generated, there is some cross-region leakage (like the system tables described previously), but the majority of user data that makes it into the logs is going to be homed in that region. If that's not strong enough, you can use the [log redaction functionality]({% link {{ page.version.version }}/configure-logs.md %}#redact-logs) to strip all raw data from the logs. You can also limit your log retention entirely.
-- If you start a node with a [`--locality`]({% link {{ page.version.version }}/cockroach-start.md %}#locality) flag that says the node is in region _A_, but the node is actually running in some region _B_, data domiciling based on the inferred node placement will not work. A CockroachDB node only knows its locality based on the text supplied to the `--locality` flag; it can not ensure that it is actually running in that physical location.
+- If you start a node with a [`--locality`]({% link {{ page.version.version }}/cockroach-start.md %}#locality) flag that says the node is in region _A_, but the node is actually running in some region _B_, data domiciling based on the inferred node placement will not work. A CockroachDB node only knows its locality based on the text supplied to the `--locality` flag; it can not ensure that it is actually running in that physical location.
+- {% include {{page.version.version}}/known-limitations/secondary-regions-with-regional-by-row-tables.md %}
+- {% include {{ page.version.version }}/known-limitations/enforce-home-region-limitations.md %}
```

```diff
 - `LIKE` queries with an `ESCAPE` clause cannot use index acceleration, which can result in significantly slower performance compared to standard `LIKE` queries. [#30192](https://github.com/cockroachdb/cockroach/issues/30192)
 - The `ltree2text` function produces incorrect results by wrapping the output in single quotes. For example, `ltree2text('foo.bar.baz'::LTREE)` returns `'foo.bar.baz'` instead of `foo.bar.baz`. [#156479](https://github.com/cockroachdb/cockroach/issues/156479)
+- The `LTREE` `<@` operator produces incorrect results when using an index. The optimizer creates an incorrect index constraint span for `LTREE` `<@` queries. [#156478](https://github.com/cockroachdb/cockroach/issues/156478)
```
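As a concrete illustration of the `LIKE`/`ESCAPE` limitation above, compare the two filters below against a hypothetical indexed `STRING` column. Only the second query adds an `ESCAPE` clause, so only the first is eligible for index acceleration; the schema and names are made up:

```sql
-- Hypothetical table with an index the optimizer could use for LIKE lookups.
CREATE TABLE users (
    id INT PRIMARY KEY,
    name STRING,
    INDEX name_idx (name)
);

-- A plain prefix LIKE can be accelerated by the index on name.
SELECT id FROM users WHERE name LIKE 'ab%';

-- Adding an ESCAPE clause (here '!%' matches a literal percent sign after
-- the prefix) prevents index acceleration per the limitation above, so this
-- form can be significantly slower.
SELECT id FROM users WHERE name LIKE 'ab!%%' ESCAPE '!';
```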
src/current/_includes/v25.4/known-limitations/read-committed-limitations.md (3 additions, 0 deletions)

```diff
@@ -1,3 +1,6 @@
+{% if page.name != "known-limitations.md" # New limitations in v25.4 %}
+- Mixed-isolation-level workloads must enable foreign-key check locking for `SERIALIZABLE` transactions to avoid race conditions. [#151663](https://github.com/cockroachdb/cockroach/issues/151663#issuecomment-3222083180)
+{% endif %}
 - Schema changes (e.g., [`CREATE TABLE`]({% link {{ page.version.version }}/create-table.md %}), [`CREATE SCHEMA`]({% link {{ page.version.version }}/create-schema.md %}), [`CREATE INDEX`]({% link {{ page.version.version }}/create-index.md %})) cannot be performed within explicit `READ COMMITTED` transactions when the [`autocommit_before_ddl` session setting]({% link {{page.version.version}}/set-vars.md %}#autocommit-before-ddl) is set to `off`, and will cause transactions to abort. As a workaround, [set the transaction's isolation level]({% link {{ page.version.version }}/read-committed.md %}#set-the-current-transaction-to-read-committed) to `SERIALIZABLE`. [#114778](https://github.com/cockroachdb/cockroach/issues/114778)
 - Multi-column-family checks during updates are not supported under `READ COMMITTED` isolation. [#112488](https://github.com/cockroachdb/cockroach/issues/112488)
 - Because locks acquired by [foreign key]({% link {{ page.version.version }}/foreign-key.md %}) checks, [`SELECT FOR UPDATE`]({% link {{ page.version.version }}/select-for-update.md %}), and [`SELECT FOR SHARE`]({% link {{ page.version.version }}/select-for-update.md %}) are fully replicated under `READ COMMITTED` isolation, some queries experience a delay for Raft replication.
```
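For the schema-change limitation above, the documented workaround is to run the DDL at `SERIALIZABLE` isolation. A minimal sketch of what that could look like inside an explicit transaction; the table and columns are made up:

```sql
-- Inside an explicit READ COMMITTED transaction, DDL such as CREATE TABLE
-- aborts the transaction when autocommit_before_ddl is off.
-- Workaround: run the DDL in a SERIALIZABLE transaction instead.
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
CREATE TABLE IF NOT EXISTS audit_log (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    msg STRING
);
COMMIT;
```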