
Commit 3f153eb

Add docs for sql.schema.approx_max_object_count (#20751)
* Add docs for `sql.schema.approx_max_object_count`. Fixes DOC-14949. Information about this cluster setting will also be added to the list of backwards-incompatible changes in the v25.4.0 release notes via DOC-15112.
* Update with taroface feedback (1)
1 parent 28e255e commit 3f153eb

File tree

1 file changed: +13 −1 lines changed


src/current/v25.4/schema-design-overview.md

Lines changed: 13 additions & 1 deletion
@@ -124,10 +124,22 @@ CockroachDB has been shown to perform well with clusters containing 10,000 table
 
 As you scale to a large number of tables, note that:
 
-- The amount of RAM per node is the limiting factor for the number of tables and other schema objects the cluster can support. This includes columns, indexes, GIN indexes, constraints, and partitions. Increasing RAM is likely to have the greatest impact on the number of these objects that a cluster can support, while increasing the number of nodes will not have a substantial effect.
+- {% include_cached new-in.html version="v25.4" %} The cluster setting [`sql.schema.approx_max_object_count`]({% link {{ page.version.version }}/cluster-settings.md %}#setting-sql-schema-approx-max-object-count) defaults to `20000` and blocks creation of new schema objects once the approximate count exceeds the limit. The check relies on cached [table statistics]({% link {{ page.version.version }}/cost-based-optimizer.md %}#table-statistics), so enforcement can lag until statistics refresh.
+- Other than the value of the `sql.schema.approx_max_object_count` cluster setting, the amount of RAM per node is the limiting factor for the number of tables and other schema objects the cluster can support. This includes columns, [indexes]({% link {{ page.version.version }}/indexes.md %}), [GIN indexes]({% link {{ page.version.version }}/inverted-indexes.md %}), [constraints]({% link {{ page.version.version }}/constraints.md %}), and [partitions]({% link {{ page.version.version }}/partitioning.md %}). Increasing RAM is likely to have the greatest impact on the number of these objects that a cluster can support, while increasing the number of nodes will not have a substantial effect.
 - The number of databases or schemas on the cluster has minimal impact on the total number of tables that it can support.
 - Performance at larger numbers of tables may be affected by your use of [backup and restore]({% link {{ page.version.version }}/backup-and-restore-overview.md %}) and [Change data capture (CDC)]({% link {{ page.version.version }}/change-data-capture-overview.md %}).
 
+If you upgrade to this version with an existing object count above the limit set by [`sql.schema.approx_max_object_count`]({% link {{ page.version.version }}/cluster-settings.md %}#setting-sql-schema-approx-max-object-count), the upgrade will complete, but future attempts to create schema objects will return an error until you raise or disable the limit:
+
+{% include_cached copy-clipboard.html %}
+~~~ sql
+-- Raise the limit
+SET CLUSTER SETTING sql.schema.approx_max_object_count = 50000;
+
+-- Or disable the limit
+SET CLUSTER SETTING sql.schema.approx_max_object_count = 0;
+~~~
+
 See the [Hardware]({% link {{ page.version.version }}/recommended-production-settings.md %}#hardware) section for additional recommendations based on your expected workloads.
 
 ### Quantity of rows
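
Not part of the commit above: a minimal sketch of how you might gauge how close an existing cluster is to the limit before upgrading. `SHOW CLUSTER SETTING` and the `crdb_internal.tables` virtual table are existing CockroachDB surfaces, but treating a count of `PUBLIC` rows there as a stand-in for the setting's approximate object count is an assumption made here for illustration; the setting maintains its own approximate count via cached table statistics.

~~~ sql
-- Sketch only (not part of this commit): inspect the configured limit.
SHOW CLUSTER SETTING sql.schema.approx_max_object_count;

-- Rough proxy for the current object count; assumes counting PUBLIC rows in
-- crdb_internal.tables is close enough to the setting's own approximate count.
SELECT count(*) FROM crdb_internal.tables WHERE state = 'PUBLIC';
~~~

If the count is near or above the configured limit, raising or disabling the setting as shown in the diff avoids errors on future schema object creation after the upgrade.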
