docs/best-practices/partitioning_keys.mdx (7 additions, 7 deletions)
@@ -15,7 +15,7 @@ import merges_with_partitions from '@site/static/images/bestpractices/merges_wit
 Partitioning is primarily a data management technique and not a query optimization tool, and while it can improve performance in specific workloads, it should not be the first mechanism used to accelerate queries; the partitioning key must be chosen carefully, with a clear understanding of its implications, and only applied when it aligns with data life cycle needs or well-understood access patterns.
 :::
 
-In ClickHouse, partitioning organizes data into logical segments based on a specified key. This is defined using the `PARTITION BY` clause at table creation time and is commonly used to group rows by time intervals, categories, or other business-relevant dimensions. Each unique value of the partitioning expression forms its own physical partition on disk, and ClickHouse stores data in separate parts for each of these values. Partitioning improves data management, simplifies retention policies, and can help with certain query patterns.
+In ClickHouse, partitioning organizes data into logical segments based on a specified key. This is defined using the `PARTITION BY` clause at table creation time and is commonly used to group rows by time intervals, categories, or other business-relevant dimensions. Each unique value of the partitioning expression forms its own physical partition on disk, and ClickHouse stores data in separate ^^parts^^ for each of these values. Partitioning improves data management, simplifies retention policies, and can help with certain query patterns.
 
 For example, consider the following UK price paid dataset table with a partitioning key of `toStartOfMonth(date)`.
 
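The table definition itself sits outside this hunk; as a minimal sketch, a monthly-partitioned table along these lines (the column list is assumed for illustration, only the `PARTITION BY` clause is the point here):

```sql
-- Hypothetical cut-down schema of the UK price paid table; every
-- distinct toStartOfMonth(date) value becomes its own partition.
CREATE TABLE uk_price_paid
(
    price  UInt32,
    date   Date,
    town   LowCardinality(String),
    street LowCardinality(String)
)
ENGINE = MergeTree
ORDER BY (town, street, date)
PARTITION BY toStartOfMonth(date);
```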
@@ -40,7 +40,7 @@ The ClickHouse server first splits the rows from the example insert with 4 rows
 
 For a more detailed explanation of partitioning, we recommend [this guide](/partitions).
 
-With partitioning enabled, ClickHouse only [merges](/merges) data parts within, but not across partitions. We sketch that for our example table from above:
+With partitioning enabled, ClickHouse only [merges](/merges) data ^^parts^^ within, but not across partitions. We sketch that for our example table from above:
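Because merges never cross partition boundaries, the effect is directly visible in `system.parts`. A small sketch, assuming the `uk_price_paid` table from the example above:

```sql
-- Active parts grouped per partition; merges only ever combine
-- parts that share the same partition value.
SELECT
    partition,
    count()   AS active_parts,
    sum(rows) AS total_rows
FROM system.parts
WHERE database = currentDatabase()
  AND table = 'uk_price_paid'
  AND active
GROUP BY partition
ORDER BY partition;
```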
@@ -52,16 +52,16 @@ While partitioning can improve query performance for some workloads, it can also
 
 If the partitioning key is not in the primary key and you are filtering by it, users may see an improvement in query performance with partitioning. See [here](/partitions#query-optimization) for an example.
 
-Conversely, if queries need to query across partitions, performance may be negatively impacted due to a higher number of total parts. For this reason, users should understand their access patterns before considering partitioning as a query optimization technique.
+Conversely, if queries need to query across partitions, performance may be negatively impacted due to a higher number of total ^^parts^^. For this reason, users should understand their access patterns before considering partitioning as a query optimization technique.
 
 In summary, users should primarily think of partitioning as a data management technique. For an example of managing data, see ["Managing Data"](/observability/managing-data) from the observability use-case guide and ["What are table partitions used for?"](/partitions#data-management) from Core Concepts - Table partitions.
 
 ## Choose a low cardinality partitioning key {#choose-a-low-cardinality-partitioning-key}
 
-Importantly, a higher number of parts will negatively affect query performance. ClickHouse will therefore respond to inserts with a [“too many parts”](/knowledgebase/exception-too-many-parts) error if the number of parts exceeds specified limits either in [total](/operations/settings/merge-tree-settings#max_parts_in_total) or [per partition](/operations/settings/merge-tree-settings#parts_to_throw_insert).
+Importantly, a higher number of ^^parts^^ will negatively affect query performance. ClickHouse will therefore respond to inserts with a [“too many parts”](/knowledgebase/exception-too-many-parts) error if the number of ^^parts^^ exceeds specified limits either in [total](/operations/settings/merge-tree-settings#max_parts_in_total) or [per partition](/operations/settings/merge-tree-settings#parts_to_throw_insert).
 
-Choosing the right **cardinality** for the partitioning key is critical. A high-cardinality partitioning key - where the number of distinct partition values is large - can lead to a proliferation of data parts. Since ClickHouse does not merge parts across partitions, too many partitions will result in too many unmerged parts, eventually triggering the “Too many parts” error. [Merges are essential](/merges) for reducing storage fragmentation and optimizing query speed, but with high-cardinality partitions, that merge potential is lost.
+Choosing the right **cardinality** for the partitioning key is critical. A high-cardinality partitioning key - where the number of distinct partition values is large - can lead to a proliferation of data ^^parts^^. Since ClickHouse does not merge ^^parts^^ across partitions, too many partitions will result in too many unmerged ^^parts^^, eventually triggering the “Too many ^^parts^^” error. [Merges are essential](/merges) for reducing storage fragmentation and optimizing query speed, but with high-cardinality partitions, that merge potential is lost.
 
-By contrast, a **low-cardinality partitioning key** - with fewer than 100-1,000 distinct values - is usually optimal. It enables efficient part merging, keeps metadata overhead low, and avoids excessive object creation in storage. In addition, ClickHouse automatically builds MinMax indexes on partition columns, which can significantly speed up queries that filter on those columns. For example, filtering by month when the table is partitioned by `toStartOfMonth(date)` allows the engine to skip irrelevant partitions and their parts entirely.
+By contrast, a **low-cardinality partitioning key** - with fewer than 100-1,000 distinct values - is usually optimal. It enables efficient part merging, keeps metadata overhead low, and avoids excessive object creation in storage. In addition, ClickHouse automatically builds MinMax indexes on partition columns, which can significantly speed up queries that filter on those columns. For example, filtering by month when the table is partitioned by `toStartOfMonth(date)` allows the engine to skip irrelevant partitions and their ^^parts^^ entirely.
 
-While partitioning can improve performance in some query patterns, it's primarily a data management feature. In many cases, querying across all partitions can be slower than using a non-partitioned table due to increased data fragmentation and more parts being scanned. Use partitioning judiciously, and always ensure that the chosen key is low-cardinality and aligns with your data life cycle policies (e.g., retention via TTL). If you're unsure whether partitioning is necessary, you may want to start without it and optimize later based on observed access patterns.
+While partitioning can improve performance in some query patterns, it's primarily a data management feature. In many cases, querying across all partitions can be slower than using a non-partitioned table due to increased data fragmentation and more ^^parts^^ being scanned. Use partitioning judiciously, and always ensure that the chosen key is low-cardinality and aligns with your data life cycle policies (e.g., retention via TTL). If you're unsure whether partitioning is necessary, you may want to start without it and optimize later based on observed access patterns.
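Since the guidance above is numeric (roughly 100 to 1,000 distinct values), a candidate key can be sanity-checked before it is baked into a table definition. A hedged sketch, reusing the example table and key from this page:

```sql
-- Count the partitions a candidate key would produce; a result far
-- beyond ~1,000 suggests the key is too high-cardinality.
SELECT count(DISTINCT toStartOfMonth(date)) AS candidate_partitions
FROM uk_price_paid;
```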
docs/concepts/why-clickhouse-is-so-fast.mdx (5 additions, 5 deletions)
@@ -19,9 +19,9 @@ From an architectural perspective, databases consist (at least) of a storage lay
 
 <iframe width="1024" height="576" src="https://www.youtube.com/embed/vsykFYns0Ws?si=hE2qnOf6cDKn-otP" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
 
-In ClickHouse, each table consists of multiple "table ^^parts^^". A [part](/parts) is created whenever a user inserts data into the table (INSERT statement). A query is always executed against all table parts that exist at the time the query starts.
+In ClickHouse, each table consists of multiple "table ^^parts^^". A [part](/parts) is created whenever a user inserts data into the table (INSERT statement). A query is always executed against all table ^^parts^^ that exist at the time the query starts.
 
-To prevent too many parts from accumulating, ClickHouse runs a [merge](/merges) operation in the background which continuously combines multiple smaller parts into a single bigger part.
+To prevent too many ^^parts^^ from accumulating, ClickHouse runs a [merge](/merges) operation in the background which continuously combines multiple smaller ^^parts^^ into a single bigger part.
 
 This approach has several advantages: All data processing can be [offloaded to background part merges](/concepts/why-clickhouse-is-so-fast#storage-layer-merge-time-computation), keeping data writes lightweight and highly efficient. Individual inserts are "local" in the sense that they do not need to update global, i.e. per-table data structures. As a result, multiple simultaneous inserts need no mutual synchronization or synchronization with existing table data, and thus inserts can be performed almost at the speed of disk I/O.
 
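The part-per-insert behavior described above is easy to observe directly. A minimal sketch with a hypothetical demo table (name and schema are illustrative):

```sql
CREATE TABLE events (id UInt64, payload String)
ENGINE = MergeTree
ORDER BY id;

-- Each INSERT creates at least one new part:
INSERT INTO events VALUES (1, 'a');
INSERT INTO events VALUES (2, 'b');

-- Two active parts exist until a background merge combines them:
SELECT name, rows FROM system.parts
WHERE table = 'events' AND active;

-- A merge can also be forced for observation:
OPTIMIZE TABLE events FINAL;
```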
@@ -33,7 +33,7 @@ This approach has several advantages: All data processing can be [offloaded to b
 
 <iframe width="1024" height="576" src="https://www.youtube.com/embed/dvGlPh2bJFo?si=F3MSALPpe0gAoq5k" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
 
-Inserts are fully isolated from SELECT queries, and merging inserted data parts happens in the background without affecting concurrent queries.
+Inserts are fully isolated from SELECT queries, and merging inserted data ^^parts^^ happens in the background without affecting concurrent queries.
 
 🤿 Deep dive into this in the [Storage Layer](/docs/academic_overview#3-storage-layer) section of the web version of our VLDB 2024 paper.
 
@@ -43,7 +43,7 @@ Inserts are fully isolated from SELECT queries, and merging inserted data parts
 
 Unlike other databases, ClickHouse keeps data writes lightweight and efficient by performing all additional data transformations during the [merge](/merges) background process. Examples of this include:
 
-- **Replacing merges** which retain only the most recent version of a row in the input parts and discard all other row versions. Replacing merges can be thought of as a merge-time cleanup operation.
+- **Replacing merges** which retain only the most recent version of a row in the input ^^parts^^ and discard all other row versions. Replacing merges can be thought of as a merge-time cleanup operation.
 
 - **Aggregating merges** which combine intermediate aggregation states in the input part to a new aggregation state. While this seems difficult to understand, it actually only implements an incremental aggregation.
 
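A minimal sketch of the replacing-merge behavior from the first bullet, using ReplacingMergeTree (the engine that implements it; the table and columns below are hypothetical):

```sql
-- Rows sharing the same ORDER BY key collapse at merge time,
-- keeping the row with the highest version.
CREATE TABLE user_profile
(
    user_id UInt64,
    email   String,
    version UInt64
)
ENGINE = ReplacingMergeTree(version)
ORDER BY user_id;

-- Deduplication is eventual (it happens when parts merge); FINAL
-- applies the same semantics at query time for correct reads:
SELECT * FROM user_profile FINAL;
```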
@@ -53,7 +53,7 @@ The point of these transformations is to shift work (computation) from the time
 
 On the one hand, user queries may become significantly faster, sometimes by 1000x or more, if they can leverage "transformed" data, e.g. pre-aggregated data.
 
-On the other hand, the majority of the runtime of merges is consumed by loading the input parts and saving the output part. The additional effort to transform the data during merge usually does not impact the runtime of merges too much. All of this magic is completely transparent and does not affect the result of queries (besides their performance).
+On the other hand, the majority of the runtime of merges is consumed by loading the input ^^parts^^ and saving the output part. The additional effort to transform the data during merge usually does not impact the runtime of merges too much. All of this magic is completely transparent and does not affect the result of queries (besides their performance).
 
 🤿 Deep dive into this in the [Merge-time Data Transformation](/docs/academic_overview#3-3-merge-time-data-transformation) section of the web version of our VLDB 2024 paper.
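To make merge-time computation concrete, a sketch of pre-aggregation with SummingMergeTree (names are illustrative; this shows one of the transformations the page mentions, not the paper's specific example):

```sql
-- Rows with equal (day, page) keys are summed whenever parts merge,
-- so queries scan compact pre-aggregated data instead of raw events.
CREATE TABLE daily_hits
(
    day  Date,
    page String,
    hits UInt64
)
ENGINE = SummingMergeTree
ORDER BY (day, page);

-- Summing is eventual (it runs at merge time), so queries still
-- aggregate to cover rows in not-yet-merged parts:
SELECT day, page, sum(hits) AS hits
FROM daily_hits
GROUP BY day, page;
```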
0 commit comments