---
description: Users can now enable compaction on R2 Data Catalog
products:
  - r2
date: 2025-09-25T13:00:00
hidden: true
---

import { LinkCard } from "~/components";

You can now enable automatic compaction for [Apache Iceberg](https://iceberg.apache.org/) tables in [R2 Data Catalog](/r2/data-catalog/) to improve query performance.

Compaction is the process of taking a group of small files and combining them into fewer larger files. This is an important maintenance operation as it helps ensure that query performance remains consistent by reducing the number of files that need to be scanned.

To enable automatic compaction in R2 Data Catalog, find it under **R2 Data Catalog** in your R2 bucket settings in the dashboard. And that's it. Compaction will start running automatically.
To get started with compaction, refer to [Manage catalogs](/r2/data-catalog/manage-catalogs/). For best practices and limitations, refer to [About compaction](/r2/data-catalog/about-compaction/).

src/content/docs/r2/data-catalog/about-compaction.mdx

## What is compaction?

Compaction is the process of taking a group of small files and combining them into fewer larger files. This is an important maintenance operation as it helps ensure that query performance remains consistent by reducing the number of files that need to be scanned.

## Why do I need compaction?

Every write operation in [Apache Iceberg](https://iceberg.apache.org/), no matter how small or large, results in a series of new files being generated. As time goes on, the number of files can grow unbounded. This can lead to:

- Slower queries and increased I/O operations: Without compaction, query engines will have to open and read each individual file, resulting in longer query times and increased costs.
- Increased metadata overhead: Query engines must scan metadata files to determine which ones to read. With thousands of small files, query planning takes longer even before data is accessed.
- Reduced compression efficiency: Smaller files compress less efficiently than larger files, leading to higher storage costs and more data to transfer during queries.
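As a rough local illustration of the idea (a toy sketch only, not the R2 service; the file names and sizes are made up), combining 100 small files into a single larger file turns 100 separate open/read operations per scan into one:

```shell
# Toy illustration: create 100 small files, then combine them into one
# larger "compacted" file, as a compaction pass conceptually does.
mkdir -p /tmp/compaction-demo && cd /tmp/compaction-demo
for i in $(seq 1 100); do
  head -c 1024 /dev/urandom > "part-$i.bin"   # 100 files of 1 KiB each
done
cat part-*.bin > compacted-001.bin            # one 100 KiB file
ls part-*.bin | wc -l                          # 100 separate reads without compaction
wc -c < compacted-001.bin                      # 102400 bytes in a single file
```

Real compaction also rewrites table metadata so queries read the combined files, but the I/O saving is the same in spirit.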
## R2 Data Catalog automatic compaction
R2 Data Catalog can now [manage compaction](/r2/data-catalog/manage-catalogs) for Apache Iceberg tables stored in R2. When enabled, compaction runs automatically and combines new files that have not been compacted yet.
Compacted files are prefixed with `compacted-` in the `/data/` directory.
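For example, you can spot compacted files by listing a table's data directory with any S3-compatible client pointed at R2 (the bucket name, table path, and account ID below are placeholders):

```shell
# Placeholders throughout: substitute your own bucket, table path, and account ID.
aws s3 ls "s3://my-bucket/my_namespace/my_table/data/" \
  --endpoint-url "https://<account-id>.r2.cloudflarestorage.com" \
  | grep "compacted-"
```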
### Choosing the right target file size
You can configure the target file size for compaction. Currently, the minimum is 64 MB and the maximum is 512 MB.
Different compute engines have different optimal file sizes, so check their documentation.
Performance tradeoffs depend on your use case. For example, queries that return small amounts of data may perform better with smaller files, as larger files could result in reading unnecessary data.

- For workloads that are more latency sensitive, consider a smaller target file size (for example, 64 MB - 128 MB).
- For streaming ingest workloads, consider medium file sizes (for example, 128 MB - 256 MB).
- For OLAP-style queries that need to scan a lot of data, consider larger file sizes (for example, 256 MB - 512 MB).
## Current limitations

- During open beta, compaction will compact up to 2GB worth of files once per hour for each table.
- Only data files stored in Parquet format are currently supported with compaction.

---

Compaction improves query performance by combining the many small files created during data ingestion into fewer, larger files according to the set `target file size`. For more information about compaction and why it's valuable, refer to [About compaction](/r2/data-catalog/about-compaction/).

<Tabs syncKey='CLIvDash'>
<TabItem label='Dashboard'>

<DashButton url="/?to=/:account/r2/overview" />
2. Select the bucket you want to enable compaction on.
3. Switch to the **Settings** tab, scroll down to **R2 Data Catalog**, and click on the **Edit** icon next to the compaction card.
4. Enable compaction and optionally set a target file size. The default is 128 MB.
5. (Optional) Provide a Cloudflare API token for compaction to access and rewrite files in your bucket.
6. Select **Save**.

</Steps>

</TabItem>
To enable compaction on your catalog, run the [`r2 bucket catalog compaction enable`](/workers/wrangler/commands/#r2-bucket-catalog-compaction-enable) command:
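For example (a sketch: the bucket name is a placeholder, and the exact flag names for the token and target size are assumptions, so verify them against the Wrangler command reference):

```shell
# Sketch only: bucket name and flag names are placeholders/assumptions;
# verify with `npx wrangler r2 bucket catalog compaction enable --help`.
npx wrangler r2 bucket catalog compaction enable my-bucket \
  --token "$CLOUDFLARE_API_TOKEN" \
  --target-size 128
```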
Compaction requires a Cloudflare API token with **both** R2 storage and R2 Data Catalog read/write permissions to act as a service credential. The compaction process uses this token to read files, combine them, and update table metadata. Refer to [Authenticate your Iceberg engine](#authenticate-your-iceberg-engine) for details on creating a token with the required permissions.
Once enabled, compaction applies retroactively to all existing tables and automatically to newly created tables. During open beta, we currently compact up to 2GB worth of files once per hour for each table.
## Disable compaction
Disabling compaction will prevent the process from running for all tables managed by the catalog. You can re-enable it at any time.

<Tabs syncKey='CLIvDash'>
<TabItem label='Dashboard'>

<DashButton url="/?to=/:account/r2/overview" />
2. Select the bucket you want to disable compaction on.
3. Switch to the **Settings** tab, scroll down to **R2 Data Catalog**, and click on the **Edit** icon next to the compaction card.
4. Disable compaction.
5. Select **Save**.

</Steps>

</TabItem>
To disable compaction on your catalog, run the [`r2 bucket catalog compaction disable`](/workers/wrangler/commands/#r2-bucket-catalog-compaction-disable) command:
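For example (a sketch: the bucket name is a placeholder; verify the exact syntax against the Wrangler command reference):

```shell
# Sketch only: bucket name is a placeholder;
# verify with `npx wrangler r2 bucket catalog compaction disable --help`.
npx wrangler r2 bucket catalog compaction disable my-bucket
```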