Description
Search before asking
- I searched in the issues and found nothing similar.
Motivation
A compact job may attempt to compact files in partitions that are concurrently being expired. At commit time, the `compactBefore` files have already been deleted by partition expiration, which causes a conflict and fails the commit with the error "You are writing data to expired partitions".
The error message looks like this:

```
java.lang.RuntimeException: You are writing data to expired partitions, and you can filter this data to avoid job failover. Otherwise, continuous expired records will cause the job to failover restart continuously. Expired partitions are: [20260311, 20260318]
	at org.apache.paimon.operation.FileStoreCommitImpl.assertConflictForPartitionExpire(FileStoreCommitImpl.java:1445)
	at org.apache.paimon.operation.FileStoreCommitImpl.assertNoDelete(FileStoreCommitImpl.java:1424)
	at org.apache.paimon.operation.FileStoreCommitImpl.noConflictsOrFail(FileStoreCommitImpl.java:1368)
	at org.apache.paimon.operation.FileStoreCommitImpl.tryCommitOnce(FileStoreCommitImpl.java:948)
	at org.apache.paimon.operation.FileStoreCommitImpl.tryCommit(FileStoreCommitImpl.java:782)
	at org.apache.paimon.operation.FileStoreCommitImpl.commit(FileStoreCommitImpl.java:361)
	at org.apache.paimon.table.sink.TableCommitImpl.commitMultiple(TableCommitImpl.java:217)
	at ...
```
Solution
Add a new table option `compaction.skip-expired-partitions` (default `false`). When enabled, the compaction job source filters out partitions that have already expired, so compaction never generates DELETE manifest entries for expired partitions and the commit conflict is avoided.
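The intended filtering can be sketched as below. This is a minimal illustration only: `filterExpired`, its parameters, and the string-keyed partition representation are hypothetical, and Paimon's actual compaction source and partition-expiration APIs differ.

```java
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class SkipExpiredPartitions {

    /**
     * Hypothetical helper: when the proposed option is enabled, drop candidate
     * partitions that partition expiration already considers expired, so the
     * compaction source never picks files from them.
     */
    static List<String> filterExpired(
            List<String> candidatePartitions, Set<String> expiredPartitions, boolean skipExpired) {
        if (!skipExpired) {
            // Option disabled (the default): keep the current behavior unchanged.
            return candidatePartitions;
        }
        return candidatePartitions.stream()
                .filter(p -> !expiredPartitions.contains(p))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Partitions 20260311 and 20260318 match the expired ones in the stack trace above.
        List<String> candidates = List.of("20260311", "20260318", "20260325");
        Set<String> expired = Set.of("20260311", "20260318");
        System.out.println(filterExpired(candidates, expired, true)); // only 20260325 remains
    }
}
```

Because expired partitions are simply skipped rather than compacted and rolled back, no DELETE manifest entries are produced for them, which is what removes the conflict at commit time.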
Anything else?
No
Are you willing to submit a PR?
- I'm willing to submit a PR!