fix(iceberg): Incorrect $partition Metadata in Trino for Iceberg Tables Written via IcebergIO.writeRows with Timestamp Partitioning #36562
Conversation
Assigning reviewers: R: @robertwb for label java. If you would like to opt out of this review, comment one of the available commands. The PR bot will only process comments in the main thread (not review comments).
Reminder, please take a look at this PR: @robertwb
Assigning a new set of reviewers because this PR has gone too long without review: R: @chamikaramj for label java. If you would like to opt out of this review, comment one of the available commands.
Reminder, please take a look at this PR: @chamikaramj
Assigning a new set of reviewers because this PR has gone too long without review: R: @ahmedabu98 for label java. If you would like to opt out of this review, comment one of the available commands.
Hey @mskyrim, can you please run the Iceberg integration tests on this PR? Just modify this file and commit a change: https://github.com/apache/beam/blob/master/.github/trigger_files/IO_Iceberg_Integration_Tests.json
Hi @ahmedabu98 |
@mskyrim please modify it directly in this PR so the tests can run against these changes.
… Iceberg Integration Tests
done @ahmedabu98
Looks good, thanks for fixing this @mskyrim!
This fixes the issue raised here: #35417
What happened?
We are using an Apache Beam pipeline (v2.62.0) to ingest data in Protobuf format, transform it dynamically into a Beam schema, and write it to an Iceberg table using IcebergIO.writeRows as the final step of the pipeline.
We have noticed an issue when writing to Iceberg tables that are partitioned by the meta_processing_time field using the following specification:
(PartitionSpec.builderFor(contractIcebergSchema).month("meta_processing_time").build())
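For reference, here is a minimal sketch of this write path. The catalog settings, table name, and the contractIcebergSchema variable are illustrative placeholders rather than our exact production code, and the target table is assumed to already exist with the partition spec above:

```java
import java.util.Map;
import org.apache.beam.sdk.io.iceberg.IcebergCatalogConfig;
import org.apache.beam.sdk.io.iceberg.IcebergIO;
import org.apache.beam.sdk.values.PCollection;
import org.apache.beam.sdk.values.Row;
import org.apache.iceberg.PartitionSpec;
import org.apache.iceberg.Schema;
import org.apache.iceberg.catalog.TableIdentifier;

class IcebergWriteSketch {
  static void writeToIceberg(PCollection<Row> rows, Schema contractIcebergSchema) {
    // Monthly partitioning on the processing-time field, as described above.
    // The table is assumed to have been created with this spec beforehand.
    PartitionSpec spec =
        PartitionSpec.builderFor(contractIcebergSchema).month("meta_processing_time").build();

    // Placeholder catalog configuration; substitute a real catalog name and properties.
    IcebergCatalogConfig catalog =
        IcebergCatalogConfig.builder()
            .setCatalogName("my_catalog")
            .setCatalogProperties(Map.of("type", "hadoop", "warehouse", "gs://bucket/warehouse"))
            .build();

    // Final step of the pipeline: write Beam Rows to the Iceberg table.
    rows.apply(
        "WriteToIceberg",
        IcebergIO.writeRows(catalog).to(TableIdentifier.parse("db.iceberg_table")));
  }
}
```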
Although the Parquet data files are correctly written under the expected monthly partition folders (e.g., path/2025-06/filename.parquet), when querying the table using Trino:
select "$partition", "$path", * from iceberg_table
The $partition metadata field incorrectly shows a value of "1970-07", while:
- the $path value correctly references the meta_processing_time month (e.g., 'path/2025-06/filename.parquet')
- the actual meta_processing_time column in the data contains the correct timestamp value (e.g., '2025-06-24 11:06:06.187 +00:00')
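One detail makes the bad value legible: Iceberg's month transform stores partition values as whole months since 1970-01, so 2025-06 should be stored as ordinal 665, while an ordinal of 6 renders as 1970-07. Since 6 is exactly the month-of-year of 2025-06, this suggests the month-of-year was written where the months-since-epoch ordinal belonged. A quick self-contained java.time check of that arithmetic (no Iceberg dependency; the class name is ours):

```java
import java.time.YearMonth;
import java.time.temporal.ChronoUnit;

public class MonthTransformCheck {
  public static void main(String[] args) {
    YearMonth epoch = YearMonth.of(1970, 1);

    // Correct months-since-epoch ordinal for a timestamp in June 2025:
    long expected = ChronoUnit.MONTHS.between(epoch, YearMonth.of(2025, 6));
    System.out.println(expected); // 665

    // What an ordinal of 6 renders as: 1970-01 plus 6 months.
    System.out.println(epoch.plusMonths(6)); // 1970-07
  }
}
```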
After running the following query in Trino:
ALTER TABLE table_name EXECUTE optimize
the $partition value is corrected and shows the expected value '2025-06'.
We have upgraded to the latest Apache Beam version as well as the Iceberg core library, but the issue persists.