
Conversation

@Will-Lo
Collaborator

@Will-Lo Will-Lo commented Dec 4, 2025

Summary

When committing table metadata after multiple schema evolutions have been performed within a single commit, there can be issues ensuring that the schema IDs remain consistent. This is most commonly seen in replicated tables, as shown below:

---
title: Source Table
---
flowchart TD;
   A[Schema A - id 1] --> B[Schema B - id 2]
   B --> C[Schema C - id 3]
   C --> D[Schema D - id 4]
---
title: Replica Table
---
flowchart TD;
   A[Schema A - id 1] --> D[Schema D - id 2]

This causes issues because replication uses table snapshots generated directly on the source table, and each snapshot contains a reference to its corresponding schema ID. If that schema ID is missing on the replica, or points to a different schema there, the replica table becomes unreadable for that specific snapshot. This affects certain compute engines such as Trino, as well as time travel queries.

To resolve this issue, we want to support multiple schema updates within a single commit on the server side. This introduces an extra field that sends the delta of schemas, identified by schema ID, when performing a replicated table commit (this can be generalized beyond replica tables). Each schema needs to be serialized in order to ensure that the column IDs remain consistent.
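As an illustration only (not the exact code in this PR), here is a minimal Java sketch of the server-side merge under the assumptions above; the class and method names are hypothetical. Base schemas already in the replica's metadata are carried forward as-is, and only unseen schema IDs from the delta are appended.

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

import org.apache.iceberg.Schema;

// Hypothetical helper: merge a client-supplied schema delta into the schemas
// already tracked by the replica's base metadata, keyed by schema ID so that
// snapshots written on the source keep resolving to the correct schema.
public final class SchemaDeltaMerger {

  private SchemaDeltaMerger() {}

  public static List<Schema> merge(List<Schema> baseSchemas, List<Schema> deltaSchemas) {
    Map<Integer, Schema> byId = new LinkedHashMap<>();
    // Schemas are never expired, so every base schema is carried forward unchanged.
    for (Schema schema : baseSchemas) {
      byId.put(schema.schemaId(), schema);
    }
    // Append only the schemas the replica has not seen yet; existing IDs are left untouched.
    for (Schema schema : deltaSchemas) {
      byId.putIfAbsent(schema.schemaId(), schema);
    }
    return List.copyOf(byId.values());
  }
}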

Changes

  • Client-facing API Changes
  • Internal API Changes
  • Bug Fixes
  • New Features
  • Performance Improvements
  • Code Style
  • Refactoring
  • Documentation
  • Tests

For all the boxes checked, please include additional details of the changes made in this pull request.

Testing Done

  • Manually Tested on local docker setup. Please include commands ran, and their output.
  • Added new tests for the changes made.
  • Updated existing tests to reflect the changes made.
  • No tests added or updated. Please explain why. If unsure, please feel free to ask for help.
  • Some other form of testing like staging or soak time in production. Please explain.

For all the boxes checked, include a detailed description of the testing done for the changes made in this pull request.

Additional Information

  • Breaking Changes
  • Deprecations
  • Large PR broken into smaller PRs, and PR plan linked in the description.

For all the boxes checked, include additional details of the changes made in this pull request.

@Will-Lo Will-Lo force-pushed the add-multi-schema-update-support-tables-api branch from d8d7dad to d366bce on December 4, 2025 20:58
@Will-Lo Will-Lo marked this pull request as ready for review December 4, 2025 21:50
@cbb330
Collaborator

cbb330 commented Dec 4, 2025

this is a similar requirement to snapshots

in the snapshots API, we pass List<snapshots> and List<snapshotrefs> via tabledto and deserialize them on the server. the server also has access to the existing baseMetadata object. so, once on the server we:

  1. truncate snapshots that are on baseMetadata but not present in the new serialized version.
  2. add snapshots given in the new serialized version.
  3. update ref pointers

i think it would be useful for code reuse and logically simpler to implement the same API, but for schemas: pass List<schemas> containing all schemas, then do a truncate (though i don't think schemas can be truncated) and an append (only add new schemas),

which would preserve ordering.
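For illustration, a rough, hypothetical sketch of that truncate-and-append reconciliation, written generically so the same shape could serve snapshots (keyed by snapshot ID) or schemas (keyed by schema ID, where the truncate set would stay empty). The class and method names are made up for this example.

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.function.Function;

// Hypothetical helper: given the base list from existing metadata and the incoming
// serialized list, compute what to truncate (in base but not incoming) and what to
// append (in incoming but not base), preserving the incoming order for appends.
public final class ListReconciler {

  public record Reconciliation<T>(List<T> toRemove, List<T> toAdd) {}

  private ListReconciler() {}

  public static <T, K> Reconciliation<T> reconcile(
      List<T> base, List<T> incoming, Function<T, K> keyFn) {
    Set<K> incomingKeys = new HashSet<>();
    for (T item : incoming) {
      incomingKeys.add(keyFn.apply(item));
    }
    Set<K> baseKeys = new HashSet<>();
    for (T item : base) {
      baseKeys.add(keyFn.apply(item));
    }

    List<T> toRemove = new ArrayList<>();
    for (T item : base) {
      if (!incomingKeys.contains(keyFn.apply(item))) {
        toRemove.add(item); // e.g. expired snapshots; expected to be empty for schemas
      }
    }
    List<T> toAdd = new ArrayList<>();
    for (T item : incoming) {
      if (!baseKeys.contains(keyFn.apply(item))) {
        toAdd.add(item); // new snapshots or new schemas, in incoming order
      }
    }
    return new Reconciliation<>(toRemove, toAdd);
  }
}

Usage for schemas would look roughly like reconcile(baseMetadata.schemas(), incomingSchemas, Schema::schemaId), with the toRemove list expected to stay empty.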

@cbb330
Collaborator

cbb330 commented Dec 4, 2025

does schema require a pointer per branch or is it global?

is last schema in the list always the current schema?

@cbb330
Collaborator

cbb330 commented Dec 4, 2025

the table metadata object that we must serialize has these attributes

  private final String metadataFileLocation;
  private final int formatVersion;
  private final String uuid;
  private final String location;
  private final long lastSequenceNumber;
  private final long lastUpdatedMillis;
  private final int lastColumnId;
  private final int currentSchemaId;
  private final List<Schema> schemas;
  private final int defaultSpecId;
  private final List<PartitionSpec> specs;
  private final int lastAssignedPartitionId;
  private final int defaultSortOrderId;
  private final List<SortOrder> sortOrders;
  private final Map<String, String> properties;
  private final long currentSnapshotId;
  private final Map<Integer, Schema> schemasById;
  private final Map<Integer, PartitionSpec> specsById;
  private final Map<Integer, SortOrder> sortOrdersById;
  private final List<HistoryEntry> snapshotLog;
  private final List<MetadataLogEntry> previousFiles;
  private final List<StatisticsFile> statisticsFiles;
  private final List<PartitionStatisticsFile> partitionStatisticsFiles;
  private final List<MetadataUpdate> changes;
  private SerializableSupplier<List<Snapshot>> snapshotsSupplier;
  private volatile List<Snapshot> snapshots;
  private volatile Map<Long, Snapshot> snapshotsById;
  private volatile Map<String, SnapshotRef> refs;
  private volatile boolean snapshotsLoaded;

we know that list has to be serialized

but what about these?

  private final int currentSchemaId;
  private final Map<Integer, Schema> schemasById;

one idea for why currentSchemaId is required: if we ever support schema rollback, the api will be ready.

schemasById <- i'm not sure what this is but maybe it would be helpful
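for reference, a minimal sketch (not the actual Iceberg code) of what schemasById amounts to: an ID-keyed index built from the schemas list, which is what lets a snapshot's schema-id be resolved back to a concrete schema.

import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.iceberg.Schema;

// Illustrative only: schemasById is effectively an index of the schemas list by ID.
// A reader resolves a snapshot's schema this way: look up the snapshot's schema ID
// in the map; if the ID is missing on a replica, that snapshot becomes unreadable.
final class SchemaIndex {

  private SchemaIndex() {}

  static Map<Integer, Schema> indexById(List<Schema> schemas) {
    Map<Integer, Schema> byId = new HashMap<>();
    for (Schema schema : schemas) {
      byId.put(schema.schemaId(), schema);
    }
    return byId;
  }
}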

@Will-Lo
Collaborator Author

Will-Lo commented Dec 5, 2025

@cbb330 good questions, so I'll go over them one at a time.
Schemas are similar to snapshots but not exactly 1-1, because snapshots are the source of truth for determining which schema to use in Iceberg. When performing queries over branches or via time travel, the engine references the snapshot's schema ID, so schemas just need to be present in the schemasById map. Also, schemas cannot be expired, unlike snapshots, so a table's metadata stores its schemas from creation time.

I checked through the docs here based on your questions: https://iceberg.apache.org/docs/nightly/branching/#usage

It is important to understand that the schema tracked for a table is valid across all branches. When working with branches, the table's schema is used as that's the schema being validated when writing data to a branch. On the other hand, querying a tag uses the snapshot's schema, which is the schema id that snapshot pointed to when the snapshot was created.

I didn't want to always send all schemas in the request (as we do for snapshots) because, like you said, schemas don't go through truncation. Also, schemas never get expired in Iceberg, even when the snapshots referencing them are expired, so copying over all the schemas from the metadata in each request can blow up the payload size. Hence the code here only looks at deltas.

There is a concern, though, that some tables already have a source <-> destination schema mismatch, in which case an approach that enforces full equality would be better. It becomes a tradeoff between handling very large schema lists that have evolved a lot and fixing existing tables that are already corrupted. I think it is safer to optimize for only sending the schema deltas, especially since schemas can't be expired, unlike snapshots. Let me know your thoughts.
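To make the delta idea concrete, here is a hedged sketch (illustrative names, not the PR's actual code) of how the replication client could pick only unseen schemas to send, serializing each one so that schema IDs and column IDs are preserved.

import java.util.ArrayList;
import java.util.List;
import java.util.Set;

import org.apache.iceberg.Schema;
import org.apache.iceberg.SchemaParser;

// Hypothetical client-side helper: select only the source schemas whose IDs are not
// yet present on the replica, and serialize them so field (column) IDs are preserved.
public final class SchemaDeltaSelector {

  private SchemaDeltaSelector() {}

  public static List<String> selectNewSchemas(
      List<Schema> sourceSchemas, Set<Integer> replicaSchemaIds) {
    List<String> delta = new ArrayList<>();
    for (Schema schema : sourceSchemas) {
      if (!replicaSchemaIds.contains(schema.schemaId())) {
        // SchemaParser.toJson keeps the schema's field IDs, so column IDs stay consistent.
        delta.add(SchemaParser.toJson(schema));
      }
    }
    return delta;
  }
}

The server-side counterpart would then merge these by ID into the base metadata, along the lines of the sketch in the summary above.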

Collaborator

@cbb330 cbb330 left a comment


strategy wise, the intermediate-schemas approach seems like an ok tradeoff vs sending all schemas, given that the schemas list is append-only and the content per schema is huge. i have some concerns on rollout.

  1. what will be the impact to destination tables which are already in a "bad" state?
  2. are there any source tables in a bad state? because in theory it's possible if a client evolved the schema twice before committing.

the answer to these two questions may push us to ideate on the rollout strategy, because we will be "correcting" the old behavior, which will change the contract with existing tables.

starting the conversation on 1): assume the destination table has schemas [0,1] but the source table has [0,1,2,3,4]. this works now because we don't calculate "newSchemas", and destination table snapshots which refer to 4 are somehow fine in spark (but trino doesn't work). the only way to fix this is a true merge, because the new schemas can't be calculated.

@Will-Lo
Collaborator Author

Will-Lo commented Dec 9, 2025

Strategy wise, I will perform an analysis to determine the number of corrupted tables given this limitation by checking all replica tables against their latest snapshot. Then I'll need to fix these tables manually; once they are fixed, new evolutions going forward will be safe.

@cbb330
Collaborator

cbb330 commented Dec 13, 2025

This bug will do this for replicated tables:

Desired: [1a,2b,3c,4d]
Actual:[1a,2d]

where the number is the schema ID and the letter is the schema JSON

and after the fix it will do

Desired: [1a,2b,3c,4d]
Actual:[1a,2d,3c,4d]

i'm thinking about the effect this has on manifests, which point at their schemaId, if the history is not perfect but mostly accurate

i think since only a minimal number of replica tables have broken in the before state, it should be ok, wdyt?

@Will-Lo
Collaborator Author

Will-Lo commented Dec 18, 2025

After the fix the actual should be
Desired: [1a,2b,3c,4d]
Actual:[1a,2d,3c,4d]

If fixed by hand (what I'll need to do pre-rollout) the actual will look like:
Desired: [1a,2b,3c,4d,5e]
Actual:[1a,4d,5e]

Which means that, going forward from the timestamp where the metadata is fixed, it will be accurate.

Collaborator

@cbb330 cbb330 left a comment


thanks for sorting through the complexity here and coming up with a good solution.

Member

@abhisheknath2011 abhisheknath2011 left a comment


Thanks @Will-Lo for the work on this and handling the schema update scenario properly!

@Will-Lo Will-Lo merged commit 24dabd3 into linkedin:main Dec 22, 2025
1 check passed
