
Conversation

Member

@voonhous voonhous commented Dec 30, 2025

Describe the issue this Pull Request addresses

Introduce the new VARIANT schema type. It will look similar to a record/struct with an associated logical type. The implementation is in line with the Parquet spec.

Linked task: #17745

Note: No reader or writer support has been added yet; it will be added in a separate PR.

Summary and Changelog

  1. Add VARIANT support to HoodieSchema
  2. Add VARIANT type to HoodieSchemaType enum.

Impact

Support for Variant types in accordance with Parquet's spec.

Risk Level

Low

Documentation Update

None

Contributor's checklist

  • Read through contributor's guide
  • Enough context is provided in the sections above
  • Adequate tests were added if applicable

@voonhous voonhous requested review from bvaradar, rahil-c and the-other-tim-brown and removed request for the-other-tim-brown December 30, 2025 10:40
@github-actions github-actions bot added the size:L PR with lines of changes in (300, 1000] label Dec 30, 2025
@voonhous voonhous changed the title feat(schema): Add variant support to HoodieSchema feat(schema): Add VARIANT support to HoodieSchema Dec 30, 2025
return DATE;
} else if (logicalType == LogicalTypes.uuid()) {
return UUID;
} else if (logicalType instanceof VariantLogicalType) {
Contributor

Can we follow a similar pattern as the uuid where there is a singleton we can compare to instead of instanceof?

Member Author

Makes sense; this limits object creation for types. Will eagerly initialize a singleton.
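The exchange above suggests replacing the `instanceof` check with reference equality against a shared instance, as `LogicalTypes.uuid()` does. A minimal sketch of that eager-singleton pattern (the class and field names here are hypothetical and not Hudi's actual implementation):

```java
// Hypothetical sketch of the eager-singleton pattern discussed above.
// An eagerly initialized shared instance lets callers compare with ==
// instead of instanceof, and avoids per-schema object creation.
public class VariantSingletonSketch {

  static final class VariantLogicalType {
    // Eagerly created at class-load time; the only instance that ever exists.
    static final VariantLogicalType INSTANCE = new VariantLogicalType();

    private VariantLogicalType() {
      // private constructor prevents additional instances
    }
  }

  public static void main(String[] args) {
    Object a = VariantLogicalType.INSTANCE;
    Object b = VariantLogicalType.INSTANCE;
    System.out.println(a == b);                            // true: same instance
    System.out.println(a == VariantLogicalType.INSTANCE);  // true: == is sufficient
  }
}
```

With this in place, the branch at the diff hunk above could read `logicalType == VariantLogicalType.INSTANCE` rather than `logicalType instanceof VariantLogicalType`.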

}

/**
* Creates an unshredded Variant schema.
Contributor

General question: can you have shredded and unshredded values in the same dataset? If so, it seems like the schema should be the same for both.

Member Author

@voonhous voonhous Dec 30, 2025

I don't quite understand this question. The implementation follows the Parquet schema spec.

Nonetheless, I'd still like to understand what you're getting at.
Do you mean whether we can have an unshredded_typed_column and a shredded_typed_column in the same dataset?

Or are you saying that since shredded_variant typed columns can hold unshredded data, we should just maintain the shredded type?

Unshredded

optional group variant_unshredded (VARIANT) {
  required binary metadata;
  required binary value;
}

Shredded

optional group variant_shredded (VARIANT) {
  required binary metadata;
  optional binary value;
  optional int64 typed_value;
}

So, to use the shredded schema to represent unshredded data, we can just set typed_value to null and populate value?
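The question above can be illustrated with a small sketch: a row in the shredded layout carries unshredded data by leaving typed_value null and populating value. The field names mirror the Parquet spec; the class itself is a hypothetical illustration, not Hudi code.

```java
// Hypothetical illustration: one row shape covering both layouts.
// Unshredded data in a shredded schema = typed_value is null, value is set.
public class ShreddedVariantSketch {

  static final class VariantRow {
    final byte[] metadata;   // required binary metadata in both layouts
    final byte[] value;      // binary-encoded variant value (may be null when shredded)
    final Long typedValue;   // shredded typed_value; null when the row is not shredded

    VariantRow(byte[] metadata, byte[] value, Long typedValue) {
      this.metadata = metadata;
      this.value = value;
      this.typedValue = typedValue;
    }

    boolean isShredded() {
      return typedValue != null;
    }
  }

  public static void main(String[] args) {
    // Unshredded data stored in the shredded layout: typed_value left null.
    VariantRow unshredded = new VariantRow(new byte[]{1}, new byte[]{42}, null);
    // Shredded data: the extracted int64 lives in typed_value.
    VariantRow shredded = new VariantRow(new byte[]{1}, null, 42L);

    System.out.println(unshredded.isShredded()); // false
    System.out.println(shredded.isShredded());   // true
  }
}
```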

Contributor

I'm wondering if the column can have both shredded and unshredded in the same file.

* @param typedValueSchema the schema for the typed_value field (can be null if typed_value is not needed)
* @return a new HoodieSchema.Variant representing a shredded variant
*/
public static HoodieSchema.Variant createVariantShredded(HoodieSchema typedValueSchema) {
Collaborator

Do we need to include shredding information in the type/schema layer? IIUC, it's more of a read/write optimization mechanism, which can be inferred or fetched from configuration during reading or writing.

Member Author

@voonhous voonhous Jan 2, 2026

Yeah, that's my understanding too. Since we are introducing the VARIANT type, I am merely following the spec that Parquet provides.

It's similar to DECIMAL. The same argument applies: do we really need a DECIMAL backed by BYTES or FIXED? Both are DECIMAL logical types, and each has its own performance characteristics.

I do not have an answer on whether we need to include shredding information in the type/schema layer. But a similar question would be asked if I did not include it, so we might as well include it now and leave it up for discussion by the experts.

Member Author

@voonhous voonhous Jan 2, 2026

From what I read in the variant shredding spec (https://github.com/apache/parquet-format/blob/master/VariantShredding.md):

Shredding is an optional/extended feature, so I am not really sure whether ALL engines actually support it.

We might need devs with more domain knowledge to chime in on how different engines support VARIANT and whether there's a risk in supporting only one form and not the other.

Collaborator

@cshuo cshuo Jan 5, 2026

Thanks for explaining with the DECIMAL example. The extra type information seems necessary, since HoodieSchema is now a wrapper class for the Avro schema and needs to provide compatibility with Avro.

Shredding is an optional/extended feature, so i am not really sure if ALL engines actually support it.

AFAIK, Flink does not support shredding yet, while Spark does. BTW, neither engine includes shredding information in the data type layer.

@voonhous voonhous linked an issue Jan 5, 2026 that may be closed by this pull request
@hudi-bot
Collaborator

hudi-bot commented Jan 7, 2026

CI report:

Bot commands: @hudi-bot supports the following commands:
  • @hudi-bot run azure: re-run the last Azure build


Development

Successfully merging this pull request may close these issues.

Introduce Variant schema type

4 participants