
Reduce Allocations When Reading Parquet Metadata #5775

@tustvold

Description


Is your feature request related to a problem or challenge? Please describe what you are trying to do.

Currently, when reading the parquet metadata, in particular in decode_metadata, we read into structures within parquet::format that are generated by the thrift compiler. Despite reading from a fixed slice of memory, these allocate new buffers for all variable-width data, including all statistics. This is especially unfortunate because these thrift data structures are temporary and quickly get parsed into more optimal data structures.
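To illustrate, the generated structures own their variable-width fields. The sketch below contrasts an owned and a borrowed variant; the field names mirror parquet::format::Statistics, but the definitions here are illustrative rather than the exact generated code:

```rust
// Illustrative sketch only: the thrift-generated structures own their
// variable-width data, so every min/max statistic read from the footer is
// copied into a freshly allocated Vec<u8>, even though the source bytes
// already live in the metadata slice being decoded.
pub struct OwnedStatistics {
    pub min_value: Option<Vec<u8>>, // new heap allocation per column chunk
    pub max_value: Option<Vec<u8>>, // new heap allocation per column chunk
}

// A borrowing equivalent could instead reference the original metadata buffer.
pub struct BorrowedStatistics<'a> {
    pub min_value: Option<&'a [u8]>,
    pub max_value: Option<&'a [u8]>,
}
```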

For example, when reading the schema of a 26-column parquet file we see almost 10,000 allocations, most of them a result of the thrift data structures.

[screenshot: allocation profile when reading the metadata]

Almost all of these allocations are very small.

[screenshot: allocation size distribution]

Describe the solution you'd like

The optimal solution would be for the parquet decoder to borrow data rather than allocating new data structures. This would avoid the vast majority of these allocations, and it is possible because the thrift binary encoding stores variable-width data as a length-prefixed byte string - https://github.com/apache/thrift/blob/master/doc/specs/thrift-compact-protocol.md#binary-encoding
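A minimal sketch of what borrowing could look like, assuming the decoder has access to the full metadata slice (this is not the existing parquet crate API, just an illustration of decoding a compact-protocol binary field without copying it):

```rust
/// Read a thrift compact-protocol binary field starting at `pos`, returning a
/// slice borrowed from `buf` instead of copying the payload into a Vec<u8>.
fn read_borrowed_binary<'a>(buf: &'a [u8], pos: &mut usize) -> Result<&'a [u8], &'static str> {
    // The compact protocol encodes the byte length as a ULEB128 varint.
    let mut len: usize = 0;
    let mut shift = 0u32;
    loop {
        let byte = *buf.get(*pos).ok_or("unexpected end of buffer")?;
        *pos += 1;
        len |= ((byte & 0x7f) as usize) << shift;
        if byte & 0x80 == 0 {
            break;
        }
        shift += 7;
    }
    // Borrow the payload directly from the input slice - no allocation.
    let end = (*pos).checked_add(len).ok_or("length overflow")?;
    let bytes = buf.get(*pos..end).ok_or("binary length exceeds buffer")?;
    *pos = end;
    Ok(bytes)
}
```

For example, given `buf = [0x03, b'a', b'b', b'c']` and `pos = 0`, this returns `&buf[1..4]` without touching the allocator.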

Given that a lot of the allocations are small, deploying a small string optimisation might also be valuable, but just borrowing string slices will be optimal.
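For reference, a small string optimisation along these lines might look like the hypothetical type below (existing crates such as smallvec or compact_str provide production-quality versions); short payloads stay inline and only long ones reach the allocator:

```rust
/// Hypothetical small-bytes type: payloads up to 23 bytes are stored inline,
/// avoiding a heap allocation; longer payloads fall back to a Vec<u8>.
enum SmallBytes {
    Inline { len: u8, data: [u8; 23] },
    Heap(Vec<u8>),
}

impl SmallBytes {
    fn new(bytes: &[u8]) -> Self {
        if bytes.len() <= 23 {
            let mut data = [0u8; 23];
            data[..bytes.len()].copy_from_slice(bytes);
            SmallBytes::Inline { len: bytes.len() as u8, data }
        } else {
            SmallBytes::Heap(bytes.to_vec())
        }
    }

    fn as_slice(&self) -> &[u8] {
        match self {
            SmallBytes::Inline { len, data } => &data[..*len as usize],
            SmallBytes::Heap(v) => v,
        }
    }
}
```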

Describe alternatives you've considered

Additional context
