
performance(starknet-classes): Simplify decompression by using bit alignment of encoding.#9607

Open
orizi wants to merge 1 commit into main from
orizi/02-04-performance_starknet-classes_simplify_decompression_by_using_bit_alignment_of_encoding

Conversation

@orizi
Collaborator

@orizi orizi commented Feb 4, 2026

Summary

Optimized the decompress function in felt252_vec_compression.rs by replacing the manual limb-by-limb division with a more efficient bit-manipulation approach. The new implementation uses a bit mask and shift operations to extract values from the packed data, which is faster than the previous division-based algorithm. Also extended the IntoOrPanic trait to support the i128 and u128 types.


Type of change

Please check one:

  • [ ] Bug fix (fixes incorrect behavior)
  • [ ] New feature
  • [x] Performance improvement
  • [ ] Documentation change with concrete technical impact
  • [ ] Style, wording, formatting, or typo-only change

Why is this change needed?

The previous implementation of the decompress function used a relatively expensive division operation for each word extraction. By leveraging the fact that the padded code size is a power of 2, we can use more efficient bit manipulation operations (masking and shifting) to achieve the same result with better performance.
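The mask-and-shift idea can be sketched as below. This is an illustrative standalone example, not the actual code from felt252_vec_compression.rs (which operates on felt252 limbs rather than a single u128); the names `unpack_mask_shift`, `unpack_div`, `bits`, and `count` are hypothetical.

```rust
/// Extracts `count` words of `bits` bits each from `packed`, using only
/// masking and shifting. Valid because the padded code size (1 << bits)
/// is a power of 2, so "divide by base" is just "shift right by bits".
fn unpack_mask_shift(mut packed: u128, bits: u32, count: usize) -> Vec<u128> {
    let mask: u128 = (1u128 << bits) - 1;
    let mut out = Vec::with_capacity(count);
    for _ in 0..count {
        out.push(packed & mask); // low `bits` bits are the next word
        packed >>= bits; // advance to the following word
    }
    out
}

/// The division-based equivalent that the mask-and-shift version replaces.
fn unpack_div(mut packed: u128, bits: u32, count: usize) -> Vec<u128> {
    let base = 1u128 << bits; // padded code size, a power of 2
    let mut out = Vec::with_capacity(count);
    for _ in 0..count {
        out.push(packed % base);
        packed /= base;
    }
    out
}

fn main() {
    // Pack the words 5, 3, 7 into one u128 with 10 bits per word.
    let packed = (7u128 << 20) | (3u128 << 10) | 5u128;
    assert_eq!(unpack_mask_shift(packed, 10, 3), vec![5, 3, 7]);
    // Both strategies extract the same words; only the cost differs.
    assert_eq!(unpack_mask_shift(packed, 10, 3), unpack_div(packed, 10, 3));
    println!("ok");
}
```

Because the two loops are term-for-term equivalent when the base is a power of 2, the optimization changes only the cost of each extraction, not the decompressed output.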


What was the behavior or documentation before?

The previous implementation used a manual 4-limb long division approach to extract values from packed data, which was less efficient, especially for large inputs.


What is the behavior or documentation after?

The new implementation uses bit manipulation (masking and shifting) to extract values from the packed data, which is more efficient. It also adds i128 and u128 implementations for the IntoOrPanic trait to support the new code.
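The actual IntoOrPanic trait lives in the Cairo repository's utilities; the sketch below is a hypothetical standalone version showing what panicking conversions for i128 and u128 might look like. The trait shape and method name here are assumptions, not the repo's exact definitions.

```rust
/// Hypothetical sketch of an IntoOrPanic-style trait: a conversion that
/// panics instead of returning an error when the value is out of range.
trait IntoOrPanic<T> {
    fn into_or_panic(self) -> T;
}

impl IntoOrPanic<usize> for i128 {
    fn into_or_panic(self) -> usize {
        // try_into fails for negative values or values above usize::MAX.
        self.try_into().expect("i128 value out of range for usize")
    }
}

impl IntoOrPanic<usize> for u128 {
    fn into_or_panic(self) -> usize {
        self.try_into().expect("u128 value out of range for usize")
    }
}

fn main() {
    let n: usize = 42i128.into_or_panic();
    assert_eq!(n, 42);
    let m: usize = 7u128.into_or_panic();
    assert_eq!(m, 7);
    println!("ok");
}
```

Building on Rust's standard TryInto keeps the range checks in the standard library; the trait only decides that an out-of-range value is a programmer error worth panicking over.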


Additional context

This optimization maintains the same functionality while improving performance. The approach takes advantage of the fact that the padded code size is a power of 2, allowing us to use bit operations instead of division.


Collaborator Author

orizi commented Feb 4, 2026

This stack of pull requests is managed by Graphite. Learn more about stacking.

@orizi orizi requested a review from TomerStarkware February 4, 2026 14:20
@orizi orizi marked this pull request as ready for review February 4, 2026 14:21
