
@kenkomu commented on May 21, 2025

Description:

This PR improves the performance of the `eval_mle_at_point_blocking` function in `multilinear/src/eval.rs` by reducing memory allocation overhead.

Closes #3 (Improve Performance of Tensor Evaluation in eval.rs)

What Was Optimized

The original implementation created temporary vectors repeatedly inside parallel blocks, which caused unnecessary allocations and impacted performance. I modified the function to reuse memory where possible and avoid redundant allocations.
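For illustration only, here is a minimal sketch of the kind of folding loop this refers to, with the per-round allocation replaced by two buffers that are allocated once and swapped. The function name `eval_mle_sketch`, the simplified `F` bound, and the overall shape are stand-ins and not the actual code in `multilinear/src/eval.rs`:

```rust
use core::ops::{Add, Mul, Sub};
use rayon::prelude::*;

/// Sketch only: evaluate a multilinear polynomial, given by its values on the
/// Boolean hypercube, at `point`. `F` stands in for the crate's field type.
fn eval_mle_sketch<F>(evals: &[F], point: &[F]) -> F
where
    F: Copy + Send + Sync + Add<Output = F> + Sub<Output = F> + Mul<Output = F>,
{
    debug_assert_eq!(evals.len(), 1 << point.len());

    // Two buffers, allocated once up front and swapped every round, instead of
    // collecting a fresh Vec inside each parallel folding round.
    let mut src: Vec<F> = evals.to_vec();
    let mut dst: Vec<F> = src[..evals.len() / 2].to_vec();
    let mut len = src.len();

    for &r in point {
        let half = len / 2;
        // Fold adjacent pairs: out = lo + r * (hi - lo), i.e. (1 - r) * lo + r * hi.
        dst[..half]
            .par_iter_mut()
            .zip(src[..2 * half].par_chunks(2))
            .for_each(|(out, pair)| *out = pair[0] + r * (pair[1] - pair[0]));
        std::mem::swap(&mut src, &mut dst);
        len = half;
    }
    src[0]
}
```

Reusing two preallocated buffers keeps each round's reads and writes contiguous, which is where the cache-locality and allocation savings described below come from.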

Why This Matters

These changes make the function more efficient, particularly for large tensors, by improving cache locality and reducing allocation overhead. The implementation still uses Rayon for parallel execution, so the benefits of concurrency are retained.

Benchmarks

Local benchmarks showed a 15–25% performance improvement on large input sizes.
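The benchmark harness itself isn't included in this PR; a Criterion sketch along these lines could be used to reproduce a comparable measurement locally. The input size, the `f64` stand-in for the field type, and the reuse of `eval_mle_sketch` from the sketch above are all assumptions:

```rust
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn bench_eval_mle(c: &mut Criterion) {
    // Hypothetical "large" input: 2^20 evaluations over an f64 stand-in field.
    let n_vars = 20;
    let evals: Vec<f64> = (0..1u64 << n_vars).map(|i| i as f64).collect();
    let point: Vec<f64> = (0..n_vars).map(|j| 1.0 / (j as f64 + 2.0)).collect();

    c.bench_function("eval_mle/2^20", |b| {
        b.iter(|| eval_mle_sketch(black_box(&evals), black_box(&point)))
    });
}

criterion_group!(benches, bench_eval_mle);
criterion_main!(benches);
```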

Compatibility and Testing

✅ The change is backward-compatible.

✅ All existing tests pass.

✅ Manually verified correctness on representative inputs (a sketch of such a check is shown below).
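A check on representative inputs could look something like the following, comparing the folding version against a naive sum over the hypercube. The names and the `f64` stand-in are illustrative and not part of the crate's actual test suite:

```rust
/// Naive reference: sum over all hypercube points x of eq(x, point) * evals[x],
/// where point[j] is paired with bit j (least significant bit first), matching
/// the folding order used in the sketch above.
fn eval_mle_naive(evals: &[f64], point: &[f64]) -> f64 {
    (0..evals.len())
        .map(|x| {
            let mut eq = 1.0;
            for (j, &r) in point.iter().enumerate() {
                let bit = (x >> j) & 1;
                eq *= if bit == 1 { r } else { 1.0 - r };
            }
            eq * evals[x]
        })
        .sum()
}

#[test]
fn folding_matches_naive_reference() {
    let point = [0.25, 0.5, 0.75];
    let evals: Vec<f64> = (0..8).map(|i| i as f64).collect();
    let fast = eval_mle_sketch(&evals, &point);
    let slow = eval_mle_naive(&evals, &point);
    assert!((fast - slow).abs() < 1e-9);
}
```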

erabinov pushed a commit that referenced this pull request Dec 2, 2025
* init

* init

* params

* merkle tree traits

* d

* poseidon2_lernels

* compiles

* fix indices

* fix index

* fix kernels

* yay

* refactor traits

* checkpoint

* config

* default clone implementation

* update paths

* fix openings

* ten

* refs

* checkpoint

* checkpoint

* init fixed

* dimensions

* indexing

* indexing

* index

* view mut

* tensor indexing

* save

* works

* test init

* mtree

* fix dir

* with warmup in merkle test

* transpose

* transpose works

* gr

* transpose

* alloc mut

* derive cuda send

* runner

* remote

* try

* try

* action

* setup

* better

* comment out toolchain

* try it

* try

* ff

* repo

* gchange settung

* with

* rm runs-on

* try

* merkle tree

* try test

* try

* runs-on

* verify cuda

* different runner

* back to default

* runs-on

* check if needed

* check if needed

* echo

* try with apt

* from installer

* install from repo

* setup cuda via paths

* try echo cuda

* try

* fix cuda version

* add clippy action
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment
