perf(vdf): optimize hot loop with direct compress256 and copy elimination #1214
glottologist merged 15 commits into master from
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. Use the following commands to manage reviews:
📝 Walkthrough

Replace mutable Sha256 hasher usage with block-based SHA-256 compression in the VDF, change vdf APIs to accept salt by value (U256), add compute_step_checkpoints and propagate a reset_seed through capacity checkpoint generation, enable the sha2 "compress" feature in manifests, and update the benchmark workflow and benches.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 3 passed
📝 Coding Plan
Design Plan: VDF Hot Loop Optimization via compress256

Issue: #954 — Improve VDF

1. Problem Statement

The VDF (Verifiable Delay Function) is a consensus-critical component that performs sequential SHA256 hashing to provide network timing (~1 step/sec). The hot loop in
All of this overhead is avoidable because the VDF input is always exactly 64 bytes (32-byte salt + 32-byte seed), making the padding and block structure fully predictable.

2. Solution: Direct compress256 with Pre-computed Padding

2.1 Core Insight

SHA256 processes data in 64-byte blocks. For a 64-byte message:
The padding block is a compile-time constant. The IV is a compile-time constant. By calling

2.2 Architecture

2.3 Key Constants

```rust
// SHA256 initial hash values (FIPS 180-4 Section 5.3.3)
const SHA256_IV: [u32; 8] = [
    0x6a09e667, 0xbb67ae85, 0x3c6ef372, 0xa54ff53a,
    0x510e527f, 0x9b05688c, 0x1f83d9ab, 0x5be0cd19,
];

// Pre-computed padding for exactly 64 bytes of input
// Layout: 0x80 | zeros | 0x0200 (512 bits in big-endian at bytes 62-63)
const SHA256_64B_PADDING: [u8; 64] = { ... };
```

2.4 Helper Functions
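As a sketch of how such a constant could be built in const context (this particular construction is an assumption; the PR's actual definition may differ):

```rust
// Sketch: pre-computed SHA-256 padding for a message of exactly 64 bytes.
// The padded second block is: 0x80, then 55 zero bytes, then the 8-byte
// big-endian bit length (64 bytes = 512 bits).
const SHA256_64B_PADDING: [u8; 64] = {
    let mut pad = [0u8; 64];
    pad[0] = 0x80; // mandatory '1' bit appended after the message
    let be = (64u64 * 8).to_be_bytes(); // 512 bits, big-endian
    let mut i = 0;
    while i < 8 {
        // const-friendly loop (no iterators in const context)
        pad[56 + i] = be[i];
        i += 1;
    }
    pad
};

fn main() {
    assert_eq!(SHA256_64B_PADDING[0], 0x80);
    assert!(SHA256_64B_PADDING[1..56].iter().all(|&b| b == 0));
    assert_eq!(&SHA256_64B_PADDING[56..], &512u64.to_be_bytes());
    println!("padding layout ok");
}
```

Because every field is fixed at compile time, the hot loop never recomputes padding.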
2.5 Data Layout

The salt half of

3. API Changes

3.1
| File | Function/Location | Change |
|---|---|---|
| crates/vdf/src/lib.rs | vdf_sha | Rewritten (core optimization) |
| crates/vdf/src/lib.rs | last_step_checkpoints_is_valid | Inner loop rewritten with compress256 |
| crates/vdf/src/lib.rs | calibrate_vdf | Signature update |
| crates/vdf/src/vdf.rs | run_vdf_for_genesis_block | Signature update |
| crates/vdf/src/vdf.rs | run_vdf | Signature update, removed unnecessary H256 clone |
| crates/vdf/src/vdf.rs | test_vdf_step | Signature update |
| crates/vdf/src/vdf.rs | test_vdf_service | Signature update |
| crates/vdf/src/vdf.rs | test_vdf_does_not_get_too_far_ahead | Signature update |
| crates/vdf/src/state.rs | vdf_steps_are_valid | Signature update |
| crates/vdf/src/state.rs | test_mid_execution_cancellation | Signature update |
| crates/chain-tests/src/utils.rs | 3 call sites (~lines 136, 255, 3622) | Signature update |
Removed Imports
- crates/vdf/src/vdf.rs: use sha2::{Digest as _, Sha256} removed entirely
- crates/vdf/src/state.rs: use sha2::{Digest as _, Sha256} removed entirely
- crates/chain-tests/src/utils.rs: sha2 import retained (still used for solution hash)
5. Unchanged Components

| Component | Reason |
|---|---|
| vdf_sha_verification (lib.rs:93-137) | Uses openssl::sha::Sha256 independently. Serves as a cross-check in tests to verify our optimization produces identical output. Intentionally a different implementation. |
| vdf_utils.rs | Not performance-critical (fast-forward replay of pre-validated steps) |
| apply_reset_seed | Uses openssl::sha::Sha256 for seed mixing. Not in the hot loop. |
6. Correctness Guarantee
The VDF is consensus-critical: output must be bit-identical across all nodes. Correctness is verified by:

- 4 hardcoded test vectors in lib.rs (test_checkpoints_for_single_step_*) that assert exact H256 values for known inputs. These encode the consensus-expected output.
- Cross-implementation verification in test_vdf_step: runs both vdf_sha (compress256 path) and vdf_sha_verification (openssl path) on the same input and asserts equality.
- Integration tests (test_vdf_service, test_vdf_does_not_get_too_far_ahead, test_mid_execution_cancellation) exercise the full VDF pipeline including state management, parallel verification, and cancellation.

All 8 tests pass with the optimized implementation.
7. Safety Analysis
One unsafe block exists in as_generic_array_slice:
```rust
unsafe { core::slice::from_raw_parts(blocks.as_ptr().cast(), blocks.len()) }
```

Why it's sound: GenericArray<u8, U64> is declared #[repr(transparent)] over [u8; 64] in the generic-array crate. This is the same cast pattern used internally by the sha2 crate itself. The pointer cast preserves alignment (both are [u8; 64] under the hood) and the length is preserved. No lifetime extension occurs — the returned slice borrows from the input.
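The same cast pattern can be illustrated with a local #[repr(transparent)] wrapper (Block below is a hypothetical stand-in for GenericArray<u8, U64>, not the actual crate type):

```rust
// Stand-in for GenericArray<u8, U64>: repr(transparent) guarantees the same
// size and alignment as the wrapped [u8; 64].
#[repr(transparent)]
struct Block([u8; 64]);

// Reinterpret &[[u8; 64]] as &[Block]. Sound because the element layouts are
// identical and the element count is unchanged. The returned slice borrows
// from `blocks`, so no lifetime is extended.
fn as_block_slice(blocks: &[[u8; 64]]) -> &[Block] {
    unsafe { core::slice::from_raw_parts(blocks.as_ptr().cast(), blocks.len()) }
}

fn main() {
    let blocks = [[1u8; 64], [2u8; 64]];
    let view = as_block_slice(&blocks);
    assert_eq!(view.len(), 2);
    assert_eq!(view[0].0[0], 1);
    assert_eq!(view[1].0[63], 2);
    println!("cast ok");
}
```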
8. Performance Characteristics
Eliminated per-iteration overhead:

- Sha256::new() — state initialization + buffer allocation
- Buffer fill tracking and boundary management
- Padding computation (always the same for 64-byte input)
- Finalization logic
- Hasher state reset

Retained per-iteration work:

- 32-byte copy_from_slice for seed (unavoidable — seed changes each iteration)
- Two SHA256 compression rounds via compress256 (the actual work)
- 32-byte state extraction via state_to_h256

Additional optimization:

- Salt incrementing between checkpoints uses byte-level increment_le_salt instead of U256 arithmetic, avoiding big-integer overhead
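A byte-level little-endian increment can be sketched as follows (the name increment_le_salt comes from the plan; this body is an assumption about its shape):

```rust
// Increment a little-endian byte string in place: add 1 to byte 0 and
// propagate the carry upward. Equivalent to `salt += 1` in U256 arithmetic,
// without constructing a big integer.
fn increment_le_salt(salt: &mut [u8]) {
    for byte in salt.iter_mut() {
        let (sum, overflowed) = byte.overflowing_add(1);
        *byte = sum;
        if !overflowed {
            return; // no carry left to propagate
        }
    }
}

fn main() {
    let mut salt = [0xff, 0x00, 0x00, 0x00];
    increment_le_salt(&mut salt);
    assert_eq!(salt, [0x00, 0x01, 0x00, 0x00]); // carry into byte 1

    let mut salt2 = [0x01, 0x00];
    increment_le_salt(&mut salt2);
    assert_eq!(salt2, [0x02, 0x00]);
    println!("increment ok");
}
```

On the common path (no carry) this touches a single byte, versus a full 256-bit add-and-store with U256.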
Eliminate Per-Iteration 32-Byte Copy in VDF Hot Loop — Design Plan

Mode: Feature

Problem Statement

The VDF hot loop in

Current flow per iteration:

Architecture & Approach

Write the compression output directly into

New flow per iteration:

Before the loop: one-time copy of initial

Why not custom intrinsics (Research Option A)? Writing a

Helper Changes

Add

Components

Modified:
| Gives up | Why acceptable |
|---|---|
| Code reads slightly less "obviously" as SHA256(salt || seed) | The data flow is the same; a one-line comment explains the in-place pattern |
| seed is not updated until after each checkpoint's inner loop | seed was only read within the inner loop (via the copy) — no external observers |
| Small additional per-checkpoint copy (blocks → seed) | Amortized over 100K iterations, negligible |
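The in-place data flow described in this plan can be sketched with a stand-in compression function (dummy_compress below is hypothetical; the real code uses sha2's compress256 on 64-byte blocks):

```rust
// Stand-in for compress256: any deterministic map from a 64-byte block to a
// 32-byte output suffices to show the data flow.
fn dummy_compress(block: &[u8; 64]) -> [u8; 32] {
    let mut out = [0u8; 32];
    for (i, b) in out.iter_mut().enumerate() {
        *b = block[i].wrapping_add(block[32 + i]);
    }
    out
}

// Optimized shape: the output lands in block[32..64], exactly where the next
// iteration reads its seed, so the separate `seed` variable disappears.
fn hash_chain_in_place(salt: [u8; 32], seed: [u8; 32], iters: usize) -> [u8; 32] {
    let mut block = [0u8; 64];
    block[..32].copy_from_slice(&salt);
    block[32..].copy_from_slice(&seed); // one-time copy before the loop
    for _ in 0..iters {
        let out = dummy_compress(&block);
        block[32..].copy_from_slice(&out);
    }
    let mut seed_out = [0u8; 32];
    seed_out.copy_from_slice(&block[32..]);
    seed_out // per-checkpoint copy out, amortized over the loop
}

// Reference shape with the per-iteration seed copy, for comparison.
fn hash_chain_copying(salt: [u8; 32], mut seed: [u8; 32], iters: usize) -> [u8; 32] {
    let mut block = [0u8; 64];
    block[..32].copy_from_slice(&salt);
    for _ in 0..iters {
        block[32..].copy_from_slice(&seed); // redundant copy each iteration
        seed = dummy_compress(&block);
    }
    seed
}

fn main() {
    let salt = [7u8; 32];
    let seed = [9u8; 32];
    assert_eq!(
        hash_chain_in_place(salt, seed, 1000),
        hash_chain_copying(salt, seed, 1000)
    );
    println!("in-place and copying variants agree");
}
```

Both variants compute the same chain; only the data movement differs, which is why the change is observationally safe.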
Out of Scope
- Custom SHA-NI intrinsics (Option A from research) — insufficient ROI
- Keeping state in [u32; 8] form between iterations (Option B) — high effort
- Benchmarking infrastructure — verify via existing calibrate_vdf or manual timing
- Changes to vdf_sha_verification (OpenSSL path) — this is the reference/verification path, not performance-critical
Verification
- All 4 existing hardcoded VDF test vectors must pass (bit-identical output)
- Cross-implementation check (vdf_sha vs vdf_sha_verification) must pass
- Integration tests (test_vdf_service, test_vdf_does_not_get_too_far_ahead) must pass
- Manual comparison: run calibrate_vdf before and after to observe improvement
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
crates/vdf/src/lib.rs (1)
Lines 237-279: ⚠️ Potential issue | 🟠 Major — Reject invalid checkpoint counts before the rayon loop.

cp[i] on Line 263 assumes the header carries exactly config.num_checkpoints_in_vdf_step checkpoints. A shorter last_step_checkpoints list will panic inside the rayon task and bubble out as a JoinError instead of a clean invalid-block result.

🛡️ Proposed guard
```diff
- // Insert the seed at the head of the checkpoint list
- checkpoint_hashes.0.insert(0, seed);
- let cp = checkpoint_hashes.clone();
+ if vdf_info.last_step_checkpoints.len() != config.num_checkpoints_in_vdf_step {
+     return Err(eyre::eyre!(
+         "Invalid checkpoint count: expected {}, got {}",
+         config.num_checkpoints_in_vdf_step,
+         vdf_info.last_step_checkpoints.len()
+     ));
+ }
+
+ // Insert the seed at the head of the checkpoint list
+ checkpoint_hashes.0.insert(0, seed);
+ let cp = checkpoint_hashes.clone();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/vdf/src/lib.rs` around lines 237 - 279, Reject/validate the checkpoint count before spawning the rayon task: check that checkpoint_hashes (or cp after inserting seed) has length >= config.num_checkpoints_in_vdf_step and return a proper invalid-block error if not, instead of letting cp[i] panic inside the rayon closure; perform this validation immediately after constructing checkpoint_hashes/let cp = checkpoint_hashes.clone() (and before computing start_salt and spawning the tokio::task::spawn_blocking) so the function returns a clear error path rather than a JoinError.

crates/chain-tests/src/utils.rs (1)
Lines 125-141: ⚠️ Potential issue | 🟠 Major — Mirror reset-step handling in these checkpoint recomputations.

These helpers hash steps[0] directly, but the first step after a reset boundary is generated from apply_reset_seed(previous_step, reset_seed), not from the raw previous step. As written, every post-reset SolutionContext here will carry wrong checkpoints. The validator path in crates/vdf/src/state.rs Lines 369-377 and the local recomputation in crates/vdf/src/vdf.rs Lines 384-386 and 494-496 already do the pre-hash reset; these sites need the same logic, which means threading in the active reset seed as well.

Also applies to: 245-258, 3611-3623
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/chain-tests/src/utils.rs` around lines 125 - 141, The checkpoint recomputation is using steps[0] directly but must mirror the validator logic and pre-hash a reset boundary by applying apply_reset_seed(previous_step, reset_seed) when the step after a reset is being processed; update the seed initialization before calling vdf_sha in utils.rs (the block that sets salt and mut seed and calls vdf_sha) to detect if the previous step is a reset boundary and, if so, call apply_reset_seed(previous_step, active_reset_seed) to produce the seed, thread the active reset seed into this helper, and apply the same change to the other occurrences noted (around the other blocks at the referenced ranges) so SolutionContext recomputations use the reset-applied seed exactly like crates/vdf/src/state.rs and crates/vdf/src/vdf.rs.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@crates/vdf/src/lib.rs`:
- Around line 19-91: Add a property-based test using proptest that randomly
generates small inputs and asserts vdf_sha matches the existing reference path
(vdf_sha_verification): create a test module (#[cfg(test)] mod tests) importing
proptest::prelude::*, generate random start_salt (32-byte little-endian U256)
and seed (32-byte H256) and small num_checkpoints/num_iterations_per_checkpoint
(e.g. 1..5 and 0..100), run both vdf_sha(start_salt, &mut seed_clone, ...) and
the reference vdf_sha_verification(start_salt, &mut seed_ref, ...) and assert
equality of checkpoints and final seed; wire conversion from the generated byte
arrays into U256 and H256 using the same helpers used elsewhere (e.g.
U256::from_little_endian / to_little_endian and H256::from_slice / as_mut()),
and shrink iteration sizes to keep tests fast.
---
Outside diff comments:
In `@crates/chain-tests/src/utils.rs`:
- Around line 125-141: The checkpoint recomputation is using steps[0] directly
but must mirror the validator logic and pre-hash a reset boundary by applying
apply_reset_seed(previous_step, reset_seed) when the step after a reset is being
processed; update the seed initialization before calling vdf_sha in utils.rs
(the block that sets salt and mut seed and calls vdf_sha) to detect if the
previous step is a reset boundary and, if so, call
apply_reset_seed(previous_step, active_reset_seed) to produce the seed, thread
the active reset seed into this helper, and apply the same change to the other
occurrences noted (around the other blocks at the referenced ranges) so
SolutionContext recomputations use the reset-applied seed exactly like
crates/vdf/src/state.rs and crates/vdf/src/vdf.rs.
In `@crates/vdf/src/lib.rs`:
- Around line 237-279: Reject/validate the checkpoint count before spawning the
rayon task: check that checkpoint_hashes (or cp after inserting seed) has length
>= config.num_checkpoints_in_vdf_step and return a proper invalid-block error if
not, instead of letting cp[i] panic inside the rayon closure; perform this
validation immediately after constructing checkpoint_hashes/let cp =
checkpoint_hashes.clone() (and before computing start_salt and spawning the
tokio::task::spawn_blocking) so the function returns a clear error path rather
than a JoinError.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 638c6287-4767-49c1-b5a3-c2dd50f72169
📒 Files selected for processing (5)
- Cargo.toml
- crates/chain-tests/src/utils.rs
- crates/vdf/src/lib.rs
- crates/vdf/src/state.rs
- crates/vdf/src/vdf.rs
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@crates/vdf/src/lib.rs`:
- Around line 67-72: The public function vdf_sha must validate that
num_checkpoints does not exceed checkpoints.len() (and similarly any other call
sites using checkpoint_idx at lines ~89-90); add an explicit precondition at the
start of vdf_sha checking num_checkpoints <= checkpoints.len() (and that
checkpoints is non-empty if used) and return a Result error instead of letting
checkpoints[checkpoint_idx] panic, or alternatively document and assert the
invariant before indexing; reference symbols: vdf_sha, checkpoint_idx,
checkpoints, num_checkpoints, num_iterations_per_checkpoint, seed, start_salt.
- Around line 672-696: The property test should also assert that vdf_sha mutates
the seed to the final checkpoint and include 0 iterations as a boundary case:
update the proptest strategy for num_iterations to allow 0 (e.g., 0..=8_u64) and
after calling vdf_sha check that seed == checkpoints[num_checkpoints - 1] (or
the last element of checkpoints) in addition to the existing checkpoints ==
verification assertion; adjust the cast of num_iterations to usize when calling
vdf_sha_verification as needed.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: b09dadfc-10e1-4745-98c5-2edf5a8f84f2
⛔ Files ignored due to path filters (1)
Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (3)
- crates/chain-tests/src/utils.rs
- crates/vdf/Cargo.toml
- crates/vdf/src/lib.rs
♻️ Duplicate comments (1)
crates/vdf/src/lib.rs (1)
Lines 66-79: ⚠️ Potential issue | 🟠 Major — Guard num_checkpoints before slicing in vdf_sha.

On Line 78, checkpoints[..num_checkpoints] will panic if num_checkpoints > checkpoints.len(). Please enforce the precondition at function entry (or switch to a Result API) before indexing/slicing.

Suggested fix
```diff
 pub fn vdf_sha(
     start_salt: U256,
     seed: &mut H256,
     num_checkpoints: usize,
     num_iterations_per_checkpoint: u64,
     checkpoints: &mut [H256],
 ) {
+    assert!(
+        num_checkpoints <= checkpoints.len(),
+        "num_checkpoints ({}) exceeds checkpoints length ({})",
+        num_checkpoints,
+        checkpoints.len()
+    );
+
     let mut blocks = [[0_u8; 64]; 2];
@@
-    for (checkpoint_idx, slot) in checkpoints[..num_checkpoints].iter_mut().enumerate() {
+    for (checkpoint_idx, slot) in checkpoints.iter_mut().take(num_checkpoints).enumerate() {
         if checkpoint_idx > 0 {
             increment_le_salt(&mut blocks[0][..32]);
         }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/vdf/src/lib.rs` around lines 66 - 79, vdf_sha currently slices checkpoints[..num_checkpoints] without validating num_checkpoints; add an entry guard in function vdf_sha that ensures num_checkpoints <= checkpoints.len() (for example an assert! or explicit check that returns/propagates an error) to prevent panics when num_checkpoints is out of bounds and include a clear message referencing num_checkpoints and checkpoints.len() for debugging.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@crates/vdf/src/lib.rs`:
- Around line 66-79: vdf_sha currently slices checkpoints[..num_checkpoints]
without validating num_checkpoints; add an entry guard in function vdf_sha that
ensures num_checkpoints <= checkpoints.len() (for example an assert! or explicit
check that returns/propagates an error) to prevent panics when num_checkpoints
is out of bounds and include a clear message referencing num_checkpoints and
checkpoints.len() for debugging.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: c28b7be1-46a1-4649-b8ec-cc092a492d55
📒 Files selected for processing (1)
crates/vdf/src/lib.rs
♻️ Duplicate comments (1)
crates/vdf/src/lib.rs (1)
Line 78: ⚠️ Potential issue | 🟠 Major — Add an explicit bounds precondition for num_checkpoints.

checkpoints[..num_checkpoints] will panic when num_checkpoints > checkpoints.len() in this public API. Please assert or validate this invariant at function entry.

Suggested fix
```diff
 pub fn vdf_sha(
     start_salt: U256,
     seed: &mut H256,
     num_checkpoints: usize,
     num_iterations_per_checkpoint: u64,
     checkpoints: &mut [H256],
 ) {
+    assert!(
+        num_checkpoints <= checkpoints.len(),
+        "num_checkpoints ({}) exceeds checkpoints length ({})",
+        num_checkpoints,
+        checkpoints.len()
+    );
+
     let mut blocks = [[0_u8; 64]; 2];
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/vdf/src/lib.rs` at line 78, Add an explicit precondition check for num_checkpoints before the loop that slices checkpoints (for (checkpoint_idx, slot) in checkpoints[..num_checkpoints].iter_mut().enumerate()), e.g. at the start of the containing function validate that num_checkpoints <= checkpoints.len() and handle violations by returning a proper Result::Err (or panicking with a clear message / assert!) instead of allowing the slice to panic; update any function signature to propagate the error if you choose to return Result so callers can handle invalid input.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@crates/vdf/src/lib.rs`:
- Line 78: Add an explicit precondition check for num_checkpoints before the
loop that slices checkpoints (for (checkpoint_idx, slot) in
checkpoints[..num_checkpoints].iter_mut().enumerate()), e.g. at the start of the
containing function validate that num_checkpoints <= checkpoints.len() and
handle violations by returning a proper Result::Err (or panicking with a clear
message / assert!) instead of allowing the slice to panic; update any function
signature to propagate the error if you choose to return Result so callers can
handle invalid input.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: cc1739b5-1bce-42ed-b94b-3fd8755a4165
📒 Files selected for processing (1)
crates/vdf/src/lib.rs
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/bench.yml:
- Around line 30-32: Update the GitHub Actions checkout step to use the
consistent, current version by changing the usage reference from
actions/checkout@v3 to actions/checkout@v4; locate the checkout step where
"uses: actions/checkout@v3" is declared and replace it with "uses:
actions/checkout@v4" so it matches the other checkout invocation and satisfies
actionlint.
- Around line 104-105: The workflow step is injecting untrusted PR refs directly
into shell commands (git fetch origin ${{ github.event.pull_request.base.ref }}
and head.ref), which is a script-injection risk; fix it by adding environment
variables (e.g., BASE_REF and HEAD_REF) populated from
github.event.pull_request.base.ref and head.ref in the step's env block, then
update the git fetch commands to reference the safe environment variables (e.g.,
"$BASE_REF" and "$HEAD_REF") with proper shell quoting so the refs are not
interpolated into the YAML shell line itself.
In `@crates/vdf/src/lib.rs`:
- Around line 122-127: The loop reassigns local_seed each iteration which can be
unclear; rename local_seed to a more descriptive mutable name (e.g.,
current_seed or seed_iter) used in the loop body (while keeping the original
H256 type and update logic: sha::Sha256::new(), hasher.update(...), local_seed =
H256::from(hasher.finish())) or add a one-line comment above the for loop
explaining that this is an iterative hash chain over
num_iterations_per_checkpoint that mutates local_seed; update all references to
that symbol accordingly so the intent is explicit.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: d60726d7-f470-4358-bb6a-638f546c313a
⛔ Files ignored due to path filters (1)
Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (5)
- .github/workflows/bench.yml
- Cargo.toml
- crates/vdf/Cargo.toml
- crates/vdf/benches/vdf_bench.rs
- crates/vdf/src/lib.rs
09f6e3f to 8f4bb18 (Compare)
8f4bb18 to 6ba7d3a (Compare)
Benchmark results: https://irys-xyz.github.io/irys/dev/bench/1214%2Fmerge/index.html
@CodeRabbit full review

✅ Actions performed: Full review triggered.
Actionable comments posted: 2
♻️ Duplicate comments (1)
.github/workflows/bench.yml (1)
Lines 146-165: ⚠️ Potential issue | 🔴 Critical — Stop interpolating branch refs directly into inline github-script code.

Line 154 and Line 164 inject ${{ github.head_ref || github.ref_name }} into JavaScript source text. Branch refs are user-controlled, so this is a script-injection risk on the runner. Pass the value via env and read process.env.BRANCH_NAME in the script.

🔒 Proposed fix
```diff
   - name: Comment benchmark results URL on PR
     if: (github.head_ref || github.ref_name) != github.event.repository.default_branch
     uses: actions/github-script@v7
+    env:
+      BRANCH_NAME: ${{ github.head_ref || github.ref_name }}
     with:
       script: |
         let prNumber = context.payload.pull_request?.number ?? context.payload.review?.pull_request?.number;
         if (!prNumber) {
-          const branch = '${{ github.head_ref || github.ref_name }}';
+          const branch = process.env.BRANCH_NAME;
           const { data: prs } = await github.rest.pulls.list({
             ...context.repo,
             head: `${context.repo.owner}:${branch}`,
             state: 'open',
           });
           prNumber = prs[0]?.number;
         }
         if (!prNumber) return;
-        const branch = '${{ github.head_ref || github.ref_name }}';
+        const branch = process.env.BRANCH_NAME;
         const url = `https://irys-xyz.github.io/irys/dev/bench/${encodeURIComponent(branch)}/index.html`;
```

```bash
#!/bin/bash
# Verify no direct expression interpolation remains inside github-script JS source.
rg -n "const branch = '\\$\\{\\{ github\\.head_ref \\|\\| github\\.ref_name \\}\\}'" .github/workflows/bench.yml -C2
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/bench.yml around lines 146 - 165, Replace the inline interpolation of "${{ github.head_ref || github.ref_name }}" inside the actions/github-script block with an environment variable to avoid script-injection: set an env entry (e.g., BRANCH_NAME) on the step and read it inside the script via process.env.BRANCH_NAME; update both places where the const branch = '...' is declared and the URL construction to use process.env.BRANCH_NAME (or the previously-computed prNumber logic) and remove the literal interpolation, ensuring the script still falls back to listing PRs when prNumber is missing.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@crates/chain-tests/src/utils.rs`:
- Around line 3579-3585: The code currently computes reset_seed by falling back
to H256::default() when read.get_block(&parent_hash) is None; change this to
propagate an error instead of silently defaulting: replace the
map(...).unwrap_or_default() chain with explicit error handling that returns an
Err (e.g., using match or .ok_or_else(...)?.map(...)? / .and_then(...) and the ?
operator) so that if get_block(&parent_hash) is None you return an error from
the enclosing function; locate the reset_seed binding and the calls to
node_ctx.block_tree_guard.read(), get_max_cumulative_difficulty_block(),
get_block(), and vdf_limiter_info.next_seed to implement the change and adjust
the function signature to return Result if necessary.
In `@crates/vdf/src/lib.rs`:
- Around line 55-57: The const assertions use unqualified size_of and align_of
which are not in Rust 2024 prelude; update the assertions to call
core::mem::size_of and core::mem::align_of instead so they resolve correctly in
const context (the lines referencing GenericArray<u8, U64> should use
core::mem::size_of::<GenericArray<u8, U64>>() and
core::mem::align_of::<GenericArray<u8, U64>>() in the assert! calls).
---
Duplicate comments:
In @.github/workflows/bench.yml:
- Around line 146-165: Replace the inline interpolation of "${{ github.head_ref
|| github.ref_name }}" inside the actions/github-script block with an
environment variable to avoid script-injection: set an env entry (e.g.,
BRANCH_NAME) on the step and read it inside the script via
process.env.BRANCH_NAME; update both places where the const branch = '...' is
declared and the URL construction to use process.env.BRANCH_NAME (or the
previously-computed prNumber logic) and remove the literal interpolation,
ensuring the script still falls back to listing PRs when prNumber is missing.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: feb89271-8276-46dd-91ab-211271e87f78
⛔ Files ignored due to path filters (1)
Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (8)
- .github/workflows/bench.yml
- Cargo.toml
- crates/chain-tests/src/utils.rs
- crates/vdf/Cargo.toml
- crates/vdf/benches/vdf_bench.rs
- crates/vdf/src/lib.rs
- crates/vdf/src/state.rs
- crates/vdf/src/vdf.rs
Summary
Optimizes the VDF hot loop by replacing the high-level sha2::Sha256 API with direct compress256 calls and eliminating a per-iteration 32-byte memcpy.

- Pre-computed constants (SHA256_IV, SHA256_64B_PADDING)
- Restructured vdf_sha and last_step_checkpoints_is_valid so compress256 output is written directly into blocks[0][32..64] — the same position the next iteration reads from. This removes ~2.5M redundant 32-byte copies per VDF step (100K iterations x 25 checkpoints)
- vdf_sha signature: removed the hasher parameter and take salt by value, since both are internal concerns. Salt stepping uses byte-level increment_le_salt instead of U256 arithmetic

Approach
Research Option A would leverage SHA-NI intrinsics to avoid the copy via a split-pointer approach. This PR achieves the same copy elimination through data flow restructuring — zero new unsafe code beyond the existing GenericArray cast, and no architecture-specific branches.

Test plan

- Cross-implementation check (vdf_sha vs vdf_sha_verification via OpenSSL) passes
- cargo clippy -p irys-vdf --tests --all-targets — zero warnings
- cargo fmt --all -- --check — clean
- cargo check --workspace — clean compilation across all crates

🤖 Generated with Claude Code
Summary by CodeRabbit
Chores
Refactor
Tests