
Conversation

@avilagaston9
Contributor

@avilagaston9 avilagaston9 commented Feb 10, 2026

Motivation

The block building loop (build_payload_loop) continuously rebuilds the block for 12 seconds (one slot), even when no new transactions have arrived. Each rebuild is CPU-intensive (EVM execution, trie operations) and contends with RPC handlers over the shared trie_cache mutex, starving the RPC server.

Description

Alternative approach to #6165 — uses tokio::sync::Notify instead of an AtomicU64 generation counter.

Add a tokio::sync::Notify to Mempool that signals when new transactions or blob bundles are inserted. The payload build loop waits on this notification (or cancellation, or the slot deadline) before rebuilding, instead of looping as fast as possible. A sketch of the resulting loop follows the list below.

  • Mempool::add_transaction() and Mempool::add_blobs_bundle() both call notify_waiters() after inserting data, so builders see complete blob tx data and all active payload builders are woken
  • build_payload_loop uses tokio::select! to wait for a mempool notification, cancel_token cancellation, or the remaining slot deadline — whichever comes first
  • The notified() future is pre-captured before select! to avoid missing permits between loop iterations
  • The first build_payload call remains unconditional — we always build one payload immediately when FCU arrives
  • tx_added() is pub(crate) to avoid exposing the notification primitive externally
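A minimal sketch of the loop shape this describes, not the actual ethrex code: SECONDS_PER_SLOT, the pre-captured notified() future, the cancel token, and the unconditional first build come from this description, while the function signature, the rebuild closure, and the use of tokio_util's CancellationToken are simplifying assumptions.

use std::time::{Duration, Instant};
use tokio::sync::Notify;
use tokio_util::sync::CancellationToken;

const SECONDS_PER_SLOT: u64 = 12;

async fn build_payload_loop(
    tx_added: &Notify,
    cancel_token: &CancellationToken,
    mut rebuild: impl FnMut(),
) {
    let slot_deadline = Instant::now() + Duration::from_secs(SECONDS_PER_SLOT);

    // The first build is unconditional: one payload always exists once FCU arrives.
    rebuild();

    loop {
        // Pre-capture the notified() future before select! so a wake-up that lands
        // between loop iterations is not missed.
        let notified = tx_added.notified();
        let remaining = slot_deadline.saturating_duration_since(Instant::now());

        tokio::select! {
            _ = notified => rebuild(),                  // new tx / blob bundle arrived
            _ = cancel_token.cancelled() => break,      // payload build superseded
            _ = tokio::time::sleep(remaining) => break, // slot deadline reached
        }
    }
}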

Tradeoffs vs #6165 (AtomicU64 polling):

  • Instant wake-up on new tx (no 100ms polling delay)
  • Requires adding sync and macros tokio features to the blockchain crate

Checklist

  • Updated STORE_SCHEMA_VERSION (crates/storage/lib.rs) if the PR includes breaking changes to the Store requiring a re-sync.

new transactions instead of rebuilding continuously. Each rebuild is
CPU-intensive (EVM execution, trie operations) and contends with RPC
handlers over the shared trie_cache mutex, starving the RPC server.

Mempool::add_transaction() calls notify_one() after inserting a tx.
build_payload_loop uses tokio::select! to wait for either a mempool
notification or cancel_token cancellation before each rebuild. The first
build_payload call remains unconditional.
@github-actions github-actions bot added the L1 Ethereum client and performance (Block execution throughput and performance in general) labels on Feb 10, 2026
@github-actions

🤖 Kimi Code Review

Review Summary

The PR adds transaction notification functionality to the mempool to improve payload building efficiency. Overall, the changes are well-structured and follow Rust best practices.

Issues Found

1. Race Condition in Payload Building (payload.rs:392-400)
The current implementation has a potential race condition where:

  • A transaction is added to the mempool
  • The notification is sent
  • The payload builder wakes up and builds a block
  • But the transaction might not actually be included if it was filtered out during validation or if the mempool was full

Fix: Consider adding a check after waking to ensure there's actually work to do, or use a different synchronization pattern.

2. Missing Error Handling (payload.rs:394-400)
The tokio::select! could silently fail if the build_payload call returns an error, which might cause the payload builder to spin indefinitely.

Fix: Add proper error handling:

tokio::select! {
    _ = self.mempool.tx_added().notified() => {
        match self.build_payload(payload.clone()) {
            Ok(new_payload) => res = new_payload,
            Err(e) => {
                tracing::warn!("Failed to build payload: {}", e);
                continue;
            }
        }
    }
    _ = cancel_token.cancelled() => break,
}

3. Potential Deadlock (mempool.rs:132-134)
While the current implementation correctly drops the lock before notifying, there's a subtle issue: if notify_one() panics (unlikely but possible), the lock guard is already dropped, which is good. However, consider using a more explicit pattern:

{
    let mut inner = self.write()?;
    // ... insert operations ...
    // lock is dropped here at end of scope
}
self.tx_added.notify_one();

Minor Suggestions

1. Documentation Needed

  • Add documentation to the new tx_added() method explaining its purpose and usage
  • Document the notification behavior in the add_transaction method

2. Consider Notification Coalescing
If many transactions are added rapidly, this could cause excessive wake-ups. Consider:

  • Using notify_waiters() instead of notify_one() if multiple tasks are waiting
  • Adding a small debounce delay
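For illustration only, a small sketch of that debounce idea; it is not part of this PR and the names are assumptions.

use std::time::Duration;
use tokio::sync::Notify;

// After the first wake-up, wait a short fixed interval so a burst of
// insertions collapses into a single rebuild.
async fn wait_coalesced(tx_added: &Notify, debounce: Duration) {
    tx_added.notified().await;          // first insertion wakes us up
    tokio::time::sleep(debounce).await; // give the rest of a burst time to land
    // the caller rebuilds once here, picking up every tx from the burst
}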

3. Test Coverage
The PR should include tests for:

  • Notification delivery when transactions are added
  • Proper cleanup when mempool is dropped
  • Behavior under concurrent access

Security Assessment

No security vulnerabilities identified. The changes maintain proper lock ordering and don't introduce new attack vectors.

Performance Impact

The changes should improve performance by reducing busy-waiting in the payload builder, though the exact impact depends on transaction arrival patterns.


Automated review by Kimi (Moonshot AI) · custom prompt

@github-actions

🤖 Claude Code Review


Code Review: PR #6169 - Payload Building Optimization with tokio::sync::Notify

Overview

This PR replaces busy-wait polling in the payload building loop with event-driven notifications using tokio::sync::Notify. The change should significantly reduce CPU usage and mutex contention during block building.

Architecture & Correctness ✅

Strong Points:

  • Correct lock ordering (mempool.rs:132-134): Explicitly drops the write lock before calling notify_one(), preventing potential deadlocks and reducing hold time
  • Single entry point: All transaction additions go through Mempool::add_transaction, ensuring consistent notification behavior
  • Graceful cancellation: The tokio::select! properly handles both notification and cancellation paths

Issues & Concerns

1. Spurious Wake-ups Without New Transactions (Medium Priority)

Location: payload.rs:393-397

The current implementation can wake up even when no includeable transactions were added:

  • Transaction might be filtered out during mempool validation
  • Mempool might be full (transaction evicted immediately)
  • Transaction might not meet current base fee requirements

Current behavior:

tokio::select! {
    _ = self.mempool.tx_added().notified() => {}
    _ = cancel_token.cancelled() => break,
}
// Always proceeds to rebuild, even if no valid tx was added

Impact: Extra rebuild cycles waste CPU and contend for trie_cache mutex, partially defeating the optimization.

Recommendation: This is acceptable for an initial implementation. A future optimization could check if the mempool actually has new includeable transactions before rebuilding, but this adds complexity and the current approach is still much better than busy-waiting.
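As a purely hypothetical illustration of that future check: has_pending_transactions() is not an existing ethrex API and stands in for "the mempool has includeable work".

use tokio::sync::Notify;

async fn wait_for_includeable_tx(
    tx_added: &Notify,
    has_pending_transactions: impl Fn() -> bool,
) {
    loop {
        // Create the future before checking, so an insertion that races with the
        // check still wakes the subsequent await.
        let notified = tx_added.notified();
        if has_pending_transactions() {
            return; // caller rebuilds only when there is real work to include
        }
        notified.await;
    }
}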


2. Missing notified() Future Capture (Low Priority)

Location: payload.rs:394

Current:

_ = self.mempool.tx_added().notified() => {}

Issue: Notify::notified() must be called before the permit is stored. If a transaction arrives between loop iteration and the notified() call, you might miss it temporarily (though notify_one() stores one permit, so this is mitigated).

Better pattern:

let notified = self.mempool.tx_added().notified();
tokio::select! {
    _ = notified => {}
    _ = cancel_token.cancelled() => break,
}

Assessment: The current code works correctly due to Notify's permit semantics, but the above pattern is more explicitly correct and matches tokio best practices.


3. Cargo.toml Feature Additions (Informational)

Location: Cargo.toml:25

Added features: "sync" and "macros"

  • "sync" → Required for tokio::sync::Notify
  • "macros" → Required for tokio::select!

Both are justified and necessary.


4. Public API Exposure (Low Priority)

Location: mempool.rs:92-94

pub fn tx_added(&self) -> &tokio::sync::Notify {
    &self.tx_added
}

Consideration: This exposes internal synchronization primitives as a public API. While functional, consider:

  • Is this meant to be used by external consumers?
  • Should this be pub(crate) instead?

Assessment: If the blockchain crate needs cross-module access, pub(crate) would be more restrictive. Current approach is acceptable for internal use.


Rust Best Practices ✅

  1. Explicit drop (mempool.rs:133): Good defensive programming
  2. Immutable notify (mempool.rs:81): tokio::sync::Notify doesn't need mut, correctly stored
  3. Error propagation: Maintains existing error handling patterns

Performance Analysis ⚡

Positives:

  • Eliminates continuous CPU-intensive rebuilds
  • Reduces trie_cache mutex contention with RPC handlers
  • notify_one() is lock-free and extremely fast

Tradeoffs:

Expected outcome: Significant CPU reduction during low transaction periods, slight increase in responsiveness.


Security & Consensus ✅

  • No changes to EVM execution, gas accounting, or consensus rules
  • No new attack vectors introduced
  • Lock ordering prevents deadlocks
  • Notification mechanism cannot be abused (no external control)

Missing Elements

1. Documentation

Add doc comments to the new public method:

/// Returns a reference to the notification handle that is signaled
/// when a new transaction is added to the mempool.
pub fn tx_added(&self) -> &tokio::sync::Notify {
    &self.tx_added
}

2. Testing

Consider adding tests for:

  • Notification delivery on transaction insertion
  • Behavior with concurrent additions
  • Cancellation during wait

Final Assessment

Verdict: Approve with minor suggestions

This is a solid optimization that addresses a real performance issue. The implementation is mostly correct and follows Rust best practices. The identified issues are minor and don't block merging:

  • Consider pre-capturing the notified() future (easy fix, improves clarity)
  • Add documentation to the new public API
  • Future optimization: check for includeable transactions before rebuilding (not required now)

The tradeoff analysis in the PR description is accurate and well-reasoned. This approach is cleaner than the atomic counter alternative (#6165).


Automated review by Claude (Anthropic) · custom prompt

@github-actions

🤖 Codex Code Review

Issue found (most important first)

  • Slot deadline can be exceeded: build_payload_loop now waits on Notify without any timeout, so if no new txs arrive and no cancel occurs, it can block past the 12s slot window. This is consensus‑sensitive behavior; builders should stop at the slot deadline. Consider adding a timeout for the remaining slot duration or checking elapsed before/after the wait and breaking when it’s exceeded. crates/blockchain/payload.rs:394

Other notes

  • Notify usage looks correct (dropping the write lock before notifying is good), and the change should reduce busy‑loop CPU. No other correctness or security issues spotted in this diff.

Automated review by OpenAI Codex · custom prompt

@avilagaston9 avilagaston9 changed the title from "perf(l1): only rebuild payload when mempool receives new transactions (Notify)" to "fix(l1): only rebuild payload when mempool receives new transactions (Notify)" on Feb 10, 2026
@github-actions github-actions bot removed the performance (Block execution throughput and performance in general) label on Feb 10, 2026
@github-actions

github-actions bot commented Feb 10, 2026

Lines of code report

Total lines added: 15
Total lines removed: 0
Total lines changed: 15

Detailed view
+-------------------------------------+-------+------+
| File                                | Lines | Diff |
+-------------------------------------+-------+------+
| ethrex/crates/blockchain/mempool.rs | 386   | +8   |
+-------------------------------------+-------+------+
| ethrex/crates/blockchain/payload.rs | 713   | +7   |
+-------------------------------------+-------+------+

pre-capture the notified() future before select! for correctness, and
add a slot deadline timeout so the loop exits even if no transactions
arrive and the cancel token is not triggered.
@avilagaston9 avilagaston9 marked this pull request as ready for review February 10, 2026 19:36
Copilot AI review requested due to automatic review settings February 10, 2026 19:36
@avilagaston9 avilagaston9 requested a review from a team as a code owner February 10, 2026 19:36
@ethrex-project-sync ethrex-project-sync bot moved this to In Review in ethrex_l1 Feb 10, 2026
@github-actions

🤖 Kimi Code Review

Review Summary

The PR introduces a notification mechanism to avoid busy-looping in the payload builder when waiting for new transactions. Overall, the implementation is sound, but there are a few issues to address:

Issues Found

  1. Race condition in payload.rs:396-403: The notified() method creates a future that is already notified if any notification happened before it was created. This means if a transaction is added between the last payload build and the creation of notified(), it will be missed. Use self.mempool.tx_added().notified().await directly instead.

  2. Potential deadlock in mempool.rs:133-136: While the drop before notify is correct, there's still a potential issue if the write lock is poisoned. Consider handling the poison error case explicitly.

  3. Missing feature flag in Cargo.toml: The tokio dependency adds "sync" and "macros" features, but "macros" isn't actually used in this PR. Remove "macros" to reduce compilation time.

Suggested Changes

// In payload.rs, line 396-403
// Replace:
let notified = self.mempool.tx_added().notified();
tokio::select! {
    _ = notified => {}
    _ = cancel_token.cancelled() => break,
    _ = tokio::time::sleep(remaining) => break,
}

// With:
tokio::select! {
    _ = self.mempool.tx_added().notified() => {}
    _ = cancel_token.cancelled() => break,
    _ = tokio::time::sleep(remaining) => break,
}
# In blockchain/Cargo.toml, line 25
# Change:
tokio = { workspace = true, features = ["time", "rt", "sync", "macros"] }
# To:
tokio = { workspace = true, features = ["time", "rt", "sync"] }

The implementation correctly uses Notify for efficient waiting and properly handles the lock drop before notification to avoid holding locks across await points.


Automated review by Kimi (Moonshot AI) · custom prompt

@github-actions

🤖 Codex Code Review

Found a couple of correctness/perf concerns to address.

  • Potential missed wakeups for payload rebuilds: payload.rs creates notified() each loop and waits on it. If a tx is inserted just before notified() is created, that notification is lost and the builder may wait until another tx or slot timeout, delaying inclusion. Consider using a monotonic signal (e.g., tokio::sync::watch or broadcast with a counter), or check mempool state before waiting to avoid missing a recent insertion. crates/blockchain/payload.rs:396-403
  • Rebuild storm risk: notify_one() on every insertion can trigger a full payload rebuild per tx. If mempool receives bursts, this can cause excessive rebuilds in a single slot. Consider batching/debouncing (e.g., wait for a short interval after first notify, or use a bounded channel + coalescing) so rebuilds are amortized. crates/blockchain/mempool.rs:131-136, crates/blockchain/payload.rs:396-403

Everything else in the diff looks reasonable: lock drop before notify is good, and the additional tokio features are appropriate.

If you want, I can suggest a small, idiomatic watch-based pattern that avoids lost wakeups and rebuild storms.
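For reference, one minimal shape such a watch-based counter could take; this is sketched here as an assumption, not code from this PR or from ethrex.

use tokio::sync::watch;

struct TxSignal {
    sender: watch::Sender<u64>,
}

impl TxSignal {
    fn new() -> (Self, watch::Receiver<u64>) {
        let (sender, receiver) = watch::channel(0u64);
        (Self { sender }, receiver)
    }

    /// Called by the mempool after every successful insertion.
    fn bump(&self) {
        self.sender.send_modify(|n| *n += 1);
    }
}

async fn wait_for_new_tx(rx: &mut watch::Receiver<u64>) {
    // changed() resolves if the counter moved since this receiver last observed
    // it, so an insertion that happened just before the call is not lost.
    let _ = rx.changed().await;
}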


Automated review by OpenAI Codex · custom prompt

@github-actions

🤖 Claude Code Review


Code Review: PR #6169 - Notify-based Payload Rebuild Optimization

Summary

This PR adds tokio::sync::Notify to signal the payload builder when new transactions arrive, replacing the busy-loop approach. The implementation looks solid with recent improvements addressing potential race conditions.

✅ Strengths

  1. Correct Notify Pattern: The code pre-captures notified() before the select! (line 395 in payload.rs), which is the correct pattern to avoid missing notifications.

  2. Proper Lock Management: The write lock is explicitly dropped before calling notify_one() (mempool.rs:135), preventing lock contention while waking waiters.

  3. Timeout Protection: The slot deadline timeout in the select! ensures the loop exits even if no transactions arrive, preventing potential hangs.

  4. Minimal API Surface: tx_added() is correctly scoped as pub(crate), maintaining encapsulation.

🔍 Potential Issues & Suggestions

1. Missing Notification on Transaction Removal (Minor)

Location: mempool.rs:180-184

The remove_transaction() method doesn't trigger any notification. If the payload builder is waiting and transactions are being removed (e.g., due to invalidation), the builder won't rebuild to reflect the updated state.

Consideration: This might be intentional since removals don't add new revenue opportunities. However, document this behavior if it's by design.

2. Blobs Bundle Addition Doesn't Notify (Minor)

Location: mempool.rs:163-172

The add_blobs_bundle() method adds blob data but doesn't trigger a notification. If blob bundles can be added after their corresponding transaction:

pub fn add_blobs_bundle(&self, tx_hash: H256, blobs_bundle: BlobsBundle) -> Result<(), StoreError> {
    self.write()?.blobs_bundle_pool.insert(tx_hash, blobs_bundle);
    // No notification here - is this intentional?
    Ok(())
}

Question: Should blob bundle additions trigger a rebuild? If blobs can arrive after the transaction, the payload builder might build an incomplete block.

3. Notify Semantics Under Concurrent Additions

Location: mempool.rs:136

Using notify_one() is correct since only one payload builder should be active per FCU. However, the PR description states:

"if multiple txs arrive during a build, the next notified().await resolves immediately"

This is true because Notify stores exactly one permit. But if transactions arrive in rapid succession while NOT building, only one notification is queued, which is the desired behavior.

Recommendation: Add a comment explaining this permit-based behavior for future maintainers:

// notify_one() stores a single permit, so if multiple txs arrive 
// during a build, the next notified().await resolves immediately.
// This is optimal: one rebuild can incorporate all pending txs.
self.tx_added.notify_one();

4. Race Condition: Transaction Arrives During Clone (Low Risk)

Location: payload.rs:401

let payload = payload.clone();
let self_clone = self.clone();
let building_task = tokio::task::spawn_blocking(move || self_clone.build_payload(payload));

If a transaction arrives between when notified() resolves and when the clone happens, the notification permit is consumed but no rebuild is scheduled. The next transaction will trigger another rebuild, so impact is minimal (one slot with potentially sub-optimal transaction inclusion).

Status: Acceptable tradeoff for code simplicity.

5. Error Handling in Build Loop (Observation)

Location: payload.rs:408-416

If build_payload() returns an error, the previous result is retained and the loop continues. This is reasonable but means errors during rebuilds are silent (except for the warning on line 413).

Recommendation: Consider metrics/logging for build failures to detect systematic issues.

🔒 Security Considerations

✅ No Security Issues Found

  • DoS Protection: The 12-second slot deadline prevents infinite loops even if notification mechanism fails
  • Lock Handling: Proper drop before notify prevents deadlocks
  • Atomic Operations: RwLock is correctly used; no data races possible
  • Resource Exhaustion: Mempool has size limits and pruning logic

🎯 Ethereum-Specific Correctness

✅ Consensus Rules

  • Slot Timing: Correctly uses 12-second slots (SECONDS_PER_SLOT)
  • FCU Handling: First payload build remains unconditional (line 391)
  • Transaction Ordering: Notification mechanism doesn't affect transaction ordering within blocks

⚠️ MEV Considerations (Informational)

The notification-based approach means blocks are rebuilt immediately when new transactions arrive. This is good for inclusion but consider:

  • Higher gas fee transactions arriving late in the slot get included faster
  • This is likely the desired behavior for MEV extraction

📊 Performance Analysis

✅ Improvements Over Busy-Loop

  1. CPU Usage: Eliminates continuous rebuilds with no new transactions
  2. Lock Contention: Reduces trie_cache mutex contention with RPC handlers (addresses the core issue)
  3. Wake Latency: Instant wake-up vs 100ms polling in PR #6165 (fix(l1): only rebuild payload when mempool receives new transactions)

⚠️ Tradeoffs vs PR #6165

| Aspect | This PR (Notify) | PR #6165 (AtomicU64) |
| --- | --- | --- |
| Wake latency | Instant | Up to 100ms |
| Dependencies | Adds sync & macros features | No new dependencies |
| Complexity | Slightly higher | Simpler |
| Performance | Better | Good enough |

Verdict: The instant wake-up is worth the additional tokio features.

🧪 Testing Recommendations

  1. Add integration test for notification behavior:

    • Verify payload is rebuilt when transaction arrives
    • Verify loop exits after slot deadline even with no transactions
    • Test cancellation token behavior
  2. Stress test concurrent transaction arrivals during payload building

  3. Benchmark RPC latency improvement under load

✅ Rust Best Practices

  • Idiomatic async: Correct use of tokio::select!
  • Lifetime management: Proper reference handling with &self.tx_added
  • Error handling: Consistent Result types
  • Documentation: Good inline comments (could add more as suggested)

📝 Final Verdict

Approve with minor suggestions

The implementation is sound and addresses the performance issue effectively. The recent feedback incorporation (pre-capture notified, explicit drop, timeout) shows careful attention to correctness.

Required Changes: None

Recommended Improvements:

  1. Add comment explaining notify_one() permit semantics
  2. Document why remove_transaction() and add_blobs_bundle() don't notify
  3. Consider metrics for build failures
  4. Add integration tests for notification behavior

The code is production-ready and superior to the alternative polling approach for this use case.


Automated review by Claude (Anthropic) · custom prompt

@greptile-apps

greptile-apps bot commented Feb 10, 2026

Greptile Overview

Greptile Summary

This PR optimizes the block building loop by replacing continuous rebuilding with event-driven rebuilding using tokio::sync::Notify.

Key Changes

  • Mempool notification system: Added a tokio::sync::Notify field to Mempool that signals when new transactions arrive
  • Event-driven rebuild loop: Modified build_payload_loop to wait for notifications using tokio::select! instead of busy-looping
  • Lock management: Properly drops the write lock before calling notify_one() to avoid holding locks while waking waiters
  • Timeout handling: Added slot deadline timeout so the loop exits gracefully even if no transactions arrive

Technical Approach

The implementation correctly:

  • Pre-captures the notified() future before the select! macro (avoiding the lost wakeup race condition)
  • Uses notify_one() which stores a permit, so if multiple transactions arrive during a rebuild, the next await resolves immediately
  • Maintains the first unconditional build_payload() call to ensure at least one payload is built
  • Restricts tx_added() visibility to pub(crate) for proper encapsulation

Performance Impact

This eliminates unnecessary CPU-intensive rebuilds (EVM execution, trie operations) that were previously happening continuously during the 12-second slot window, reducing contention on the shared trie_cache mutex and preventing RPC server starvation.

Confidence Score: 5/5

  • This PR is safe to merge with no identified issues
  • The implementation is correct and well-designed. The author has addressed potential race conditions by pre-capturing the notified() future, properly manages locks by dropping before notification, and includes proper timeout handling. The change is performance-focused with clear benefits (eliminates busy-looping) and minimal risk.
  • No files require special attention

Important Files Changed

| Filename | Overview |
| --- | --- |
| crates/blockchain/mempool.rs | Added tokio::sync::Notify field with proper initialization and notification after transaction insertion; correctly drops lock before notifying |
| crates/blockchain/payload.rs | Replaced busy-loop with tokio::select! that waits for new transactions, cancellation, or slot deadline; properly pre-captures notified() future before select |
| crates/blockchain/Cargo.toml | Added required sync and macros features to tokio dependency for Notify and select! macro support |

Sequence Diagram

sequenceDiagram
    participant RPC as Engine RPC
    participant Builder as Payload Builder Loop
    participant Mempool as Mempool
    participant P2P as P2P Network

    Note over RPC,Builder: FCU arrives, payload build starts
    RPC->>Builder: initiate_payload_build(payload)
    activate Builder
    Builder->>Builder: build_payload() (first build)
    
    loop Until slot deadline (12s) or cancellation
        Builder->>Mempool: tx_added().notified()
        Note over Builder: Wait for notification or timeout
        
        alt New transaction arrives
            P2P->>Mempool: add_transaction(tx)
            Mempool->>Mempool: Insert tx into pool
            Mempool->>Mempool: drop(write_lock)
            Mempool->>Builder: notify_one()
            Builder->>Builder: build_payload() (rebuild)
        else Cancellation requested
            RPC->>Builder: cancel_token.cancel()
            Builder->>Builder: break loop
        else Slot deadline reached
            Builder->>Builder: sleep(remaining) timeout
            Builder->>Builder: break loop
        end
    end
    
    Builder-->>RPC: PayloadBuildResult
    deactivate Builder

Contributor

Copilot AI left a comment


Pull request overview

This PR reduces CPU usage and contention during L1 payload building by making the payload rebuild loop event-driven: it now waits for mempool transaction insertions (via tokio::sync::Notify) instead of continuously rebuilding for the entire slot.

Changes:

  • Add a tokio::sync::Notify to Mempool and signal it on every transaction insertion.
  • Update build_payload_loop to select! on mempool notifications, cancellation, or slot deadline before triggering rebuilds.
  • Enable additional Tokio features (sync, macros) required for Notify and tokio::select!.

Reviewed changes

Copilot reviewed 3 out of 4 changed files in this pull request and generated 3 comments.

| File | Description |
| --- | --- |
| crates/blockchain/mempool.rs | Adds a Notify to signal transaction insertions to the payload builder. |
| crates/blockchain/payload.rs | Makes the payload rebuild loop wait for mempool events/cancellation/deadline before rebuilding. |
| crates/blockchain/Cargo.toml | Enables Tokio sync + macros features to support Notify and select!. |
| crates/vm/levm/bench/revm_comparison/Cargo.lock | Locks additional Tokio macro dependency introduced by feature changes. |


@github-actions

github-actions bot commented Feb 10, 2026

Benchmark Results Comparison

No significant difference was registered for any benchmark run.

Detailed Results

Benchmark Results: BubbleSort

| Command | Mean [s] | Min [s] | Max [s] | Relative |
| --- | --- | --- | --- | --- |
| main_revm_BubbleSort | 2.926 ± 0.022 | 2.890 | 2.951 | 1.00 ± 0.01 |
| main_levm_BubbleSort | 3.050 ± 0.041 | 3.015 | 3.150 | 1.05 ± 0.02 |
| pr_revm_BubbleSort | 2.912 ± 0.019 | 2.880 | 2.939 | 1.00 |
| pr_levm_BubbleSort | 3.015 ± 0.041 | 2.971 | 3.119 | 1.04 ± 0.02 |

Benchmark Results: ERC20Approval

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
| --- | --- | --- | --- | --- |
| main_revm_ERC20Approval | 981.3 ± 14.5 | 967.9 | 1010.1 | 1.01 ± 0.02 |
| main_levm_ERC20Approval | 1097.6 ± 71.6 | 1055.8 | 1287.2 | 1.13 ± 0.07 |
| pr_revm_ERC20Approval | 973.3 ± 9.2 | 959.0 | 993.1 | 1.00 |
| pr_levm_ERC20Approval | 1050.9 ± 11.1 | 1038.2 | 1076.3 | 1.08 ± 0.02 |

Benchmark Results: ERC20Mint

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
| --- | --- | --- | --- | --- |
| main_revm_ERC20Mint | 130.9 ± 2.2 | 129.2 | 135.7 | 1.01 ± 0.02 |
| main_levm_ERC20Mint | 159.9 ± 2.8 | 157.0 | 167.2 | 1.23 ± 0.02 |
| pr_revm_ERC20Mint | 130.0 ± 0.6 | 129.1 | 131.0 | 1.00 |
| pr_levm_ERC20Mint | 158.9 ± 1.7 | 157.0 | 162.9 | 1.22 ± 0.01 |

Benchmark Results: ERC20Transfer

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
| --- | --- | --- | --- | --- |
| main_revm_ERC20Transfer | 229.4 ± 1.4 | 227.1 | 231.3 | 1.00 ± 0.01 |
| main_levm_ERC20Transfer | 264.3 ± 2.0 | 261.5 | 267.7 | 1.16 ± 0.01 |
| pr_revm_ERC20Transfer | 228.5 ± 1.5 | 226.6 | 230.5 | 1.00 |
| pr_levm_ERC20Transfer | 262.8 ± 2.2 | 260.8 | 268.2 | 1.15 ± 0.01 |

Benchmark Results: Factorial

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
| --- | --- | --- | --- | --- |
| main_revm_Factorial | 228.7 ± 12.5 | 222.2 | 263.8 | 1.03 ± 0.06 |
| main_levm_Factorial | 250.0 ± 2.4 | 246.9 | 254.5 | 1.12 ± 0.01 |
| pr_revm_Factorial | 222.5 ± 0.7 | 221.6 | 223.7 | 1.00 |
| pr_levm_Factorial | 248.7 ± 1.6 | 246.5 | 250.6 | 1.12 ± 0.01 |

Benchmark Results: FactorialRecursive

| Command | Mean [s] | Min [s] | Max [s] | Relative |
| --- | --- | --- | --- | --- |
| main_revm_FactorialRecursive | 1.660 ± 0.038 | 1.618 | 1.739 | 1.00 ± 0.03 |
| main_levm_FactorialRecursive | 8.566 ± 0.077 | 8.409 | 8.662 | 5.17 ± 0.10 |
| pr_revm_FactorialRecursive | 1.656 ± 0.027 | 1.618 | 1.700 | 1.00 |
| pr_levm_FactorialRecursive | 8.627 ± 0.090 | 8.480 | 8.753 | 5.21 ± 0.10 |

Benchmark Results: Fibonacci

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
| --- | --- | --- | --- | --- |
| main_revm_Fibonacci | 200.2 ± 3.7 | 197.9 | 209.9 | 1.00 ± 0.02 |
| main_levm_Fibonacci | 224.2 ± 3.3 | 221.0 | 230.9 | 1.12 ± 0.02 |
| pr_revm_Fibonacci | 199.6 ± 1.2 | 196.7 | 201.4 | 1.00 |
| pr_levm_Fibonacci | 222.2 ± 3.8 | 219.4 | 231.4 | 1.11 ± 0.02 |

Benchmark Results: FibonacciRecursive

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
| --- | --- | --- | --- | --- |
| main_revm_FibonacciRecursive | 868.7 ± 15.6 | 853.2 | 902.8 | 1.24 ± 0.03 |
| main_levm_FibonacciRecursive | 701.3 ± 8.2 | 687.0 | 715.7 | 1.00 |
| pr_revm_FibonacciRecursive | 863.2 ± 7.3 | 853.1 | 876.5 | 1.23 ± 0.02 |
| pr_levm_FibonacciRecursive | 703.9 ± 6.8 | 697.3 | 715.7 | 1.00 ± 0.02 |

Benchmark Results: ManyHashes

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
| --- | --- | --- | --- | --- |
| main_revm_ManyHashes | 8.6 ± 0.0 | 8.5 | 8.6 | 1.00 |
| main_levm_ManyHashes | 9.7 ± 0.2 | 9.5 | 10.0 | 1.13 ± 0.02 |
| pr_revm_ManyHashes | 8.7 ± 0.1 | 8.5 | 8.8 | 1.01 ± 0.01 |
| pr_levm_ManyHashes | 9.7 ± 0.1 | 9.6 | 9.9 | 1.13 ± 0.01 |

Benchmark Results: MstoreBench

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
| --- | --- | --- | --- | --- |
| main_revm_MstoreBench | 256.9 ± 1.6 | 255.1 | 259.6 | 1.19 ± 0.01 |
| main_levm_MstoreBench | 215.4 ± 1.3 | 213.6 | 218.5 | 1.00 |
| pr_revm_MstoreBench | 256.6 ± 0.8 | 255.7 | 257.8 | 1.19 ± 0.01 |
| pr_levm_MstoreBench | 223.5 ± 23.1 | 213.6 | 288.7 | 1.04 ± 0.11 |

Benchmark Results: Push

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
| --- | --- | --- | --- | --- |
| main_revm_Push | 287.6 ± 3.8 | 285.1 | 298.1 | 1.08 ± 0.02 |
| main_levm_Push | 267.0 ± 2.9 | 263.1 | 271.1 | 1.01 ± 0.01 |
| pr_revm_Push | 287.4 ± 4.7 | 284.7 | 300.6 | 1.08 ± 0.02 |
| pr_levm_Push | 265.4 ± 1.8 | 263.5 | 268.4 | 1.00 |

Benchmark Results: SstoreBench_no_opt

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
| --- | --- | --- | --- | --- |
| main_revm_SstoreBench_no_opt | 164.2 ± 5.0 | 157.3 | 177.0 | 1.75 ± 0.07 |
| main_levm_SstoreBench_no_opt | 94.5 ± 2.3 | 92.5 | 100.1 | 1.01 ± 0.03 |
| pr_revm_SstoreBench_no_opt | 162.4 ± 2.3 | 157.9 | 164.9 | 1.73 ± 0.05 |
| pr_levm_SstoreBench_no_opt | 93.6 ± 2.1 | 90.6 | 97.1 | 1.00 |

builders, and also notify after add_blobs_bundle() so builders that woke
from add_transaction() for a blob tx see the complete bundle data.
