tracker(bloatnet): BloatNet benchmark testing #1986

@LouisTsai-Csie


BloatNet Test Case Tracker

Adversarial Scenarios for Execution Client Stress-Testing

State Write & I/O Pressure Scenarios

  • LSM: Compaction
    Maximize write amplification by repeatedly writing to hot storage slots, triggering frequent compactions and high disk I/O.
    Composition: A block filled with SSTORE operations targeting a small, fixed set of storage keys (see the hot-slot bytecode sketch after this list).

  • LSM: Tombstone
    Inflate logical DB size via deletion markers (tombstones), forcing compaction of older state.
    Composition: Use SELFDESTRUCT or SSTORE(0) against contracts and storage slots modified minutes earlier.

  • B+ Tree: Random Write
    Trigger non-sequential updates in B+ trees, leading to expensive page splits and rebalancing.
    Composition: Mix EIP-7702-style delegations with random SSTORE operations across a variety of contract storage layouts.

  • B+ Tree: Freelist Fragmentation
    Fragment MDBX’s freelist, then perform a large allocation to induce long search latency.
    Composition: Phase 1: spam random SSTORE + deletes. Phase 2: deploy many max-size contracts in one block using CREATE.
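
A minimal sketch of what the "LSM: Compaction" payload could look like: a Python helper that emits unrolled EVM runtime bytecode writing the current block number into the same small set of slots over and over, so every block rewrites the hot keys with a fresh value. The slot count and write count are arbitrary assumptions to be tuned to the target gas limit, not values taken from this tracker.

```python
# Sketch only: emits runtime bytecode that repeatedly SSTOREs the current
# block number into a small fixed set of "hot" slots. Slot count and the
# number of unrolled writes are arbitrary assumptions; tune them to the
# target gas limit.

NUMBER = bytes([0x43])   # pushes block.number (changes every block)
SSTORE = bytes([0x55])
STOP   = bytes([0x00])

def push1(v: int) -> bytes:
    """PUSH1 <v> (0x60)."""
    return bytes([0x60, v & 0xFF])

def hot_slot_writer(n_slots: int = 8, writes: int = 1000) -> bytes:
    """Unrolled runtime code: `writes` SSTOREs cycling over `n_slots` keys."""
    code = b""
    for i in range(writes):
        slot = i % n_slots
        # push value (block.number), then key (slot), then SSTORE
        code += NUMBER + push1(slot) + SSTORE
    return code + STOP

if __name__ == "__main__":
    runtime = hot_slot_writer()
    print(len(runtime), "bytes of runtime code")
    print(runtime[:12].hex())
```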


State Read & CPU-Bound Scenarios

  • Wide State
    Maximize state root calculation time by creating a large number of new trie leaf nodes.
    Composition: Fill the block with EOA-to-EOA transfers to fresh, widely scattered addresses so each transfer creates new trie leaves.

  • Deep State
    Induce deep trie traversal with minimal storage changes.
    Composition: Use contracts whose addresses share long prefixes, and update a single storage slot per block.

  • Sender Recovery Choke - Implementation
    Stress signature verification by maximizing unique EOA senders.
    Composition: Fill the block with minimal 21k gas transactions, each from a new EOA (see the signing sketch after this list).

  • Besu: JVM Pressure - Related Implementation (Note: this case uses CREATE, not CREATE2)
    Attempt to trigger stop-the-world GC pauses by allocating and deleting large volumes of objects.
    Composition: Use CREATE2 followed by immediate SELFDESTRUCT across thousands of contracts in a block.
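
One way to assemble the sender-recovery payload, sketched with the eth-account package: every transaction is signed by a freshly generated key, so each one forces a separate signature recovery on import. Funding the fresh EOAs and broadcasting the raw payloads are assumed to happen elsewhere in the harness; the chain ID, gas price, and sink address are placeholders.

```python
# Sketch: generate N transactions, each 21,000 gas and each from a brand-new
# EOA, so the client runs ecrecover once per unique sender. Assumes the
# eth-account package; funding and broadcasting are left to the harness.

from eth_account import Account

CHAIN_ID  = 1                    # assumption: adjust to the test network
GAS_PRICE = 10**9                # 1 gwei, arbitrary
SINK      = "0x" + "11" * 20     # arbitrary recipient address

def unique_sender_batch(n: int) -> list[bytes]:
    raw_txs = []
    for _ in range(n):
        acct = Account.create()          # fresh key pair => new unique sender
        tx = {
            "to": SINK,
            "value": 1,
            "gas": 21_000,               # minimal value transfer
            "gasPrice": GAS_PRICE,
            "nonce": 0,                  # first tx from this EOA
            "chainId": CHAIN_ID,
        }
        signed = acct.sign_transaction(tx)
        # `raw_transaction` is named `rawTransaction` on older eth-account releases
        raw_txs.append(signed.raw_transaction)
    return raw_txs

if __name__ == "__main__":
    batch = unique_sender_batch(5)
    print(f"{len(batch)} signed payloads, first bytes: {batch[0][:8].hex()}")
```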


Complex and Concurrency Scenarios

  • Reorg Under Compaction
    Test for I/O conflicts or deadlocks when a reorg occurs during background compaction.
    Composition: Phase 1: start compaction. Phase 2: force a 10+ block reorg.

  • MDBX Reorg vs Long Read
    Exploit MDBX's handling of long-lived read transactions (an open reader pins old pages and blocks their reclamation) by holding a long read open across a reorg.
    Composition: Phase 1: open a long-running RPC query. Phase 2: build a heavy-writing fork. Phase 3: reorg while the read is still active (see the long-read sketch after this list).

  • Trielog Torture Test
    Stress trielog replay mechanisms under deep reorg conditions.
    Composition: Build a 20+ block wide-state fork, then force a reorg to it.

  • Peer Overload
    Stress peer management and message handling under maximum network traffic.
    Composition: Use 100+ nodes generating spam-level transaction propagation while saturating peer slots.

  • Snapshot Generation
    Evaluate snapshot generation behavior under constant state turnover.
    Composition: Sustain >100M gas blocks with SSTORE and 7702 ops while snapshotting.

  • Pruning Under Load
    Test pruning throughput during high-volume state mutation.
    Composition: Spam blocks with many state-touching transactions, especially ones that touch old state.
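
A rough sketch of phase 1 of "MDBX Reorg vs Long Read", assuming a web3.py connection to the execution client under test: a background thread keeps a wide eth_getLogs query in flight so a read snapshot stays open while the orchestrator (not shown) builds the heavy fork and forces the reorg. The RPC URL, timeout, and query shape are assumptions.

```python
# Sketch for "MDBX Reorg vs Long Read" phase 1: keep an expensive, long-running
# read open against the execution client while the reorg is forced out of band.
# The reorg trigger itself (phases 2-3) is not shown here.

import threading
import time
from web3 import Web3, HTTPProvider

RPC_URL = "http://localhost:8545"   # assumption: EL under test

def long_read_loop(w3: Web3, stop: threading.Event) -> None:
    """Repeatedly issue a wide log query so a read snapshot stays pinned."""
    while not stop.is_set():
        logs = w3.eth.get_logs({"fromBlock": 0, "toBlock": "latest"})
        print(f"read {len(logs)} logs at head {w3.eth.block_number}")

def main() -> None:
    w3 = Web3(HTTPProvider(RPC_URL, request_kwargs={"timeout": 600}))
    stop = threading.Event()
    reader = threading.Thread(target=long_read_loop, args=(w3, stop), daemon=True)
    reader.start()                   # phase 1: long read is now in flight
    # phases 2-3 (build the heavy-writing fork, reorg to it) are driven by the
    # test orchestrator; here we just hold the reader open for a while.
    time.sleep(300)
    stop.set()
    reader.join(timeout=10)

if __name__ == "__main__":
    main()
```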


Opcode State Access Baseline Tests

Conditions under which the opcode payloads below should be exercised:

  • Normal state

  • While compacting / pruning / compacting + pruning / reorging / reorging + compacting

  • Mempool flooded with 21k gas txs / mempool flooded with invalid txs

  • Trie structures deepened by 1–2 levels relative to mainnet

Opcode payloads:

  • SSTORE — Fill block with SSTORE(0 → 1) to maximize new storage slot creation.

  • SSTORE — Fill block with SSTORE(1 → 2) to measure update cost without expansion.

  • SLOAD — Fill block with warm SLOAD to measure cached read throughput. - Implementation

  • SLOAD — Fill block with cold SLOAD to test random uncached access latency (see the warm/cold SLOAD sketch after this list).

  • CREATE / CREATE2 — Deploy as many contracts as possible in a block to test account creation overhead. - Implementation

  • SELFDESTRUCT — Delete previously created contracts in bulk to test tombstone generation cost. - Implementation (destruct existing contract), Implementation (destruct newly created contract)

  • BALANCE — Read balance of accounts (warm and cold) to evaluate account header access latency. - Implementation (Access cold state), Implementation (Access warm state)

  • EXTCODESIZE — Read code sizes from deployed contracts (warm and cold) to measure code header I/O. - Implementation (Warm state only)

  • EXTCODECOPY — Read and copy contract code to stress memory + disk read bandwidth. - Implementation

  • EXTCODEHASH — Hash contract bytecode to test compute-bound and I/O interplay. - Implementation

  • BLOCKHASH — Call BLOCKHASH for several recent blocks to evaluate historical header access performance. - Implementation
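
A sketch of the warm vs. cold SLOAD baselines as raw runtime bytecode: one generator reads the same slot repeatedly (warm after the first access under EIP-2929), the other touches a distinct slot per read (cold every time). The read counts are arbitrary assumptions to be tuned to the block gas limit.

```python
# Sketch of the warm vs cold SLOAD baselines: two bytecode generators, one
# hammering a single (warm) slot, one touching a distinct slot per read
# (cold on first access under EIP-2929). Loop counts are arbitrary.

SLOAD, POP, STOP = bytes([0x54]), bytes([0x50]), bytes([0x00])

def push2(v: int) -> bytes:
    """PUSH2 <v> (0x61) — lets us address up to 65,536 distinct slots."""
    return bytes([0x61]) + v.to_bytes(2, "big")

def warm_sload_code(reads: int = 2000) -> bytes:
    # Every read after the first hits an already-warm slot (100 gas each).
    return b"".join(push2(0) + SLOAD + POP for _ in range(reads)) + STOP

def cold_sload_code(reads: int = 2000) -> bytes:
    # Each read touches a new slot, so each access is cold (2100 gas each).
    return b"".join(push2(i) + SLOAD + POP for i in range(reads)) + STOP

if __name__ == "__main__":
    print("warm:", len(warm_sload_code()), "bytes;",
          "cold:", len(cold_sload_code()), "bytes")
```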


Opcode State Access Combination Tests

  • Cold BALANCE + Warm EXTCODESIZE
    Description: Cold-read many account balances, then warm-read large code contracts.
    Purpose: Maximize DB read pressure via divergent access patterns.

  • SSTORE(0→X→0)
    Description: Toggle each storage slot to a nonzero value, then back to zero.
    Purpose: Apply maximum write and delete cost per slot; tests refund logic and I/O efficiency (see the toggle bytecode sketch after this list).

  • CREATE + SELFDESTRUCT Loop - Implementation
    Description: Rapid contract creation and destruction within the same block.
    Purpose: Stress account insertion + tombstone overhead in a tight loop.
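
A sketch of the SSTORE(0→X→0) payload as unrolled runtime bytecode: each slot is set to a nonzero value and immediately cleared in the same transaction, exercising the full set/clear path and refund accounting. The slot count is an arbitrary assumption; tune it to the block gas limit.

```python
# Sketch of the SSTORE(0 -> X -> 0) combination: for each slot, write a
# nonzero value and immediately zero it again in the same transaction.
# Slot count is an arbitrary assumption.

SSTORE, STOP = bytes([0x55]), bytes([0x00])

def push1(v: int) -> bytes:
    return bytes([0x60, v & 0xFF])

def push2(v: int) -> bytes:
    return bytes([0x61]) + v.to_bytes(2, "big")

def toggle_slots_code(n_slots: int = 1500) -> bytes:
    code = b""
    for slot in range(n_slots):
        # 0 -> 1: fresh nonzero write to an empty slot (cold access + SSTORE set)
        code += push1(1) + push2(slot) + SSTORE
        # 1 -> 0: clear it again in the same transaction (warm, refundable under EIP-3529)
        code += push1(0) + push2(slot) + SSTORE
    return code + STOP

if __name__ == "__main__":
    print(len(toggle_slots_code()), "bytes of runtime code")
```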


Other Scenarios

  • Max Deposits Block
    Description: Include 8192 validator deposits (~32k gas each) in a single block.
    Purpose: Test consensus-layer limits and whether the full block exceeds the 10MB P2P limit.

  • Block Size Overlimit
    Description: Construct a block that exceeds the 10MB RLP size cap.
    Purpose: Verify clients safely reject or handle blocks that exceed RLP decoding limits.

  • Attestation Flood
    Description: Fill slot with maximum attestations, slashings, and exits.
    Purpose: Stress consensus-layer validation throughput and gossip handling.

  • Large TX vs Many Small TXs
    Description: Compare I/O and gas cost when touching the same state via one large tx vs many small ones.
    Purpose: Understand scaling differences in state-access granularity (a rough intrinsic-gas comparison follows below).
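
A back-of-the-envelope comparison for "Large TX vs Many Small TXs", using only the intrinsic gas components (the 21,000 base cost plus EIP-2028 calldata pricing); execution and state-access gas are the same in both shapes and are ignored here. The transaction count and calldata sizes are arbitrary assumptions.

```python
# Compare the intrinsic gas of one transaction carrying all the calldata
# against N transactions each carrying a slice of it. Uses post-EIP-2028
# calldata prices (16 gas per nonzero byte, 4 per zero byte).

TX_BASE      = 21_000
NONZERO_BYTE = 16
ZERO_BYTE    = 4

def intrinsic_gas(nonzero: int, zero: int) -> int:
    return TX_BASE + nonzero * NONZERO_BYTE + zero * ZERO_BYTE

def compare(n_txs: int, nonzero_per_tx: int, zero_per_tx: int) -> None:
    many = n_txs * intrinsic_gas(nonzero_per_tx, zero_per_tx)
    one  = intrinsic_gas(n_txs * nonzero_per_tx, n_txs * zero_per_tx)
    print(f"{n_txs} small txs: {many:,} gas  vs  1 large tx: {one:,} gas "
          f"(overhead {many - one:,} gas from the extra base costs)")

if __name__ == "__main__":
    # assumption: 500 txs, 1 KiB of nonzero calldata each
    compare(500, 1024, 0)
```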


Notes / To Do

  • Add a test that deletes a leaf from the trie and forces branch-node rearrangement (e.g. a branch collapsing into an extension or leaf node).
