
test: flaky heavy shallow fork triggers migration prune and fcu #930

Draft

craigmayhew wants to merge 17 commits into master from craig/heavy_shallow_fork_triggers_migration_prune_and_fcu

Conversation

@craigmayhew
Contributor

@craigmayhew craigmayhew commented Nov 3, 2025

Describe the changes
Currently this PR adds assertions rather than a fix.

Debug Notes

  • The test occasionally mines a total of 7 blocks when 6 are expected.
  • The test sometimes fails due to a 4th block being seen after testing_peer_with_assignments_and_name() when 3 (the number in this test's epoch settings) are expected.
    • assert_eq!(base_height, num_blocks_in_epoch as u64);
  • The test sometimes fails due to a 5th block at fork_height when 4 are expected.
    • assert_eq!(fork_height, canonical_block_level1.height);
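The failing assertions above compare an exact block count against a fixed expectation while the node may still be mining. One mitigation (a sketch only, not this test's actual code; the Node type and settled_height helper are hypothetical stand-ins for the real node API) is to poll until the reported height stops advancing before asserting:

```rust
use std::cell::Cell;

/// Hypothetical stand-in for the node under test: each poll returns the
/// current canonical height, which may still be advancing.
struct Node {
    heights: Vec<u64>,
    idx: Cell<usize>,
}

impl Node {
    fn current_height(&self) -> u64 {
        let i = self.idx.get();
        if i + 1 < self.heights.len() {
            self.idx.set(i + 1);
        }
        self.heights[i]
    }
}

/// Poll until the height has been identical for `quiet_polls` consecutive
/// reads (or `max_polls` is exhausted), then return the last observed height.
fn settled_height(node: &Node, quiet_polls: u32, max_polls: u32) -> u64 {
    let mut last = node.current_height();
    let mut stable = 0;
    for _ in 0..max_polls {
        let h = node.current_height();
        if h == last {
            stable += 1;
        } else {
            stable = 0;
            last = h;
        }
        if stable >= quiet_polls {
            break;
        }
    }
    last
}

fn main() {
    // Simulated run where a 7th block lands after the expected 6.
    let node = Node { heights: vec![6, 7, 7, 7], idx: Cell::new(0) };
    // Asserting only once the height settles avoids the off-by-one flake.
    println!("settled height = {}", settled_height(&node, 3, 50));
}
```

Asserting against the settled value (with a hard upper bound on polls) trades a little latency for determinism, rather than racing the miner.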

Checklist

  • Tests have been added/updated for the changes.
  • Documentation has been updated for the changes (if applicable).
  • The code follows Rust's style guidelines.

Additional Context
Add any other context about the pull request here.

@craigmayhew craigmayhew changed the title test: heavy shallow fork triggers migration prune and fcu test: flaky heavy shallow fork triggers migration prune and fcu Nov 3, 2025

// This failed "No reorg event received within 20 seconds" at 12:39 on 4th November
// "Error: Timeout: No reorg event received within 20 seconds" at 13:06 on 4th Nov
let reorg_event = reorg_future.await?;
Contributor Author

@craigmayhew craigmayhew Nov 6, 2025


On roughly 1% of local runs this error is seen: Error: Timeout: No reorg event received within 20 seconds. It is possible this was, or will be, fixed by #944
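The bounded await above amounts to waiting on an event source with a deadline. A minimal std-only sketch of that failure mode (an mpsc channel stands in for the node's reorg event stream; all names are hypothetical, not the test's real API):

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

fn main() {
    // Channel standing in for the node's reorg event stream (hypothetical).
    let (tx, rx) = mpsc::channel::<&str>();

    // Simulate the reorg notification arriving after a short delay.
    thread::spawn(move || {
        thread::sleep(Duration::from_millis(50));
        let _ = tx.send("reorg");
    });

    // Equivalent of the bounded await: error out if no event arrives in time.
    match rx.recv_timeout(Duration::from_secs(2)) {
        Ok(ev) => println!("received event: {}", ev),
        Err(_) => eprintln!("Timeout: No reorg event received"),
    }
}
```

If the producer side is itself racing (the reorg fires before the listener subscribes, or never fires), no timeout length fixes the flake; the subscription has to be established before the fork is triggered.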

@craigmayhew
Contributor Author

latest test results:

   TRY 2 FAIL [  23.846s] irys-chain::mod multi_node::fork_recovery::slow_heavy_reorg_upto_block_migration_depth
   TRY 2 FAIL [  23.667s] irys-chain::mod multi_node::mempool_tests::slow_heavy_evm_mempool_fork_recovery_test

@craigmayhew
Contributor Author

craigmayhew commented Nov 6, 2025

Locally

$ cargo nextest run --workspace --tests --all-targets --retries=0 heavy_shallow_fork_triggers_migration_prune_and_fcu
    Blocking waiting for file lock on package cache
    Blocking waiting for file lock on package cache
    Blocking waiting for file lock on package cache
    Finished `test` profile [unoptimized + debuginfo] target(s) in 0.54s
────────────
 Nextest run ID 0cf100eb-f092-4a76-84f2-74f96a960fbc with nextest profile: default
    Starting 1 test across 45 binaries (1057 tests skipped)
        PASS [  10.874s] irys-chain::mod multi_node::fork_recovery::heavy_shallow_fork_triggers_migration_prune_and_fcu
────────────
     Summary [  10.874s] 1 test run: 1 passed, 1057 skipped
Test passed
----------------------------------------
Reached maximum iterations (500)
All tests passed

.await?;

let _ = genesis_node
let result = genesis_node
Contributor Author


On some runs this error is seen: Error: Reth latest block did not reach expected hash 0x2fa08d711ca0028f245fac5fbbf181c3220b682255d8e3c818ac2279d6556690 within 20s

As of today I am no longer seeing this locally.
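The 20 s hash check above is a poll-until-predicate wait with a deadline. A std-only sketch of that shape (the probe closure stands in for querying Reth's latest block hash; the wait_for helper and all values are hypothetical):

```rust
use std::time::{Duration, Instant};

/// Poll `probe` until it returns `expected` or `deadline` elapses.
fn wait_for<T: PartialEq>(
    mut probe: impl FnMut() -> T,
    expected: &T,
    deadline: Duration,
    interval: Duration,
) -> Result<(), String> {
    let start = Instant::now();
    loop {
        if &probe() == expected {
            return Ok(());
        }
        if start.elapsed() >= deadline {
            return Err("Timeout: value did not reach expected within deadline".to_string());
        }
        std::thread::sleep(interval);
    }
}

fn main() {
    // Simulated hash sequence: the expected value appears on the third poll.
    let mut polls = 0;
    let probe = move || {
        polls += 1;
        if polls >= 3 { "0xexpected" } else { "0xstale" }
    };
    let outcome = wait_for(
        probe,
        &"0xexpected",
        Duration::from_secs(2),
        Duration::from_millis(10),
    );
    println!("{:?}", outcome);
}
```

Capturing the call's return value (the `let _ =` → `let result =` change in the diff above) matters for the same reason: a discarded Err here silently turns a timeout into a pass-through.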

@craigmayhew
Contributor Author

TRY 2 FAIL [   6.216s] irys-chain::mod multi_node::fork_recovery::slow_heavy_reorg_upto_block_migration_depth

@craigmayhew
Contributor Author

CI FAIL from slow_heavy_promotion_with_multiple_proofs_test

@craigmayhew
Contributor Author

slow_heavy_promotion_with_multiple_proofs_test - this failure was fixed in #944
