diff --git a/.claude/commands/codex.md b/.claude/commands/codex.md index 4fd9f0358e..11b28baad1 100644 --- a/.claude/commands/codex.md +++ b/.claude/commands/codex.md @@ -1,147 +1,70 @@ # Codex Second Opinion -Get a second opinion from OpenAI Codex on code reviews, debugging, and analysis. +Get a second opinion from OpenAI Codex on any question about this codebase — debugging, analysis, architecture, code review, or anything else. ## Arguments -The user's input is: $ARGUMENTS +The user's request is: $ARGUMENTS ## Instructions -Parse the arguments and determine which mode to use, then gather context and execute the appropriate codex command. +Your job is to gather the right context, build a prompt that gives Codex enough understanding of this codebase to be useful, and then run `codex exec` with that prompt. Codex has zero knowledge of this codebase, so the context you provide is everything. -### Mode Detection +### Step 1: Gather Context -Analyze `$ARGUMENTS` to determine the mode: +Read the Architecture Overview section of `CLAUDE.md` for a high-level understanding. Then, based on what the user is asking about, selectively read the files that are most relevant. Use these heuristics: -1. **Branch review** — arguments match `review`, `review branch`, `review --base`, or no arguments at all (default mode). Reviews the current branch against `master`. -2. **PR review** — arguments match patterns like `review PR #42`, `review pr 42`, `pr 42`, `pr #42` -3. **Uncommitted changes review** — arguments match `review uncommitted`, `uncommitted`, `wip` -4. **Commit review** — arguments match `review commit <sha>`, `commit <sha>` -5. **Free-form** — anything else (e.g. 
`debug why test_foo fails`, `explain the block validation flow`) +- **If actor services are involved**: read the service's message types and the `ServiceSenders`/`ServiceReceivers` pattern +- **If types crate is involved**: check wire formats (`BlockBody`, `BlockHeader`, `DataTransactionHeader`) +- **If consensus/mining is involved**: read the VDF + PoA section and shadow transaction patterns +- **If p2p is involved**: check gossip protocol routes and circuit breaker usage +- **If storage/packing is involved**: check chunk size constants and XOR packing invariants +- **If reth integration is involved**: check CL/EL boundary and payload building flow -### Context Gathering (All Modes) +If the user's request references specific files, diffs, or branches, read those too. Keep context focused — aim for 3-5 key files maximum. -Before running codex, you must build a targeted review prompt. This is the most important step — codex has no knowledge of this codebase, so you need to give it the right context. +### Step 2: Build the Prompt -#### Step 1: Identify touched areas +Construct a single prompt string that includes: -Get the diff for the relevant mode: +1. **Architecture summary** — 2-3 sentences describing the relevant components and how they fit together +2. **Key conventions** — patterns that apply (e.g., "custom Tokio channel-based actor system, not Actix", "crypto crates compiled with opt-level=3") +3. **Relevant code** — inline the key snippets or file contents that Codex needs to see +4. 
**The user's request** — what they actually want Codex to analyze -| Mode | Command | -|---|---| -| Branch review | `git diff master...HEAD --stat` then `git diff master...HEAD` | -| PR review | `gh pr diff <pr-number>` | -| Uncommitted | `git diff HEAD --stat` then `git diff HEAD` (includes staged + unstaged) | -| Commit | `git show <sha> --stat` then `git show <sha>` | +### Step 3: Run Codex -From the `--stat` output, identify which crates and modules are touched (e.g., `crates/actors/src/block_tree/`, `crates/types/src/`). - -#### Step 2: Read relevant context - -Based on the touched areas, read the relevant sections of `CLAUDE.md` (specifically the Architecture Overview section). Then selectively read files that provide context for the review: - -- **If actor services are touched**: read the service's message types and the `ServiceSenders`/`ServiceReceivers` pattern -- **If types crate is touched**: check for breaking changes to wire formats (`BlockBody`, `BlockHeader`, `DataTransactionHeader`) -- **If consensus/mining is touched**: read the VDF + PoA section and shadow transaction patterns -- **If p2p is touched**: check gossip protocol routes and circuit breaker usage -- **If storage/packing is touched**: check chunk size constants and XOR packing invariants -- **If reth integration is touched**: check CL/EL boundary and payload building flow - -Keep the context focused — only read files directly relevant to the diff. Aim for 3-5 key files maximum. - -#### Step 3: Build the review prompt - -Construct a prompt that includes: - -1. **Architecture summary** — a 2-3 sentence description of what the touched components do and how they fit together, derived from your reading -2. **Key conventions** — the specific patterns that apply (e.g., "This codebase uses a custom Tokio channel-based actor system, not Actix" or "Crypto crates must compile with opt-level=3") -3. 
**Review focus areas** — what to pay attention to based on the diff: - Unsafe code and memory safety (especially in packing/crypto crates) - Correctness of `Arc`/`Clone` patterns in actor message passing - Wire format backward compatibility (types crate changes) - Concurrency bugs (deadlocks, race conditions in channel-based services) - Error handling (are errors propagated correctly, or silently swallowed?) - Off-by-one errors in chunk/partition/offset calculations -### Execution by Mode -#### Mode 1: Branch Review (Default) -Run codex review with the base flag and your constructed prompt: ```bash -codex review --base master "<prompt>" +codex exec --sandbox read-only "<prompt>" ``` -Run this in the background with a 300s timeout. -#### Mode 2: PR Review -1. Extract the PR number from the arguments. -2. Gather PR metadata: - ```bash - gh pr view <pr-number> --json title,body,labels,baseRefName - ``` -3. Get the diff via `gh pr diff <pr-number>`. -4. Include the PR title and description in the constructed prompt. -5. Run codex with the combined context: - ```bash - echo "<prompt>" | codex exec --sandbox read-only -o /tmp/codex-review-$$.txt - - ``` - Run this in the background with a 300s timeout. #### Mode 3: Uncommitted Changes Review - -```bash -codex review --uncommitted "<prompt>" -``` -Run in background with 300s timeout. - -#### Mode 4: Commit Review - -Extract the commit SHA from the arguments. Run: -```bash -codex review --commit <sha> "<prompt>" -``` -Run in background with 300s timeout. - -#### Mode 5: Free-form - -Pass the arguments as a prompt to codex exec, prefixed with the architecture context you gathered: -```bash -codex exec --sandbox read-only -o /tmp/codex-review-$$.txt "<context>\n\nUser request: $ARGUMENTS" -``` -Run in background with 300s timeout. +Run this in the background with a 300s timeout. -### Progress Monitoring +### Step 4: Monitor Progress After launching codex in the background: 1. Wait ~30 seconds, then check the background task output using `TaskOutput` with `block: false`. -2. 
If there is new output, give the user a brief progress update (e.g. "Codex is analyzing the diff..." or quote a snippet of what it's working on). +2. If there is new output, give the user a brief progress update. 3. Repeat every ~30 seconds. -4. If no new output appears for 60+ seconds and the task hasn't completed, warn the user that codex may be stuck and offer to kill the process. -5. When the task completes, proceed to output presentation. +4. If no new output appears for 60+ seconds and the task hasn't completed, warn the user that codex may be stuck and offer to kill it. -### Output Presentation +### Step 5: Present Results Once codex finishes: -1. **Summary**: Present a concise summary of key findings organized by category: +1. **Summary**: Concise summary of key findings, organized by category (only include categories with findings): - Bugs and logic errors - Security concerns - Concurrency / actor system issues - - Code quality and style suggestions + - Code quality and style - Performance considerations - Only include categories that have findings. -2. **Raw output**: Include the complete codex output in a fenced code block so the user can read the full analysis. +2. **Raw output**: Complete codex output in a fenced code block. -3. **Counterpoints**: If you (Claude) disagree with any of codex's findings or think something was missed, add a "Claude's take" section noting your perspective. This is especially valuable when codex lacks the architectural context to understand why something was done a certain way. Only include this if you have a meaningful counterpoint — don't add it just for the sake of it. +3. **Counterpoints**: If you (Claude) disagree with any findings or think something was missed, add a "Claude's take" section. Only include this if you have a meaningful counterpoint. 
### Error Handling - If `codex` is not found, tell the user to install it: `npm install -g @openai/codex` -- If `gh` is not found (PR mode only), tell the user to install GitHub CLI - If codex times out (5 minutes), show whatever partial output was captured and note the timeout -- If the PR number doesn't exist, report the gh error clearly -- If the current branch IS master (branch review mode), tell the user and suggest using `uncommitted` or `commit <sha>` mode instead diff --git a/crates/actors/src/block_producer/ledger_expiry.rs b/crates/actors/src/block_producer/ledger_expiry.rs index cfb95629ed..e8ba94d447 100644 --- a/crates/actors/src/block_producer/ledger_expiry.rs +++ b/crates/actors/src/block_producer/ledger_expiry.rs @@ -95,6 +95,14 @@ pub async fn calculate_expired_ledger_fees( db: &DatabaseProvider, expect_txs_to_be_promoted: bool, ) -> eyre::Result { + // Fee distribution is only implemented for Submit ledger. Publish expiry + // simply resets partitions without fee redistribution. 
+ debug_assert_ne!( + ledger_type, + DataLedger::Publish, + "fee distribution not supported for Publish ledger" + ); + // Step 1: Collect expired partitions let expired_slots = collect_expired_partitions(parent_epoch_snapshot, block_height, ledger_type)?; @@ -288,44 +296,42 @@ fn collect_expired_partitions( ); for expired_partition in expired_partition_info { - let partition = partition_assignments - .get_assignment(expired_partition.partition_hash) - .ok_or_eyre("could not get expired partition")?; - let ledger_id = expired_partition.ledger_id; let slot_index = SlotIndex::new(expired_partition.slot_index as u64); - // Only process partitions for the target ledger type - if ledger_id == target_ledger_type { - // Verify this ledger type can expire - if ledger_id == DataLedger::Publish { - eyre::bail!("publish ledger cannot expire"); - } - - tracing::info!( - "Found expired partition for {:?} ledger at slot_index={}, miner={:?}", - ledger_id, - slot_index.0, - partition.miner_address - ); - - // Store miner_address (not reward_address) to preserve unique miner identities - // for correct fee distribution. Reward address resolution is deferred to - // aggregate_balance_deltas to ensure pooled miners (sharing a reward address) - // are counted individually for fee splitting. - expired_ledger_slot_indexes - .entry(slot_index) - .and_modify(|miners: &mut Vec| { - miners.push(partition.miner_address); - }) - .or_insert(vec![partition.miner_address]); - } else { + // Filter by ledger type FIRST — before any lookup that could fail. + // This prevents a Publish partition state inconsistency from blocking + // Submit fee distribution (or vice versa). 
+ if ledger_id != target_ledger_type { tracing::debug!( "Skipping partition with ledger_id={:?} (looking for {:?})", ledger_id, target_ledger_type ); + continue; } + + let partition = partition_assignments + .get_assignment(expired_partition.partition_hash) + .ok_or_eyre("could not get expired partition")?; + + tracing::info!( + "Found expired partition for {:?} ledger at slot_index={}, miner={:?}", + ledger_id, + slot_index.0, + partition.miner_address + ); + + // Store miner_address (not reward_address) to preserve unique miner identities + // for correct fee distribution. Reward address resolution is deferred to + // aggregate_balance_deltas to ensure pooled miners (sharing a reward address) + // are counted individually for fee splitting. + expired_ledger_slot_indexes + .entry(slot_index) + .and_modify(|miners: &mut Vec| { + miners.push(partition.miner_address); + }) + .or_insert(vec![partition.miner_address]); } Ok(expired_ledger_slot_indexes) diff --git a/crates/actors/tests/epoch_snapshot_tests.rs b/crates/actors/tests/epoch_snapshot_tests.rs index 1a5305ca5f..7611a4a134 100644 --- a/crates/actors/tests/epoch_snapshot_tests.rs +++ b/crates/actors/tests/epoch_snapshot_tests.rs @@ -185,6 +185,7 @@ async fn add_slots_test() { num_blocks_in_epoch: 100, num_capacity_partitions: Some(123), submit_ledger_epoch_length: 5, + publish_ledger_epoch_length: None, }, ..ConsensusConfig::testing() }; @@ -281,6 +282,7 @@ async fn unique_addresses_per_slot_test() { num_blocks_in_epoch: 100, num_capacity_partitions: Some(123), submit_ledger_epoch_length: 5, + publish_ledger_epoch_length: None, }, ..ConsensusConfig::testing() }; @@ -430,6 +432,7 @@ async fn partition_expiration_and_repacking_test() { submit_ledger_epoch_length: 2, num_blocks_in_epoch: 5, num_capacity_partitions: Some(123), + publish_ledger_epoch_length: None, }, ..ConsensusConfig::testing() }; @@ -964,6 +967,7 @@ async fn partitions_assignment_determinism_test() { num_blocks_in_epoch: 100, 
submit_ledger_epoch_length: 2, num_capacity_partitions: None, + publish_ledger_epoch_length: None, }, ..ConsensusConfig::testing() }; diff --git a/crates/chain-tests/src/lib.rs b/crates/chain-tests/src/lib.rs index 7f101b4968..1a2e3a264a 100644 --- a/crates/chain-tests/src/lib.rs +++ b/crates/chain-tests/src/lib.rs @@ -21,6 +21,8 @@ mod packing; #[cfg(test)] mod partition_assignments; #[cfg(test)] +mod perm_ledger_expiry; +#[cfg(test)] mod programmable_data; #[cfg(test)] mod promotion; diff --git a/crates/chain-tests/src/perm_ledger_expiry/mod.rs b/crates/chain-tests/src/perm_ledger_expiry/mod.rs new file mode 100644 index 0000000000..55c4e39ce2 --- /dev/null +++ b/crates/chain-tests/src/perm_ledger_expiry/mod.rs @@ -0,0 +1,1160 @@ +use crate::utils::IrysNodeTest; +use alloy_genesis::GenesisAccount; +use alloy_rpc_types_eth::TransactionTrait as _; +use irys_reth_node_bridge::irys_reth::shadow_tx::{ShadowTransaction, TransactionPacket}; +use irys_types::{irys::IrysSigner, DataLedger, NodeConfig, U256}; +use tracing::info; + +/// Tests that publish ledger slots expire when publish_ledger_epoch_length is configured. 
+/// Verifies: +/// - Perm data is posted and promoted before expiry +/// - After epoch_length epochs, perm slots are marked expired +/// - No fee distribution shadow transactions are generated for perm expiry +/// - Expired perm partitions are reassigned to non-expired slots via backfill +/// - User balance is unchanged by perm expiry (no fees or refunds) +#[test_log::test(tokio::test)] +async fn heavy_perm_ledger_expiry_basic() -> eyre::Result<()> { + const CHUNK_SIZE: u64 = 32; + const DATA_SIZE: usize = 64; // 2 chunks — enough to trigger slot allocation at epoch boundary + const BLOCKS_PER_EPOCH: u64 = 3; + const PUBLISH_LEDGER_EPOCH_LENGTH: u64 = 2; + const INITIAL_BALANCE: u128 = 10_000_000_000_000_000_000; + + let mut config = NodeConfig::testing(); + config.consensus.get_mut().block_migration_depth = 1; + config.consensus.get_mut().chunk_size = CHUNK_SIZE; + config.consensus.get_mut().num_chunks_in_partition = 4; + config.consensus.get_mut().num_chunks_in_recall_range = 1; + config.consensus.get_mut().epoch.num_blocks_in_epoch = BLOCKS_PER_EPOCH; + config.consensus.get_mut().epoch.publish_ledger_epoch_length = + Some(PUBLISH_LEDGER_EPOCH_LENGTH); + + let signer = IrysSigner::random_signer(&config.consensus_config()); + config.consensus.extend_genesis_accounts(vec![( + signer.address(), + GenesisAccount { + balance: U256::from(INITIAL_BALANCE).into(), + ..Default::default() + }, + )]); + + let node = IrysNodeTest::new_genesis(config.clone()) + .start_and_wait_for_packing("perm_expiry_test", 30) + .await; + + let anchor = node.get_block_by_height(0).await?.block_hash; + + // Post a transaction to the Submit ledger + let tx = node + .post_data_tx(anchor, vec![1_u8; DATA_SIZE], &signer) + .await; + node.wait_for_mempool(tx.header.id, 10).await?; + + // Upload chunks to trigger promotion to Publish + node.upload_chunks(&tx).await?; + node.wait_for_ingress_proofs_no_mining(vec![tx.header.id], 20) + .await?; + + // Mine 1 block to include tx (triggers promotion 
to Publish) + node.mine_block().await?; + + // Capture balance before expiry mining + let pre_expiry_height = node.get_canonical_chain_height().await; + let pre_expiry_block = node.get_block_by_height(pre_expiry_height).await?; + let pre_expiry_balance = node + .get_balance(signer.address(), pre_expiry_block.evm_block_hash) + .await; + info!("User balance before expiry mining: {}", pre_expiry_balance); + + // Mine to first epoch boundary to trigger additional slot allocation + let (_, epoch_height) = node.mine_until_next_epoch().await?; + info!("Reached first epoch boundary at height {}", epoch_height); + + // Verify we have 2+ perm slots (last-slot protection prevents single-slot expiry) + let snapshot = node.get_canonical_epoch_snapshot(); + let perm_slots = snapshot.ledgers.get_slots(DataLedger::Publish); + assert!( + perm_slots.len() >= 2, + "Expected 2+ perm slots after epoch boundary, got {}", + perm_slots.len() + ); + + // Derive expiry target from observed state + let slot0_last_height = perm_slots[0].last_height; + let min_blocks = PUBLISH_LEDGER_EPOCH_LENGTH * BLOCKS_PER_EPOCH; + let earliest_expiry = min_blocks + slot0_last_height; + // Round up to epoch boundary + let target_height = earliest_expiry.div_ceil(BLOCKS_PER_EPOCH) * BLOCKS_PER_EPOCH; + info!( + "Slot 0 last_height={}, min_blocks={}, target expiry height={}", + slot0_last_height, min_blocks, target_height + ); + + // Mine to expiry target + let current_height = node.get_canonical_chain_height().await; + for _ in current_height..target_height { + node.mine_block().await?; + } + + // Verify we reached the target height + let final_height = node.get_canonical_chain_height().await; + info!( + "Reached height {}, target was {}", + final_height, target_height + ); + assert!( + final_height >= target_height, + "Should have reached target height" + ); + + // --- Assertion 1: Publish ledger slots are marked expired --- + let epoch_snapshot = node.get_canonical_epoch_snapshot(); + let perm_slots = 
epoch_snapshot.ledgers.get_slots(DataLedger::Publish); + let num_slots = perm_slots.len(); + + // At least one non-last slot should be expired (last slot is protected) + let expired_count = perm_slots + .iter() + .enumerate() + .filter(|(i, slot)| *i < num_slots - 1 && slot.is_expired) + .count(); + assert!( + expired_count > 0, + "Expected at least one non-last perm slot to be expired after height {}", + final_height + ); + info!("{} of {} perm slots are expired", expired_count, num_slots); + + // --- Assertion 2: No TermFeeReward shadow txs for Publish expiry --- + let epoch_block = node.get_block_by_height(target_height).await?; + let evm_block = node + .wait_for_evm_block(epoch_block.evm_block_hash, 30) + .await?; + for tx in &evm_block.body.transactions { + let mut input = tx.input().as_ref(); + if let Ok(shadow) = ShadowTransaction::decode(&mut input) { + if let Some(packet) = shadow.as_v1() { + // TermFeeReward should never appear for Publish ledger expiry + assert!( + !matches!(packet, TransactionPacket::TermFeeReward(_)), + "Unexpected TermFeeReward shadow tx in perm expiry epoch block" + ); + } + } + } + info!( + "Verified no TermFeeReward shadow txs in epoch block at height {}", + target_height + ); + + // --- Assertion 3: Expired partitions are reassigned to non-expired slots --- + // With num_partitions_per_slot=1 and unfilled slots from epoch 1's allocation, + // backfill deterministically reassigns every expired partition at this epoch. 
+ let expired_infos = epoch_snapshot + .expired_partition_infos + .as_ref() + .expect("expired_partition_infos should be set after expiry"); + let perm_expired_partitions: Vec<_> = expired_infos + .iter() + .filter(|info| info.ledger_id == DataLedger::Publish) + .collect(); + assert!( + !perm_expired_partitions.is_empty(), + "Expected at least one expired Publish partition in expired_partition_infos" + ); + for info in &perm_expired_partitions { + let orig_slot = &perm_slots[info.slot_index]; + assert!( + orig_slot.is_expired, + "Slot {} should be expired", + info.slot_index + ); + assert!( + !orig_slot.partitions.contains(&info.partition_hash), + "Expired partition {:?} should have been removed from slot {}", + info.partition_hash, + info.slot_index + ); + let assignment = epoch_snapshot + .partition_assignments + .get_assignment(info.partition_hash) + .unwrap_or_else(|| { + panic!( + "Expired partition {:?} vanished from assignments", + info.partition_hash + ) + }); + let (Some(new_ledger_id), Some(new_slot_index)) = + (assignment.ledger_id, assignment.slot_index) + else { + panic!( + "Expired partition {:?} still in capacity pool — expected reassigned", + info.partition_hash + ); + }; + let new_ledger = DataLedger::try_from(new_ledger_id).unwrap(); + let new_slot = &epoch_snapshot.ledgers.get_slots(new_ledger)[new_slot_index]; + assert!( + !new_slot.is_expired, + "Partition {:?} reassigned to expired {:?} slot {}", + info.partition_hash, new_ledger, new_slot_index + ); + assert!( + new_slot.partitions.contains(&info.partition_hash), + "Partition {:?} assigned to {:?} slot {} but not in slot's partition list", + info.partition_hash, + new_ledger, + new_slot_index + ); + assert!( + new_ledger != info.ledger_id || new_slot_index != info.slot_index, + "Partition {:?} still assigned to original expired slot", + info.partition_hash + ); + } + info!( + "Verified {} Publish partitions expired and reassigned", + perm_expired_partitions.len() + ); + + // --- Assertion 4: 
User balance unchanged after perm expiry --- + let post_expiry_block = node.get_block_by_height(final_height).await?; + let post_expiry_balance = node + .get_balance(signer.address(), post_expiry_block.evm_block_hash) + .await; + assert_eq!( + pre_expiry_balance, post_expiry_balance, + "User balance should not change due to perm expiry (no fees or refunds)" + ); + + info!("Publish ledger expiry test passed!"); + node.stop().await; + Ok(()) +} + +/// Tests that publish and submit ledger slots can expire in the same epoch block. +/// Verifies: +/// - Both perm and term slots expire when both epoch lengths are reached simultaneously +/// - TermFeeReward shadow txs are generated (Submit fee distribution runs) +/// - Publish expiry does not block Submit fee distribution +/// - All expired partitions from both ledgers are reassigned to non-expired slots +#[test_log::test(tokio::test)] +async fn heavy_perm_and_term_expiry_same_epoch() -> eyre::Result<()> { + const CHUNK_SIZE: u64 = 32; + const DATA_SIZE: usize = 64; // 2 chunks — enough to trigger slot allocation at epoch boundary + const BLOCKS_PER_EPOCH: u64 = 3; + const PUBLISH_LEDGER_EPOCH_LENGTH: u64 = 2; + const SUBMIT_LEDGER_EPOCH_LENGTH: u64 = 2; + const INITIAL_BALANCE: u128 = 10_000_000_000_000_000_000; + + let mut config = NodeConfig::testing(); + config.consensus.get_mut().block_migration_depth = 1; + config.consensus.get_mut().chunk_size = CHUNK_SIZE; + config.consensus.get_mut().num_chunks_in_partition = 4; + config.consensus.get_mut().num_chunks_in_recall_range = 1; + config.consensus.get_mut().epoch.num_blocks_in_epoch = BLOCKS_PER_EPOCH; + config.consensus.get_mut().epoch.publish_ledger_epoch_length = + Some(PUBLISH_LEDGER_EPOCH_LENGTH); + config.consensus.get_mut().epoch.submit_ledger_epoch_length = SUBMIT_LEDGER_EPOCH_LENGTH; + + let signer = IrysSigner::random_signer(&config.consensus_config()); + config.consensus.extend_genesis_accounts(vec![( + signer.address(), + GenesisAccount { + balance: 
U256::from(INITIAL_BALANCE).into(), + ..Default::default() + }, + )]); + + let node = IrysNodeTest::new_genesis(config.clone()) + .start_and_wait_for_packing("perm_term_expiry_test", 30) + .await; + + let anchor = node.get_block_by_height(0).await?.block_hash; + + // Post tx1 (no chunks → stays on Submit) + let tx1 = node + .post_data_tx(anchor, vec![1_u8; DATA_SIZE], &signer) + .await; + node.wait_for_mempool(tx1.header.id, 10).await?; + + // Post tx2 (chunks uploaded → promoted to Publish) + let tx2 = node + .post_data_tx(anchor, vec![2_u8; DATA_SIZE], &signer) + .await; + node.wait_for_mempool(tx2.header.id, 10).await?; + node.upload_chunks(&tx2).await?; + node.wait_for_ingress_proofs_no_mining(vec![tx2.header.id], 20) + .await?; + + // Mine 1 block to include both txs, then another so block migration + // (block_migration_depth=1) promotes tx2 to Publish. + node.mine_block().await?; + node.mine_block().await?; + + // Verify promotion state before expiry + assert!( + !node.get_is_promoted(&tx1.header.id).await?, + "tx1 should NOT be promoted (no chunks uploaded)" + ); + assert!( + node.get_is_promoted(&tx2.header.id).await?, + "tx2 should be promoted (chunks uploaded)" + ); + + // Mine to first epoch boundary to trigger slot allocation (need 2+ slots per ledger) + let (_, epoch_height) = node.mine_until_next_epoch().await?; + info!("Reached first epoch boundary at height {}", epoch_height); + + // Verify multi-slot precondition for both ledgers + let snapshot = node.get_canonical_epoch_snapshot(); + let perm_slots = snapshot.ledgers.get_slots(DataLedger::Publish); + let submit_slots = snapshot.ledgers.get_slots(DataLedger::Submit); + assert!( + perm_slots.len() >= 2, + "Expected 2+ perm slots after epoch boundary, got {}", + perm_slots.len() + ); + assert!( + submit_slots.len() >= 2, + "Expected 2+ submit slots after epoch boundary, got {}", + submit_slots.len() + ); + + // Derive expiry target from observed state — compute each ledger's rounded + // expiry 
boundary separately so we catch cases where they would diverge. + let perm_slot0_last_height = perm_slots[0].last_height; + let submit_slot0_last_height = submit_slots[0].last_height; + let perm_earliest = PUBLISH_LEDGER_EPOCH_LENGTH * BLOCKS_PER_EPOCH + perm_slot0_last_height; + let submit_earliest = SUBMIT_LEDGER_EPOCH_LENGTH * BLOCKS_PER_EPOCH + submit_slot0_last_height; + let perm_target = perm_earliest.div_ceil(BLOCKS_PER_EPOCH) * BLOCKS_PER_EPOCH; + let submit_target = submit_earliest.div_ceil(BLOCKS_PER_EPOCH) * BLOCKS_PER_EPOCH; + assert_eq!( + perm_target, submit_target, + "Perm and Submit expiry must land on the same epoch boundary \ + (perm_target={perm_target}, submit_target={submit_target})" + ); + let target_height = perm_target; + info!( + "Perm slot0 last_height={}, Submit slot0 last_height={}, target expiry height={}", + perm_slot0_last_height, submit_slot0_last_height, target_height + ); + + // Mine to expiry target + let current_height = node.get_canonical_chain_height().await; + for _ in current_height..target_height { + node.mine_block().await?; + } + + let final_height = node.get_canonical_chain_height().await; + info!( + "Reached height {}, target was {}", + final_height, target_height + ); + assert!( + final_height >= target_height, + "Should have reached target height" + ); + + // --- Assertion 1: Submit slots have at least one expired --- + let epoch_snapshot = node.get_canonical_epoch_snapshot(); + let submit_slots = epoch_snapshot.ledgers.get_slots(DataLedger::Submit); + let submit_expired = submit_slots.iter().filter(|s| s.is_expired).count(); + assert!( + submit_expired > 0, + "Expected at least one expired submit slot after height {}", + final_height + ); + info!( + "{} of {} submit slots are expired", + submit_expired, + submit_slots.len() + ); + + // --- Assertion 2: Perm slots have at least one expired non-last slot --- + let perm_slots = epoch_snapshot.ledgers.get_slots(DataLedger::Publish); + let perm_num = perm_slots.len(); + 
let perm_expired = perm_slots + .iter() + .enumerate() + .filter(|(i, s)| *i < perm_num - 1 && s.is_expired) + .count(); + assert!( + perm_expired > 0, + "Expected at least one expired non-last perm slot after height {}", + final_height + ); + info!("{} of {} perm slots are expired", perm_expired, perm_num); + + // --- Assertion 3: TermFeeReward shadow tx found (proves Submit fee distribution ran) --- + let epoch_block = node.get_block_by_height(target_height).await?; + let evm_block = node + .wait_for_evm_block(epoch_block.evm_block_hash, 30) + .await?; + let mut found_term_fee_reward = false; + for tx in &evm_block.body.transactions { + let mut input = tx.input().as_ref(); + if let Ok(shadow) = ShadowTransaction::decode(&mut input) { + if let Some(TransactionPacket::TermFeeReward(_)) = shadow.as_v1() { + found_term_fee_reward = true; + info!( + "Found TermFeeReward shadow tx in epoch block at height {}", + target_height + ); + } + } + } + assert!( + found_term_fee_reward, + "TermFeeReward shadow tx must be present — Submit fee distribution should run \ + even when Publish expires simultaneously" + ); + + // --- Assertion 4 & 5: Expired partitions have coherent assignment state (both ledgers) --- + // Verify each expired partition was removed from its original slot and reassigned to a + // valid non-expired slot. Cross-ledger reassignment is expected since backfill draws + // from the global capacity pool. 
+ let expired_infos = epoch_snapshot + .expired_partition_infos + .as_ref() + .expect("expired_partition_infos should be set after expiry"); + let mut submit_expired_count = 0_usize; + let mut perm_expired_count = 0_usize; + for info in expired_infos { + let orig_slots = epoch_snapshot.ledgers.get_slots(info.ledger_id); + let orig_slot = &orig_slots[info.slot_index]; + assert!( + orig_slot.is_expired, + "Slot {} should be expired", + info.slot_index + ); + assert!( + !orig_slot.partitions.contains(&info.partition_hash), + "Expired partition {:?} should have been removed from {:?} slot {}", + info.partition_hash, + info.ledger_id, + info.slot_index + ); + let assignment = epoch_snapshot + .partition_assignments + .get_assignment(info.partition_hash) + .unwrap_or_else(|| { + panic!( + "Expired partition {:?} vanished from assignments", + info.partition_hash + ) + }); + let (Some(new_ledger_id), Some(new_slot_index)) = + (assignment.ledger_id, assignment.slot_index) + else { + panic!( + "Expired partition {:?} still in capacity pool — expected reassigned", + info.partition_hash + ); + }; + let new_ledger = DataLedger::try_from(new_ledger_id).unwrap(); + let new_slot = &epoch_snapshot.ledgers.get_slots(new_ledger)[new_slot_index]; + assert!( + !new_slot.is_expired, + "Partition {:?} reassigned to expired {:?} slot {}", + info.partition_hash, new_ledger, new_slot_index + ); + assert!( + new_slot.partitions.contains(&info.partition_hash), + "Partition {:?} assigned to {:?} slot {} but not in slot's partition list", + info.partition_hash, + new_ledger, + new_slot_index + ); + assert!( + new_ledger != info.ledger_id || new_slot_index != info.slot_index, + "Partition {:?} still assigned to original expired slot", + info.partition_hash + ); + if info.ledger_id == DataLedger::Submit { + submit_expired_count += 1; + } else if info.ledger_id == DataLedger::Publish { + perm_expired_count += 1; + } + } + assert!( + submit_expired_count > 0, + "Expected at least one expired Submit 
partition in expired_partition_infos" + ); + assert!( + perm_expired_count > 0, + "Expected at least one expired Publish partition in expired_partition_infos" + ); + info!( + "Verified {} Submit + {} Publish partitions expired and reassigned", + submit_expired_count, perm_expired_count + ); + + info!("Simultaneous perm+term expiry test passed!"); + node.stop().await; + Ok(()) +} + +/// Tests that a perm slot is NOT expired one epoch before its boundary, but IS expired +/// at the exact boundary. Best defense against off-by-one bugs in expiry logic. +/// Requires 2+ slots to exercise the boundary case (not last-slot protection). +/// Validates: +/// - At pre_expiry_epoch: slot 0 is NOT expired +/// - At expiry_epoch: slot 0 IS expired +#[test_log::test(tokio::test)] +async fn heavy_perm_exact_boundary_expiry() -> eyre::Result<()> { + const CHUNK_SIZE: u64 = 32; + const DATA_SIZE: usize = 64; // 2 chunks — enough to trigger slot allocation at epoch boundary + const BLOCKS_PER_EPOCH: u64 = 3; + const PUBLISH_LEDGER_EPOCH_LENGTH: u64 = 2; + const INITIAL_BALANCE: u128 = 10_000_000_000_000_000_000; + + let mut config = NodeConfig::testing(); + config.consensus.get_mut().block_migration_depth = 1; + config.consensus.get_mut().chunk_size = CHUNK_SIZE; + config.consensus.get_mut().num_chunks_in_partition = 4; + config.consensus.get_mut().num_chunks_in_recall_range = 1; + config.consensus.get_mut().epoch.num_blocks_in_epoch = BLOCKS_PER_EPOCH; + config.consensus.get_mut().epoch.publish_ledger_epoch_length = + Some(PUBLISH_LEDGER_EPOCH_LENGTH); + + let signer = IrysSigner::random_signer(&config.consensus_config()); + config.consensus.extend_genesis_accounts(vec![( + signer.address(), + GenesisAccount { + balance: U256::from(INITIAL_BALANCE).into(), + ..Default::default() + }, + )]); + + let node = IrysNodeTest::new_genesis(config.clone()) + .start_and_wait_for_packing("perm_exact_boundary_test", 30) + .await; + + let anchor = node.get_block_by_height(0).await?.block_hash; + 
+ // Post and promote a tx + let tx = node + .post_data_tx(anchor, vec![1_u8; DATA_SIZE], &signer) + .await; + node.wait_for_mempool(tx.header.id, 10).await?; + node.upload_chunks(&tx).await?; + node.wait_for_ingress_proofs_no_mining(vec![tx.header.id], 20) + .await?; + + // Mine 1 block to include tx (triggers promotion to Publish) + node.mine_block().await?; + + // Mine to first epoch boundary to trigger slot allocation (need 2+ slots) + let (_, epoch_height) = node.mine_until_next_epoch().await?; + info!("Reached first epoch boundary at height {}", epoch_height); + + // Read slot state and compute exact boundaries + let snapshot = node.get_canonical_epoch_snapshot(); + let perm_slots = snapshot.ledgers.get_slots(DataLedger::Publish); + assert!( + perm_slots.len() >= 2, + "Expected 2+ publish slots so this test exercises exact-boundary expiry instead of last-slot protection, got {}", + perm_slots.len() + ); + let slot0_last_height = perm_slots[0].last_height; + info!( + "Perm slots: {}, slot0 last_height: {}", + perm_slots.len(), + slot0_last_height + ); + + let min_blocks = PUBLISH_LEDGER_EPOCH_LENGTH * BLOCKS_PER_EPOCH; + let earliest_expiry = min_blocks + slot0_last_height; + // Round up to epoch boundary + let expiry_epoch = earliest_expiry.div_ceil(BLOCKS_PER_EPOCH) * BLOCKS_PER_EPOCH; + let pre_expiry_epoch = expiry_epoch - BLOCKS_PER_EPOCH; + info!( + "earliest_expiry={}, expiry_epoch={}, pre_expiry_epoch={}", + earliest_expiry, expiry_epoch, pre_expiry_epoch + ); + + // Mine to pre_expiry_epoch (may already be there if pre_expiry == current height) + let current_height = node.get_canonical_chain_height().await; + for _ in current_height..pre_expiry_epoch { + node.mine_block().await?; + } + + // --- Assertion 1: At pre_expiry_epoch, slot 0 is NOT expired --- + let pre_snapshot = node.get_canonical_epoch_snapshot(); + let pre_perm_slots = pre_snapshot.ledgers.get_slots(DataLedger::Publish); + assert!( + !pre_perm_slots[0].is_expired, + "Slot 0 should NOT be 
expired at pre-expiry epoch height {}", + pre_expiry_epoch + ); + info!( + "Confirmed slot 0 is NOT expired at height {}", + node.get_canonical_chain_height().await + ); + + // Mine to expiry_epoch + let current_height = node.get_canonical_chain_height().await; + for _ in current_height..expiry_epoch { + node.mine_block().await?; + } + + // --- Assertion 2: At expiry_epoch, slot 0 should be expired (multi-slot guaranteed above) --- + let post_snapshot = node.get_canonical_epoch_snapshot(); + let post_perm_slots = post_snapshot.ledgers.get_slots(DataLedger::Publish); + assert!( + post_perm_slots.len() >= 2, + "Expected 2+ publish slots at expiry epoch, got {}", + post_perm_slots.len() + ); + assert!( + post_perm_slots[0].is_expired, + "Slot 0 should be expired exactly at expiry epoch height {}", + expiry_epoch + ); + info!("Confirmed slot 0 IS expired at expiry epoch"); + + info!("Exact boundary expiry test passed!"); + node.stop().await; + Ok(()) +} + +/// Tests that the last remaining Publish slot never expires, even far past its expiry boundary. 
+/// Verifies: +/// - The last slot remains active after 5x the expiry window +/// - All last-slot partitions keep their ledger assignment (not returned to capacity pool) +/// - No TermFeeReward shadow txs are generated in any epoch block past min_blocks +#[test_log::test(tokio::test)] +async fn heavy_perm_last_slot_never_expires() -> eyre::Result<()> { + const CHUNK_SIZE: u64 = 32; + const DATA_SIZE: usize = 32; + const BLOCKS_PER_EPOCH: u64 = 3; + const PUBLISH_LEDGER_EPOCH_LENGTH: u64 = 1; // Very short — expiry would trigger quickly + const INITIAL_BALANCE: u128 = 10_000_000_000_000_000_000; + + let mut config = NodeConfig::testing(); + config.consensus.get_mut().block_migration_depth = 1; + config.consensus.get_mut().chunk_size = CHUNK_SIZE; + config.consensus.get_mut().num_chunks_in_partition = 4; + config.consensus.get_mut().num_chunks_in_recall_range = 1; + config.consensus.get_mut().epoch.num_blocks_in_epoch = BLOCKS_PER_EPOCH; + config.consensus.get_mut().epoch.publish_ledger_epoch_length = + Some(PUBLISH_LEDGER_EPOCH_LENGTH); + // Set Submit epoch length very high so Submit never expires during this test + config.consensus.get_mut().epoch.submit_ledger_epoch_length = 100; + + let signer = IrysSigner::random_signer(&config.consensus_config()); + config.consensus.extend_genesis_accounts(vec![( + signer.address(), + GenesisAccount { + balance: U256::from(INITIAL_BALANCE).into(), + ..Default::default() + }, + )]); + + let node = IrysNodeTest::new_genesis(config.clone()) + .start_and_wait_for_packing("perm_last_slot_test", 30) + .await; + + let anchor = node.get_block_by_height(0).await?.block_hash; + + // Post 1 tx and promote it (uses genesis slot) + let tx = node + .post_data_tx(anchor, vec![1_u8; DATA_SIZE], &signer) + .await; + node.wait_for_mempool(tx.header.id, 10).await?; + node.upload_chunks(&tx).await?; + node.wait_for_ingress_proofs_no_mining(vec![tx.header.id], 20) + .await?; + + // Mine 1 block to include tx + node.mine_block().await?; + + // 
Mine to 5x past where expiry would trigger: (EPOCH_LENGTH + 4) * BLOCKS_PER_EPOCH + let target_height = (PUBLISH_LEDGER_EPOCH_LENGTH + 4) * BLOCKS_PER_EPOCH; + let current_height = node.get_canonical_chain_height().await; + info!( + "Mining from height {} to {} (5x past expiry window)", + current_height, target_height + ); + for _ in current_height..target_height { + node.mine_block().await?; + } + + let final_height = node.get_canonical_chain_height().await; + assert!( + final_height >= target_height, + "Should have reached target height" + ); + + // --- Assertion 1: Last perm slot is NOT expired --- + let epoch_snapshot = node.get_canonical_epoch_snapshot(); + let perm_slots = epoch_snapshot.ledgers.get_slots(DataLedger::Publish); + assert_eq!( + perm_slots.len(), + 1, + "Expected exactly 1 perm slot (single-slot test), got {}", + perm_slots.len() + ); + let last_slot = perm_slots.last().unwrap(); + assert!( + !last_slot.is_expired, + "Last perm slot should never expire (last-slot protection)" + ); + info!( + "Perm slots: {}, last slot is_expired: {}", + perm_slots.len(), + last_slot.is_expired + ); + + // --- Assertion 2: All last-slot partitions still have Publish ledger assignment --- + let partition_assignments = &epoch_snapshot.partition_assignments; + for partition_hash in &last_slot.partitions { + let assignment = partition_assignments + .get_assignment(*partition_hash) + .unwrap_or_else(|| { + panic!( + "Missing partition assignment for last-slot partition {:?}", + partition_hash + ) + }); + assert!( + assignment.ledger_id == Some(DataLedger::Publish as u32), + "Last-slot partition {:?} should still be assigned to Publish, but has ledger_id={:?}", + partition_hash, + assignment.ledger_id + ); + } + info!("Verified all last-slot partitions remain assigned to Publish"); + + // --- Assertion 3: No TermFeeReward shadow txs in any epoch block past min_blocks --- + let min_blocks = PUBLISH_LEDGER_EPOCH_LENGTH * BLOCKS_PER_EPOCH; + let mut epoch_height = 
min_blocks; + while epoch_height <= final_height { + let epoch_block = node.get_block_by_height(epoch_height).await?; + let evm_block = node + .wait_for_evm_block(epoch_block.evm_block_hash, 30) + .await?; + for tx in &evm_block.body.transactions { + let mut input = tx.input().as_ref(); + if let Ok(shadow) = ShadowTransaction::decode(&mut input) { + if let Some(TransactionPacket::TermFeeReward(_)) = shadow.as_v1() { + panic!( + "Unexpected TermFeeReward shadow tx at epoch height {} — \ + last-slot protection should prevent any fee distribution", + epoch_height + ); + } + } + } + info!( + "No TermFeeReward at epoch height {} (confirmed)", + epoch_height + ); + epoch_height += BLOCKS_PER_EPOCH; + } + + info!("Last-slot protection test passed!"); + node.stop().await; + Ok(()) +} + +/// Tests the full Publish ledger recycling cycle: data posted → promoted → partition expires → +/// partition returns to capacity pool → partition reassigned to new slot → new data posted and +/// promoted into the recycled partition. +/// +/// This is the only test that proves the **complete** recycling path. Other expiry tests stop at +/// verifying coherent assignment state; this one goes further by posting new data after expiry and +/// confirming the recycled partition is actually reused for live storage. 
+/// +/// Verifies: +/// - tx1 is promoted to Publish before expiry +/// - Slot 0 expires and its partitions are returned to capacity +/// - tx2 (posted after expiry) is promoted to Publish +/// - At least one previously-expired partition hash is reassigned to a new non-expired Publish slot +/// - The recycled partition's assignment is coherent (ledger_id, slot_index, slot membership) +#[test_log::test(tokio::test)] +async fn heavy_perm_partition_recycle_and_reuse() -> eyre::Result<()> { + const CHUNK_SIZE: u64 = 32; + const DATA_SIZE: usize = 64; // 2 chunks — enough to trigger slot allocation at epoch boundary + const BLOCKS_PER_EPOCH: u64 = 3; + const PUBLISH_LEDGER_EPOCH_LENGTH: u64 = 2; + const INITIAL_BALANCE: u128 = 10_000_000_000_000_000_000; + + let mut config = NodeConfig::testing(); + config.consensus.get_mut().block_migration_depth = 1; + config.consensus.get_mut().chunk_size = CHUNK_SIZE; + config.consensus.get_mut().num_chunks_in_partition = 4; + config.consensus.get_mut().num_chunks_in_recall_range = 1; + // 1 partition per slot makes recycling deterministic: each expired slot yields exactly + // 1 capacity partition, and each new Publish slot needs exactly 1. Publish is processed + // first in backfill, so the expired partition must land back in Publish. 
+ config.consensus.get_mut().num_partitions_per_slot = 1; + config.consensus.get_mut().epoch.num_blocks_in_epoch = BLOCKS_PER_EPOCH; + config.consensus.get_mut().epoch.publish_ledger_epoch_length = + Some(PUBLISH_LEDGER_EPOCH_LENGTH); + // Set Submit epoch length very high so Submit expiry doesn't interfere + config.consensus.get_mut().epoch.submit_ledger_epoch_length = 100; + + let signer = IrysSigner::random_signer(&config.consensus_config()); + config.consensus.extend_genesis_accounts(vec![( + signer.address(), + GenesisAccount { + balance: U256::from(INITIAL_BALANCE).into(), + ..Default::default() + }, + )]); + + let node = IrysNodeTest::new_genesis(config.clone()) + .start_and_wait_for_packing("perm_recycle_test", 30) + .await; + + // ========== Phase 1: Post initial data and promote to Publish ========== + let anchor = node.get_block_by_height(0).await?.block_hash; + + let tx1 = node + .post_data_tx(anchor, vec![1_u8; DATA_SIZE], &signer) + .await; + node.wait_for_mempool(tx1.header.id, 10).await?; + node.upload_chunks(&tx1).await?; + node.wait_for_ingress_proofs_no_mining(vec![tx1.header.id], 20) + .await?; + + // Mine 2 blocks: 1 to include tx, 1 for migration (block_migration_depth=1) → promotion + node.mine_block().await?; + node.mine_block().await?; + + assert!( + node.get_is_promoted(&tx1.header.id).await?, + "tx1 should be promoted to Publish before expiry" + ); + info!("tx1 promoted to Publish"); + + // Mine to first epoch boundary → triggers slot allocation (need 2+ Publish slots) + let (_, epoch_height) = node.mine_until_next_epoch().await?; + info!("Reached first epoch boundary at height {}", epoch_height); + + // Record pre-expiry state + let snapshot = node.get_canonical_epoch_snapshot(); + let perm_slots = snapshot.ledgers.get_slots(DataLedger::Publish); + assert!( + perm_slots.len() >= 2, + "Expected 2+ perm slots after epoch boundary, got {}", + perm_slots.len() + ); + let slot0_last_height = perm_slots[0].last_height; + info!( + "Pre-expiry: 
{} perm slots, slot0 last_height={}", + perm_slots.len(), + slot0_last_height + ); + + // ========== Phase 2: Expire Publish slots ========== + let min_blocks = PUBLISH_LEDGER_EPOCH_LENGTH * BLOCKS_PER_EPOCH; + let earliest_expiry = min_blocks + slot0_last_height; + let target_height = earliest_expiry.div_ceil(BLOCKS_PER_EPOCH) * BLOCKS_PER_EPOCH; + info!( + "Expiry target height={} (earliest_expiry={})", + target_height, earliest_expiry + ); + + let current_height = node.get_canonical_chain_height().await; + for _ in current_height..target_height { + node.mine_block().await?; + } + + // Verify expiry occurred + let expiry_snapshot = node.get_canonical_epoch_snapshot(); + let perm_slots = expiry_snapshot.ledgers.get_slots(DataLedger::Publish); + let num_slots = perm_slots.len(); + let expired_count = perm_slots + .iter() + .enumerate() + .filter(|(i, slot)| *i < num_slots - 1 && slot.is_expired) + .count(); + assert!( + expired_count > 0, + "Expected at least one non-last perm slot to be expired at height {}", + target_height + ); + + // Record expired partition info (retaining origin slot_index for stronger assertions) + let expired_infos = expiry_snapshot + .expired_partition_infos + .as_ref() + .expect("expired_partition_infos should be set after expiry"); + let expired_perm_infos: Vec<_> = expired_infos + .iter() + .filter(|info| info.ledger_id == DataLedger::Publish) + .copied() + .collect(); + assert!( + !expired_perm_infos.is_empty(), + "Expected at least one expired Publish partition in expired_partition_infos" + ); + + // Coherence checks at expiry time (pattern from heavy_perm_ledger_expiry_basic): + // For each expired partition, verify original slot is expired, partition was removed, + // and the assignment still exists. 
+ for info in &expired_perm_infos { + let orig_slot = &perm_slots[info.slot_index]; + assert!( + orig_slot.is_expired, + "Original slot {} should be expired", + info.slot_index + ); + assert!( + !orig_slot.partitions.contains(&info.partition_hash), + "Expired partition {:?} should have been removed from slot {}", + info.partition_hash, + info.slot_index + ); + expiry_snapshot + .partition_assignments + .get_assignment(info.partition_hash) + .unwrap_or_else(|| { + panic!( + "Expired partition {:?} vanished from assignments at expiry", + info.partition_hash + ) + }); + } + info!( + "{} perm slots expired, {} partition hashes returned to capacity", + expired_count, + expired_perm_infos.len() + ); + + // ========== Phase 3: Post new data after expiry ========== + let post_expiry_height = node.get_canonical_chain_height().await; + let anchor2 = node + .get_block_by_height(post_expiry_height) + .await? + .block_hash; + + let tx2 = node + .post_data_tx(anchor2, vec![2_u8; DATA_SIZE], &signer) + .await; + node.wait_for_mempool(tx2.header.id, 10).await?; + node.upload_chunks(&tx2).await?; + node.wait_for_ingress_proofs_no_mining(vec![tx2.header.id], 20) + .await?; + + // Mine 2 blocks: 1 to include tx2, 1 for migration → promotion + node.mine_block().await?; + node.mine_block().await?; + + // Mine to next epoch boundary → triggers slot allocation for new data + backfill with + // recycled partitions + let (_, epoch_height2) = node.mine_until_next_epoch().await?; + info!("Reached second epoch boundary at height {}", epoch_height2); + + // ========== Phase 4: Verify recycling worked ========== + // Verify tx2 is promoted + assert!( + node.get_is_promoted(&tx2.header.id).await?, + "tx2 should be promoted to Publish after expiry" + ); + info!("tx2 promoted to Publish after recycling"); + + // Get final snapshot and verify every expired Publish partition was recycled. 
+ // With num_partitions_per_slot=1 and Publish processed first in backfill, every + // expired partition deterministically ends up in a non-expired Publish slot. + let final_snapshot = node.get_canonical_epoch_snapshot(); + let final_perm_slots = final_snapshot.ledgers.get_slots(DataLedger::Publish); + + for info in &expired_perm_infos { + let assignment = final_snapshot + .partition_assignments + .get_assignment(info.partition_hash) + .unwrap_or_else(|| { + panic!( + "Expired partition {:?} vanished from assignments", + info.partition_hash + ) + }); + let (Some(new_ledger_id), Some(new_slot_index)) = + (assignment.ledger_id, assignment.slot_index) + else { + panic!( + "Expired partition {:?} still in capacity pool — expected recycled into Publish", + info.partition_hash + ); + }; + let new_ledger = DataLedger::try_from(new_ledger_id).unwrap(); + assert_eq!( + new_ledger, + DataLedger::Publish, + "Expired partition {:?} recycled into {:?} instead of Publish", + info.partition_hash, + new_ledger + ); + let new_slot = &final_perm_slots[new_slot_index]; + assert!( + !new_slot.is_expired, + "Partition {:?} recycled into expired Publish slot {}", + info.partition_hash, new_slot_index + ); + assert!( + new_slot.partitions.contains(&info.partition_hash), + "Partition {:?} assigned to Publish slot {} but not in slot's partition list", + info.partition_hash, + new_slot_index + ); + assert!( + new_ledger != info.ledger_id || new_slot_index != info.slot_index, + "Partition {:?} still in its original expired slot ({:?} slot {})", + info.partition_hash, + info.ledger_id, + info.slot_index + ); + info!( + "Partition {:?} recycled: {:?} slot {} → Publish slot {}", + info.partition_hash, info.ledger_id, info.slot_index, new_slot_index + ); + } + info!( + "All {} expired partitions recycled into non-expired Publish slots", + expired_perm_infos.len() + ); + + info!("Perm partition recycle and reuse test passed!"); + node.stop().await; + Ok(()) +} + +/// Tests that perm slots never 
expire when publish_ledger_epoch_length is None (mainnet config). +/// Uses multi-slot setup to distinguish the None config gate from last-slot protection. +/// Verifies: +/// - Multiple Publish slots exist (precondition — proves None is what blocks expiry, not last-slot) +/// - Zero perm slots are expired after mining far past where expiry would trigger +/// - All Publish partition assignments remain active (not returned to capacity pool) +#[test_log::test(tokio::test)] +async fn heavy_perm_expiry_disabled_nothing_expires() -> eyre::Result<()> { + const CHUNK_SIZE: u64 = 32; + const DATA_SIZE: usize = 64; // 2 chunks — enough to trigger slot allocation at epoch boundary + const BLOCKS_PER_EPOCH: u64 = 3; + const INITIAL_BALANCE: u128 = 10_000_000_000_000_000_000; + + let mut config = NodeConfig::testing(); + config.consensus.get_mut().block_migration_depth = 1; + config.consensus.get_mut().chunk_size = CHUNK_SIZE; + config.consensus.get_mut().num_chunks_in_partition = 4; + config.consensus.get_mut().num_chunks_in_recall_range = 1; + config.consensus.get_mut().num_partitions_per_slot = 1; + config.consensus.get_mut().epoch.num_blocks_in_epoch = BLOCKS_PER_EPOCH; + // Mainnet config: perm expiry disabled + config.consensus.get_mut().epoch.publish_ledger_epoch_length = None; + + let signer = IrysSigner::random_signer(&config.consensus_config()); + config.consensus.extend_genesis_accounts(vec![( + signer.address(), + GenesisAccount { + balance: U256::from(INITIAL_BALANCE).into(), + ..Default::default() + }, + )]); + + let node = IrysNodeTest::new_genesis(config.clone()) + .start_and_wait_for_packing("perm_expiry_disabled_test", 30) + .await; + + let anchor = node.get_block_by_height(0).await?.block_hash; + + // Post + promote tx1 in epoch 0 + let tx1 = node + .post_data_tx(anchor, vec![1_u8; DATA_SIZE], &signer) + .await; + node.wait_for_mempool(tx1.header.id, 10).await?; + node.upload_chunks(&tx1).await?; + node.wait_for_ingress_proofs_no_mining(vec![tx1.header.id], 
20) + .await?; + + // Mine 1 block to include tx1 + node.mine_block().await?; + + // Mine to first epoch boundary → triggers slot allocation → 2+ Publish slots + let (_, epoch_height) = node.mine_until_next_epoch().await?; + info!("Reached first epoch boundary at height {}", epoch_height); + + // Post + promote tx2 in epoch 1 + let anchor2 = node.get_block_by_height(epoch_height).await?.block_hash; + let tx2 = node + .post_data_tx(anchor2, vec![2_u8; DATA_SIZE], &signer) + .await; + node.wait_for_mempool(tx2.header.id, 10).await?; + node.upload_chunks(&tx2).await?; + node.wait_for_ingress_proofs_no_mining(vec![tx2.header.id], 20) + .await?; + + // Mine 1 block to include tx2 + node.mine_block().await?; + + // Mine to 8 * BLOCKS_PER_EPOCH (far past where expiry would trigger if Some(2)) + let target_height = 8 * BLOCKS_PER_EPOCH; + let current_height = node.get_canonical_chain_height().await; + info!( + "Mining from height {} to {} (far past hypothetical expiry)", + current_height, target_height + ); + for _ in current_height..target_height { + node.mine_block().await?; + } + + let final_height = node.get_canonical_chain_height().await; + assert!( + final_height >= target_height, + "Should have reached target height" + ); + + // --- Precondition: Multi-slot (proves None config gate, not last-slot protection) --- + let epoch_snapshot = node.get_canonical_epoch_snapshot(); + let perm_slots = epoch_snapshot.ledgers.get_slots(DataLedger::Publish); + assert!( + perm_slots.len() >= 2, + "Expected 2+ perm slots for multi-slot verification, got {}. 
\ + With a single slot, last-slot protection prevents expiry regardless of config.", + perm_slots.len() + ); + info!( + "Perm slots: {} (multi-slot precondition met)", + perm_slots.len() + ); + + // --- Assertion 1: Zero perm slots are expired --- + let expired_count = perm_slots.iter().filter(|s| s.is_expired).count(); + assert_eq!( + expired_count, 0, + "No perm slots should expire when publish_ledger_epoch_length is None, but {} are expired", + expired_count + ); + info!( + "Confirmed zero perm slots are expired at height {}", + final_height + ); + + // --- Assertion 2: All Publish partition assignments still have Publish ledger --- + let partition_assignments = &epoch_snapshot.partition_assignments; + for (slot_index, slot) in perm_slots.iter().enumerate() { + for partition_hash in &slot.partitions { + let assignment = partition_assignments + .get_assignment(*partition_hash) + .unwrap_or_else(|| { + panic!( + "Missing partition assignment for {:?} at slot {}", + partition_hash, slot_index + ) + }); + assert!( + assignment.ledger_id == Some(DataLedger::Publish as u32), + "Perm partition {:?} at slot {} should still be assigned to Publish but has ledger_id={:?}", + partition_hash, slot_index, assignment.ledger_id + ); + } + } + info!("Verified all Publish partition assignments remain active"); + + info!("Perm expiry disabled (mainnet safety) test passed!"); + node.stop().await; + Ok(()) +} diff --git a/crates/database/src/data_ledger.rs b/crates/database/src/data_ledger.rs index f1a9a297ca..3c3079a5c7 100644 --- a/crates/database/src/data_ledger.rs +++ b/crates/database/src/data_ledger.rs @@ -82,25 +82,30 @@ impl TermLedger { pub fn get_expired_slot_indexes(&self, epoch_height: u64) -> Vec { let mut expired_slot_indexes = Vec::new(); + let min_blocks = self + .epoch_length + .checked_mul(self.num_blocks_in_epoch) + .expect("epoch_length * num_blocks_in_epoch overflows u64"); + tracing::debug!( "expire_old_slots: epoch_height={}, epoch_length={}, 
num_blocks_in_epoch={}, min_height_needed={}", epoch_height, self.epoch_length, self.num_blocks_in_epoch, - self.epoch_length * self.num_blocks_in_epoch + min_blocks ); // Make sure enough blocks have transpired before calculating expiry height - if epoch_height < self.epoch_length * self.num_blocks_in_epoch { + if epoch_height < min_blocks { tracing::warn!( "Not enough blocks yet: {} < {}, returning empty", epoch_height, - self.epoch_length * self.num_blocks_in_epoch + min_blocks ); return expired_slot_indexes; } - let expiry_height = epoch_height - self.epoch_length * self.num_blocks_in_epoch; + let expiry_height = epoch_height - min_blocks; tracing::info!("Calculated expiry_height={}", expiry_height); // Collect indices of slots to expire @@ -182,7 +187,7 @@ impl LedgerCore for PermanentLedger { .enumerate() .filter_map(|(idx, slot)| { let needed = self.num_partitions_per_slot as usize - slot.partitions.len(); - if needed > 0 { + if needed > 0 && !slot.is_expired { Some((idx, needed)) } else { None @@ -247,21 +252,25 @@ impl LedgerCore for TermLedger { } /// A container for managing permanent and term ledgers with type-safe access -/// through the \[Ledger\] enum. +/// through the [Ledger] enum. /// /// The permanent and term ledgers are intentionally given different types to -/// prevent runtime errors: -/// - The permanent ledger (`perm`) holds critical data that must never -/// be expired or lost -/// - Term ledgers (`term`) hold temporary data and support expiration +/// provide distinct behavior: +/// - The permanent ledger (`perm`) holds published data; when +/// `publish_ledger_epoch_length` is configured, its slots can expire +/// - Term ledgers (`term`) hold temporary data with mandatory expiration /// -/// This type separation ensures operations like partition expiration can only -/// be performed on term ledgers, making any attempt to expire a permanent -/// ledger partition fail at compile time. 
+/// Expiry logic for both ledger types is handled by `Ledgers` methods +/// (`expire_partitions`, `get_expiring_partitions`), keeping `PermanentLedger` +/// itself clean of expiry concerns. #[derive(Debug, Clone, Hash)] pub struct Ledgers { perm: PermanentLedger, term: Vec, + /// When Some(n), publish ledger slots expire after n epochs + publish_ledger_epoch_length: Option, + /// Blocks per epoch (needed for expiry height calculation) + num_blocks_in_epoch: u64, } impl Ledgers { @@ -270,6 +279,8 @@ impl Ledgers { Self { perm: PermanentLedger::new(config), term: vec![TermLedger::new(DataLedger::Submit, config)], + publish_ledger_epoch_length: config.epoch.publish_ledger_epoch_length, + num_blocks_in_epoch: config.epoch.num_blocks_in_epoch, } } @@ -282,16 +293,30 @@ impl Ledgers { 1 + self.term.len() } - /// Get all of the partition hashes that have expired out of term ledgers - pub fn expire_term_partitions(&mut self, epoch_height: u64) -> Vec { + /// Get all partition hashes that have expired out of both perm and term ledgers. + /// Perm slots only expire when `publish_ledger_epoch_length` is configured. 
+ pub fn expire_partitions(&mut self, epoch_height: u64) -> Vec { let mut expired_partitions: Vec = Vec::new(); + // Expire perm ledger slots using shared helper + for (slot_index, partition_hashes, ledger_id) in self.get_perm_expiring_slots(epoch_height) + { + self.perm.slots[slot_index].is_expired = true; + for partition_hash in partition_hashes { + expired_partitions.push(ExpiringPartitionInfo { + partition_hash, + ledger_id, + slot_index, + }); + } + } + // Collect expired partition hashes from term ledgers for term_ledger in &mut self.term { - let ledger_id = DataLedger::try_from(term_ledger.ledger_id).unwrap(); + let ledger_id = DataLedger::try_from(term_ledger.ledger_id) + .expect("term ledger_id is always constructed from a valid DataLedger variant"); for expired_index in term_ledger.expire_old_slots(epoch_height) { for partition_hash in term_ledger.slots[expired_index].partitions.iter() { - // Add ExpiringPartitionInfo for each expired partition_hash expired_partitions.push(ExpiringPartitionInfo { partition_hash: *partition_hash, ledger_id, @@ -304,15 +329,29 @@ impl Ledgers { expired_partitions } - pub fn get_expiring_term_partitions(&self, epoch_height: u64) -> Vec { + /// Get all partition hashes that would expire at this epoch height (read-only). + /// Unlike `expire_partitions`, this does NOT mark slots as expired. 
+ pub fn get_expiring_partitions(&self, epoch_height: u64) -> Vec { let mut expired_partitions: Vec = Vec::new(); - // Collect expired partition hashes from term ledgers + // Check perm ledger slots using shared helper + for (slot_index, partition_hashes, ledger_id) in self.get_perm_expiring_slots(epoch_height) + { + for partition_hash in partition_hashes { + expired_partitions.push(ExpiringPartitionInfo { + partition_hash, + ledger_id, + slot_index, + }); + } + } + + // Collect from term ledgers (existing logic) for term_ledger in &self.term { - let ledger_id = DataLedger::try_from(term_ledger.ledger_id).unwrap(); + let ledger_id = DataLedger::try_from(term_ledger.ledger_id) + .expect("term ledger_id is always constructed from a valid DataLedger variant"); for expiring_slot_index in term_ledger.get_expired_slot_indexes(epoch_height) { for partition_hash in term_ledger.slots[expiring_slot_index].partitions.iter() { - // Add ExpiringPartitionInfo for each expired partition_hash expired_partitions.push(ExpiringPartitionInfo { partition_hash: *partition_hash, ledger_id, @@ -340,6 +379,43 @@ impl Ledgers { .unwrap_or_else(|| panic!("Term ledger {:?} not found", ledger)) } + /// Returns (slot_index, partition_hashes, perm_ledger_id) for each perm slot + /// that would expire at `epoch_height`. Read-only — does not mark slots. 
+    fn get_perm_expiring_slots(
+        &self,
+        epoch_height: u64,
+    ) -> Vec<(usize, Vec<H256>, DataLedger)> {
+        let Some(epoch_length) = self.publish_ledger_epoch_length else {
+            return Vec::new();
+        };
+
+        let min_blocks = epoch_length
+            .checked_mul(self.num_blocks_in_epoch)
+            .expect("publish_ledger_epoch_length * num_blocks_in_epoch overflows u64");
+
+        if epoch_height < min_blocks {
+            return Vec::new();
+        }
+
+        let expiry_height = epoch_height - min_blocks;
+        let perm_ledger_id = DataLedger::try_from(self.perm.ledger_id)
+            .expect("perm.ledger_id is always DataLedger::Publish");
+        let num_slots = self.perm.slots.len();
+        let last_slot_index = num_slots.saturating_sub(1);
+
+        let mut result = Vec::new();
+        for (slot_index, slot) in self.perm.slots.iter().enumerate() {
+            // Never expire the last slot
+            if num_slots > 0 && slot_index == last_slot_index {
+                continue;
+            }
+            if slot.last_height <= expiry_height && !slot.is_expired {
+                result.push((slot_index, slot.partitions.clone(), perm_ledger_id));
+            }
+        }
+        result
+    }
+
     pub fn get_slots(&self, ledger: DataLedger) -> &Vec {
         match ledger {
             DataLedger::Publish => self.perm.get_slots(),
@@ -426,3 +502,137 @@ impl IndexMut<DataLedger> for Ledgers {
         }
     }
 }
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use irys_types::ConsensusConfig;
+
+    fn make_test_config(publish_epoch_length: Option<u64>) -> ConsensusConfig {
+        let mut config = ConsensusConfig::testing();
+        config.epoch.publish_ledger_epoch_length = publish_epoch_length;
+        config.epoch.num_blocks_in_epoch = 10;
+        config
+    }
+
+    #[test]
+    fn test_perm_expiry_disabled() {
+        let config = make_test_config(None);
+        let mut ledgers = Ledgers::new(&config);
+        // Add a perm slot at height 1
+        ledgers.perm.allocate_slots(1, 1);
+        ledgers.perm.slots[0].partitions.push(H256::random());
+        // At height 1000, nothing should expire
+        let expired = ledgers.expire_partitions(1000);
+        assert!(expired.iter().all(|e| e.ledger_id != DataLedger::Publish));
+    }
+
+    #[test]
+    fn test_perm_expiry_enabled() {
+        let config = make_test_config(Some(2)); // 2 epochs
+        let mut ledgers = Ledgers::new(&config);
+        // num_blocks_in_epoch = 10, epoch_length = 2
+        // expiry_height = epoch_height - (2 * 10) = epoch_height - 20
+
+        // Add two perm slots
+        ledgers.perm.allocate_slots(1, 1); // slot 0 at height 1
+        ledgers.perm.slots[0].partitions.push(H256::random());
+        ledgers.perm.allocate_slots(1, 25); // slot 1 at height 25
+        ledgers.perm.slots[1].partitions.push(H256::random());
+
+        // At epoch_height = 30: expiry_height = 30 - 20 = 10
+        // Slot 0 (last_height=1) <= 10: EXPIRED
+        // Slot 1 (last_height=25) > 10: NOT expired (also last slot)
+        let expired = ledgers.expire_partitions(30);
+        let perm_expired: Vec<_> = expired
+            .iter()
+            .filter(|e| e.ledger_id == DataLedger::Publish)
+            .collect();
+        assert_eq!(perm_expired.len(), 1);
+        assert!(ledgers.perm.slots[0].is_expired);
+        assert!(!ledgers.perm.slots[1].is_expired);
+    }
+
+    #[test]
+    fn test_perm_expiry_never_expires_last_slot() {
+        let config = make_test_config(Some(1)); // 1 epoch
+        let mut ledgers = Ledgers::new(&config);
+        // Add only one perm slot
+        ledgers.perm.allocate_slots(1, 1);
+        ledgers.perm.slots[0].partitions.push(H256::random());
+
+        // At epoch_height = 100: should NOT expire (it's the last slot)
+        let expired = ledgers.expire_partitions(100);
+        let perm_expired: Vec<_> = expired
+            .iter()
+            .filter(|e| e.ledger_id == DataLedger::Publish)
+            .collect();
+        assert_eq!(perm_expired.len(), 0);
+        assert!(!ledgers.perm.slots[0].is_expired);
+    }
+
+    #[test]
+    fn test_perm_expiry_not_enough_blocks() {
+        let config = make_test_config(Some(2)); // 2 epochs * 10 blocks = 20 min
+        let mut ledgers = Ledgers::new(&config);
+        ledgers.perm.allocate_slots(2, 1);
+        ledgers.perm.slots[0].partitions.push(H256::random());
+        ledgers.perm.slots[1].partitions.push(H256::random());
+
+        // At epoch_height = 15 (< 20 minimum): nothing expires
+        let expired = ledgers.expire_partitions(15);
+        let perm_expired: Vec<_> = expired
+            .iter()
+            .filter(|e| e.ledger_id == DataLedger::Publish)
+            .collect();
+        assert_eq!(perm_expired.len(), 0);
+    }
+
+    #[test]
+    fn test_get_expiring_partitions_includes_perm() {
+        let config = make_test_config(Some(2));
+        let mut ledgers = Ledgers::new(&config);
+        ledgers.perm.allocate_slots(2, 1);
+        ledgers.perm.slots[0].partitions.push(H256::random());
+        ledgers.perm.slots[1].partitions.push(H256::random());
+
+        // Read-only: should report slot 0 as expiring without marking it
+        let expiring = ledgers.get_expiring_partitions(30);
+        let perm_expiring: Vec<_> = expiring
+            .iter()
+            .filter(|e| e.ledger_id == DataLedger::Publish)
+            .collect();
+        assert_eq!(perm_expiring.len(), 1);
+        // Verify NOT marked as expired (read-only)
+        assert!(!ledgers.perm.slots[0].is_expired);
+    }
+
+    #[test]
+    fn test_perm_get_slot_needs_filters_expired() {
+        let config = ConsensusConfig::testing();
+        let mut perm = PermanentLedger::new(&config);
+
+        // Add two slots (both empty, so both need partitions)
+        perm.allocate_slots(2, 1);
+
+        // Mark slot 0 as expired
+        perm.slots[0].is_expired = true;
+
+        let needs = perm.get_slot_needs();
+        // Slot 0 is expired — should not appear in needs
+        // Slot 1 needs partitions — should appear
+        assert_eq!(needs.len(), 1);
+        assert_eq!(needs[0].0, 1); // slot index 1
+    }
+
+    #[test]
+    fn test_get_expiring_partitions_disabled_perm() {
+        let config = make_test_config(None);
+        let mut ledgers = Ledgers::new(&config);
+        ledgers.perm.allocate_slots(1, 1);
+        ledgers.perm.slots[0].partitions.push(H256::random());
+
+        let expiring = ledgers.get_expiring_partitions(1000);
+        assert!(expiring.iter().all(|e| e.ledger_id != DataLedger::Publish));
+    }
+}
diff --git a/crates/domain/src/snapshots/epoch_snapshot/epoch_snapshot.rs b/crates/domain/src/snapshots/epoch_snapshot/epoch_snapshot.rs
index a0b2f171e0..b1f586a3be 100644
--- a/crates/domain/src/snapshots/epoch_snapshot/epoch_snapshot.rs
+++ b/crates/domain/src/snapshots/epoch_snapshot/epoch_snapshot.rs
@@ -307,7 +307,7 @@ impl EpochSnapshot {
         self.allocate_additional_ledger_slots(previous_epoch_block, new_epoch_block);
-        self.expire_term_ledger_slots(new_epoch_block);
+        self.expire_ledger_slots(new_epoch_block);
         self.apply_unpledges(&new_epoch_commitments)?;
@@ -466,13 +466,13 @@ impl EpochSnapshot {
         }
     }
-    /// Loops though all of the term ledgers and looks for slots that are older
-    /// than the `epoch_length` (term length) of the ledger.
-    /// Stores a vec of expired partition hashes in the epoch snapshot
-    fn expire_term_ledger_slots(&mut self, new_epoch_block: &IrysBlockHeader) {
+    /// Loops through all ledgers and looks for slots that are older
+    /// than their configured epoch length. Marks them expired and stores
+    /// the expired partition hashes in the epoch snapshot.
+    fn expire_ledger_slots(&mut self, new_epoch_block: &IrysBlockHeader) {
         let epoch_height = new_epoch_block.height;
         let expired_partitions: Vec =
-            self.ledgers.expire_term_partitions(epoch_height);
+            self.ledgers.expire_partitions(epoch_height);
         // Return early if there's no more work to do
         if expired_partitions.is_empty() {
@@ -1198,8 +1198,7 @@ impl EpochSnapshot {
     /// Used during block production to produce epoch blocks with the correct term fee distributions.
     pub fn get_expiring_partition_info(&self, epoch_height: u64) -> Vec {
         // expiring at next next block
-        let ledgers = self.ledgers.clone();
-        ledgers.get_expiring_term_partitions(epoch_height)
+        self.ledgers.get_expiring_partitions(epoch_height)
     }
     pub fn get_first_unexpired_slot_index(&self, ledger_id: DataLedger) -> usize {
diff --git a/crates/types/src/config/consensus.rs b/crates/types/src/config/consensus.rs
index 2d3f8e8837..e5d2cbed7e 100644
--- a/crates/types/src/config/consensus.rs
+++ b/crates/types/src/config/consensus.rs
@@ -324,6 +324,13 @@ pub struct EpochConfig {
     /// Optional configuration for capacity provisioning at genesis
     pub num_capacity_partitions: Option<u64>,
+
+    /// Number of epochs before a publish ledger partition expires.
+    /// `None` = never expire (mainnet). `Some(n)` = expire after n epochs (testnet).
+    /// `skip_serializing_if` ensures `None` is omitted from canonical JSON,
+    /// keeping the consensus hash unchanged for mainnet nodes.
+    #[serde(default, skip_serializing_if = "Option::is_none")]
+    pub publish_ledger_epoch_length: Option<u64>,
 }

 /// # EMA (Exponential Moving Average) Configuration
@@ -585,6 +592,8 @@ impl ConsensusConfig {
                 submit_ledger_epoch_length: 5,
                 // Optional configuration for capacity provisioning at genesis
                 num_capacity_partitions: None,
+                // Publish ledger never expires on mainnet
+                publish_ledger_epoch_length: None,
             },
             // Number of blocks between EMA price recalculations
             // Lower values make prices more responsive, higher values provide more stability
             ema: EmaConfig {
@@ -703,6 +712,8 @@ impl ConsensusConfig {
                 num_blocks_in_epoch: 100,
                 submit_ledger_epoch_length: 5,
                 num_capacity_partitions: None,
+                // Publish ledger never expires in testing config
+                publish_ledger_epoch_length: None,
             },
             difficulty_adjustment: DifficultyAdjustmentConfig {
@@ -836,6 +847,8 @@ impl ConsensusConfig {
                 num_blocks_in_epoch: 360,
                 submit_ledger_epoch_length: 5,
                 num_capacity_partitions: None,
+                // 168 epochs * ~1hr/epoch = ~7 days
+                publish_ledger_epoch_length: Some(168),
             },
             difficulty_adjustment: DifficultyAdjustmentConfig {
diff --git a/crates/types/src/config/mod.rs b/crates/types/src/config/mod.rs
index 960cfe4a3c..87c72d897c 100644
--- a/crates/types/src/config/mod.rs
+++ b/crates/types/src/config/mod.rs
@@ -146,6 +146,22 @@ impl Config {
             "mempool.max_pending_chunk_items must be > 0 (a zero-capacity pending chunk cache would silently drop all pre-header chunks)"
         );
+
+        // publish_ledger_epoch_length must be > 0 if set, and must not overflow when multiplied
+        if let Some(n) = self.consensus.epoch.publish_ledger_epoch_length {
+            ensure!(
+                n > 0,
+                "publish_ledger_epoch_length must be > 0 when set (got {})",
+                n
+            );
+            ensure!(
+                n.checked_mul(self.consensus.epoch.num_blocks_in_epoch)
+                    .is_some(),
+                "publish_ledger_epoch_length ({}) * num_blocks_in_epoch ({}) overflows u64",
+                n,
+                self.consensus.epoch.num_blocks_in_epoch
+            );
+        }
+
         Ok(())
     }
 }
@@ -837,6 +853,49 @@ mod tests {
         assert_eq!(consensus.chain_id, 3282);
     }
+
+    #[test]
+    fn test_publish_ledger_epoch_length_validation() {
+        // Some(0) should fail
+        let mut node_config = NodeConfig::testing();
+        node_config
+            .consensus
+            .get_mut()
+            .epoch
+            .publish_ledger_epoch_length = Some(0);
+        let config = Config::new_with_random_peer_id(node_config);
+        assert!(config.validate().is_err());
+
+        // Some(1) should pass
+        let mut node_config = NodeConfig::testing();
+        node_config
+            .consensus
+            .get_mut()
+            .epoch
+            .publish_ledger_epoch_length = Some(1);
+        let config = Config::new_with_random_peer_id(node_config);
+        assert!(config.validate().is_ok());
+
+        // None should pass
+        let mut node_config = NodeConfig::testing();
+        node_config
+            .consensus
+            .get_mut()
+            .epoch
+            .publish_ledger_epoch_length = None;
+        let config = Config::new_with_random_peer_id(node_config);
+        assert!(config.validate().is_ok());
+
+        // u64::MAX should fail (overflow with num_blocks_in_epoch)
+        let mut node_config = NodeConfig::testing();
+        node_config
+            .consensus
+            .get_mut()
+            .epoch
+            .publish_ledger_epoch_length = Some(u64::MAX);
+        let config = Config::new_with_random_peer_id(node_config);
+        assert!(config.validate().is_err());
+    }
+
     #[test]
     fn test_with_expected_genesis_hash() {
         let config = Config::new_with_random_peer_id(NodeConfig::testing());
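The expiry rule introduced by this diff can be summarized in isolation: with `publish_ledger_epoch_length = Some(n)`, a slot expires once `epoch_height - n * num_blocks_in_epoch >= slot.last_height`, the newest slot is always kept alive, and `None` disables expiry entirely. The following is a minimal standalone sketch of that arithmetic; `Slot` and `expiring_slot_indices` are hypothetical stand-ins, not the crate's actual types or API:

```rust
/// Hypothetical stand-in for a ledger slot; field names mirror the diff.
struct Slot {
    last_height: u64,
    is_expired: bool,
}

/// Returns the indices of slots that would expire at `epoch_height`,
/// following the same rules as `get_perm_expiring_slots` in the diff.
fn expiring_slot_indices(
    slots: &[Slot],
    publish_ledger_epoch_length: Option<u64>,
    num_blocks_in_epoch: u64,
    epoch_height: u64,
) -> Vec<usize> {
    // None => expiry disabled (the mainnet setting in the diff)
    let Some(epoch_length) = publish_ledger_epoch_length else {
        return Vec::new();
    };
    // The diff rejects overflow up front in Config::validate, so this
    // expect() mirrors an invariant rather than handling a live error.
    let min_blocks = epoch_length
        .checked_mul(num_blocks_in_epoch)
        .expect("validated at config load");
    if epoch_height < min_blocks {
        return Vec::new();
    }
    let expiry_height = epoch_height - min_blocks;
    let last = slots.len().saturating_sub(1);
    slots
        .iter()
        .enumerate()
        // Skip the last slot (never expires), already-expired slots,
        // and slots still younger than the expiry height.
        .filter(|(i, s)| *i != last && s.last_height <= expiry_height && !s.is_expired)
        .map(|(i, _)| i)
        .collect()
}

fn main() {
    let slots = vec![
        Slot { last_height: 1, is_expired: false },  // old slot
        Slot { last_height: 25, is_expired: false }, // newest: never expires
    ];
    // epoch_length = 2, num_blocks_in_epoch = 10 => expiry_height = 30 - 20 = 10
    assert_eq!(expiring_slot_indices(&slots, Some(2), 10, 30), vec![0]);
    // Below the minimum block count, nothing expires
    assert!(expiring_slot_indices(&slots, Some(2), 10, 15).is_empty());
    // Expiry disabled
    assert!(expiring_slot_indices(&slots, None, 10, 1000).is_empty());
}
```

This mirrors the test cases in the diff: `test_perm_expiry_enabled` (slot 0 expires at height 30), `test_perm_expiry_not_enough_blocks` (nothing below the minimum), and `test_perm_expiry_disabled` (`None` is a no-op).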