Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings.
📝 Walkthrough

Widespread test additions/refactors (proptest/rstest), many crates gain workspace dev-dependencies, several functions now return Results with propagated errors, api-client URL construction refactored to use reqwest::Url, CLI parsing helpers added, and numeric/time handling hardened with try_from/saturating conversions.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (inconclusive)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
82da67d to c980b34
Actionable comments posted: 29
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
crates/database/src/database.rs (1)
920-955: 🧹 Nitpick | 🔵 Trivial
Add an equality-boundary case for `max_height`.
The suite covers above/below conditions, but not the inclusive boundary (`included_height == max_height`) that should pass.

Proposed test case addition:
```diff
 #[rstest]
 #[case::within_height_and_migrated(10, true, true)]
+#[case::at_exact_height_and_migrated(5, true, true)]
 #[case::height_exceeds_max(3, true, false)]
 #[case::unmigrated_height(10, false, false)]
 fn canonical_lookup(
```

🤖 Prompt for AI Agents:
Verify each finding against the current code and only fix it if needed. In `@crates/database/src/database.rs` around lines 920 - 955, Add a rstest case covering the inclusive boundary where the stored included height equals max_height: in the canonical_lookup test add a case like #[case::boundary_inclusive(5, true, true)] so that when set_data_tx_included_height(tx, &tx_id, 5) and max_height is 5 the test expects a found result; update the case list in the canonical_lookup function (which uses tx_header_by_txid_canonical) to include this equality-boundary scenario.

crates/api-client/src/ext.rs (1)
163-179: 🧹 Nitpick | 🔵 Trivial
Consider consistent naming with `get_data_price`.
In `get_data_price` (line 151), you use `ledger_id` as the intermediate variable name, but here you use `ledger_num`. For consistency, consider using the same naming convention across both functions.

♻️ Suggested rename for consistency:
```diff
 async fn get_chunk(
     &self,
     peer: SocketAddr,
     ledger_id: DataLedger,
     data_root: DataRoot,
     offset: u32, // data root relative offset
 ) -> eyre::Result<ChunkFormat> {
-    let ledger_num = u32::from(ledger_id);
+    let ledger = u32::from(ledger_id);
     self.make_request(
         peer,
         Method::GET,
-        &format!("/chunk/data-root/{ledger_num}/{data_root}/{offset}"),
+        &format!("/chunk/data-root/{ledger}/{data_root}/{offset}"),
         None::<&()>,
     )
```

🤖 Prompt for AI Agents:
Verify each finding against the current code and only fix it if needed. In `@crates/api-client/src/ext.rs` around lines 163 - 179, Rename the intermediate numeric ledger variable in get_chunk to match get_data_price: replace let ledger_num = u32::from(ledger_id); with let ledger_id = u32::from(ledger_id); and update the format! call to use {ledger_id} instead of {ledger_num} so both get_chunk and get_data_price use the same intermediate naming (refer to function get_chunk and get_data_price).

crates/api-client/src/lib.rs (1)
285-301: ⚠️ Potential issue | 🟡 Minor
Remove redundant body parameter from GET request.
Query parameters are already appended to the URL via `query_pairs_mut`, but `Some(&block_index_query)` is also passed as the request body. The body will be serialized and sent as JSON (line 157 in `make_request_url`), making it redundant with the query parameters already in the URL. This pattern is inconsistent with other GET requests in the file, which use `None::<&()>`.

🔧 Suggested fix:
```diff
 let response = self
     .make_request_url::<Vec<BlockIndexItem>, _>(
         url.as_str(),
         Method::GET,
-        Some(&block_index_query),
+        None::<&()>,
     )
     .await;
```

🤖 Prompt for AI Agents:
Verify each finding against the current code and only fix it if needed. In `@crates/api-client/src/lib.rs` around lines 285 - 301, The GET call is sending the same data both as query params and as a serialized JSON body; in the function that builds the URL (extend_url) we already append height/limit, so update the call to make_request_url in this block to pass no body (use None::<&()>) instead of Some(&block_index_query) so the request matches other GETs and avoids redundant JSON serialization; keep the URL construction and the return handling (BlockIndexItem, Method::GET) unchanged.

crates/cli/src/main.rs (1)
332-336: ⚠️ Potential issue | 🔴 Critical
Add import or fully qualify `size_of`.
Line 334 uses `size_of::<...>()`, but `std::mem::size_of` is not imported in this file. The code will not compile. Use the fully qualified path `std::mem::size_of` or add `use std::mem::size_of;` at the top of the file.

Suggested fix:
```diff
 // Workaround: zero-sized stub -- rocksdb feature must be disabled
 const _: () = assert!(
-    size_of::<reth_provider::providers::RocksDBProvider>() == 0,
+    std::mem::size_of::<reth_provider::providers::RocksDBProvider>() == 0,
     "RocksDBProvider must be the zero-sized stub (rocksdb feature must be disabled)"
 );
```

🤖 Prompt for AI Agents:
Verify each finding against the current code and only fix it if needed. In `@crates/cli/src/main.rs` around lines 332 - 336, The assert uses size_of::<reth_provider::providers::RocksDBProvider>() but std::mem::size_of is not imported; fix by either adding use std::mem::size_of; at the top of the file or change the call to std::mem::size_of::<reth_provider::providers::RocksDBProvider>() so the compile error is resolved while keeping the zero-sized RocksDBProvider check intact.

crates/irys-reth/src/lib.rs (1)
769-806: ⚠️ Potential issue | 🟡 Minor
Don't bake a fixed 100 ms race into the shared helper.
This helper now underpins three tests, but it still sleeps for 100 ms before calling `best_payload()`. On slower CI, that can easily be too short and make the tests fail intermittently with `None` instead of the expected build error.

🤖 Prompt for AI Agents:
Verify each finding against the current code and only fix it if needed. In `@crates/irys-reth/src/lib.rs` around lines 769 - 806, The test helper assert_nonexistent_account_fails_block_production should not use a fixed 100ms sleep; replace that sleep with a polling loop (or tokio::time::timeout) that repeatedly calls node.payload.payload_builder.best_payload(attributes.payload_id()) until it returns Some(...) or a reasonable timeout elapses (e.g., several seconds) to avoid flaky CI; implement polling with a short interval (e.g., 25–100ms) and fail with a clear error if the timeout is reached, keeping the rest of the flow (send_new_payload(...), then inspect the Err variant and assert the error message) intact.

crates/packing/src/lib.rs (1)
231-239: ⚠️ Potential issue | 🟠 Major
Use `checked_add` to prevent silent overflow of the partition chunk offset.
Lines 234 and 259 perform unchecked `chunk_offset + u64::try_from(pos)` additions. If the result wraps, the wrong offset is passed to `compute_entropy_chunk`, which directly feeds it into the entropy hash computation via `compute_seed_hash`. This silently corrupts the encryption seed.

🛠️ Proposed fix:
```diff
 capacity_single::compute_entropy_chunk(
     mining_address,
-    chunk_offset + u64::try_from(pos).expect("chunk position exceeds u64"),
+    chunk_offset
+        .checked_add(pos.try_into().expect("chunk position exceeds chunk offset type"))
+        .expect("chunk offset overflow"),
     partition_hash.0,
     iterations,
     chunk_size,
     &mut entropy_chunk,
     irys_chain_id,
 @@
 capacity_pack_range_c(
     mining_address,
-    chunk_offset + u64::try_from(pos).expect("chunk position exceeds u64"),
+    chunk_offset
+        .checked_add(pos.try_into().expect("chunk position exceeds chunk offset type"))
+        .expect("chunk offset overflow"),
     partition_hash,
     &mut entropy_chunk,
     entropy_packing_iterations,
     irys_chain_id,
 );
```

Also applies to: 256-264
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/packing/src/lib.rs` around lines 231 - 239, The unchecked addition chunk_offset + u64::try_from(pos) can overflow and corrupt the seed; replace it with chunk_offset.checked_add(u64::try_from(pos).expect("chunk position exceeds u64")).ok_or_else(|| /* appropriate error */) or .expect(...) to fail loudly and propagate the error before calling capacity_single::compute_entropy_chunk (the same change where you call compute_entropy_chunk and any other place using this sum, e.g., the call path into compute_seed_hash); ensure you compute let offset = chunk_offset.checked_add(pos_u64) and handle the None case by returning an Err or explicit panic with a clear message so overflow cannot silently wrap.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@crates/api-client/src/lib.rs`:
- Around line 303-307: The node_info_url function is sending an unnecessary
empty body for a GET by passing Some(&()), so update the call to
make_request_url in node_info_url to use None::<&()> (i.e., no body) to match
other GET methods; locate node_info_url and the make_request_url::<NodeInfo,
_>(..., Method::GET, ...) invocation and replace the body argument accordingly.
In `@crates/api-server/src/routes/index.rs`:
- Around line 25-29: The current Err(_) arm discards the error from
get_node_info; change it to capture the error (e.g., Err(e)) and log it before
converting to ApiError. Locate the match/Result handling around get_node_info
and replace Err(_) with Err(e) and call your logger (for example
tracing::error!("get_node_info failed: {:?}", e) or log::error!) then return the
same ApiError::into(( "Failed to retrieve node info".to_owned(),
StatusCode::INTERNAL_SERVER_ERROR)).error_response(); this preserves diagnostics
while keeping the client-facing message unchanged.
In `@crates/api-server/src/routes/observability.rs`:
- Around line 288-299: Remove the redundant unit test
should_count_correctly_by_ledger_id which duplicates assertions already covered
by the parameterized test should_return_count_when_found; delete the entire test
function (the block that calls count_assignments_by_ledger_type for
DataLedger::Submit and DataLedger::Publish and asserts submit_count == 2 and
publish_count == 1) so the suite only keeps the parameterized coverage and
avoids duplicate assertions.
In `@crates/api-server/src/routes/proxy.rs`:
- Around line 112-128: Add missing test cases for the remaining hop-by-hop
headers so coverage includes "proxy-authorization", "te", and "trailers": update
the hop_by_hop_detection test (which calls is_hop_by_hop_header) to add #[case]
entries for those three headers with expected true and include at least one
different casing variant for each (e.g.,
"Proxy-Authorization"/"PROXY-AUTHORIZATION", "Te"/"TE", "Trailers"/"TRAILERS")
to ensure case-insensitive matching against HOP_BY_HOP_HEADERS.
In `@crates/c/src/capacity_single.rs`:
- Around line 200-227: The tests pass platform-specific u64 partition offsets to
capacity_single::compute_seed_hash but the function expects partition_offset:
std::ffi::c_ulong; update the proptest generators in both deterministic
(partition_offset) and differs_with_different_offset (offset_a, offset_b) to use
any::<std::ffi::c_ulong>() (and import std::ffi::c_ulong if needed) so the
generated types match compute_seed_hash’s signature across platforms; keep the
rest of the test logic unchanged.
In `@crates/cli/src/main.rs`:
- Around line 94-98: The function timestamp_millis_to_secs currently checks
overflow against u64::MAX in milliseconds; change it to first divide the
incoming millis (u128) by 1000 to get seconds (u128), then use u64::try_from on
that seconds value in timestamp_millis_to_secs to detect overflow, returning the
seconds as u64; additionally update the unit test that asserts overflow (the
test that constructs an overflowing timestamp) so it supplies a millisecond
value whose seconds (millis/1000) exceed u64::MAX rather than a value that only
exceeds u64::MAX in milliseconds.
In `@crates/database/Cargo.toml`:
- Around line 34-35: The crate's Cargo.toml currently pins proptest as `proptest
= "1.6"` which duplicates the workspace-managed dependency; update that entry to
use the workspace override like the other crates by replacing the version pin
with `proptest.workspace = true` so this crate consumes the workspace-defined
proptest; locate the `proptest` entry in this crate's Cargo.toml and change it
accordingly.
In `@crates/domain/src/models/forkchoice_markers.rs`:
- Around line 57-58: The u64::try_from(...) conversion for depth_delta is
technically infallible and the fallible map_err branch is only defensive; either
replace the map_err(...) call with an explicit .expect("usize fits in u64") to
simplify the code or keep the current conversion but add a brief comment above
the statement (and the other occurrence around lines 113-114) noting that
converting a usize to u64 cannot fail on supported platforms and the error
handling is purely defensive; update the statements that compute depth_delta
(and the second similar conversion) accordingly.
In `@crates/efficient-sampling/src/lib.rs`:
- Around line 583-589: The loop currently uses 1..=k.min(n) which prevents
exercising the k>n path; change the iteration to 1..=k so the sampler is called
k times (e.g., use (1..=k).map(...)) and then add explicit assertions in the
k_gt_n_handling test that verify behavior after the first n draws (e.g., that
the sampler reinitializes or exhausts correctly) by inspecting the samples
vector contents returned from ranges.get_recall_range(seed, partition_hash)
across indices beyond n; keep using the same identifiers (samples,
ranges.get_recall_range, seed, partition_hash, k, n) to locate and update the
test.
- Around line 113-117: The constructor now rejects a zero
num_recall_ranges_in_partition but tests and other code paths still allow a zero
to reach reset_step_number (and related reset_step* functions), causing a
divide-by-zero when step_num > 0; either ensure num_recall_ranges_in_partition()
cannot return 0 (update where it's constructed/returned so test_zero_chunks uses
a non-zero value) or add the same guard at the start of reset_step_number and
any reset_step* helpers to return an Err or handle the zero safely before
performing division, referencing the num_recall_ranges_in_partition value and
the reset_step_number/reset_step* functions to locate where to add the check.
- Around line 82-88: The modulo base uses self.last_range_pos as an exclusive
length but last_range_pos is an inclusive index, so the modulus should be
last_range_pos + 1; change the computation that produces next_range_pos to use a
base of (self.last_range_pos + 1) converted to u32 (or use checked/expect
conversion) before applying rng.next() % base, e.g. compute let base =
u32::try_from(self.last_range_pos + 1).expect(...); then use rng.next() % base
and convert the result to usize as you do now to produce next_range_pos,
preserving the existing expect messages for conversion failures.
- Around line 613-627: The test can become a no-op when valid_range equals
bad_range (e.g., valid_range == 0) because the guard if bad_range != valid_range
skips the Err assertion; change the test to always exercise the error path by
deriving a guaranteed-invalid bad_range from valid_range (for example set
bad_range = (valid_range + 1) mod num_ranges or choose bad_range =
valid_range.saturating_add(1) when valid_range < num_ranges) and then call
recall_range_is_valid(bad_range, num_ranges, &seeds, &partition_hash) to assert
result.is_err() using the existing description variable; alternatively replace
non-deterministic inputs with fixed seeds/num_ranges that make valid_range
deterministic so the Err-path is always tested.
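The inclusive-index fix from the 82-88 comment, as a self-contained sketch (next_range_pos here is a stand-in for the sampler's internal computation, not the crate's API):

```rust
// last_range_pos is an inclusive index, so the number of admissible
// positions -- and thus the modulo base -- is last_range_pos + 1.
fn next_range_pos(last_range_pos: usize, raw: u32) -> usize {
    let base = u32::try_from(last_range_pos + 1).expect("range pos + 1 fits in u32");
    usize::try_from(raw % base).expect("u32 fits in usize")
}

fn main() {
    assert_eq!(next_range_pos(3, 7), 3);  // 7 % 4 == 3: the last index is reachable
    assert_eq!(next_range_pos(0, 42), 0); // single-position case, no divide-by-zero
}
```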
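The wrap-around derivation suggested for the 613-627 comment can be sketched as follows (assumes num_ranges > 1, since with a single range every index is the valid one):

```rust
// (valid_range + 1) mod num_ranges always differs from valid_range
// when num_ranges > 1, so the Err-path assertion can never be skipped.
fn derive_bad_range(valid_range: u64, num_ranges: u64) -> u64 {
    (valid_range + 1) % num_ranges
}

fn main() {
    let num_ranges = 5;
    for valid_range in 0..num_ranges {
        let bad_range = derive_bad_range(valid_range, num_ranges);
        assert_ne!(bad_range, valid_range); // the test is never a no-op
        assert!(bad_range < num_ranges);    // still an in-range index
    }
}
```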
In `@crates/irys-reth/src/evm.rs`:
- Around line 737-742: The code only clears staged shadow state via
std::mem::take(&mut self.state) on the success path, so if
distribute_priority_fee() or process_shadow_transaction() returns early after
inserting staged accounts the leftover snapshot persists and breaks the next
transaction; locate uses of self.state and the error return that emits "Custom
IrysEVM state should be empty, but got: ..." and ensure the staged state is
cleared on every exit (e.g., call std::mem::take(&mut self.state) or
self.state.clear() before every Err return, or introduce a small RAII
guard/cleanup at the start of the function to always reset self.state on
function exit) so that any early returns from distribute_priority_fee() or
process_shadow_transaction() do not leave stale state behind.
- Around line 938-956: The code currently demands the beneficiary pre-exist by
unwrapping self.load_account(beneficiary) into bef with ok_or_else, which breaks
valid None->Some flows; change the pre-commit read to accept an Option (e.g. let
bef = self.load_account(beneficiary)?;) instead of erroring, keep
commit_account_change(…) as-is, then keep the post-commit check but only error
if aft is None (retain the ok_or_else for aft) and compare bef (Option) to aft
(Some) to decide if distribution had effect; update the equality check to
compare Option<Account> vs Account (or compare bef.as_ref() vs Some(&aft)) so
the new-account case is handled correctly.
In `@crates/irys-reth/src/lib.rs`:
- Around line 808-859: The three async tests test_nonexistent_account_stake,
test_nonexistent_account_unpledge and test_nonexistent_account_pledge duplicate
setup and assertions; refactor them into a single parameterized rstest that
iterates over the TransactionPacket variants (Stake, Unpledge, Pledge). Move the
common setup (TestContext::new, ctx.get_single_node, random Address) into the
test body, accept a parameter for a factory/enum indicating which packet to
build, call either ShadowTransaction::new_v1 with TransactionPacket::Stake or
::Pledge or call the existing unpledge() helper for Unpledge, then sign via
sign_shadow_tx and call assert_nonexistent_account_fails_block_production; do
the same kind of consolidation for test_unpledge_fee_only /
test_unstake_debit_fee_only by parameterizing the differing inputs and reusing
the shared assertion logic.
In `@crates/irys-reth/src/shadow_tx.rs`:
- Around line 877-883: The test reject_old_format_without_solution_hash is
hitting the IRYS_SHADOW_EXEC check in ShadowTransaction::decode instead of
exercising the truncated V1 payload path; fix by ensuring IRYS_SHADOW_EXEC is
unset or disabled inside the test before calling ShadowTransaction::decode
(e.g., save any existing value, call std::env::remove_var("IRYS_SHADOW_EXEC") or
set it to "0" for the duration of the test, then restore it afterward) so the
decode path processes the V1 buffer and the missing solution_hash assertion is
reached.
- Around line 838-858: Add at least one deterministic golden-byte test that
asserts the stable wire format for ShadowTransaction instead of relying only on
property tests; create a new test (e.g., shadow_transaction_golden_bytes) that
uses a fixed known byte sequence (or a fixed ShadowTransaction instance
constructed from known fields) and asserts decode/deserialize produces the
expected ShadowTransaction and/or that serialize produces the exact expected
byte sequence; reference the existing types and helpers (ShadowTransaction,
ShadowTransaction::V1, IRYS_SHADOW_EXEC, and arb_shadow_transaction for pattern)
and ensure the test fails if the version byte, packet discriminant, field order,
or endianness changes.
In `@crates/packing/src/lib.rs`:
- Around line 790-795: Replace the hardcoded chain_id (currently set to
1270_u64) in the roundtrip proptests with the canonical test value from
ConsensusConfig::testing(); i.e., obtain chain_id by calling
ConsensusConfig::testing().chain_id (or import ConsensusConfig::testing() if
needed) so the entropy derivation uses the same canonical test config; update
both occurrences around the roundtrip proptests that define
mining_address/partition_hash/chunk_offset/iterations/chunk_size/chain_id.
- Around line 907-914: The test function packing_type_json_roundtrip currently
iterates over PackingType variants in a loop; convert it to an
rstest-parameterized test: add the #[rstest] attribute to
packing_type_json_roundtrip, import rstest::rstest, and declare the function as
fn packing_type_json_roundtrip(#[case] pt: PackingType) with separate
#[case(PackingType::CPU)], #[case(PackingType::CUDA)], #[case(PackingType::AMD)]
attributes so each enum variant is a distinct test case; keep the body identical
(serialize with serde_json::to_string, deserialize, and assert_eq!).
In `@crates/storage/src/recovered_mempool_state.rs`:
- Around line 218-234: Add a test that exercises the cleanup branch of
RecoveredMempoolState::load_from_disk by calling load_from_disk(&mempool_dir,
true) (i.e., remove_files = true) after creating the empty subdirectories
(commitment_tx and storage_tx), then assert that state.commitment_txs and
state.storage_txs are empty and that the created directories no longer exist on
disk (use tokio::fs::metadata or path.exists checks). This ensures the code path
around the remove_files boolean (the directory removal logic) is covered and
prevents regressions in the cleanup behavior.
- Around line 185-186: The async test test_handles_malformed_json should be
annotated to capture tracing output from load_from_disk (which emits
tracing::warn! and tracing::debug!) — replace the current #[tokio::test] on the
test_handles_malformed_json function with #[test_log::test(tokio::test)] so the
test harness captures and displays tracing logs during the test run; update the
test attribute only and keep the async signature and body unchanged.
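The cleanup branch described in the 218-234 comment can be exercised with plain std::fs in a sketch (paths and the remove_dir_all stand-in are illustrative; the real test would call load_from_disk(&mempool_dir, true) and assert on its returned state):

```rust
use std::fs;

fn main() {
    // Arrange: the empty subdirectories the recovery code scans.
    let dir = std::env::temp_dir().join("mempool_cleanup_sketch");
    let commitment_dir = dir.join("commitment_tx");
    let storage_dir = dir.join("storage_tx");
    fs::create_dir_all(&commitment_dir).unwrap();
    fs::create_dir_all(&storage_dir).unwrap();

    // Act: stand-in for the remove_files = true branch of load_from_disk.
    fs::remove_dir_all(&dir).unwrap();

    // Assert: the directories no longer exist on disk.
    assert!(!commitment_dir.exists());
    assert!(!storage_dir.exists());
}
```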
In `@crates/types/src/commitment_common.rs`:
- Around line 883-891: After decoding with CommitmentTransaction::decode(&mut
slice), assert that the decoder consumed the entire RLP buffer by checking the
remaining slice is empty (e.g., assert!(slice.is_empty()) or
prop_assert_eq!(slice.len(), 0)) so partial-decode regressions are caught; add
this check immediately after let decoded = CommitmentTransaction::decode(&mut
slice).unwrap() and before the existing field equality assertions, referencing
the variables buf, slice, and decoded.
In `@crates/types/src/ingress.rs`:
- Around line 431-432: The prop_ingress_tests module calls Encodable trait
methods (.encode(), .length()) but only Decodable is brought into scope; add the
Encodable trait import into the module imports (mirror the file-level import) so
those method calls resolve. Specifically, in the prop_ingress_tests module add a
use alloy_rlp::Encodable import (similar to the existing use
alloy_rlp::Decodable as _; line) so functions/methods in prop_ingress_tests that
call .encode() and .length() compile.
In `@crates/types/src/storage_pricing.rs`:
- Around line 2683-2688: The percentage inputs passed to Amount::percentage
(e.g., where decay_rate is created) are whole numbers but the constructor
expects fractional percentages (1.0 == 100%), so change those calls (for example
where decay_rate is set from 50.0, 10.0, 80.0, etc.) to use fractional values
(divide by 100 or supply 0.5, 0.1, 0.8) so decay_rate is correct; also stop
silently dropping Err from apply_pledge_decay (references: Amount::percentage,
apply_pledge_decay, base_fee, decay_rate, value_a/value_b) — handle the Result
explicitly (propagate the error, use expect with a clear message, or match and
return Err) to avoid panics or vacuous test success.
In `@crates/types/tests/versioned_compact_roundtrip_tests.rs`:
- Around line 46-48: The test currently compares headers via try_as_header_v1()
which is inconsistent with other roundtrip tests; replace the three assertions
(decoded.try_as_header_v1().is_some(), assert_eq!(decoded.try_as_header_v1(),
versioned.try_as_header_v1()), assert_eq!(decoded.version(), 1)) with a single
direct equality assertion assert_eq!(decoded, versioned) to match
block_header_compact_roundtrip and commitment_tx_compact_roundtrip (or, if there
is a specific equality semantic reason to use try_as_header_v1(), leave the
existing checks but add a brief comment above the assertions explaining why
DataTransactionHeader must be compared via try_as_header_v1() rather than direct
PartialEq).
In `@crates/utils/nextest-monitor/Cargo.toml`:
- Around line 32-33: The crate's Cargo.toml pins rstest = "0.24" which conflicts
with the workspace-managed rstest = "0.25" (proptest is already matching), so
update this crate to use the workspace-managed versions: in
crates/utils/nextest-monitor/Cargo.toml remove the explicit version strings for
rstest (and proptest if present) and instead declare them to use the workspace
(e.g., set rstest = { workspace = true } or remove the entries so the workspace
[workspace.dependencies] versions are used), ensuring the symbols rstest and
proptest resolve to the workspace-managed versions.
In `@crates/utils/nextest-monitor/src/bin/nextest-wrapper.rs`:
- Around line 628-647: The test table for extract_test_name doesn't exercise the
branch that detects path-like args (arg.contains('/') || arg.contains('\\')), so
add rstest cases to hit that branch: include inputs like ["path/to/test",
"--exact"] and ["some\\windows\\test", "--nocapture"] with expected
Some("path/to/test") and Some("some\\windows\\test") (or their String
equivalents) so extract_test_name's path-filter branch is covered; update the
extract_test_name_cases parametrized cases (function extract_test_name) to
include these new cases.
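A hypothetical stand-in for the path-detection branch those new cases should hit (the real extract_test_name handles more argument shapes; only the '/' / '\\' filter is modeled here):

```rust
// Treat any argument containing a path separator as the test path,
// skipping flag-style arguments like --exact / --nocapture.
fn extract_test_name(args: &[&str]) -> Option<String> {
    args.iter()
        .find(|arg| arg.contains('/') || arg.contains('\\'))
        .map(|arg| (*arg).to_owned())
}

fn main() {
    assert_eq!(
        extract_test_name(&["path/to/test", "--exact"]).as_deref(),
        Some("path/to/test")
    );
    assert_eq!(
        extract_test_name(&["some\\windows\\test", "--nocapture"]).as_deref(),
        Some("some\\windows\\test")
    );
    assert_eq!(extract_test_name(&["--exact"]), None);
}
```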
In `@crates/vdf/src/vdf.rs`:
- Around line 563-570: Update the property test at_boundary_applies_seed so it
includes the genesis boundary by allowing k to be zero (currently k starts at
1), e.g. change the domain for k to include 0 (use 0_u64..50 or an inclusive
range that contains 0) so that global_step (computed as k * reset_frequency) can
be 0 and the test exercises the first reset boundary; adjust any related
assumptions in the test body if needed to handle k == 0.
---
Outside diff comments:
In `@crates/api-client/src/ext.rs`:
- Around line 163-179: Rename the intermediate numeric ledger variable in
get_chunk to match get_data_price: replace let ledger_num =
u32::from(ledger_id); with let ledger_id = u32::from(ledger_id); and update the
format! call to use {ledger_id} instead of {ledger_num} so both get_chunk and
get_data_price use the same intermediate naming (refer to function get_chunk and
get_data_price).
In `@crates/api-client/src/lib.rs`:
- Around line 285-301: The GET call is sending the same data both as query
params and as a serialized JSON body; in the function that builds the URL
(extend_url) we already append height/limit, so update the call to
make_request_url in this block to pass no body (use None::<&()>) instead of
Some(&block_index_query) so the request matches other GETs and avoids redundant
JSON serialization; keep the URL construction and the return handling
(BlockIndexItem, Method::GET) unchanged.
In `@crates/cli/src/main.rs`:
- Around line 332-336: The assert uses
size_of::<reth_provider::providers::RocksDBProvider>() but std::mem::size_of is
not imported; fix by either adding use std::mem::size_of; at the top of the file
or change the call to
std::mem::size_of::<reth_provider::providers::RocksDBProvider>() so the compile
error is resolved while keeping the zero-sized RocksDBProvider check intact.
In `@crates/database/src/database.rs`:
- Around line 920-955: Add a rstest case covering the inclusive boundary where
the stored included height equals max_height: in the canonical_lookup test add a
case like #[case::boundary_inclusive(5, true, true)] so that when
set_data_tx_included_height(tx, &tx_id, 5) and max_height is 5 the test expects
a found result; update the case list in the canonical_lookup function (which
uses tx_header_by_txid_canonical) to include this equality-boundary scenario.
In `@crates/irys-reth/src/lib.rs`:
- Around line 769-806: The test helper
assert_nonexistent_account_fails_block_production should not use a fixed 100ms
sleep; replace that sleep with a polling loop (or tokio::time::timeout) that
repeatedly calls
node.payload.payload_builder.best_payload(attributes.payload_id()) until it
returns Some(...) or a reasonable timeout elapses (e.g., several seconds) to
avoid flaky CI; implement polling with a short interval (e.g., 25–100ms) and
fail with a clear error if the timeout is reached, keeping the rest of the flow
(send_new_payload(...), then inspect the Err variant and assert the error
message) intact.
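The poll-with-deadline pattern that prompt asks for looks roughly like this stdlib-only sketch (poll_until and its names are illustrative; the real fix would poll best_payload under tokio rather than block a thread):

```rust
use std::time::{Duration, Instant};

// Retry a probe at a short interval until it yields Some(_) or the
// deadline passes, instead of relying on a single fixed sleep.
fn poll_until<T>(
    timeout: Duration,
    interval: Duration,
    mut probe: impl FnMut() -> Option<T>,
) -> Result<T, String> {
    let deadline = Instant::now() + timeout;
    loop {
        if let Some(value) = probe() {
            return Ok(value);
        }
        if Instant::now() >= deadline {
            return Err(format!("nothing produced within {timeout:?}"));
        }
        std::thread::sleep(interval);
    }
}

fn main() {
    let mut calls = 0;
    let payload = poll_until(Duration::from_secs(2), Duration::from_millis(5), || {
        calls += 1;
        (calls >= 3).then_some("payload") // succeeds on the third probe
    });
    assert_eq!(payload.unwrap(), "payload");
}
```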
In `@crates/packing/src/lib.rs`:
- Around line 231-239: The unchecked addition chunk_offset + u64::try_from(pos)
can overflow and corrupt the seed; replace it with
chunk_offset.checked_add(u64::try_from(pos).expect("chunk position exceeds
u64")).ok_or_else(|| /* appropriate error */) or .expect(...) to fail loudly and
propagate the error before calling capacity_single::compute_entropy_chunk (the
same change where you call compute_entropy_chunk and any other place using this
sum, e.g., the call path into compute_seed_hash); ensure you compute let offset
= chunk_offset.checked_add(pos_u64) and handle the None case by returning an Err
or explicit panic with a clear message so overflow cannot silently wrap.
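Extracted as a tiny helper, the fail-loudly offset computation reads as follows (packed_offset is illustrative; the real code inlines this at the compute_entropy_chunk / capacity_pack_range_c call sites):

```rust
// Convert the chunk position and add it with overflow checking, so a
// wrapped offset can never silently corrupt the entropy seed.
fn packed_offset(chunk_offset: u64, pos: usize) -> u64 {
    let pos = u64::try_from(pos).expect("chunk position exceeds u64");
    chunk_offset
        .checked_add(pos)
        .expect("chunk offset overflow")
}

fn main() {
    assert_eq!(packed_offset(100, 5), 105);
    // An overflowing sum now panics loudly instead of wrapping to a bogus offset.
    assert!(std::panic::catch_unwind(|| packed_offset(u64::MAX, 1)).is_err());
}
```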
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 4d996aeb-9433-44a0-9066-25436b1a7bbd
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (92)
- crates/actors/src/chunk_ingress_service/chunks.rs
- crates/actors/src/partition_mining_service.rs
- crates/api-client/Cargo.toml
- crates/api-client/src/ext.rs
- crates/api-client/src/lib.rs
- crates/api-server/src/error.rs
- crates/api-server/src/routes/balance.rs
- crates/api-server/src/routes/block.rs
- crates/api-server/src/routes/block_index.rs
- crates/api-server/src/routes/index.rs
- crates/api-server/src/routes/ledger.rs
- crates/api-server/src/routes/observability.rs
- crates/api-server/src/routes/price.rs
- crates/api-server/src/routes/proxy.rs
- crates/api-server/src/routes/supply.rs
- crates/c/Cargo.toml
- crates/c/src/capacity_single.rs
- crates/chain-tests/src/api/hardfork_tests.rs
- crates/chain/Cargo.toml
- crates/chain/src/chain.rs
- crates/cli/Cargo.toml
- crates/cli/src/main.rs
- crates/config/Cargo.toml
- crates/config/src/chain/chainspec.rs
- crates/config/src/submodules.rs
- crates/database/Cargo.toml
- crates/database/src/database.rs
- crates/database/src/db_cache.rs
- crates/database/src/metadata.rs
- crates/domain/src/models/chain_sync_state.rs
- crates/domain/src/models/circular_buffer.rs
- crates/domain/src/models/forkchoice_markers.rs
- crates/domain/src/models/node_info.rs
- crates/domain/src/models/peer_list.rs
- crates/domain/src/snapshots/commitment_snapshot.rs
- crates/domain/src/snapshots/ema_snapshot.rs
- crates/efficient-sampling/Cargo.toml
- crates/efficient-sampling/src/lib.rs
- crates/irys-reth/Cargo.toml
- crates/irys-reth/src/evm.rs
- crates/irys-reth/src/lib.rs
- crates/irys-reth/src/shadow_tx.rs
- crates/p2p/Cargo.toml
- crates/p2p/src/gossip_client.rs
- crates/p2p/src/server.rs
- crates/p2p/src/types.rs
- crates/packing-worker/Cargo.toml
- crates/packing-worker/src/api.rs
- crates/packing-worker/src/packing.rs
- crates/packing-worker/src/types.rs
- crates/packing/Cargo.toml
- crates/packing/src/lib.rs
- crates/price-oracle/Cargo.toml
- crates/price-oracle/proptest-regressions/coinmarketcap.txt
- crates/price-oracle/src/coinmarketcap.rs
- crates/price-oracle/src/lib.rs
- crates/price-oracle/src/mock_oracle.rs
- crates/reth-node-bridge/Cargo.toml
- crates/reth-node-bridge/src/dump.rs
- crates/reth-node-bridge/src/ext.rs
- crates/reward-curve/Cargo.toml
- crates/reward-curve/src/lib.rs
- crates/storage/Cargo.toml
- crates/storage/src/recovered_mempool_state.rs
- crates/types/proptest-regressions/storage_pricing.txt
- crates/types/src/chunk.rs
- crates/types/src/commitment_common.rs
- crates/types/src/commitment_v1.rs
- crates/types/src/commitment_v2.rs
- crates/types/src/config/consensus.rs
- crates/types/src/config/mod.rs
- crates/types/src/hardfork_config.rs
- crates/types/src/ingress.rs
- crates/types/src/merkle.rs
- crates/types/src/serialization.rs
- crates/types/src/storage_pricing.rs
- crates/types/src/time.rs
- crates/types/src/transaction/bounded_fee.rs
- crates/types/tests/version_mismatch_tests.rs
- crates/types/tests/versioned_compact_roundtrip_tests.rs
- crates/utils/nextest-monitor/Cargo.toml
- crates/utils/nextest-monitor/src/bin/nextest-wrapper.rs
- crates/utils/utils/Cargo.toml
- crates/utils/utils/src/circuit_breaker/metrics.rs
- crates/utils/utils/src/circuit_breaker/mod.rs
- crates/utils/utils/src/circuit_breaker/state.rs
- crates/utils/utils/src/circuit_breaker/test_utils.rs
- crates/utils/utils/src/listener.rs
- crates/utils/utils/src/telemetry.rs
- crates/vdf/Cargo.toml
- crates/vdf/src/lib.rs
- crates/vdf/src/vdf.rs
💤 Files with no reviewable changes (10)
- crates/types/src/hardfork_config.rs
- crates/types/src/commitment_v2.rs
- crates/types/src/commitment_v1.rs
- crates/types/tests/version_mismatch_tests.rs
- crates/domain/src/snapshots/ema_snapshot.rs
- crates/types/src/time.rs
- crates/types/src/transaction/bounded_fee.rs
- crates/types/src/config/consensus.rs
- crates/types/src/chunk.rs
- crates/actors/src/chunk_ingress_service/chunks.rs
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
crates/packing-worker/src/packing.rs (1)
127-129: ⚠️ Potential issue | 🟠 Major — Propagate errors from CUDA operations instead of panicking.
Both `.unwrap()` calls inside `spawn_blocking` will panic on failure, crashing the worker thread. Since `capacity_pack_range_cuda_c` returns `eyre::Result<()>` and `from_device_default()` likely returns `Result`, these errors should be propagated.

🛠️ Proposed fix:
```diff
 let out = runtime_handle
     .spawn_blocking(move || {
         let mut out: Vec<u8> = Vec::with_capacity(
             usize::try_from(num_chunks * chunk_size_u32)
                 .expect("total size fits in usize"),
         );
+        let cuda_config = CUDAConfig::from_device_default()
+            .map_err(|e| eyre::eyre!("CUDA config error: {e}"))?;
         capacity_pack_range_cuda_c(
             num_chunks,
             mining_address,
             u64::from(start),
             partition_hash,
             entropy_packing_iterations,
             chain_id,
-            CUDAConfig::from_device_default().unwrap(),
+            cuda_config,
             &mut out,
-        ).unwrap();
-        out
+        )?;
+        Ok::<_, eyre::Report>(out)
     })
-    .await?;
+    .await??;
```
Verify each finding against the current code and only fix it if needed. In `@crates/packing-worker/src/packing.rs` around lines 127 - 129, The two .unwrap() calls inside the spawn_blocking closure (specifically the call to CUDAConfig::from_device_default() and the call to capacity_pack_range_cuda_c) must not panic; change the closure to return eyre::Result and replace the unwraps with ? to propagate errors (i.e., use CUDAConfig::from_device_default()? and capacity_pack_range_cuda_c(...)?), then propagate the spawn_blocking result (await and ? as needed) so the calling function returns/forwards the eyre::Result instead of panicking.crates/packing/src/lib.rs (1)
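The same propagate-don't-panic pattern can be sketched with plain `std::thread` instead of tokio's `spawn_blocking` (`pack` and its error strings are illustrative stand-ins, not the crate's API):

```rust
use std::thread;

// Hedged sketch: the worker closure returns Result instead of unwrap()-ing,
// and the caller surfaces both panic (join) and task errors.
fn pack(fail: bool) -> Result<Vec<u8>, String> {
    let handle = thread::spawn(move || -> Result<Vec<u8>, String> {
        if fail {
            // Previously this would have been a .unwrap() panic inside the worker
            return Err("device init failed".to_owned());
        }
        Ok(vec![0_u8; 4])
    });
    // join() catches panics; `?` then unwraps the closure's own Result.
    handle.join().map_err(|_| "worker thread panicked".to_owned())?
}

fn main() {
    assert_eq!(pack(false).unwrap().len(), 4);
    assert!(pack(true).is_err()); // error propagates instead of crashing the thread
}
```

With tokio, the analogous double `?` (`.await??`) unwraps the `JoinError` layer first, then the task's own `Result`.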
337-339: 🧹 Nitpick | 🔵 Trivial — Remove unreachable post-clamp zero check.
After Line 333–Line 335 clamps to `1.0..=u32::MAX`, `iterations == 0` cannot happen.

♻️ Proposed cleanup:
```diff
-        if iterations == 0 {
-            iterations = 1;
-        }
```
Verify each finding against the current code and only fix it if needed. In `@crates/packing/src/lib.rs` around lines 337 - 339, Remove the unreachable post-clamp zero check for the iterations variable: after clamping iterations to 1.0..=u32::MAX (the earlier clamp that guarantees iterations >= 1), delete the if block "if iterations == 0 { iterations = 1; }" (referencing the iterations variable) since it can never be true; ensure no other code relies on this redundant branch.crates/config/src/submodules.rs (2)
38-47: ⚠️ Potential issue | 🟡 Minor — Make the validation error reference the actual config path.
The message hardcodes `.irys_submodules.toml`, which is misleading when `from_toml` is used with another file (including tests). Include the passed-in path in the error text.

Proposed fix:
```diff
-    pub fn from_toml(path: impl AsRef<Path>) -> eyre::Result<Self> {
-        let contents = fs::read_to_string(path)?;
+    pub fn from_toml(path: impl AsRef<Path>) -> eyre::Result<Self> {
+        let path = path.as_ref();
+        let contents = fs::read_to_string(path)?;
         let config: Self = toml::from_str(&contents)?;
         let submodule_count = config.submodule_paths.len();
         if submodule_count < 3 {
             bail!(
-                "Insufficient submodules: found {}, but minimum of 3 required in .irys_submodules.toml for chain initialization",
-                submodule_count
+                "Insufficient submodules: found {}, but minimum of 3 required in {} for chain initialization",
+                submodule_count,
+                path.display()
             );
         }
```
Verify each finding against the current code and only fix it if needed. In `@crates/config/src/submodules.rs` around lines 38 - 47, The validation in from_toml currently bails with a hardcoded filename ".irys_submodules.toml"; update the error to include the actual path argument so messages reflect the file used. Change the bail call in from_toml to format the path (e.g., path.as_ref().display()) into the error string alongside submodule_count and the minimum requirement, referencing the existing variables path and submodule_count to construct the message.
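A minimal std-only sketch of the path-in-error message, with a hypothetical `validate` helper and a plain `String` error in place of the crate's `from_toml`/`bail!`:

```rust
use std::path::Path;

// Sketch of embedding the actually-read path in the validation error.
fn validate(path: &Path, submodule_count: usize) -> Result<(), String> {
    if submodule_count < 3 {
        return Err(format!(
            "Insufficient submodules: found {}, but minimum of 3 required in {} for chain initialization",
            submodule_count,
            path.display()
        ));
    }
    Ok(())
}

fn main() {
    let err = validate(Path::new("custom.toml"), 1).unwrap_err();
    assert!(err.contains("custom.toml")); // names the file actually read, not a hardcoded one
    assert!(validate(Path::new("custom.toml"), 3).is_ok());
}
```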
90-104: ⚠️ Potential issue | 🟠 Major — Avoid panic paths inside a `Result`-returning loader.

This block still uses `expect`/`unwrap` in fallible filesystem operations. A transient FS issue can hard-crash startup instead of returning an actionable error to callers.

Proposed fix:
```diff
-    fs::create_dir_all(base_path.clone()).expect("to create storage_modules directory"); // clone: needed below in multiple branches
+    fs::create_dir_all(&base_path)?;

-    fs::read_dir(base_path.clone()) // clone: needed below in symlink creation
-        .expect("to read storage_modules dir")
-        .filter_map(std::result::Result::ok)
-        .filter(|e| e.file_type().map(|t| t.is_symlink()).unwrap_or(false))
-        .for_each(|e| {
-            debug!("removing symlink {:?}", e.path());
-            fs::remove_dir_all(e.path()).unwrap()
-        });
+    for entry in fs::read_dir(&base_path)? {
+        let entry = entry?;
+        if entry.file_type()?.is_symlink() {
+            debug!("removing symlink {:?}", entry.path());
+            fs::remove_dir_all(entry.path())?;
+        }
+    }

-    let config = Self::from_toml(config_path_local).expect("To load the submodule config");
+    let config = Self::from_toml(config_path_local)?;
     if config.is_using_hardcoded_paths {
         return Ok(config);
-    };
+    }
```
Verify each finding against the current code and only fix it if needed. In `@crates/config/src/submodules.rs` around lines 90 - 104, The loader currently uses expect/unwrap on fallible filesystem ops (fs::create_dir_all, fs::read_dir, DirEntry::file_type, fs::remove_dir_all and Self::from_toml), which can panic; change these to propagate errors via the function's Result: replace expect/unwrap with ? or map_err to return a meaningful error type/message from this loader, and for non-critical cleanup (removing symlinks) log failures and continue rather than panicking; ensure the early return still returns Ok(config) when config.is_using_hardcoded_paths is true but any IO failures produce Err with context instead of aborting the process.crates/cli/src/main.rs (1)
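The suggested `?`-propagation for the cleanup loop can be sketched with std only (hypothetical `remove_symlinks` helper and directory name; the real loader also logs via `debug!`):

```rust
use std::fs;
use std::io;
use std::path::Path;

// Hedged sketch of the panic-free loop: every filesystem call propagates
// io::Error via `?` instead of expect/unwrap.
fn remove_symlinks(base: &Path) -> io::Result<()> {
    fs::create_dir_all(base)?;
    for entry in fs::read_dir(base)? {
        let entry = entry?;
        if entry.file_type()?.is_symlink() {
            fs::remove_dir_all(entry.path())?;
        }
    }
    Ok(())
}

fn main() -> io::Result<()> {
    let base = std::env::temp_dir().join("storage_modules_demo");
    remove_symlinks(&base)?; // creates the dir if missing, then prunes symlinks
    assert!(base.exists());
    fs::remove_dir_all(&base)
}
```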
333-336: ⚠️ Potential issue | 🔴 Critical — Unqualified `size_of` call will fail to compile.
size_of::<...>()without importing it;size_ofis not in Rust prelude. Qualify it asstd::mem::size_ofor add the import.Proposed fix
const _: () = assert!( - size_of::<reth_provider::providers::RocksDBProvider>() == 0, + std::mem::size_of::<reth_provider::providers::RocksDBProvider>() == 0, "RocksDBProvider must be the zero-sized stub (rocksdb feature must be disabled)" );🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/cli/src/main.rs` around lines 333 - 336, The unqualified size_of call in the const assertion will not compile; update the assertion that references size_of::<reth_provider::providers::RocksDBProvider>() to use a qualified path or import. Either change the call to std::mem::size_of::<reth_provider::providers::RocksDBProvider>() or add use std::mem::size_of; at the top and keep the existing assertion text, ensuring the const _: () = assert!(...) still validates that RocksDBProvider is zero-sized.
♻️ Duplicate comments (14)
crates/types/src/commitment_common.rs (1)
883-885: ⚠️ Potential issue | 🟡 Minor — Assert full RLP buffer consumption after decode.
At Line 884, decode succeeds, but the test doesn’t verify that
`slice` is fully consumed. Add an empty-slice assertion to catch partial decode regressions.

🔧 Proposed fix:
```diff
 let mut slice = &buf[..];
 let decoded = CommitmentTransaction::decode(&mut slice).unwrap();
+prop_assert!(slice.is_empty(), "trailing bytes after RLP decode");
 prop_assert_eq!(decoded.id(), H256::zero(), "id is rlp-skipped, should default");
 prop_assert_eq!(versioned.fee(), decoded.fee());
 prop_assert_eq!(versioned.value(), decoded.value());
```
Verify each finding against the current code and only fix it if needed. In `@crates/types/src/commitment_common.rs` around lines 883 - 885, After calling CommitmentTransaction::decode(&mut slice) in the test, assert that the remaining slice is empty to ensure full RLP buffer consumption; specifically, after the decode call that produces decoded, add a property assertion using the same test framework (e.g., prop_assert!(slice.is_empty(), "RLP buffer not fully consumed") or prop_assert_eq!(slice.len(), 0, "RLP buffer not fully consumed")) referencing the existing local variable slice and the decode call (CommitmentTransaction::decode) so partial-decode regressions are caught.crates/utils/nextest-monitor/Cargo.toml (1)
32-33: 🧹 Nitpick | 🔵 Trivial — Prefer workspace-managed `rstest`/`proptest` instead of local pins.

Line 32 and Line 33 pin versions locally, which can drift from workspace test dependency versions and cause inconsistency across crates. If these are defined in `[workspace.dependencies]`, switch to workspace references.

♻️ Suggested change:
```diff
-rstest = "0.24"
-proptest = "1.6"
+rstest.workspace = true
+proptest.workspace = true
```

```bash
#!/bin/bash
set -euo pipefail
echo "== Root workspace dependency declarations =="
rg -n -C2 '^\[workspace\.dependencies\]|^\s*rstest\s*=|^\s*proptest\s*=' Cargo.toml || true
echo
echo "== nextest-monitor dev-dependencies =="
rg -n -C2 '^\[dev-dependencies\]|^\s*rstest\s*=|^\s*proptest\s*=' crates/utils/nextest-monitor/Cargo.toml
```

Expected verification result: if
`Cargo.toml` at repo root defines `rstest`/`proptest` in `[workspace.dependencies]`, this crate should use `*.workspace = true`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/utils/nextest-monitor/Cargo.toml` around lines 32 - 33, The crate pins test deps rstest and proptest locally (`rstest = "0.24"`, `proptest = "1.6"`); change these to use the workspace-managed dependencies instead by replacing the local version pins with workspace references (use the crate dev-dependencies entries for rstest and proptest and set them to reference the workspace, e.g., enable workspace = true for rstest and proptest) so the crate inherits the versions from [workspace.dependencies]; update the dev-dependencies block (entries named rstest and proptest) to reference the workspace-managed packages rather than hardcoding versions.crates/types/src/ingress.rs (1)
431-432: ⚠️ Potential issue | 🔴 Critical — Import `Encodable` in this test module to resolve trait-method calls.

`prop_ingress_tests` calls `encode()`/`length()` (Line 459, Line 496, Line 497, Line 524, Line 526), but at Line 431 only `Decodable` is imported in-module. Add `Encodable` here as well to avoid test-target compile failures.

🔧 Proposed fix:
```diff
-    use alloy_rlp::Decodable as _;
+    use alloy_rlp::{Decodable as _, Encodable as _};
```

```bash
#!/bin/bash
# Verify trait-method usage and in-module alloy_rlp trait imports in ingress.rs
rg -n -A6 -B3 'mod prop_ingress_tests|use alloy_rlp::|\.encode\(|\.length\(' crates/types/src/ingress.rs
```
Verify each finding against the current code and only fix it if needed. In `@crates/types/src/ingress.rs` around lines 431 - 432, The test module prop_ingress_tests imports alloy_rlp::Decodable but calls Encodable methods (encode(), length()), so add the missing trait import by bringing alloy_rlp::Encodable into the module (similar to the existing use alloy_rlp::Decodable as _; line) so the encode()/length() calls on types in prop_ingress_tests resolve correctly.crates/storage/src/recovered_mempool_state.rs (2)
185-186: ⚠️ Potential issue | 🟡 Minor — Use `test_log` wrapper for this async tracing-path test.

At Line [185], `test_handles_malformed_json` exercises parse/read failure paths that emit tracing logs; this should use `#[test_log::test(tokio::test)]` instead of plain `#[tokio::test]`.

```bash
#!/bin/bash
# Verify async test attributes and tracing callsites in this file.
rg -n -C2 '^\s*#\[(tokio::test|test_log::test\(tokio::test\))\]' crates/storage/src/recovered_mempool_state.rs
rg -n 'tracing::(warn|debug)!' crates/storage/src/recovered_mempool_state.rs
```

Suggested patch:
```diff
-    #[tokio::test]
+    #[test_log::test(tokio::test)]
     async fn test_handles_malformed_json() {
```

As per coding guidelines "Use `#[test_log::test(tokio::test)]` for async tests with tracing output in Rust".
Verify each finding against the current code and only fix it if needed. In `@crates/storage/src/recovered_mempool_state.rs` around lines 185 - 186, Replace the plain #[tokio::test] on the async test function test_handles_malformed_json with the tracing-aware wrapper attribute #[test_log::test(tokio::test)] so the test's tracing log calls are captured; locate the attribute immediately above the test_handles_malformed_json function and change it to #[test_log::test(tokio::test)] (add test-log as a dev-dependency if not already present).
218-234: ⚠️ Potential issue | 🟡 Minor — Add coverage for the `remove_files = true` cleanup branch.

The suite still does not test the directory-removal path in `load_from_disk` when `remove_files` is `true`, so regressions in cleanup behavior can slip through.

Suggested test addition:
```diff
+    #[tokio::test]
+    async fn test_remove_files_true_removes_mempool_dir() {
+        let dir = tempdir().unwrap();
+        let mempool_dir = dir.path().join("mempool");
+        tokio::fs::create_dir_all(mempool_dir.join("commitment_tx"))
+            .await
+            .unwrap();
+        tokio::fs::create_dir_all(mempool_dir.join("storage_tx"))
+            .await
+            .unwrap();
+
+        let state = RecoveredMempoolState::load_from_disk(&mempool_dir, true).await;
+
+        assert!(state.commitment_txs.is_empty());
+        assert!(state.storage_txs.is_empty());
+        assert!(!mempool_dir.exists());
+    }
```
Verify each finding against the current code and only fix it if needed. In `@crates/storage/src/recovered_mempool_state.rs` around lines 218 - 234, Add a test that exercises the cleanup branch by calling RecoveredMempoolState::load_from_disk(&mempool_dir, true) (remove_files = true) and asserting the empty subdirectories are removed; create the same subdirs (mempool_dir.join("commitment_tx") and mempool_dir.join("storage_tx")), call load_from_disk with true, then check that those directories no longer exist (or the parent mempool_dir is empty) — add this as a new tokio::test (e.g., test_empty_subdirs_removes_dirs_when_remove_files_true) alongside the existing test so the remove_files cleanup behavior is covered.crates/efficient-sampling/src/lib.rs (3)
82-88:⚠️ Potential issue | 🟠 MajorOff-by-one error persists in range selection modulo.
The past review correctly identified that
last_range_posis an inclusive index (the last valid position in therangesvector). Using% self.last_range_posexcludes the final slot from selection. For example, with 2 ranges (last_range_pos = 1), the modulo% 1always yields 0, never selecting slot 1.The modulo base should be
last_range_pos + 1to include all valid positions.🐛 Proposed fix
- // rng.next() returns u32; % yields u32 in [0, last_range_pos). u32 always fits in usize. - let next_range_pos = usize::try_from( - rng.next() - % u32::try_from(self.last_range_pos) - .expect("last_range_pos bounded by num_recall_ranges_in_partition which is usize; u32 max (4B) exceeds any realistic partition range count"), - ) - .expect("u32 always fits in usize on 32-bit and above architectures"); + // rng.next() returns u32; % yields u32 in [0, last_range_pos]. u32 always fits in usize. + let candidate_count = u32::try_from(self.last_range_pos + 1) + .expect("last_range_pos + 1 bounded by num_recall_ranges_in_partition which fits in u32"); + let next_range_pos = usize::try_from(rng.next() % candidate_count) + .expect("u32 always fits in usize on 32-bit and above architectures");🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/efficient-sampling/src/lib.rs` around lines 82 - 88, The modulo base is off-by-one: `self.last_range_pos` is an inclusive index so using `% self.last_range_pos` omits the final slot; change the modulo to use `self.last_range_pos + 1` (i.e., convert `self.last_range_pos + 1` to u32 for the modulus) when computing `next_range_pos` from `rng.next()`, ensuring the `usize::try_from(...).expect(...)` conversions reflect the added +1 and preserve the existing bounds/assertion messages around `last_range_pos` and `u32` conversion.
583-598:⚠️ Potential issue | 🟡 MinorTest does not exercise the
k > nscenario it claims to test.The loop
1..=k.min(n)caps iteration atmin(k, n). For case(100, 1, ...), this iterates only once (1..=1), never exercising what happens when more samples are requested than available ranges (exhaustion/reinitialization behavior).If the intent is to verify behavior when
k > n, iterate tokand assert the post-exhaustion behavior explicitly.💡 Suggested fix
- let samples: Vec<usize> = (1..=k.min(n)) + // Iterate up to k to exercise exhaustion when k > n + let samples: Vec<usize> = (1..=k) .map(|step| { + // After n draws, sampler reinitializes; verify no panic ranges .get_recall_range(step as u64, &seed, &partition_hash) .unwrap() }) .collect(); - assert_eq!( - samples.len(), - k.min(n), + assert_eq!( + samples.len(), + k,🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/efficient-sampling/src/lib.rs` around lines 583 - 598, The test currently limits iterations with 1..=k.min(n) so it never exercises the k > n case; change the loop to iterate 1..=k (use k directly) when calling ranges.get_recall_range(step as u64, &seed, &partition_hash) so the code requests more samples than available, then update assertions to explicitly check the expected post-exhaustion behavior (e.g., that samples.len() == k and that duplicated/reinitialized indices or wrap behavior conforms to the contract), referencing variables k and n and the function get_recall_range to locate the logic to adjust.
613-627:⚠️ Potential issue | 🟡 MinorErr-path test can silently pass without exercising the error case.
When
bad_rangehappens to equalvalid_range(e.g., both are 0 for case(0, 10, ...)), theif bad_range != valid_rangeguard skips the assertion entirely, and the test passes without verifying the error path.Derive a guaranteed-invalid range from
valid_rangeto ensure the error path is always tested.💡 Suggested fix
- // Only expect Err when bad_range != valid_range. - if bad_range != valid_range { - let result = recall_range_is_valid(bad_range, num_ranges, &seeds, &partition_hash); - assert!( - result.is_err(), - "{}: expected Err for range {} (valid was {})", - description, - bad_range, - valid_range - ); - } + // Derive an invalid range that differs from the valid one + let invalid_range = if bad_range == valid_range { + (valid_range + 1) % (num_ranges + 1) // Ensures it differs + } else { + bad_range + }; + let result = recall_range_is_valid(invalid_range, num_ranges, &seeds, &partition_hash); + assert!( + result.is_err(), + "{}: expected Err for range {} (valid was {})", + description, + invalid_range, + valid_range + );🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/efficient-sampling/src/lib.rs` around lines 613 - 627, The test can skip the error-path when bad_range == valid_range; replace the conditional with a deterministically-invalid range derived from valid_range (e.g., compute invalid_range = (valid_range + 1) % (num_ranges + 1) or otherwise pick a value guaranteed != valid_range, handling the special usize::MAX case) and then call recall_range_is_valid(invalid_range, num_ranges, &seeds, &partition_hash) and assert result.is_err() with the existing message (use the same description, invalid_range, and valid_range) so the Err path is always exercised; locate this change around the get_recall_range and recall_range_is_valid usage where bad_range, valid_range, num_ranges, seeds, partition_hash, and description are referenced.crates/api-server/src/routes/observability.rs (1)
288-299: 🧹 Nitpick | 🔵 TrivialRedundant test duplicates assertions from parameterized test.
The test
should_count_correctly_by_ledger_idasserts the same counts (Submit=2, Publish=1) that are already verified by the parameterizedshould_return_count_when_foundtest cases at lines 233-245. Consider removing this test to reduce redundancy.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/api-server/src/routes/observability.rs` around lines 288 - 299, The test function should_count_correctly_by_ledger_id duplicates assertions already covered by the parameterized test should_return_count_when_found (which validates Submit=2 and Publish=1 for the same input); remove the redundant should_count_correctly_by_ledger_id test function to avoid duplicate coverage and keep the parameterized tests as the single source of truth for these counts.crates/irys-reth/src/lib.rs (1)
808-859: 🧹 Nitpick | 🔵 TrivialParameterize these three identical nonexistent-account tests.
test_nonexistent_account_stake,test_nonexistent_account_unpledge, andtest_nonexistent_account_pledgestill repeat setup/assert flow and should be collapsed into a singlerstestcase table.As per coding guidelines: Use
rstestfor parameterized tests andproptestfor property-based tests in Rust.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/irys-reth/src/lib.rs` around lines 808 - 859, The three tests test_nonexistent_account_stake, test_nonexistent_account_unpledge, and test_nonexistent_account_pledge duplicate setup and assertion; replace them with a single parameterized rstest that iterates the three TransactionPacket variants (Stake, Unpledge, Pledge) or a small enum mapping to the helper unpledge() for Unpledge and constructing ShadowTransaction::new_v1 for Stake/Pledge, reusing TestContext::new().await, get_single_node(), sign_shadow_tx, and assert_nonexistent_account_fails_block_production; implement the rstest case table supplying the variant and expected construction logic and call sign_shadow_tx(...) and assert_nonexistent_account_fails_block_production(&mut node, shadow_tx).await once.crates/cli/src/main.rs (1)
94-98:⚠️ Potential issue | 🟠 MajorOverflow check is applied at the wrong unit (milliseconds instead of seconds).
At Line 95, converting
millisdirectly tou64rejects valid inputs whose seconds still fit inu64. This contradicts the function contract and also makes Line 547’s overflow test assert the wrong boundary.Proposed fix
pub fn timestamp_millis_to_secs(millis: u128) -> eyre::Result<u64> { - let millis_u64 = u64::try_from(millis) - .map_err(|_| eyre::eyre!("timestamp_millis {} overflows u64", millis))?; - Ok(std::time::Duration::from_millis(millis_u64).as_secs()) + let secs = millis / 1_000; + u64::try_from(secs).map_err(|_| eyre::eyre!("timestamp_secs {} overflows u64", secs)) }#[test] fn test_timestamp_millis_to_secs_overflow() { - let overflow_value: u128 = u128::from(u64::MAX) + 1; + let overflow_value: u128 = (u128::from(u64::MAX) + 1) * 1_000; assert!( timestamp_millis_to_secs(overflow_value).is_err(), "should reject values that overflow u64" ); }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/cli/src/main.rs` around lines 94 - 98, The function timestamp_millis_to_secs currently checks overflow by converting the input milliseconds to u64, which incorrectly rejects values whose seconds would still fit in u64; change the logic to compute seconds first (divide the u128 millis by 1000 using checked_div or integer division), then attempt to convert that seconds value to u64 (u64::try_from) and return it, returning an eyre error only if the seconds value overflows u64; update any related tests (the one referenced at Line 547) to assert the correct seconds overflow boundary rather than milliseconds.crates/utils/nextest-monitor/src/bin/nextest-wrapper.rs (1)
628-647:⚠️ Potential issue | 🟡 MinorAdd cases that isolate the path-like argument branch.
This table still doesn't prove the slash/backslash skip at Lines 50-52. Cases behind
--manifest-pathwould also be skipped by the value-taking-flag logic at Lines 55-56, so add a positional path-like arg that must be ignored before the real test name.➕ Suggested cases
#[case::skips_option_values( &["--test-threads", "1", "my_test", "--exact"], Some("my_test") )] + #[case::skips_unix_path_like_arg( + &["crates/foo/Cargo.toml", "my_test"], + Some("my_test") + )] + #[case::skips_windows_path_like_arg( + &[r"crates\foo\Cargo.toml", "my_test"], + Some("my_test") + )] #[case::empty(&[], None)]🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/utils/nextest-monitor/src/bin/nextest-wrapper.rs` around lines 628 - 647, Add tests in extract_test_name_cases that specifically exercise the "path-like argument" branch by inserting positional path-like values (one with forward slashes like "path/to/crate" and one with backslashes like "C:\\path\\to\\crate") immediately before a genuine test name so extract_test_name must skip them; add cases such as &["path/to/something", "real_test"] and &["C:\\\\dir\\\\file", "real_test"] (and variants with leading flags) inside the #[rstest] table in the extract_test_name_cases function to ensure the slash/backslash early-skip logic is exercised independently of --manifest-path and value-taking flags.crates/irys-reth/src/evm.rs (2)
737-742:⚠️ Potential issue | 🟠 MajorClear staged shadow state on every error exit.
This guard only reports leaked entries and returns. If
IrysEvm::distribute_priority_feeor a later invariant check errors after staging accounts,self.statestays populated and the next shadow transaction on the same executor will fail here again. Please make the cleanup unconditional instead of relying on the success-pathstd::mem::take.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/irys-reth/src/evm.rs` around lines 737 - 742, The guard that checks self.state in the EVM exit path must unconditionally clear the staged shadow state before returning an error; change the early-return branch in the check (the block that currently returns Err(...) inside evm.rs) to first take or clear self.state (e.g., std::mem::take(&mut self.state) or self.state.clear()) so the staged entries are removed even on error, and then construct the error message using the taken/cleared data (preserve the diagnostic content) before returning Err — apply this change around the existing check that references self.state.iter().collect::<Vec<_>>() so future shadow transactions don’t fail due to leftover state.
938-956:⚠️ Potential issue | 🔴 CriticalDon't require the beneficiary to exist before commit.
IrysEvm::prepare_fee_transferalready supports creating a fresh beneficiary account, but Line 938 turns that validNone -> Some(Account)transition into an internal error. A block that pays priority fees to a new fee recipient will fail incorrectly here.Minimal fix
- let bef = self.load_account(beneficiary)?.ok_or_else(|| { - Self::create_internal_error(format!( - "beneficiary account missing before commit: {beneficiary}" - )) - })?; + let beneficiary_before = self.load_account(beneficiary)?; @@ - if bef == aft { + if beneficiary_before.as_ref() == Some(&aft) { return Err(Self::create_internal_error(format!( "priority fee distribution had no effect on beneficiary {beneficiary}" ))); }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/irys-reth/src/evm.rs` around lines 938 - 956, The code incorrectly treats a missing beneficiary before commit as an error; change the pre-commit check to allow None by calling let bef = self.load_account(beneficiary)?; (keep it as Option<Account>) instead of ok_or_else(...). After calling commit_account_change(…) and reloading with let aft = self.load_account(beneficiary)? and turning that into an error only if aft is still None (use create_internal_error for "beneficiary account missing after commit"), then detect a no-op by comparing values correctly: if bef.is_some() && bef == aft { return Err(Self::create_internal_error(format!("priority fee distribution had no effect on beneficiary {beneficiary}"))); } — this preserves support for creating a fresh beneficiary while still erroring when the commit had no effect.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@crates/api-client/src/lib.rs`:
- Around line 290-294: The GET request is incorrectly sending a JSON body via
make_request_url; update the call so it does not include a request body (remove
the Some(&block_index_query) argument) and rely on the existing query-string
encoding for block_index_query when calling
make_request_url::<Vec<BlockIndexItem>, _>(url.as_str(), Method::GET, ...).
Ensure the signature still matches by passing None (or the appropriate no-body
sentinel) for the body parameter so only the query parameters are sent.
In `@crates/api-server/src/routes/index.rs`:
- Around line 22-24: The handler currently serializes node_info with
serde_json::to_string_pretty(...).unwrap(), which can panic; replace this with
Actix's non-panicking JSON path by returning HttpResponse::Ok().json(node_info)
(or explicitly map serde_json errors to an HttpResponse error) so serialization
errors are handled rather than unwrapped; locate the Ok(node_info) match arm and
update the HttpResponse::Ok() usage to use .json(node_info) or return an Err
mapped from the serde_json::to_string_pretty failure.
In `@crates/config/src/chain/chainspec.rs`:
- Around line 111-115: The test is relying on index-based access to
block.data_ledgers (e.g., using publish_ledger = &block.data_ledgers[0] and
similar at other locations) which is brittle to ordering changes; instead, find
the ledger entries by matching their ledger_id and then assert on
required_proof_count. Update the assertions to search
block.data_ledgers.iter().find(|l| l.ledger_id == <expected_id>) (or equivalent)
to locate the correct ledger before checking l.required_proof_count, and apply
the same change to the other occurrence that currently uses index [1].
In `@crates/database/src/metadata.rs`:
- Around line 17-18: The change in metadata.rs switching (self as
u32).to_be_bytes() to big-endian will break existing DBs that wrote
MetadataKey::DBSchemaVersion in little-endian; add explicit backward-compat
handling by updating the read path (MetadataKey::decode or the DB open/migration
routine that reads DBSchemaVersion) to try both BE and LE decodes: if a BE-read
fails, attempt from_le_bytes decoding for DBSchemaVersion, and if that succeeds
write back the canonical BE-encoded key (or run a one-time migration that
rewrites old LE keys to BE). Modify MetadataKey::encode/decode or the startup
migration logic to include this dual-read fallback and an idempotent
migration/write-back so lookups continue to succeed after upgrade.
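The dual-read fallback described above can be sketched with std only (`decode_version` is a hypothetical helper, not the crate's actual metadata API; the `expected_max` plausibility bound is an assumption used to disambiguate endianness):

```rust
// Try big-endian first (canonical), fall back to little-endian (legacy write).
fn decode_version(bytes: [u8; 4], expected_max: u32) -> Option<u32> {
    let be = u32::from_be_bytes(bytes);
    if be <= expected_max {
        return Some(be); // canonical big-endian encoding
    }
    let le = u32::from_le_bytes(bytes);
    (le <= expected_max).then_some(le) // legacy little-endian write
}

fn main() {
    // A version written as LE by an old node still decodes under the new reader.
    let legacy = 3_u32.to_le_bytes();
    assert_eq!(decode_version(legacy, 100), Some(3));
    // Canonical BE round-trips unchanged.
    let canonical = 3_u32.to_be_bytes();
    assert_eq!(decode_version(canonical, 100), Some(3));
}
```

After a successful LE fallback, an idempotent write-back of the BE encoding makes the migration one-time.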
In `@crates/domain/src/models/chain_sync_state.rs`:
- Around line 729-751: Consolidate the three tests is_queue_full_past_target,
is_queue_full_at_capacity, and is_queue_full_below_capacity into a single
parameterized rstest that drives inputs for sync_target_height,
processed_height, and expected is_queue_full result; for each case call
ChainSyncState::new(true, false), set_sync_target_height(...),
mark_processed(...), then assert_eq!(state.is_queue_full(), expected). Reference
the existing symbols ChainSyncState::new, set_sync_target_height,
mark_processed, and is_queue_full when creating the rstest table and remove the
now-redundant individual test functions.
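A hedged stand-in for the suggested case table, using a plain loop over tuples instead of `rstest` (`QueueState` and its capacity rule are illustrative, not `ChainSyncState`'s real logic):

```rust
// One loop over (sync_target_height, processed_height, expected) cases
// replaces three near-identical test functions.
struct QueueState {
    target: u64,
    processed: u64,
    capacity: u64,
}

impl QueueState {
    fn is_queue_full(&self) -> bool {
        self.target.saturating_sub(self.processed) >= self.capacity
    }
}

fn main() {
    let cases = [
        (20_u64, 0_u64, true), // well past capacity
        (10, 0, true),         // exactly at capacity
        (5, 0, false),         // below capacity
    ];
    for (target, processed, expected) in cases {
        let state = QueueState { target, processed, capacity: 10 };
        assert_eq!(state.is_queue_full(), expected, "case ({target}, {processed})");
    }
}
```

With `rstest`, each tuple would become a `#[case(...)]` attribute on a single test function.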
In `@crates/domain/src/snapshots/commitment_snapshot.rs`:
- Around line 667-695: The test test_commitment_without_stake_is_rejected should
stop using Option<CommitmentTypeV1> as a sentinel and instead define a small
explicit test-case enum (e.g., enum TestCase { Pledge, Unstake,
UpdateRewardAddress }) and use that as the #[case] type; update the match on
variant to handle TestCase::Pledge, TestCase::Unstake and
TestCase::UpdateRewardAddress (creating the CommitmentTypeV1::Pledge,
CommitmentTypeV1::Unstake, and the CommitmentTypeV2::UpdateRewardAddress via
create_test_commitment_v2 respectively), remove the unreachable!() arm, and
update the #[case::...] attributes to reference the new enum variants so the
intent is explicit.
In `@crates/price-oracle/src/coinmarketcap.rs`:
- Around line 338-347: Add a focused unit test that verifies
chrono_to_unix_timestamp clamps negative DateTime values to zero: create a test
(e.g., prop_chrono_to_unix_timestamp_negative_clamps) that constructs a DateTime
with a negative seconds value via DateTime::from_timestamp(-1, 0) (and
optionally other negative values), calls chrono_to_unix_timestamp(dt), and
asserts the result equals UnixTimestamp::from_secs(0); reference the
chrono_to_unix_timestamp function and DateTime::from_timestamp to locate the
code to test.
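The clamping contract the test should pin down, sketched with plain integers standing in for chrono's `DateTime` and the crate's `UnixTimestamp`:

```rust
// Clamp anything before the Unix epoch to zero instead of wrapping or
// panicking; i64 stands in for the chrono seconds value.
fn chrono_secs_to_unix(secs: i64) -> u64 {
    u64::try_from(secs).unwrap_or(0)
}

fn main() {
    assert_eq!(chrono_secs_to_unix(-1), 0);
    assert_eq!(chrono_secs_to_unix(i64::MIN), 0);
    assert_eq!(chrono_secs_to_unix(1_700_000_000), 1_700_000_000);
}
```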
In `@crates/price-oracle/src/lib.rs`:
- Around line 248-271: The test suite lacks a tie-case to assert behavior when
two or more oracles share the same timestamp; add a new rstest case in
test_current_snapshot_returns_freshest that supplies equal timestamps (e.g.,
ts_a == ts_b > ts_c and/or all equal) and an expected_idx reflecting the
intended tie-breaker (current_snapshot uses strict `>` so the first encountered
should win). Update the test cases array (where make_oracle_with_cache, prices,
timestamps and IrysPriceOracle::new are used) to include at least one case with
equal timestamps to lock in/clarify the intended behavior.
In `@crates/types/src/config/mod.rs`:
- Around line 889-905: The test validate_rejects_depth_invariant_violations (and
the similar cases around it) currently asserts on the Debug-formatted error via
format!("{err:?}") which is brittle; change these to call
cfg.validate().unwrap_err().to_string() and assert that the resulting string
contains a more specific invariant substring (not just "Condition failed"), and
factor the check into a small helper (e.g., assert_invariant_error(err_str,
expected_substr)) so all cases (including the ones near lines 914–918, 937–940,
956–959, 970–973, 999–1002) use the same robust assertion pattern referencing
ConsensusConfig, config_with_consensus, and cfg.validate().
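The suggested helper might look like this; the names and messages are illustrative, and in the real tests the string would come from `cfg.validate().unwrap_err().to_string()`:

```rust
// Assert on Display output rather than Debug formatting, with a
// readable failure message shared across all the invariant tests.
fn assert_invariant_error(err_str: &str, expected_substr: &str) {
    assert!(
        err_str.contains(expected_substr),
        "expected error containing {expected_substr:?}, got: {err_str}"
    );
}

fn main() {
    assert_invariant_error(
        "num_chunks_in_partition must be a multiple of num_chunks_in_recall_range",
        "must be a multiple",
    );
}
```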
---
Outside diff comments:
In `@crates/cli/src/main.rs`:
- Around line 333-336: The unqualified size_of call in the const assertion will
not compile; update the assertion that references
size_of::<reth_provider::providers::RocksDBProvider>() to use a qualified path
or import. Either change the call to
std::mem::size_of::<reth_provider::providers::RocksDBProvider>() or add use
std::mem::size_of; at the top and keep the existing assertion text, ensuring the
const _: () = assert!(...) still validates that RocksDBProvider is zero-sized.
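The qualified-path form compiles without any extra import; `Zst` below stands in for `reth_provider::providers::RocksDBProvider`:

```rust
// Hypothetical zero-sized type standing in for RocksDBProvider.
struct Zst;

// Compile-time check: fails the build if the type ever grows a field.
const _: () = assert!(
    core::mem::size_of::<Zst>() == 0,
    "expected a zero-sized provider type"
);

fn main() {
    // The assertion already ran at compile time; nothing to do here.
    println!("ZST assertion compiled");
}
```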
In `@crates/config/src/submodules.rs`:
- Around line 38-47: The validation in from_toml currently bails with a
hardcoded filename ".irys_submodules.toml"; update the error to include the
actual path argument so messages reflect the file used. Change the bail call in
from_toml to format the path (e.g., path.as_ref().display()) into the error
string alongside submodule_count and the minimum requirement, referencing the
existing variables path and submodule_count to construct the message.
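One way to thread the path into the message, with `eyre::bail!` replaced by a plain `Result<_, String>` so the sketch is self-contained; the minimum count and wording are illustrative:

```rust
use std::path::Path;

// Include the actual file path in the validation error instead of a
// hardcoded ".irys_submodules.toml" literal.
fn validate_submodules(path: &Path, submodule_count: usize) -> Result<(), String> {
    const MIN_SUBMODULES: usize = 1; // illustrative minimum
    if submodule_count < MIN_SUBMODULES {
        return Err(format!(
            "{} lists {submodule_count} submodules; at least {MIN_SUBMODULES} required",
            path.display()
        ));
    }
    Ok(())
}

fn main() {
    let err = validate_submodules(Path::new("custom/submodules.toml"), 0).unwrap_err();
    assert!(err.contains("custom/submodules.toml"));
    assert!(validate_submodules(Path::new("x.toml"), 3).is_ok());
}
```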
- Around line 90-104: The loader currently uses expect/unwrap on fallible
filesystem ops (fs::create_dir_all, fs::read_dir, DirEntry::file_type,
fs::remove_dir_all and Self::from_toml), which can panic; change these to
propagate errors via the function's Result: replace expect/unwrap with ? or
map_err to return a meaningful error type/message from this loader, and for
non-critical cleanup (removing symlinks) log failures and continue rather than
panicking; ensure the early return still returns Ok(config) when
config.is_using_hardcoded_paths is true but any IO failures produce Err with
context instead of aborting the process.
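A sketch of the propagation pattern using std types only; `eprintln!` stands in for the crate's logging and the directory layout is invented:

```rust
use std::{fs, io, path::Path};

// Propagate filesystem errors with `?` instead of panicking; treat
// symlink cleanup as non-critical (log and continue on failure).
fn prepare_dir(base: &Path) -> io::Result<()> {
    fs::create_dir_all(base)?; // was: .expect("create dir")
    for entry in fs::read_dir(base)? {
        let entry = entry?;
        if entry.file_type()?.is_symlink() {
            if let Err(e) = fs::remove_dir_all(entry.path()) {
                // Non-critical cleanup: record the failure, keep going.
                eprintln!("failed to remove symlink {:?}: {e}", entry.path());
            }
        }
    }
    Ok(())
}

fn main() -> io::Result<()> {
    let dir = std::env::temp_dir().join("submodules_sketch");
    prepare_dir(&dir)?;
    assert!(dir.exists());
    Ok(())
}
```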
In `@crates/packing-worker/src/packing.rs`:
- Around line 127-129: The two .unwrap() calls inside the spawn_blocking closure
(specifically the call to CUDAConfig::from_device_default() and the call to
capacity_pack_range_cuda_c) must not panic; change the closure to return
eyre::Result and replace the unwraps with ? to propagate errors (i.e., use
CUDAConfig::from_device_default()? and capacity_pack_range_cuda_c(...)?), then
propagate the spawn_blocking result (await and ? as needed) so the calling
function returns/forwards the eyre::Result instead of panicking.
In `@crates/packing/src/lib.rs`:
- Around line 337-339: Remove the unreachable post-clamp zero check for the
iterations variable: after clamping iterations to 1.0..=u32::MAX (the earlier
clamp that guarantees iterations >= 1), delete the if block "if iterations == 0
{ iterations = 1; }" (referencing the iterations variable) since it can never be
true; ensure no other code relies on this redundant branch.
---
Duplicate comments:
In `@crates/api-server/src/routes/observability.rs`:
- Around line 288-299: The test function should_count_correctly_by_ledger_id
duplicates assertions already covered by the parameterized test
should_return_count_when_found (which validates Submit=2 and Publish=1 for the
same input); remove the redundant should_count_correctly_by_ledger_id test
function to avoid duplicate coverage and keep the parameterized tests as the
single source of truth for these counts.
In `@crates/cli/src/main.rs`:
- Around line 94-98: The function timestamp_millis_to_secs currently checks
overflow by converting the input milliseconds to u64, which incorrectly rejects
values whose seconds would still fit in u64; change the logic to compute seconds
first (divide the u128 millis by 1000 using checked_div or integer division),
then attempt to convert that seconds value to u64 (u64::try_from) and return it,
returning an eyre error only if the seconds value overflows u64; update any
related tests (the one referenced at Line 547) to assert the correct seconds
overflow boundary rather than milliseconds.
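The divide-first conversion looks like this, with the error type simplified to `String` in place of eyre:

```rust
// Divide first, then narrow: only reject inputs whose *seconds* value
// overflows u64, not those whose millisecond count does.
fn timestamp_millis_to_secs(millis: u128) -> Result<u64, String> {
    let secs = millis / 1000;
    u64::try_from(secs).map_err(|_| format!("seconds value {secs} overflows u64"))
}

fn main() {
    // More milliseconds than fit in u64 is fine once divided down.
    assert!(timestamp_millis_to_secs(u128::from(u64::MAX) + 1).is_ok());
    assert_eq!(timestamp_millis_to_secs(1_999), Ok(1));
    // Only a seconds-level overflow is rejected.
    assert!(timestamp_millis_to_secs((u128::from(u64::MAX) + 1) * 1000).is_err());
}
```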
In `@crates/efficient-sampling/src/lib.rs`:
- Around line 82-88: The modulo base is off-by-one: `self.last_range_pos` is an
inclusive index so using `% self.last_range_pos` omits the final slot; change
the modulo to use `self.last_range_pos + 1` (i.e., convert `self.last_range_pos
+ 1` to u32 for the modulus) when computing `next_range_pos` from `rng.next()`,
ensuring the `usize::try_from(...).expect(...)` conversions reflect the added +1
and preserve the existing bounds/assertion messages around `last_range_pos` and
`u32` conversion.
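A minimal demonstration of the corrected modulo base:

```rust
// With an inclusive last index, the modulo base must be last + 1, or
// the final slot can never be selected.
fn next_range_pos(rng_value: u64, last_range_pos: usize) -> usize {
    let base = last_range_pos as u64 + 1; // inclusive index -> +1 slots
    usize::try_from(rng_value % base).expect("modulo result fits usize")
}

fn main() {
    // Base 5 (indices 0..=4): index 4 must be reachable.
    assert_eq!(next_range_pos(9, 4), 4);
    // The old `% last_range_pos` (base 4) could never return 4.
    assert_eq!(next_range_pos(8, 4), 3);
}
```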
- Around line 583-598: The test currently limits iterations with 1..=k.min(n) so
it never exercises the k > n case; change the loop to iterate 1..=k (use k
directly) when calling ranges.get_recall_range(step as u64, &seed,
&partition_hash) so the code requests more samples than available, then update
assertions to explicitly check the expected post-exhaustion behavior (e.g., that
samples.len() == k and that duplicated/reinitialized indices or wrap behavior
conforms to the contract), referencing variables k and n and the function
get_recall_range to locate the logic to adjust.
- Around line 613-627: The test can skip the error-path when bad_range ==
valid_range; replace the conditional with a deterministically-invalid range
derived from valid_range (e.g., compute invalid_range = (valid_range + 1) %
(num_ranges + 1) or otherwise pick a value guaranteed != valid_range, handling
the special usize::MAX case) and then call recall_range_is_valid(invalid_range,
num_ranges, &seeds, &partition_hash) and assert result.is_err() with the
existing message (use the same description, invalid_range, and valid_range) so
the Err path is always exercised; locate this change around the get_recall_range
and recall_range_is_valid usage where bad_range, valid_range, num_ranges, seeds,
partition_hash, and description are referenced.
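The derivation the comment suggests, shown in isolation; the `usize::MAX` edge case flagged above would need `wrapping_add` or an explicit branch, which this sketch skips:

```rust
// Derive a value guaranteed to differ from valid_range so the Err path
// is always exercised (num_ranges is the exclusive upper bound).
fn invalid_range_for(valid_range: usize, num_ranges: usize) -> usize {
    debug_assert!(num_ranges > 1, "need at least two ranges to pick a wrong one");
    // (v + 1) % n == v would require n == 1, so inequality is guaranteed.
    (valid_range + 1) % num_ranges
}

fn main() {
    for valid in 0..10 {
        let bad = invalid_range_for(valid, 10);
        assert_ne!(bad, valid);
        assert!(bad < 10);
    }
}
```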
In `@crates/irys-reth/src/evm.rs`:
- Around line 737-742: The guard that checks self.state in the EVM exit path
must unconditionally clear the staged shadow state before returning an error;
change the early-return branch in the check (the block that currently returns
Err(...) inside evm.rs) to first take or clear self.state (e.g.,
std::mem::take(&mut self.state) or self.state.clear()) so the staged entries are
removed even on error, and then construct the error message using the
taken/cleared data (preserve the diagnostic content) before returning Err —
apply this change around the existing check that references
self.state.iter().collect::<Vec<_>>() so future shadow transactions don’t fail
due to leftover state.
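The take-before-return pattern in miniature; `Executor` is a toy stand-in for the EVM executor, not its real shape:

```rust
use std::collections::HashMap;

// Clear staged state before every error return via mem::take, so a
// failed call cannot poison the next one.
struct Executor {
    state: HashMap<String, u64>,
}

impl Executor {
    fn process(&mut self, fail: bool) -> Result<(), String> {
        self.state.insert("staged".into(), 1);
        if fail {
            // Take the staged entries out *before* returning, keeping
            // them available for the diagnostic message.
            let leaked = std::mem::take(&mut self.state);
            return Err(format!("staged state on failure: {leaked:?}"));
        }
        self.state.clear();
        Ok(())
    }
}

fn main() {
    let mut exec = Executor { state: HashMap::new() };
    assert!(exec.process(true).is_err());
    // The next call starts from a clean slate instead of rejecting.
    assert!(exec.process(false).is_ok());
    assert!(exec.state.is_empty());
}
```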
- Around line 938-956: The code incorrectly treats a missing beneficiary before
commit as an error; change the pre-commit check to allow None by calling let bef
= self.load_account(beneficiary)?; (keep it as Option<Account>) instead of
ok_or_else(...). After calling commit_account_change(…) and reloading with let
aft = self.load_account(beneficiary)? and turning that into an error only if aft
is still None (use create_internal_error for "beneficiary account missing after
commit"), then detect a no-op by comparing values correctly: if bef.is_some() &&
bef == aft { return Err(Self::create_internal_error(format!("priority fee
distribution had no effect on beneficiary {beneficiary}"))); } — this preserves
support for creating a fresh beneficiary while still erroring when the commit
had no effect.
In `@crates/irys-reth/src/lib.rs`:
- Around line 808-859: The three tests test_nonexistent_account_stake,
test_nonexistent_account_unpledge, and test_nonexistent_account_pledge duplicate
setup and assertion; replace them with a single parameterized rstest that
iterates the three TransactionPacket variants (Stake, Unpledge, Pledge) or a
small enum mapping to the helper unpledge() for Unpledge and constructing
ShadowTransaction::new_v1 for Stake/Pledge, reusing TestContext::new().await,
get_single_node(), sign_shadow_tx, and
assert_nonexistent_account_fails_block_production; implement the rstest case
table supplying the variant and expected construction logic and call
sign_shadow_tx(...) and assert_nonexistent_account_fails_block_production(&mut
node, shadow_tx).await once.
In `@crates/storage/src/recovered_mempool_state.rs`:
- Around line 185-186: Replace the plain #[tokio::test] on the async test
function test_handles_malformed_json with the tracing-aware wrapper attribute
#[test_log::test(tokio::test)] so the test's tracing log calls are captured;
locate the attribute immediately above the test_handles_malformed_json function
and change it to #[test_log::test(tokio::test)] (add test-log as a
dev-dependency if not already present).
- Around line 218-234: Add a test that exercises the cleanup branch by calling
RecoveredMempoolState::load_from_disk(&mempool_dir, true) (remove_files = true)
and asserting the empty subdirectories are removed; create the same subdirs
(mempool_dir.join("commitment_tx") and mempool_dir.join("storage_tx")), call
load_from_disk with true, then check that those directories no longer exist (or
the parent mempool_dir is empty) — add this as a new tokio::test (e.g.,
test_empty_subdirs_removes_dirs_when_remove_files_true) alongside the existing
test so the remove_files cleanup behavior is covered.
In `@crates/types/src/commitment_common.rs`:
- Around line 883-885: After calling CommitmentTransaction::decode(&mut slice)
in the test, assert that the remaining slice is empty to ensure full RLP buffer
consumption; specifically, after the decode call that produces decoded, add a
property assertion using the same test framework (e.g.,
prop_assert!(slice.is_empty(), "RLP buffer not fully consumed") or
prop_assert_eq!(slice.len(), 0, "RLP buffer not fully consumed")) referencing
the existing local variable slice and the decode call
(CommitmentTransaction::decode) so partial-decode regressions are caught.
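The extra assertion in miniature, with a trivial fixed-width reader standing in for `CommitmentTransaction::decode`:

```rust
// Toy decoder: reads a big-endian u32 and advances the slice, the same
// cursor convention alloy-rlp decoders use.
fn decode_u32(slice: &mut &[u8]) -> Option<u32> {
    if slice.len() < 4 {
        return None;
    }
    let (head, tail) = slice.split_at(4);
    let mut bytes = [0u8; 4];
    bytes.copy_from_slice(head);
    *slice = tail;
    Some(u32::from_be_bytes(bytes))
}

fn main() {
    let buf = 42u32.to_be_bytes();
    let mut slice: &[u8] = &buf;
    let decoded = decode_u32(&mut slice).expect("decode");
    assert_eq!(decoded, 42);
    // The assertion the review asks for: nothing left unread.
    assert!(slice.is_empty(), "RLP buffer not fully consumed");
}
```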
In `@crates/types/src/ingress.rs`:
- Around line 431-432: The test module prop_ingress_tests imports
alloy_rlp::Decodable but calls Encodable methods (encode(), length()), so add
the missing trait import by bringing alloy_rlp::Encodable into the module
(similar to the existing use alloy_rlp::Decodable as _; line) so the
encode()/length() calls on types in prop_ingress_tests resolve correctly.
In `@crates/utils/nextest-monitor/Cargo.toml`:
- Around line 32-33: The crate pins test deps rstest and proptest locally
(`rstest = "0.24"`, `proptest = "1.6"`); change these to use the
workspace-managed dependencies instead by replacing the local version pins with
workspace references (use the crate dev-dependencies entries for rstest and
proptest and set them to reference the workspace, e.g., enable workspace = true
for rstest and proptest) so the crate inherits the versions from
[workspace.dependencies]; update the dev-dependencies block (entries named
rstest and proptest) to reference the workspace-managed packages rather than
hardcoding versions.
In `@crates/utils/nextest-monitor/src/bin/nextest-wrapper.rs`:
- Around line 628-647: Add tests in extract_test_name_cases that specifically
exercise the "path-like argument" branch by inserting positional path-like
values (one with forward slashes like "path/to/crate" and one with backslashes
like "C:\\path\\to\\crate") immediately before a genuine test name so
extract_test_name must skip them; add cases such as &["path/to/something",
"real_test"] and &["C:\\\\dir\\\\file", "real_test"] (and variants with leading
flags) inside the #[rstest] table in the extract_test_name_cases function to
ensure the slash/backslash early-skip logic is exercised independently of
--manifest-path and value-taking flags.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: a1fe8321-51d2-4b93-86c3-855c1989b094
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (92)
`crates/actors/src/chunk_ingress_service/chunks.rs`, `crates/actors/src/partition_mining_service.rs`, `crates/api-client/Cargo.toml`, `crates/api-client/src/ext.rs`, `crates/api-client/src/lib.rs`, `crates/api-server/src/error.rs`, `crates/api-server/src/routes/balance.rs`, `crates/api-server/src/routes/block.rs`, `crates/api-server/src/routes/block_index.rs`, `crates/api-server/src/routes/index.rs`, `crates/api-server/src/routes/ledger.rs`, `crates/api-server/src/routes/observability.rs`, `crates/api-server/src/routes/price.rs`, `crates/api-server/src/routes/proxy.rs`, `crates/api-server/src/routes/supply.rs`, `crates/c/Cargo.toml`, `crates/c/src/capacity_single.rs`, `crates/chain-tests/src/api/hardfork_tests.rs`, `crates/chain/Cargo.toml`, `crates/chain/src/chain.rs`, `crates/cli/Cargo.toml`, `crates/cli/src/main.rs`, `crates/config/Cargo.toml`, `crates/config/src/chain/chainspec.rs`, `crates/config/src/submodules.rs`, `crates/database/Cargo.toml`, `crates/database/src/database.rs`, `crates/database/src/db_cache.rs`, `crates/database/src/metadata.rs`, `crates/domain/src/models/chain_sync_state.rs`, `crates/domain/src/models/circular_buffer.rs`, `crates/domain/src/models/forkchoice_markers.rs`, `crates/domain/src/models/node_info.rs`, `crates/domain/src/models/peer_list.rs`, `crates/domain/src/snapshots/commitment_snapshot.rs`, `crates/domain/src/snapshots/ema_snapshot.rs`, `crates/efficient-sampling/Cargo.toml`, `crates/efficient-sampling/src/lib.rs`, `crates/irys-reth/Cargo.toml`, `crates/irys-reth/src/evm.rs`, `crates/irys-reth/src/lib.rs`, `crates/irys-reth/src/shadow_tx.rs`, `crates/p2p/Cargo.toml`, `crates/p2p/src/gossip_client.rs`, `crates/p2p/src/server.rs`, `crates/p2p/src/types.rs`, `crates/packing-worker/Cargo.toml`, `crates/packing-worker/src/api.rs`, `crates/packing-worker/src/packing.rs`, `crates/packing-worker/src/types.rs`, `crates/packing/Cargo.toml`, `crates/packing/src/lib.rs`, `crates/price-oracle/Cargo.toml`, `crates/price-oracle/proptest-regressions/coinmarketcap.txt`, `crates/price-oracle/src/coinmarketcap.rs`, `crates/price-oracle/src/lib.rs`, `crates/price-oracle/src/mock_oracle.rs`, `crates/reth-node-bridge/Cargo.toml`, `crates/reth-node-bridge/src/dump.rs`, `crates/reth-node-bridge/src/ext.rs`, `crates/reward-curve/Cargo.toml`, `crates/reward-curve/src/lib.rs`, `crates/storage/Cargo.toml`, `crates/storage/src/recovered_mempool_state.rs`, `crates/types/proptest-regressions/storage_pricing.txt`, `crates/types/src/chunk.rs`, `crates/types/src/commitment_common.rs`, `crates/types/src/commitment_v1.rs`, `crates/types/src/commitment_v2.rs`, `crates/types/src/config/consensus.rs`, `crates/types/src/config/mod.rs`, `crates/types/src/hardfork_config.rs`, `crates/types/src/ingress.rs`, `crates/types/src/merkle.rs`, `crates/types/src/serialization.rs`, `crates/types/src/storage_pricing.rs`, `crates/types/src/time.rs`, `crates/types/src/transaction/bounded_fee.rs`, `crates/types/tests/version_mismatch_tests.rs`, `crates/types/tests/versioned_compact_roundtrip_tests.rs`, `crates/utils/nextest-monitor/Cargo.toml`, `crates/utils/nextest-monitor/src/bin/nextest-wrapper.rs`, `crates/utils/utils/Cargo.toml`, `crates/utils/utils/src/circuit_breaker/metrics.rs`, `crates/utils/utils/src/circuit_breaker/mod.rs`, `crates/utils/utils/src/circuit_breaker/state.rs`, `crates/utils/utils/src/circuit_breaker/test_utils.rs`, `crates/utils/utils/src/listener.rs`, `crates/utils/utils/src/telemetry.rs`, `crates/vdf/Cargo.toml`, `crates/vdf/src/lib.rs`, `crates/vdf/src/vdf.rs`
💤 Files with no reviewable changes (10)
- crates/types/src/transaction/bounded_fee.rs
- crates/types/src/commitment_v1.rs
- crates/types/tests/version_mismatch_tests.rs
- crates/types/src/time.rs
- crates/types/src/hardfork_config.rs
- crates/types/src/config/consensus.rs
- crates/types/src/commitment_v2.rs
- crates/actors/src/chunk_ingress_service/chunks.rs
- crates/types/src/chunk.rs
- crates/domain/src/snapshots/ema_snapshot.rs
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
crates/packing/src/lib.rs (1)
332-339: 🧹 Nitpick | 🔵 Trivial: Remove the now-dead `iterations == 0` branch. After clamping to `1.0..=u32::MAX`, `iterations` cannot be `0`, so the follow-up guard is unreachable.

♻️ Proposed cleanup

```diff
 iterations = (f64::from(iterations) * ratio)
     .round()
     .clamp(1.0, f64::from(u32::MAX)) as u32;
-
-if iterations == 0 {
-    iterations = 1;
-}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/packing/src/lib.rs` around lines 332 - 339, Remove the now-dead guard that checks and sets iterations to 1 after converting and clamping; specifically in the block where iterations is computed via (f64::from(iterations) * ratio).round().clamp(1.0, f64::from(u32::MAX)) as u32, delete the subsequent if iterations == 0 { iterations = 1; } branch because the clamp already guarantees a minimum of 1.0 so iterations can never be zero.
♻️ Duplicate comments (2)
crates/api-server/src/routes/proxy.rs (1)
122-124: ⚠️ Potential issue | 🟡 Minor: Add mixed-case variants for the newly added hop-by-hop headers. Coverage now includes `proxy-authorization`, `te`, and `trailers`, but only in lowercase; add casing variants to fully validate case-insensitive matching.

Suggested test additions

```diff
 #[case("proxy-authorization", true)]
+#[case("Proxy-Authorization", true)]
 #[case("te", true)]
+#[case("TE", true)]
 #[case("trailers", true)]
+#[case("Trailers", true)]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/api-server/src/routes/proxy.rs` around lines 122 - 124, The test only includes lowercase hop-by-hop header cases ("proxy-authorization", "te", "trailers"); update the parameterized test attributes where #[case("proxy-authorization", true)], #[case("te", true)], and #[case("trailers", true)] appear to also include mixed-case variants (e.g., "Proxy-Authorization", "Proxy-authorization" or "PROXY-AUTHORIZATION"; "Te" or "TE"; "Trailers" or "TRAILERS") so the case-insensitive matching is fully validated—add corresponding #[case(..., true)] entries for each mixed-case form next to the existing ones.crates/irys-reth/src/evm.rs (1)
737-742: ⚠️ Potential issue | 🟠 Major: Always clear staged shadow state before returning `Err`. Line 941 mutates `self.state`, but the new validation branches at Line 949 and Line 954 can still return `Err`. That leaves staged accounts behind until the next call, where Line 737 rejects execution with the leftover snapshot. The same leak still exists if `process_shadow_transaction()` fails after `distribute_priority_fee()` succeeds. Please centralize cleanup in `process_shadow_tx()` so failed shadow-tx execution cannot poison the executor state.

🔧 Minimal fix

```diff
-if !self.state.is_empty() {
-    return Err(Self::create_internal_error(format!(
-        "Custom IrysEVM state should be empty, but got: {:?}",
-        &self.state.iter().collect::<Vec<_>>()
-    )));
-}
-
-self.distribute_priority_fee(total_fee, fee_payer_address, beneficiary)?;
-
-let (new_account_state, target) =
-    self.process_shadow_transaction(&shadow_tx, tx_hash, beneficiary)?;
+if !self.state.is_empty() {
+    let leaked_state = std::mem::take(&mut self.state);
+    return Err(Self::create_internal_error(format!(
+        "Custom IrysEVM state should be empty, but got: {:?}",
+        leaked_state.iter().collect::<Vec<_>>()
+    )));
+}
+
+let (new_account_state, target) = match (|| {
+    self.distribute_priority_fee(total_fee, fee_payer_address, beneficiary)?;
+    self.process_shadow_transaction(&shadow_tx, tx_hash, beneficiary)
+})() {
+    Ok(res) => res,
+    Err(err) => {
+        self.state.clear();
+        return Err(err);
+    }
+};
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/irys-reth/src/evm.rs` around lines 737 - 742, The staged shadow state (self.state) can be left populated on early Err returns, so update process_shadow_tx() to centralize cleanup: ensure any error path (including failures after distribute_priority_fee() and the validation branches that now return Err) clears the staged state before returning Err (e.g., call a single cleanup helper like self.clear_staged_shadow_state() or self.state.clear()), and remove ad-hoc returns that bypass cleanup; reference process_shadow_tx(), distribute_priority_fee(), and the validation logic that currently returns Err so all failure exits invoke the cleanup helper.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@crates/api-client/src/lib.rs`:
- Around line 630-644: The property test peer_base_url_format only covers IPv4;
update the test in peer_base_url_format (and/or add a new property) to also
generate and assert IPv6 SocketAddr behavior for peer_base_url: create IPv6
addrs (e.g., using SocketAddrV6::new or parsing "[::1]:{port}" strings), call
peer_base_url(addr) and assert the returned URL starts with "http://", ends with
"/v1", includes the port, and that the IPv6 host is bracketed (contains '[' and
']'). Ensure the new assertions reference peer_base_url and handle both IPv4 and
IPv6 generators.
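A sketch of the IPv6 expectation; this `peer_base_url` is a guess at the shape, not the crate's actual implementation, but it shows why `SocketAddr`'s `Display` gives bracketed IPv6 hosts for free:

```rust
use std::net::SocketAddr;

// Formatting the whole SocketAddr keeps the URL valid for both
// families: Display brackets IPv6 hosts automatically.
fn peer_base_url(addr: SocketAddr) -> String {
    format!("http://{addr}/v1")
}

fn main() {
    let v4: SocketAddr = "127.0.0.1:8080".parse().unwrap();
    assert_eq!(peer_base_url(v4), "http://127.0.0.1:8080/v1");

    let v6: SocketAddr = "[::1]:9090".parse().unwrap();
    let url = peer_base_url(v6);
    assert!(url.starts_with("http://["));
    assert!(url.contains("]:9090"));
    assert!(url.ends_with("/v1"));
}
```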
- Around line 715-723: The test extend_url_non_http_base_returns_ok_or_err
should assert concrete outcomes instead of accepting any Result; update the test
to call extend_url(base, &["tx"]) and assert specific expectations per case:
"not-a-url" must produce Err, "ftp://host/path" should produce Ok (with the path
segment appended), and "data:text/plain,hello" should produce Err. Locate the
test function extend_url_non_http_base_returns_ok_or_err and the helper under
test extend_url to implement these assert!(result.is_ok()) /
assert!(result.is_err()) checks (and optionally verify the Ok value contains the
appended "tx") so regressions are caught.
In `@crates/cli/src/main.rs`:
- Around line 332-334: The const assert uses
size_of::<reth_provider::providers::RocksDBProvider>() but size_of is not in
scope; replace the bare call with a fully qualified path such as
core::mem::size_of::<reth_provider::providers::RocksDBProvider>() (or
std::mem::size_of::<...>), keeping the same assertion message and surrounding
code; this ensures the const assertion compiles and still checks that
RocksDBProvider is zero-sized.
In `@crates/types/src/commitment_common.rs`:
- Around line 821-835: arb_commitment_tx_v2 currently builds
CommitmentTransactionV2.value from a u64 (U256::from(value_raw)), which misses
exercising upper 192-bit U256 paths; change the generator used in
arb_commitment_tx_v2 to produce full-width U256 values (e.g., generate 32- or
24-byte arrays / big-endian limbs and convert with U256::from_big_endian or
equivalent) and replace the value construction (the value: U256::from(value_raw)
site) so tests cover the full 256-bit range.
In `@crates/types/src/storage_pricing.rs`:
- Around line 2548-2669: The ln_fp18 helper never exercises the m < TOKEN_SCALE
branch (inputs < 1) in the property tests, so either add explicit tests for that
domain or assert it's invalid; update tests to cover and document the behavior
of ln_fp18 for 0 < x < 1 by adding a new unit/property test that calls ln_fp18
with a value less than TOKEN_SCALE (e.g., TOKEN_SCALE - 1 or fractional inputs
used elsewhere), and either assert ln_fp18 returns an Err for values <
TOKEN_SCALE or assert and convert the returned U256 representation used to
encode negative logs (whichever matches ln_fp18's intended contract), and add a
comment in the test file documenting the chosen contract for ln_fp18 and
TOKEN_SCALE so future tests exercise the m < TOKEN_SCALE branch.
- Around line 2713-2728: The test prop_decay_zero_rate_returns_base should
assert exact equality rather than a decimal tolerance: when zero_rate =
Amount::percentage(dec!(0.0)) the call to base_fee.apply_pledge_decay(count,
zero_rate) should return the identical Amount; replace the approximate diff
check with a direct equality assertion (e.g., assert_eq!(result, base_fee) or
assert_eq!(result.token_to_decimal().unwrap(),
base_fee.token_to_decimal().unwrap())) so the zero-rate invariant is enforced
exactly for the apply_pledge_decay path.
- Around line 2677-2696: The property test prop_decay_monotonically_decreasing
currently only asserts when count_a < count_b and skips all other cases; update
the test to assert in every branch: keep the existing check for count_a <
count_b (prop_assert!(value_a >= value_b)), add an else-if for count_a > count_b
asserting the reverse (prop_assert!(value_a <= value_b)), and an else case for
equality using prop_assert_eq!(value_a, value_b) so apply_pledge_decay
comparisons always run (refer to prop_decay_monotonically_decreasing,
apply_pledge_decay, base_fee, value_a/value_b).
---
Outside diff comments:
In `@crates/packing/src/lib.rs`:
- Around line 332-339: Remove the now-dead guard that checks and sets iterations
to 1 after converting and clamping; specifically in the block where iterations
is computed via (f64::from(iterations) * ratio).round().clamp(1.0,
f64::from(u32::MAX)) as u32, delete the subsequent if iterations == 0 {
iterations = 1; } branch because the clamp already guarantees a minimum of 1.0
so iterations can never be zero.
---
Duplicate comments:
In `@crates/api-server/src/routes/proxy.rs`:
- Around line 122-124: The test only includes lowercase hop-by-hop header cases
("proxy-authorization", "te", "trailers"); update the parameterized test
attributes where #[case("proxy-authorization", true)], #[case("te", true)], and
#[case("trailers", true)] appear to also include mixed-case variants (e.g.,
"Proxy-Authorization", "Proxy-authorization" or "PROXY-AUTHORIZATION"; "Te" or
"TE"; "Trailers" or "TRAILERS") so the case-insensitive matching is fully
validated—add corresponding #[case(..., true)] entries for each mixed-case form
next to the existing ones.
In `@crates/irys-reth/src/evm.rs`:
- Around line 737-742: The staged shadow state (self.state) can be left
populated on early Err returns, so update process_shadow_tx() to centralize
cleanup: ensure any error path (including failures after
distribute_priority_fee() and the validation branches that now return Err)
clears the staged state before returning Err (e.g., call a single cleanup helper
like self.clear_staged_shadow_state() or self.state.clear()), and remove ad-hoc
returns that bypass cleanup; reference process_shadow_tx(),
distribute_priority_fee(), and the validation logic that currently returns Err
so all failure exits invoke the cleanup helper.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 88d09605-266e-46cd-be4b-8ed52a467708
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (18)
`crates/api-client/src/lib.rs`, `crates/api-server/src/routes/index.rs`, `crates/api-server/src/routes/observability.rs`, `crates/api-server/src/routes/proxy.rs`, `crates/c/src/capacity_single.rs`, `crates/cli/src/main.rs`, `crates/database/Cargo.toml`, `crates/efficient-sampling/src/lib.rs`, `crates/irys-reth/src/evm.rs`, `crates/irys-reth/src/lib.rs`, `crates/irys-reth/src/shadow_tx.rs`, `crates/packing/src/lib.rs`, `crates/storage/src/recovered_mempool_state.rs`, `crates/types/src/commitment_common.rs`, `crates/types/src/storage_pricing.rs`, `crates/types/tests/versioned_compact_roundtrip_tests.rs`, `crates/utils/nextest-monitor/Cargo.toml`, `crates/vdf/src/vdf.rs`
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
crates/p2p/src/server.rs (1)
977-995: 🧹 Nitpick | 🔵 Trivial: Extract shared node-info fetch/error handling into one helper. `handle_info` and `handle_info_v2` now duplicate identical `match` + 500 logic. A small helper would reduce maintenance drift and keep V1/V2 behavior consistent.

Also applies to: 1004-1022
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/p2p/src/server.rs` around lines 977 - 995, The node-info fetching and error handling logic duplicated between handle_info and handle_info_v2 (the match on get_node_info that produces node_info or logs and returns an InternalServerError JSON Rejected with RejectionReason::InvalidData) should be moved into a single async helper function (e.g., fetch_node_info_or_reject) that encapsulates calling get_node_info(...).await and returns Result<NodeInfoType, HttpResponse> (or a custom enum) so callers can early-return the same HttpResponse on Err; update handle_info and handle_info_v2 to call this helper and use its Ok value as node_info, preserving the existing tracing::error message and the HttpResponse::InternalServerError().content_type(ContentType::json()).json(GossipResponse::<()>::Rejected(RejectionReason::InvalidData)) behavior.
♻️ Duplicate comments (1)
crates/types/src/config/mod.rs (1)
938-942: 🧹 Nitpick | 🔵 Trivial: Inconsistent error assertion pattern: some tests still use `format!("{err:?}")`. The fix for brittle Debug-string assertions was only partially applied. Lines 902 and 916 correctly use `err.to_string()`, but these locations still use `format!("{err:?}")`. For consistency and stability, switch to `err.to_string()` throughout.

♻️ Proposed fix to use consistent error assertions

```diff
 fn validate_rejects_insufficient_vdf_fork_steps() {
     let cfg = config_with_consensus(|c| {
         c.vdf.max_allowed_vdf_fork_steps = 0;
     });
     let err = cfg.validate().unwrap_err();
+    let err_str = err.to_string();
     assert!(
-        format!("{err:?}").contains("max_allowed_vdf_fork_steps"),
-        "expected error about vdf fork steps, got: {err:?}"
+        err_str.contains("max_allowed_vdf_fork_steps"),
+        "expected error about vdf fork steps, got: {err_str}"
     );
 }
```

```diff
 fn validate_rejects_bad_prune_capacity_percent(
     #[case] percent: f64,
     #[case] expected_msg: &str,
 ) {
     let cfg = config_with_node(|nc| {
         nc.cache.prune_at_capacity_percent = percent;
     });
     let err = cfg.validate().unwrap_err();
+    let err_str = err.to_string();
     assert!(
-        format!("{err:?}").contains(expected_msg),
-        "expected error containing {expected_msg:?}, got: {err:?}"
+        err_str.contains(expected_msg),
+        "expected error containing {expected_msg:?}, got: {err_str}"
     );
 }
```

```diff
 fn validate_rejects_misaligned_partition_chunks() {
     let cfg = config_with_consensus(|c| {
         c.num_chunks_in_partition = 11;
         c.num_chunks_in_recall_range = 2;
         c.vdf.max_allowed_vdf_fork_steps = 60_000;
     });
     let err = cfg.validate().unwrap_err();
+    let err_str = err.to_string();
     assert!(
-        format!("{err:?}").contains("num_chunks_in_partition must be a multiple"),
-        "expected alignment error, got: {err:?}"
+        err_str.contains("num_chunks_in_partition must be a multiple"),
+        "expected alignment error, got: {err_str}"
     );
 }
```

```diff
 fn validate_rejects_zero_mempool_capacities(
     #[case] mutate: fn(&mut NodeConfig),
     #[case] expected_msg: &str,
 ) {
     let cfg = config_with_node(mutate);
     let err = cfg.validate().unwrap_err();
+    let err_str = err.to_string();
     assert!(
-        format!("{err:?}").contains(expected_msg),
-        "expected error containing {expected_msg:?}, got: {err:?}"
+        err_str.contains(expected_msg),
+        "expected error containing {expected_msg:?}, got: {err_str}"
     );
 }
```

Also applies to: 957-961, 971-975, 1000-1004
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/types/src/config/mod.rs` around lines 938 - 942, Replace brittle Debug-format assertions that use format!("{err:?}") with stable string assertions using err.to_string(); specifically change the assert! that captures err from cfg.validate().unwrap_err() (variable err) so it calls err.to_string().contains("max_allowed_vdf_fork_steps") and uses the err string in the failure message (e.g., "... got: {}", err.to_string()). Repeat the same replacement for the other duplicate spots mentioned (the assertions around the other cfg.validate().unwrap_err() checks at the indicated ranges).
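The distinction matters because `Debug` output tracks an error type's internal structure, while `Display` (what `to_string()` uses) is the intended human-readable message. A minimal self-contained sketch of why the `Debug`-based assertion is brittle — the `ValidationError` type here is hypothetical, standing in for the real config error:

```rust
use std::fmt;

// Hypothetical stand-in for the real config validation error type.
#[derive(Debug)]
struct ValidationError {
    field: &'static str,
}

impl fmt::Display for ValidationError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{} must be greater than zero", self.field)
    }
}

fn main() {
    let err = ValidationError {
        field: "max_allowed_vdf_fork_steps",
    };
    // Display output is the stable, human-readable message tests should match.
    assert!(err.to_string().contains("max_allowed_vdf_fork_steps"));
    // Debug output leaks struct internals; renaming `field` or restructuring
    // the type would change it and silently break assertions written against it.
    assert!(format!("{err:?}").starts_with("ValidationError"));
}
```

Asserting on `to_string()` keeps the test coupled only to the message, not to the error type's shape.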
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@crates/p2p/src/server.rs`:
- Around line 989-994: The error branch after the get_node_info call is
returning RejectionReason::InvalidData even though the failure is internal;
replace that enum variant with the appropriate server-side failure variant
(e.g., RejectionReason::ServerError or RejectionReason::InternalError—use the
exact existing variant from the RejectionReason enum) in the
HttpResponse::InternalServerError().json(...) call, and make the same change in
the other identical branch around lines 1016-1021 that also handles
get_node_info errors; keep the tracing::error!(...) logging as-is.
In `@crates/types/src/commitment_common.rs`:
- Around line 873-892: The roundtrip test
prop_commitment_transaction_rlp_roundtrip currently asserts many fields but
doesn't check that decoding preserves the version tag or the chain_id; update
the test to pattern-match the decoded value and assert it is
CommitmentTransaction::V2 carrying the same CommitmentV2WithMetadata data, then
assert decoded.chain_id() == versioned.chain_id() in addition to the existing
field checks (use the same approach in the sibling roundtrip test around lines
895-912). Ensure you reference CommitmentTransaction::V2,
CommitmentV2WithMetadata, and the
chain_id()/id()/fee()/value()/signer()/anchor()/commitment_type() accessors when
adding these assertions.
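The suggested shape of that assertion can be sketched with a toy versioned enum — the types below are simplified stand-ins, not the real `CommitmentTransaction`, and the decode step is mocked as a clone for brevity:

```rust
// Simplified stand-ins for the real versioned transaction types.
#[derive(Clone, Debug, PartialEq)]
struct V2Data {
    chain_id: u64,
    fee: u64,
}

#[derive(Clone, Debug, PartialEq)]
enum Tx {
    V1(u64),
    V2(V2Data),
}

impl Tx {
    fn chain_id(&self) -> u64 {
        match self {
            Tx::V1(id) => *id,
            Tx::V2(data) => data.chain_id,
        }
    }
}

fn main() {
    let original = Tx::V2(V2Data { chain_id: 1270, fee: 42 });
    // In the real test this would be an RLP encode/decode roundtrip.
    let decoded = original.clone();

    // Pattern-match so the test fails loudly if decoding ever yields V1.
    match &decoded {
        Tx::V2(data) => assert_eq!(data.fee, 42),
        other => panic!("expected V2, got {other:?}"),
    }
    // Also check the accessor the production code relies on.
    assert_eq!(decoded.chain_id(), original.chain_id());
}
```

Matching on the variant, rather than only comparing individual fields, catches regressions where the version tag itself is lost in the roundtrip.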
---
Outside diff comments:
In `@crates/p2p/src/server.rs`:
- Around line 977-995: The node-info fetching and error handling logic
duplicated between handle_info and handle_info_v2 (the match on get_node_info
that produces node_info or logs and returns an InternalServerError JSON Rejected
with RejectionReason::InvalidData) should be moved into a single async helper
function (e.g., fetch_node_info_or_reject) that encapsulates calling
get_node_info(...).await and returns Result<NodeInfoType, HttpResponse> (or a
custom enum) so callers can early-return the same HttpResponse on Err; update
handle_info and handle_info_v2 to call this helper and use its Ok value as
node_info, preserving the existing tracing::error message and the
HttpResponse::InternalServerError().content_type(ContentType::json()).json(GossipResponse::<()>::Rejected(RejectionReason::InvalidData))
behavior.
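A rough shape for such a helper, with plain strings standing in for the real `NodeInfo` and `HttpResponse` types — names and signatures here are illustrative, not the crate's actual API:

```rust
// Illustrative stand-ins for the real NodeInfo and HttpResponse types.
type NodeInfo = String;
type Response = String;

// Stand-in for the fallible node-info lookup.
fn get_node_info(ok: bool) -> Result<NodeInfo, String> {
    if ok {
        Ok("node-info".to_string())
    } else {
        Err("backend unavailable".to_string())
    }
}

// Centralizes the fetch plus error-to-response mapping so both handlers
// early-return the same rejection response.
fn fetch_node_info_or_reject(ok: bool) -> Result<NodeInfo, Response> {
    get_node_info(ok).map_err(|e| {
        // The real helper would log via tracing::error! and build the
        // InternalServerError JSON body here.
        format!("500 internal error: {e}")
    })
}

fn handle_info(ok: bool) -> Response {
    let node_info = match fetch_node_info_or_reject(ok) {
        Ok(info) => info,
        Err(resp) => return resp,
    };
    format!("200 {node_info}")
}

fn main() {
    assert_eq!(handle_info(true), "200 node-info");
    assert!(handle_info(false).starts_with("500"));
}
```

Returning `Result<NodeInfo, Response>` lets `handle_info` and `handle_info_v2` share one error path instead of duplicating the logging and response construction.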
---
Duplicate comments:
In `@crates/types/src/config/mod.rs`:
- Around line 938-942: Replace brittle Debug-format assertions that use
format!("{err:?}") with stable string assertions using err.to_string();
specifically change the assert! that captures err from
cfg.validate().unwrap_err() (variable err) so it calls
err.to_string().contains("max_allowed_vdf_fork_steps") and uses the err string
in the failure message (e.g., "... got: {}", err.to_string()). Repeat the same
replacement for the other duplicate spots mentioned (the assertions around the
other cfg.validate().unwrap_err() checks at the indicated ranges).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 7c3101da-67df-4e0d-9487-3d01e6193f75
📒 Files selected for processing (11)
- `crates/api-client/src/lib.rs`
- `crates/domain/src/models/chain_sync_state.rs`
- `crates/domain/src/snapshots/commitment_snapshot.rs`
- `crates/p2p/src/server.rs`
- `crates/price-oracle/src/coinmarketcap.rs`
- `crates/price-oracle/src/lib.rs`
- `crates/types/src/commitment_common.rs`
- `crates/types/src/config/mod.rs`
- `crates/types/src/storage_pricing.rs`
- `crates/types/tests/versioned_compact_roundtrip_tests.rs`
- `crates/utils/nextest-monitor/src/bin/nextest-wrapper.rs`
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@crates/config/src/chain/chainspec.rs`:
- Around line 124-134: The test rejects_ingress_proof_count_over_255 can be
simplified: make it a plain test function (remove the eyre::Result<()> return)
and replace the manual Result checking with result.expect_err(...) to assert the
call to build_unsigned_irys_genesis_block(&config, B256::ZERO, 256) fails; then
convert the error to string and assert it contains the expected substring (e.g.,
"exceeds u8::MAX"). Locate rejects_ingress_proof_count_over_255 and
build_unsigned_irys_genesis_block/B256::ZERO in the diff and perform these small
refactors to simplify the negative-path test.
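The `expect_err` pattern the comment suggests looks roughly like this — the `build` function below is a hypothetical stand-in for `build_unsigned_irys_genesis_block`:

```rust
// Hypothetical stand-in for the real builder that caps a count at u8::MAX.
fn build(proof_count: u32) -> Result<u8, String> {
    u8::try_from(proof_count)
        .map_err(|_| format!("ingress proof count {proof_count} exceeds u8::MAX"))
}

fn main() {
    // Negative-path test: no eyre::Result return type, no manual match needed.
    let err = build(256).expect_err("expected proof count over 255 to be rejected");
    assert!(err.contains("exceeds u8::MAX"));

    // The happy path still succeeds for in-range counts.
    assert_eq!(build(255), Ok(255));
}
```

`expect_err` panics with the given message if the call unexpectedly succeeds, which makes the test's intent explicit without any `Result` plumbing.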
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 853f427b-868a-49fa-a398-ebc02fc1d852
📒 Files selected for processing (6)
- `crates/cli/src/main.rs`
- `crates/config/src/chain/chainspec.rs`
- `crates/irys-reth/src/lib.rs`
- `crates/p2p/src/server.rs`
- `crates/types/src/commitment_common.rs`
- `crates/types/src/ingress.rs`
Describe the changes
A clear and concise description of what the bug fix, feature, or improvement is.
Related Issue(s)
Please link to the issue(s) that will be closed with this PR.
Checklist
Additional Context
Add any other context about the pull request here.
Summary by CodeRabbit
New Features
Bug Fixes
Tests
Chores