refactor(spv): share ChainState between client and header sync #142
Conversation
- Eliminate duplicate ChainState initialization by sharing a single `Arc<RwLock<ChainState>>` across the client and header sync.
- `HeaderSyncManagerWithReorg` now holds the shared state and no longer constructs its own ChainState.
- `SequentialSyncManager::new` signature updated to accept the shared ChainState and plumb it through.
- The client passes its ChainState to the sync manager and removes the client-side header copy path.
- Reads/writes updated to use RwLock guards; added lightweight cached checkpoint flags in header sync.

This removes the duplicate "Initialized ChainState" logs and unifies state as a single source of truth.
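The shared-ownership pattern described above can be sketched in miniature. This is a hypothetical illustration, not the real API: `ChainState`, `HeaderSync`, and `make_client` are simplified stand-ins, and `std::sync::RwLock` is used here so the sketch is self-contained (the actual code uses tokio's async `RwLock`).

```rust
use std::sync::{Arc, RwLock};

// Hypothetical minimal stand-in for the real ChainState type.
#[derive(Default)]
struct ChainState {
    tip_height: u32,
}

struct HeaderSync {
    // Holds a handle to the shared state instead of constructing its own copy.
    state: Arc<RwLock<ChainState>>,
}

struct Client {
    state: Arc<RwLock<ChainState>>,
    header_sync: HeaderSync,
}

fn make_client() -> Client {
    // One ChainState; Arc::clone copies the handle, not the data.
    let state = Arc::new(RwLock::new(ChainState::default()));
    Client {
        header_sync: HeaderSync {
            state: Arc::clone(&state),
        },
        state,
    }
}

fn main() {
    let client = make_client();
    // A write through the sync manager's handle is visible to the client:
    client.header_sync.state.write().unwrap().tip_height = 42;
    assert_eq!(client.state.read().unwrap().tip_height, 42);
    println!("shared tip height: {}", client.state.read().unwrap().tip_height);
}
```

Because both components hold `Arc` handles to the same lock, there is a single source of truth and the duplicate-initialization path disappears by construction.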
Walkthrough

Updates integrate a shared ChainState (`Arc<RwLock<...>>`) across the client and sync managers, replace `set_chain_state` with `update_chain_state_cache`, adjust constructor signatures, and shift tip/header access to async lock-based reads. The client removes eager header loading into in-memory state and uses cache updates for checkpoint/genesis initialization.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor App as Client
    participant SSM as SequentialSyncManager
    participant HSM as HeaderSyncManagerWithReorg
    participant CS as ChainState (Arc<RwLock>)
    participant ST as Storage
    App->>SSM: new(config, received_filter_heights, wallet, state)
    SSM->>HSM: new(config, reorg_config, state)
    Note over HSM,CS: HSM holds Arc<RwLock<ChainState>> and cached flags
    alt Init from checkpoint/genesis
        App->>SSM: update_chain_state_cache(from_checkpoint, base_height, headers_len)
        SSM->>HSM: update_cached_from_state_snapshot(...)
    else Normal sync
        App--X CS: (removed) no eager header loading
    end
    loop Headers sync
        SSM->>HSM: request getheaders/validate/update
        HSM->>CS: read()/write() for tip/headers
        HSM-->>SSM: progress (total_headers_synced)
    end
    SSM->>ST: get_blockchain_height_from_storage()
    ST-->>SSM: stored heights
    SSM-->>App: height adjusted by sync_base_height (if checkpoint)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
dash-spv/src/sync/sequential/mod.rs (3)
**335-346: Fix potential underflow when computing filter download range**

If `filter_header_tip < start_height` (e.g., checkpoint sync with no CFHeaders yet), `filter_header_tip - start_height + 1` underflows. Guard and early-exit or clamp before computing `count`.

```diff
-        let start_height = self.header_sync.get_sync_base_height().max(1);
-        let count = filter_header_tip - start_height + 1;
+        let start_height = self.header_sync.get_sync_base_height().max(1);
+        if filter_header_tip < start_height {
+            tracing::warn!(
+                "Filter header tip {} is behind start height {}; skipping filter download",
+                filter_header_tip,
+                start_height
+            );
+            self.transition_to_next_phase(
+                storage,
+                network,
+                "Filter headers behind start height",
+            )
+            .await?;
+            return Ok(());
+        }
+        let count = filter_header_tip - start_height + 1;
```
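The underflow the review warns about can be reproduced with plain `u32` arithmetic. This standalone sketch (hypothetical values and a simplified `Option` return, not the real function) shows why the guard is needed:

```rust
fn filter_download_count(filter_header_tip: u32, start_height: u32) -> Option<u32> {
    // Guard: if the tip is behind the start, there is nothing to download.
    // Without it, `filter_header_tip - start_height + 1` panics in debug
    // builds and silently wraps around in release builds, yielding a huge
    // bogus count near u32::MAX.
    if filter_header_tip < start_height {
        return None;
    }
    Some(filter_header_tip - start_height + 1)
}

fn main() {
    // Normal case: heights 1..=1000 inclusive is 1000 filters.
    assert_eq!(filter_download_count(1000, 1), Some(1000));
    // Checkpoint case with no CFHeaders yet: would underflow unguarded.
    assert_eq!(filter_download_count(5, 10), None);
}
```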
**1114-1140: Avoid cloning potentially large header batches**

You only need the length/emptiness after passing headers to the manager. Precompute them and move `headers` to avoid an unnecessary clone on the hot path.

```diff
-        let continue_sync =
-            self.header_sync.handle_headers_message(headers.clone(), storage, network).await?;
+        let headers_len = headers.len() as u32;
+        let is_empty = headers_len == 0;
+        let continue_sync =
+            self.header_sync.handle_headers_message(headers, storage, network).await?;
@@
-        *headers_downloaded += headers.len() as u32;
+        *headers_downloaded += headers_len;
@@
-        if headers.is_empty() {
+        if is_empty {
```

Also applies to: 1142-1151
**1279-1302: Avoid cloning CFHeaders; use precomputed length**

`cfheaders` can be large; pass it by move and precompute the batch length for progress accounting.

```diff
-        let continue_sync =
-            self.filter_sync.handle_cfheaders_message(cfheaders.clone(), storage, network).await?;
+        let batch_len = cfheaders.filter_hashes.len() as u32;
+        let continue_sync =
+            self.filter_sync.handle_cfheaders_message(cfheaders, storage, network).await?;
@@
-        *cfheaders_downloaded += cfheaders.filter_hashes.len() as u32;
+        *cfheaders_downloaded += batch_len;
```
🧹 Nitpick comments (7)
dash-spv/src/sync/sequential/mod.rs (4)
**1071-1073: Don't swallow storage errors; propagate instead**

Using `unwrap_or(0)` masks real storage failures. Return the error and let callers decide recovery.

```diff
-        let blockchain_height = self.get_blockchain_height_from_storage(storage).await.unwrap_or(0);
+        let blockchain_height = self.get_blockchain_height_from_storage(storage).await?;
```
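The difference between swallowing and propagating can be shown with a toy synchronous example (the `StorageError` type and heights here are hypothetical; the real code is async and uses its own error types):

```rust
#[derive(Debug)]
struct StorageError(String);

fn read_height_from_storage(fail: bool) -> Result<u32, StorageError> {
    if fail {
        Err(StorageError("disk unavailable".into()))
    } else {
        Ok(812_345)
    }
}

// Masks failures: a broken store is indistinguishable from height 0.
fn height_swallowed(fail: bool) -> u32 {
    read_height_from_storage(fail).unwrap_or(0)
}

// Propagates: the caller sees the real failure and can decide how to recover.
fn height_propagated(fail: bool) -> Result<u32, StorageError> {
    let height = read_height_from_storage(fail)?;
    Ok(height)
}

fn main() {
    // With `unwrap_or(0)`, a storage outage silently looks like an empty chain:
    assert_eq!(height_swallowed(true), 0);
    // With `?`, the error surfaces and normal reads still work:
    assert!(height_propagated(true).is_err());
    assert_eq!(height_propagated(false).unwrap(), 812_345);
}
```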
**1117-1119: Same here: propagate storage errors**

Follow the same pattern to keep error handling consistent.

```diff
-        let blockchain_height = self.get_blockchain_height_from_storage(storage).await.unwrap_or(0);
+        let blockchain_height = self.get_blockchain_height_from_storage(storage).await?;
```
**1988-1992: Saturating add for height conversion**

Use `saturating_add` when converting masternode storage heights to absolute heights to avoid theoretical overflow.

```diff
-        let mn_blockchain_height = if is_ckpt && sync_base > 0 {
-            sync_base + mn_state.last_height
-        } else {
-            mn_state.last_height
-        };
+        let mn_blockchain_height = if is_ckpt && sync_base > 0 {
+            sync_base.saturating_add(mn_state.last_height)
+        } else {
+            mn_state.last_height
+        };
```
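For reference, `saturating_add` clamps at the numeric maximum instead of panicking (debug builds) or wrapping (release builds), while leaving in-range values untouched (the heights below are made-up demonstration values):

```rust
fn main() {
    let sync_base: u32 = u32::MAX - 10;
    let last_height: u32 = 100;

    // Pathological case: plain `+` would panic in debug builds and wrap in
    // release builds; saturating_add clamps at u32::MAX instead.
    assert_eq!(sync_base.saturating_add(last_height), u32::MAX);

    // Normal-range values are unaffected:
    assert_eq!(900_000u32.saturating_add(1_234), 901_234);
}
```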
**2089-2095: Saturating add for blockchain height computation**

Small safety win; avoids overflow in pathological cases.

```diff
-        Ok(self.header_sync.get_sync_base_height() + storage_height)
+        Ok(self
+            .header_sync
+            .get_sync_base_height()
+            .saturating_add(storage_height))
```

dash-spv/src/client/mod.rs (1)
**2198-2204: Checkpoint cache update: ensure off‑by‑one semantics are correct downstream**

`headers_len` here represents the number of headers in memory (including the checkpoint header). The absolute tip height should be `sync_base_height + headers_len - 1`. Ensure the callee applies this (see suggested fix in headers_with_reorg.rs).

dash-spv/src/sync/headers_with_reorg.rs (2)
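The reviewer's claimed semantics can be illustrated directly. Note this encodes the *reviewer's assumption* that `headers_len` counts the in-memory headers including the checkpoint header itself; the maintainer disputes the issue later in this thread, so treat `absolute_tip_height` as a hypothetical helper, not the project's actual function:

```rust
// Under the assumption that the header at `sync_base_height` (the checkpoint)
// is itself one of the `headers_len` in-memory headers, the absolute tip
// height is base + headers_len - 1, not base + headers_len.
fn absolute_tip_height(sync_base_height: u32, headers_len: u32) -> u32 {
    sync_base_height.saturating_add(headers_len.saturating_sub(1))
}

fn main() {
    // Checkpoint at height 900_000 with only the checkpoint header in memory:
    // the tip *is* the checkpoint, so the height must stay 900_000.
    assert_eq!(absolute_tip_height(900_000, 1), 900_000);
    // Ten headers connected after the checkpoint:
    assert_eq!(absolute_tip_height(900_000, 11), 900_010);
    // Degenerate empty case saturates to the base instead of underflowing:
    assert_eq!(absolute_tip_height(900_000, 0), 900_000);
}
```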
**662-671: Use saturating add for effective height**

Minor safety improvement when converting the storage index to an absolute height during prepare.

```diff
-        let actual_height = self.get_sync_base_height() + stored_headers;
+        let actual_height = self.get_sync_base_height().saturating_add(stored_headers);
```
**119-124: State snapshot load looks correct; consider atomicity**

You update the caches and then replace the in-memory `ChainState`. That's fine, but if a consumer reads between the two, the caches and the state could be momentarily inconsistent. Wrap both in one scope (already done) and consider documenting the ordering.
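One way to make the cache/state update atomic with respect to readers of the shared state is to refresh the cache while still holding the write guard, so the replacement and the cache update are published together. This is a hypothetical sketch (simplified types, `std::sync::RwLock` instead of tokio's, and `demo` is illustrative only); note the `&mut self` receiver already excludes concurrent readers of the cache field itself:

```rust
use std::sync::{Arc, RwLock};

#[derive(Default)]
struct ChainState {
    headers: Vec<u64>, // stand-in for real header entries
}

struct HeaderSync {
    chain_state: Arc<RwLock<ChainState>>,
    cached_len: u32,
}

impl HeaderSync {
    // Refreshing the cache and swapping the state inside one write-lock scope
    // means no reader of `chain_state` can observe the old state paired with
    // the new cache: readers block until the guard drops at the end.
    fn load_snapshot(&mut self, snapshot: ChainState) {
        let mut guard = self.chain_state.write().unwrap();
        self.cached_len = snapshot.headers.len() as u32;
        *guard = snapshot;
        // guard drops here; cache and state are published together
    }
}

fn demo() -> HeaderSync {
    let mut hs = HeaderSync {
        chain_state: Arc::new(RwLock::new(ChainState::default())),
        cached_len: 0,
    };
    hs.load_snapshot(ChainState {
        headers: vec![1, 2, 3],
    });
    hs
}

fn main() {
    let hs = demo();
    assert_eq!(hs.cached_len, 3);
    assert_eq!(hs.chain_state.read().unwrap().headers.len(), 3);
}
```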
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- dash-spv/src/client/mod.rs (3 hunks)
- dash-spv/src/sync/headers_with_reorg.rs (31 hunks)
- dash-spv/src/sync/sequential/mod.rs (7 hunks)
🧰 Additional context used
📓 Path-based instructions (3)
dash-spv/**/*.rs
📄 CodeRabbit inference engine (dash-spv/CLAUDE.md)
dash-spv/**/*.rs: Enforce Rust formatting via `cargo fmt --check` on all Rust source files
All code must be clippy-clean: run `cargo clippy --all-targets --all-features -- -D warnings`
Files:
dash-spv/src/sync/sequential/mod.rs, dash-spv/src/client/mod.rs, dash-spv/src/sync/headers_with_reorg.rs
**/*.rs
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.rs: Use proper error types with thiserror and propagate errors appropriately
Use the tokio runtime for async operations in Rust
Use conditional compilation feature flags for optional features (#[cfg(feature = ...)])
Format Rust code with cargo fmt (and enforce via cargo fmt --check)
Run clippy with -D warnings and fix all lints
Adhere to MSRV Rust 1.89 (avoid features requiring newer compiler)
**/*.rs: Format Rust code with rustfmt (per rustfmt.toml); run cargo fmt --all before commits
Lint with clippy; treat warnings as errors in CI
Follow Rust naming: snake_case for functions/variables, UpperCamelCase for types/traits, SCREAMING_SNAKE_CASE for consts
Prefer async using tokio where applicable
Files:
dash-spv/src/sync/sequential/mod.rs, dash-spv/src/client/mod.rs, dash-spv/src/sync/headers_with_reorg.rs
**/src/**/*.rs
📄 CodeRabbit inference engine (AGENTS.md)
**/src/**/*.rs: Each crate keeps sources in src/
Avoid unwrap()/expect() in library code; use proper error types (e.g., thiserror)
Place unit tests alongside code with #[cfg(test)]
Files:
dash-spv/src/sync/sequential/mod.rs, dash-spv/src/client/mod.rs, dash-spv/src/sync/headers_with_reorg.rs
🧠 Learnings (4)
📚 Learning: 2025-06-26T16:01:37.609Z
Learnt from: DCG-Claude
PR: dashpay/rust-dashcore#0
File: :0-0
Timestamp: 2025-06-26T16:01:37.609Z
Learning: The mempool tracking infrastructure (UnconfirmedTransaction, MempoolState, configuration, and mempool_filter.rs) is fully implemented and integrated in the Dash SPV client as of this PR, including client logic, FFI APIs, and tests.
Applied to files:
dash-spv/src/client/mod.rs
📚 Learning: 2025-08-29T04:01:18.111Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: key-wallet/CLAUDE.md:0-0
Timestamp: 2025-08-29T04:01:18.111Z
Learning: Applies to key-wallet/src/{bip32.rs,dip9.rs} : Keep Dash-specific derivation paths up to date (e.g., DIP-9) in bip32.rs and dip9.rs
Applied to files:
dash-spv/src/client/mod.rs
📚 Learning: 2025-08-25T17:15:59.361Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: dash-spv/CLAUDE.md:0-0
Timestamp: 2025-08-25T17:15:59.361Z
Learning: Applies to dash-spv/**/*.rs : All code must be clippy-clean: run `cargo clippy --all-targets --all-features -- -D warnings`
Applied to files:
dash-spv/src/client/mod.rs
📚 Learning: 2025-08-16T04:15:57.225Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: swift-dash-core-sdk/Examples/DashHDWalletExample/CLAUDE.md:0-0
Timestamp: 2025-08-16T04:15:57.225Z
Learning: Applies to swift-dash-core-sdk/Examples/DashHDWalletExample/**/*.swift : When accessing SPV functionality, use DashSDK public APIs instead of direct client access; add public wrapper methods to DashSDK if needed
Applied to files:
dash-spv/src/client/mod.rs
🧬 Code graph analysis (3)

dash-spv/src/sync/sequential/mod.rs (3)
- dash-spv/src/client/mod.rs (2): chain_state (2345-2348), new (174-241)
- dash-spv/src/client/status_display.rs (2): chain_state (120-123), new (23-37)
- dash-spv/src/sync/headers_with_reorg.rs (1): new (73-107)

dash-spv/src/client/mod.rs (3)
- dash-spv/src/sync/headers_with_reorg.rs (1): new (73-107)
- dash-spv/src/sync/sequential/mod.rs (1): new (89-118)
- dash-spv/src/storage/disk.rs (1): new (210-324)

dash-spv/src/sync/headers_with_reorg.rs (3)
- dash-spv/src/client/mod.rs (4): new (174-241), chain_state (2345-2348), network (408-410), network (1275-1278)
- dash-spv/src/sync/sequential/mod.rs (1): new (89-118)
- dash-spv/src/types.rs (4): new (184-186), new (897-917), tip_height (229-238), tip_hash (241-243)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (20)
- GitHub Check: fuzz (hashes_sha1)
- GitHub Check: fuzz (hashes_cbor)
- GitHub Check: fuzz (hashes_sha512)
- GitHub Check: fuzz (hashes_sha512_256)
- GitHub Check: fuzz (dash_outpoint_string)
- GitHub Check: fuzz (hashes_sha256)
- GitHub Check: fuzz (hashes_json)
- GitHub Check: fuzz (hashes_ripemd160)
- GitHub Check: fuzz (dash_deserialize_witness)
- GitHub Check: fuzz (dash_deserialize_script)
- GitHub Check: fuzz (dash_deserialize_block)
- GitHub Check: fuzz (dash_deserialize_address)
- GitHub Check: fuzz (dash_deser_net_msg)
- GitHub Check: fuzz (dash_deserialize_amount)
- GitHub Check: fuzz (dash_script_bytes_to_asm_fmt)
- GitHub Check: RPC Tests (stable, true)
- GitHub Check: Strict Warnings and Clippy Checks
- GitHub Check: Key Wallet Components Tests
- GitHub Check: SPV Components Tests
- GitHub Check: Core Components Tests
🔇 Additional comments (8)
dash-spv/src/sync/sequential/mod.rs (1)

**120-170: Constructor wiring with shared ChainState looks good**

Passing the shared `Arc<RwLock<ChainState>>` through to `HeaderSyncManagerWithReorg` aligns with the PR objective and unifies state ownership.

dash-spv/src/client/mod.rs (5)
**193-199: Constructor signature update wired correctly**

The client now passes the shared `state` into `SequentialSyncManager::new(...)`. This matches the new API and the shared-state design.
**2143-2187: Persisting checkpoint-initialized ChainState looks correct**

Storing `sync_base_height` and not persisting the checkpoint header itself matches the intended "skip" semantics.
**331-362: Header preload error path: checkpoint vs normal**

Good differentiation: the error is fatal in checkpoint mode but only warns and continues in normal mode.
**860-869: Absolute height calculations consistent with storage index**

Using `abs = state.sync_base_height + storage_tip` for UI/events correctly converts 0-based storage indices to blockchain heights.

Also applies to: 930-955
**1-1: Run fmt and clippy before merge**

The sandbox couldn't run the verification script (error: "Could not locate working directory.: no /proc/self/exe available"). Run the following locally and paste the results:

```bash
#!/bin/bash
set -euo pipefail
# From repo root:
cargo fmt --all -- --check
cargo clippy --workspace --all-targets --all-features -- -D warnings
# Spot-check for remaining risky arithmetic in sequential sync:
rg -nP 'filter_header_tip\s*-\s*start_height' -C2 --type=rust
```

Ensure cargo fmt and cargo clippy exit successfully and rg returns no matches.
dash-spv/src/sync/headers_with_reorg.rs (2)

**320-343: Lock usage is scoped and minimal; good**

Short read/write lock intervals around `chain_state` during tip checks and header insertion are appropriate for async contexts.

Also applies to: 364-372, 380-386
**952-956: Checkpoint gating via absolute height uses corrected metric**

`is_past_checkpoints(self.total_headers_synced)` will be correct once the off-by-one fix lands.

Would you like me to add a unit test asserting correct height after checkpoint init (e.g., base H with 1 header => reported height H)?
```diff
 /// Update cached flags and totals based on an external state snapshot
 pub fn update_cached_from_state_snapshot(
     &mut self,
     synced_from_checkpoint: bool,
     sync_base_height: u32,
     headers_len: u32,
 ) {
     self.cached_synced_from_checkpoint = synced_from_checkpoint;
     self.cached_sync_base_height = sync_base_height;
     if synced_from_checkpoint && sync_base_height > 0 {
         self.total_headers_synced = sync_base_height + headers_len;
     } else {
         // For normal sync, it's just the number of headers
-        self.total_headers_synced = chain_state.headers.len() as u32;
+        self.total_headers_synced = headers_len;
     }
-
-    self.chain_state = chain_state;
 }
```
Fix off‑by‑one in cached absolute height computation
For both checkpoint and non-checkpoint modes, absolute tip height equals base + headers_len - 1. Current code uses + headers_len, which overstates height by 1 (e.g., checkpoint with one header set yields base+1).
```diff
-        self.cached_synced_from_checkpoint = synced_from_checkpoint;
-        self.cached_sync_base_height = sync_base_height;
-        if synced_from_checkpoint && sync_base_height > 0 {
-            self.total_headers_synced = sync_base_height + headers_len;
-        } else {
-            self.total_headers_synced = headers_len;
-        }
+        self.cached_synced_from_checkpoint = synced_from_checkpoint;
+        self.cached_sync_base_height = sync_base_height;
+        let contributed = headers_len.saturating_sub(1);
+        self.total_headers_synced = if synced_from_checkpoint && sync_base_height > 0 {
+            sync_base_height.saturating_add(contributed)
+        } else {
+            contributed
+        };
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
/// Update cached flags and totals based on an external state snapshot
pub fn update_cached_from_state_snapshot(
    &mut self,
    synced_from_checkpoint: bool,
    sync_base_height: u32,
    headers_len: u32,
) {
    self.cached_synced_from_checkpoint = synced_from_checkpoint;
    self.cached_sync_base_height = sync_base_height;
    let contributed = headers_len.saturating_sub(1);
    self.total_headers_synced = if synced_from_checkpoint && sync_base_height > 0 {
        sync_base_height.saturating_add(contributed)
    } else {
        contributed
    };
}
```
🤖 Prompt for AI Agents
In dash-spv/src/sync/headers_with_reorg.rs around lines 1116 to 1130, the code
sets total_headers_synced to base + headers_len (or just headers_len), which is
off-by-one; change the computation so the absolute tip height is base +
headers_len - 1 in checkpoint mode and headers_len - 1 in non-checkpoint mode,
and guard against underflow when headers_len == 0 (return base when no new
headers in checkpoint mode, or 0 when non-checkpoint with zero headers). Ensure
the logic uses conditional checks on headers_len to avoid subtracting 1 from
zero.
I don't see an issue here.
@QuantumExplorer Thank you for the feedback! I appreciate you taking a closer look at this.
I was concerned about the height calculation because when headers_len represents the count of headers (including the checkpoint header), the absolute tip height would typically be base + count - 1. But you're right that I may have misunderstood the intended semantics here.
Could you help clarify what total_headers_synced is meant to represent in this context? Is it tracking the absolute blockchain height of the tip, or something else? Understanding the intended behavior would help me provide better analysis in the future.