feat: spv in det #93
Conversation
Important: Review skipped. Auto reviews are disabled on base/target branches other than the default branch. Please check the settings in the CodeRabbit UI.

Walkthrough

This PR introduces extensive SPV, FFI, and documentation updates: new Bloom and Chain subsystems, a Sequential Sync architecture, expanded FFI (C/Swift) surfaces with async progress and mempool capabilities, platform integration bridges, builder-based client construction, tests/examples, data embedding for terminal blocks, and workspace/config updates.
Sequence Diagram(s)

sequenceDiagram
participant App
participant FFI as dash-spv-ffi
participant Client as DashSpvClient
participant BP as BlockProcessor
participant EVT as EventChannel
App->>FFI: client_start / set_event_callbacks
FFI->>Client: start()
BP-->>EVT: SpvEvent::{BlockProcessed, TransactionDetected, BalanceUpdate}
Client-->>FFI: emit events via event listener
FFI-->>App: on_block / on_transaction / on_balance_update (32-byte IDs)
sequenceDiagram
participant Peer
participant MH as MessageHandler
participant SSM as SequentialSyncManager
participant Wallet
participant Mempool
Peer->>MH: Headers2 / MnListDiff / CFHeaders / Block / Tx / Inv
MH->>SSM: handle_message(...)
alt Tx and mempool enabled
MH->>Mempool: filter + update state
MH-->>Wallet: notify if relevant
end
MH-->>Peer: follow-up requests (GetData, etc.)
sequenceDiagram
participant SPV as dash-spv (dashcore LLMQ)
participant MSM as MasternodeSyncManager
participant Net as Network
participant State as DKGFetchState
MSM->>SPV: compute DKG windows (LLMQType/NetworkLLMQExt)
MSM->>State: init pending/active windows
loop Adaptive fetch
MSM->>Net: request diffs for active windows
Net-->>MSM: masternode diffs
MSM->>State: advance/complete/exhaust windows
end
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120+ minutes
@coderabbitai review

✅ Actions performed: Review triggered.
Actionable comments posted: 52
🔭 Outside diff range comments (7)
dash-spv/src/client/config_test.rs (3)
493-514: Test references non-existent checkpoint heights. The test expects checkpoints at heights 4991 and 107996, but these don't exist in mainnet_checkpoints(). The actual checkpoints are at heights 0, 750000, 1700000, 1900000, and 2300000.

 assert_eq!(
-    manager.last_checkpoint_before_height(5000).expect("Should find checkpoint").height,
-    4991
+    manager.last_checkpoint_before_height(800000).expect("Should find checkpoint").height,
+    750000
 );
 assert_eq!(
-    manager.last_checkpoint_before_height(200000).expect("Should find checkpoint").height,
-    107996
+    manager.last_checkpoint_before_height(1750000).expect("Should find checkpoint").height,
+    1700000
 );
554-557: Test references non-existent checkpoint height. The test expects a checkpoint at height 4991 when overriding with height 5000, but this checkpoint doesn't exist.

-manager.set_sync_override(Some(5000));
+manager.set_sync_override(Some(800000));
 let sync_checkpoint = manager.get_sync_checkpoint(None);
-assert_eq!(sync_checkpoint.expect("Should have sync checkpoint").height, 4991);
+assert_eq!(sync_checkpoint.expect("Should have sync checkpoint").height, 750000);
589-592: Incorrect expected height for last masternode list checkpoint. The test expects height 1900000, but the last checkpoint with a masternode list is actually at height 2300000.

 let ml_checkpoint = manager.last_checkpoint_having_masternode_list();
 assert!(ml_checkpoint.is_some());
 assert!(ml_checkpoint.expect("Should have ML checkpoint").has_masternode_list());
-assert_eq!(ml_checkpoint.expect("Should have ML checkpoint").height, 1900000);
+assert_eq!(ml_checkpoint.expect("Should have ML checkpoint").height, 2300000);

dash-spv-ffi/tests/unit/test_type_conversions.rs (1)
141-154: Add assertion for new field: filter_sync_available. You initialize SyncProgress.filter_sync_available but do not assert it after FFI conversion. Add an assertion on FFISyncProgress to guarantee the new field is wired end-to-end.

 let ffi_progress = FFISyncProgress::from(progress);
+assert_eq!(ffi_progress.filter_sync_available, 1); // or true, depending on FFI type

dash-spv-ffi/tests/unit/test_error_handling.rs (1)
183-187: Avoid declaring extern functions inside test functions. Declaring extern "C" functions inside a test function is unusual and could lead to linking issues. Consider using the existing FFI functions or moving this to a test-specific module if you need to test raw FFI behavior.

Since dash_spv_ffi_config_new is already available through the FFI module, you can test invalid enum handling differently:

-let config = {
-    extern "C" {
-        fn dash_spv_ffi_config_new(network: i32) -> *mut FFIClientConfig;
-    }
-    dash_spv_ffi_config_new(999)
-};
+// Test with the largest valid network value
+// The implementation should handle invalid values gracefully
+let config = dash_spv_ffi_config_new(FFINetwork::Testnet);

If you specifically need to test invalid enum values, consider adding a dedicated test FFI function rather than re-declaring existing ones.
dash-spv-ffi/tests/unit/test_async_operations.rs (1)
233-344: Reentrancy test has potential race conditions. The reentrancy test is complex and makes assumptions about timing. The deadlock detection using a 1-second timeout is fragile and could give false positives on slow systems.
Improve the deadlock detection mechanism:
-// If this takes too long, it might indicate a deadlock
-if elapsed > Duration::from_secs(1) {
-    data.deadlock_detected.store(true, Ordering::SeqCst);
-}
+// Use a more robust deadlock detection
+let timeout_duration = Duration::from_secs(5); // More generous timeout
+match tokio::time::timeout(timeout_duration, async {
+    dash_spv_ffi_client_test_sync(data.client)
+}).await {
+    Ok(result) => {
+        if result != 0 {
+            println!("Reentrant call failed with error: {}", result);
+        }
+    },
+    Err(_) => {
+        data.deadlock_detected.store(true, Ordering::SeqCst);
+        println!("Deadlock detected - operation timed out");
+    }
+}

dash-spv-ffi/src/types.rs (1)
332-339: Potential type mismatch in generic array destruction. The function reconstructs the array as Vec<u8> regardless of the original element type. This could cause issues if the array was created with elements of different sizes (e.g., Vec<u32>). Consider either:
- Making this function type-specific, or
- Documenting that FFIArray should only be used with byte arrays, or
- Storing element size information in FFIArray
 pub unsafe extern "C" fn dash_spv_ffi_array_destroy(arr: *mut FFIArray) {
     if !arr.is_null() {
         let arr = Box::from_raw(arr);
         if !arr.data.is_null() && arr.capacity > 0 {
-            Vec::from_raw_parts(arr.data as *mut u8, arr.len, arr.capacity);
+            // Note: This assumes the array contains u8 elements
+            // For other types, the capacity would need to be adjusted
+            Vec::from_raw_parts(arr.data as *mut u8, arr.len, arr.capacity);
         }
     }
 }
🧹 Nitpick comments (49)
.gitignore (2)
29-32: Broaden ignore patterns for versioned/shared artifacts and Windows debug/import libs. Good addition. To avoid accidentally tracking common variants produced by FFI builds (versioned .so/.dylib, MSVC artifacts), extend the patterns:

 *.a
-*.so
-*.dylib
-*.dll
+*.so
+*.so.*
+*.dylib
+*.dylib.*
+*.dll
+*.lib
+*.pdb
+*.exp
+# macOS debug symbol bundles
+*.dSYM/

If you intend to keep any prebuilt binaries (e.g., vendored SDKs), add explicit exceptions nearby, for example:
!vendor/**/prebuilt/**/*
9-9: Fix macOS metadata filename casing. The typical filename is .DS_Store (case-sensitive on many systems). Adjust to ensure it's ignored reliably.

-.DS_STORE
+.DS_Store

dash-spv/examples/test_initial_sync.rs (5)
12-13: Avoid potential panic when initializing tracing subscriber twice. Using init() will panic if a global subscriber was already set (common when examples are composed). Prefer try_init().

-    tracing_subscriber::fmt().with_max_level(tracing::Level::DEBUG).init();
+    let _ = tracing_subscriber::fmt()
+        .with_max_level(tracing::Level::DEBUG)
+        .try_init();
26-27: Prefer tracing macros over println! for consistency with configured logging. You configure tracing, but print to stdout. Use tracing::info!/warn!/error! for consistent formatting and filtering.

-    println!("🚀 Starting Dash SPV client for initial sync test...");
+    tracing::info!("🚀 Starting Dash SPV client for initial sync test...");
42-47: Make the second "headers2" observation active and visible. Replace the second fixed sleep with a short observation loop, now that headers2 is enabled, to surface ongoing progress.

-    // Wait a bit more to see if headers2 kicks in after initial sync
-    println!("\n⏳ Waiting to see if headers2 is used after initial sync...");
-    tokio::time::sleep(Duration::from_secs(10)).await;
-
-    let final_progress = client.sync_progress().await?;
+    // Observe post-initial sync progress (e.g., headers2) for ~10s
+    for _ in 0..10 {
+        let p = client.sync_progress().await?;
+        tracing::info!(
+            header_height = p.header_height,
+            headers_synced = p.headers_synced,
+            peer_count = p.peer_count,
+            "⏳ Observing progress post-initial sync..."
+        );
+        tokio::time::sleep(Duration::from_secs(1)).await;
+    }
+    let final_progress = client.sync_progress().await?;
49-50: Log cleanup failure instead of silently ignoring it. If removal fails, emit a warning with context.

-    let _ = std::fs::remove_dir_all(data_dir);
+    if let Err(e) = std::fs::remove_dir_all(&data_dir) {
+        tracing::warn!(error = %e, path = %data_dir.display(), "Failed to remove temporary data dir");
+    }
52-61: Use tracing for final output and broaden success criteria
- Use tracing macros for consistency.
- Consider headers_synced boolean in addition to header_height > 0 for success.
-    println!("\n📊 Final sync progress:");
-    println!("   - Headers synced: {}", final_progress.header_height);
+    tracing::info!("📊 Final sync progress:");
+    tracing::info!("   - Headers synced: {}", final_progress.header_height);
+    tracing::info!("   - Headers synced (bool): {}", final_progress.headers_synced);
+    tracing::info!("   - Peer count: {}", final_progress.peer_count);

-    if final_progress.header_height > 0 {
-        println!("\n✅ Initial sync successful! Synced {} headers", final_progress.header_height);
+    if final_progress.header_height > 0 || final_progress.headers_synced {
+        tracing::info!("✅ Initial sync successful! Synced {} headers", final_progress.header_height);
         Ok(())
     } else {
-        println!("\n❌ Initial sync failed - no headers synced");
+        tracing::error!("❌ Initial sync failed - no headers synced");
         Err(SpvError::Sync(dash_spv::error::SyncError::Network("No headers synced".to_string())))
     }

dash-spv/Cargo.toml (1)
57-57: Note: hex dependency duplicated. The hex = "0.4" dependency is already declared in the main dependencies section (line 41). This duplication in dev-dependencies is harmless but unnecessary.

-hex = "0.4"

dash-spv/examples/test_headers2_fix.rs (1)
20-20: Consider making the test peer address configurable. While the hardcoded IP (192.168.1.163:19999) is fine for a test example, consider reading from an environment variable to make the test more flexible across different environments.

-    let addr = "192.168.1.163:19999".parse().unwrap();
+    let addr = std::env::var("DASH_SPV_PEER")
+        .unwrap_or_else(|_| "192.168.1.163:19999".to_string())
+        .parse().unwrap();

dash-spv/examples/test_headers2.rs (2)
21-22: Consider adding DNS resolution error handling. Using DNS hostnames directly in the peer list may fail if DNS resolution is unavailable. Consider adding fallback IP addresses or error handling for DNS resolution failures.

-    config.peers =
-        vec!["seed.dash.org:9999".parse().unwrap(), "dnsseed.dash.org:9999".parse().unwrap()];
+    config.peers = vec![
+        "seed.dash.org:9999".parse().unwrap_or_else(|_| {
+            tracing::warn!("Failed to parse seed.dash.org, using fallback");
+            // Add fallback IP if available
+            "1.2.3.4:9999".parse().unwrap()
+        }),
+        "dnsseed.dash.org:9999".parse().unwrap_or_else(|_| {
+            tracing::warn!("Failed to parse dnsseed.dash.org, using fallback");
+            // Add fallback IP if available
+            "5.6.7.8:9999".parse().unwrap()
+        }),
+    ];
39-96: Consider making monitoring thresholds configurable. The monitoring loop uses several hardcoded thresholds (60 iterations, 5 seconds for connection drop, 10 seconds for no progress, 1000 headers for success). These could be made configurable for better flexibility in different testing scenarios.

+    // Configurable monitoring parameters
+    const MAX_ITERATIONS: u32 = 60;
+    const CONNECTION_DROP_THRESHOLD: u32 = 5;
+    const NO_PROGRESS_THRESHOLD: u32 = 10;
+    const SUCCESS_HEADER_COUNT: u32 = 1000;
+
-    for i in 0..60 {
+    for i in 0..MAX_ITERATIONS {
         tokio::time::sleep(Duration::from_secs(1)).await;

         // ... existing code ...

         // Check for connection drops
-        if peers == 0 && i > 5 {
+        if peers == 0 && i > CONNECTION_DROP_THRESHOLD {
             println!("❌ Connection dropped after {} seconds!", i + 1);
             println!("   This likely indicates a headers2 protocol issue");
             break;
         }

         // ... existing code ...

         } else if !progress.headers_synced {
             no_progress_count += 1;
-            if no_progress_count > 10 {
+            if no_progress_count > NO_PROGRESS_THRESHOLD {
                 println!("⚠️ No header progress for {} seconds", NO_PROGRESS_THRESHOLD);
             }
         }

         // Stop after some headers are downloaded
-        if progress.header_height > 1000 {
+        if progress.header_height > SUCCESS_HEADER_COUNT {

dash-spv/src/client/consistency_test.rs (1)
224-225: Consider extracting test addresses as constants. Multiple test addresses are hardcoded throughout the tests. Consider extracting them as constants for better maintainability.

+const TEST_ADDRESS_1: &str = "XeNTGz5bVjPNZVPpwTRz6SnLbZGxLqJUg4";
+const TEST_ADDRESS_2: &str = "Xj4Ei2Sj9YAj7hMxx4XgZvGNqoqHkwqNgE";
+
 fn create_test_address() -> Address {
-    Address::from_str("XeNTGz5bVjPNZVPpwTRz6SnLbZGxLqJUg4")
+    Address::from_str(TEST_ADDRESS_1)
         .expect("Test address should be valid")
         .assume_checked()
 }

 // In test function:
 let address2 =
-    Address::from_str("Xj4Ei2Sj9YAj7hMxx4XgZvGNqoqHkwqNgE").unwrap().assume_checked();
+    Address::from_str(TEST_ADDRESS_2).expect("Test address should be valid").assume_checked();

CLAUDE.md (1)
173-178: Consider adding a date or commit reference for branch information. The current development branch reference may become outdated. Consider adding a date or commit reference for context.

 ### Git Workflow
-- Current development branch: `v0.40-dev`
+- Current development branch: `v0.40-dev` (as of January 2025)
 - Main branch: `master`

dash-spv/src/client/block_processor.rs (1)
547-548: Implement proper pending balance calculation. The pending and pending_instant fields are hardcoded to zero, which appears to be a placeholder implementation. These should likely be populated from the wallet's actual pending transaction data. Would you like me to help implement the proper calculation or create an issue to track this?
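A minimal sketch of how those fields could be derived, assuming the wallet can enumerate its unconfirmed UTXOs; the type and field names below are illustrative, not the actual dash-spv wallet API:

```rust
// Hypothetical helper: split unconfirmed value into plain pending and
// InstantSend-locked pending. `UnconfirmedUtxo` and its fields are
// illustrative names, not the real wallet types.
struct UnconfirmedUtxo {
    value: u64,
    is_instant_locked: bool,
}

fn pending_balances(unconfirmed: &[UnconfirmedUtxo]) -> (u64, u64) {
    let mut pending = 0u64;
    let mut pending_instant = 0u64;
    for utxo in unconfirmed {
        if utxo.is_instant_locked {
            pending_instant += utxo.value;
        } else {
            pending += utxo.value;
        }
    }
    (pending, pending_instant)
}
```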
dash-spv/src/client/config.rs (1)
422-428: Improve error handling for regtest peer parsing. The current implementation silently ignores parse errors for the regtest peer address. This could lead to confusion if the hardcoded address format changes.

 Network::Regtest => {
-    vec!["127.0.0.1:19899".parse::<SocketAddr>()]
-        .into_iter()
-        .filter_map(Result::ok)
-        .collect()
+    "127.0.0.1:19899"
+        .parse::<SocketAddr>()
+        .map(|addr| vec![addr])
+        .unwrap_or_else(|e| {
+            tracing::warn!("Failed to parse default regtest peer: {}", e);
+            vec![]
+        })
 }

dash-spv-ffi/src/callbacks.rs (1)
159-165: Consider adjusting log levels for production use. The current logging uses info level for all successful callback invocations and warn for missing callbacks. In production, this could be quite verbose. Consider using debug or trace levels for successful calls.

-tracing::info!("🎯 Calling block callback: height={}, hash={}", height, hash);
+tracing::debug!("🎯 Calling block callback: height={}, hash={}", height, hash);
 // ... callback invocation ...
-tracing::info!("✅ Block callback completed");
+tracing::debug!("✅ Block callback completed");

Also applies to: 177-199, 204-214
dash-spv/examples/test_terminal_blocks.rs (1)
13-16: Consider making test heights configurable. The hardcoded height values may become outdated over time. Consider accepting heights as command-line arguments or reading from a configuration file for better maintainability.

+use std::env;
+
 fn main() {
+    let args: Vec<String> = env::args().collect();
+    let test_heights = if args.len() > 1 {
+        args[1..].iter().filter_map(|s| s.parse().ok()).collect()
+    } else {
+        vec![387480, 400000, 450000, 500000, 550000, 600000, 650000, 700000, 750000, 760000, 800000, 850000, 900000]
+    };
-    let test_heights = vec![
-        387480, 400000, 450000, 500000, 550000, 600000, 650000, 700000, 750000, 760000, 800000,
-        850000, 900000,
-    ];

dash-spv-ffi/tests/unit/test_type_conversions.rs (1)
52-55: FFIString null destroy: initialize all required fields explicitly. Good call setting length: 0 for the null FFIString being destroyed. If FFIString ever grows fields (capacity/allocator), consider adding a dedicated helper to construct a null-safe FFIString to reduce test coupling to struct shape.
dash-spv-ffi/tests/test_platform_integration.rs (2)
33-43: Enable buffer-size validation without a full client, or gate with #[ignore]. Currently commented out due to lack of a valid client. Options:
- If ffi_dash_spv_get_quorum_public_key doesn’t dereference client, you can pass a non-null dummy pointer to reach buffer-size checks (safe only if no deref happens).
- Otherwise, mark the test as #[ignore] and enable it when a valid client constructor is available.
-    /*
+    #[ignore = "Requires valid client handle"]
     let result = ffi_dash_spv_get_quorum_public_key(
         valid_client,
         0,
         quorum_hash.as_ptr(),
         0,
         small_buffer.as_mut_ptr(),
         small_buffer.len(),
     );
     assert_eq!(result.error_code, FFIErrorCode::InvalidArgument as i32);
-    */

Confirm whether the function dereferences client; if it doesn't, we can safely use a non-null sentinel pointer for parameter validation.
47-56: Add explicit null output buffer test with valid client. Same gating approach as above. This covers the out_pubkey null branch.

-    /*
+    #[ignore = "Requires valid client handle"]
     let result = ffi_dash_spv_get_quorum_public_key(
         valid_client,
         0,
         quorum_hash.as_ptr(),
         0,
         ptr::null_mut(),
         48,
     );
     assert_eq!(result.error_code, FFIErrorCode::NullPointer as i32);
-    */

dash-spv/src/bloom/utils.rs (1)
8-22: Make script pattern matching more robust with opcode constants. Readable and less error-prone by using opcode constants instead of magic numbers. Also protects against inadvertent changes.

-use dashcore::OutPoint;
-use dashcore::Script;
+use dashcore::{OutPoint, Script};
+use dashcore::opcodes::all::{OP_CHECKSIG, OP_DUP, OP_EQUALVERIFY, OP_HASH160};
@@
-    if bytes.len() == 25
-        && bytes[0] == 0x76 // OP_DUP
-        && bytes[1] == 0xa9 // OP_HASH160
-        && bytes[2] == 0x14 // Push 20 bytes
-        && bytes[23] == 0x88 // OP_EQUALVERIFY
-        && bytes[24] == 0xac
+    if bytes.len() == 25
+        && bytes[0] == OP_DUP.into_u8()
+        && bytes[1] == OP_HASH160.into_u8()
+        && bytes[2] == 0x14 // Push 20 bytes
+        && bytes[23] == OP_EQUALVERIFY.into_u8()
+        && bytes[24] == OP_CHECKSIG.into_u8() // OP_CHECKSIG

TEST_SUMMARY.md (1)
1-112: Automate or gate the counts and keep the source of truth. Numbers drift easily. Consider generating this file from cargo test -- --list + parsing, or add a note that counts are approximate and date-stamped. Also link to the CI job that reports current pass/fail.
I can provide a small script to collect counts per module and update this doc automatically.
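For example, a rough sketch of such a script, assuming the default libtest `--list` output format (one "path::to::test: test" line per test):

```rust
// Count tests per top-level module from `cargo test -- --list` output.
// Sketch only; adjust the grouping key to match how TEST_SUMMARY.md buckets tests.
use std::collections::BTreeMap;
use std::process::Command;

fn main() {
    let output = Command::new("cargo")
        .args(["test", "--workspace", "--", "--list"])
        .output()
        .expect("failed to run `cargo test -- --list`");
    let stdout = String::from_utf8_lossy(&output.stdout);

    let mut counts: BTreeMap<String, usize> = BTreeMap::new();
    for line in stdout.lines().filter(|l| l.trim_end().ends_with(": test")) {
        // Group by the first path segment (module / test file).
        let module = line.split("::").next().unwrap_or("unknown").trim().to_string();
        *counts.entry(module).or_insert(0) += 1;
    }

    for (module, count) in &counts {
        println!("{module}: {count} tests");
    }
    println!("total: {} tests", counts.values().sum::<usize>());
}
```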
dash-spv/data/mainnet/mod.rs (1)
35-35: Remove trailing whitespace. Line 35 has trailing whitespace after the semicolon.

-        let path = entry.path();
-
+        let path = entry.path();
+

dash-spv/SYNC_PHASE_TRACKING.md (1)
124-124: Add language specifier to fenced code block. The fenced code block should have a language specifier for better syntax highlighting and to satisfy markdown linting rules.

-```
+```text
 🔄 Phase Change: Downloading Headers

dash-spv-ffi/src/error.rs (1)
39-45: Consider logging mutex poisoning errors. When the mutex lock fails (line 43), the function silently returns null. Consider logging this error condition, as mutex poisoning indicates a panic in another thread while holding the lock, which could be a serious issue worth investigating.

 match LAST_ERROR.lock() {
     Ok(guard) => guard.as_ref().map(|err| err.as_ptr()).unwrap_or(std::ptr::null()),
-    Err(_) => std::ptr::null(),
+    Err(_) => {
+        // Log the mutex poisoning error if logging is available
+        // eprintln!("Warning: LAST_ERROR mutex is poisoned");
+        std::ptr::null()
+    }
 }
23-37: Consider using more realistic test data. The test helpers use all-zero pubkey hashes and hardcoded transaction IDs. Consider using more varied test data to catch potential edge cases.

 fn test_address() -> Address {
-    // Create a simple test address from a pubkey hash
-    let pubkey_hash = PubkeyHash::from([0u8; 20]);
+    // Create a test address with more realistic data
+    let mut hash_bytes = [0u8; 20];
+    hash_bytes[0] = 0x12;
+    hash_bytes[19] = 0x34;
+    let pubkey_hash = PubkeyHash::from(hash_bytes);
     Address::new(dashcore::Network::Dash, Payload::PubkeyHash(pubkey_hash))
 }

+const TEST_TXID: &str = "abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234";
+
 fn test_outpoint() -> OutPoint {
     OutPoint {
-        txid: Txid::from_hex(
-            "abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234abcd1234",
-        )
-        .unwrap(),
+        txid: Txid::from_hex(TEST_TXID).unwrap(),
         vout: 0,
     }
 }

dash-spv/src/chain/reorg_test.rs (1)
49-56: Consider using realistic chain work values. While using maximum work ([255u8; 32]) ensures the fork is selected, consider using more realistic chain work calculations for better test fidelity.

-    chain_work: ChainWork::from_bytes([255u8; 32]), // Maximum work
+    // Calculate cumulative work more realistically
+    let fork_work = ChainWork::from_header(&block1_fork)
+        .add(&ChainWork::from_header(&block2_fork))
+        .add(&ChainWork::from_header(&block3_fork));
+    chain_work: fork_work,

dash-spv/src/chain/orphan_pool_test.rs (2)
23-56: Consider using mock time for expiration tests. Using thread::sleep in tests can be flaky in CI environments. Consider using a mock time provider or increasing the timeout margins.

 // Create pool with short timeout for testing
-let mut pool = OrphanPool::with_config(10, Duration::from_millis(100));
+// Use longer timeout to reduce flakiness
+let mut pool = OrphanPool::with_config(10, Duration::from_millis(200));

 // Wait for timeout
-thread::sleep(Duration::from_millis(150));
+// Add buffer for CI environments
+thread::sleep(Duration::from_millis(250));
368-421: Good concurrent safety test with room for improvement. The test validates thread safety well, but the parent hash generation could be more varied to test different access patterns.
Consider using more varied parent hashes to test different concurrent access patterns:
 let parent = BlockHash::from_raw_hash(dashcore_hashes::hash_x11::Hash::hash(&[
-    (thread_id % 5) as u8,
-    (i % 20) as u8,
+    thread_id as u8,
+    i as u8,
+    rand::random::<u8>(), // Add randomness for better coverage
 ]));

dash-spv-ffi/tests/unit/test_client_lifecycle.rs (1)
161-164: Consider adding safety documentation. While the SendableClient wrapper is necessary for the test, it bypasses Rust's safety guarantees. Consider adding documentation explaining why this is safe in this specific test context.

 // Wrapper to make pointer Send
+// SAFETY: This is only safe in our test context where we ensure:
+// 1. The FFIDashSpvClient has internal synchronization (Arc<Mutex>)
+// 2. All FFI functions are designed to be called from multiple threads
+// 3. The client pointer remains valid for the duration of all threads
 struct SendableClient(*mut FFIDashSpvClient);
 unsafe impl Send for SendableClient {}

dash-spv/docs/utxo_rollback.md (1)
21-29: Add error handling to usage example. The code example should demonstrate proper error handling to guide users on best practices.
 use dash_spv::wallet::{UTXORollbackManager, WalletState};

 // Create wallet state with rollback support
 let mut wallet_state = WalletState::with_rollback(Network::Testnet, true);

 // Or initialize from storage
-wallet_state.init_rollback_from_storage(&storage, true).await?;
+match wallet_state.init_rollback_from_storage(&storage, true).await {
+    Ok(_) => println!("Rollback state initialized from storage"),
+    Err(e) => eprintln!("Failed to initialize rollback state: {}", e),
+}

dash-spv/src/client/builder.rs (1)
186-196: Simplify time calculation logic. The complex time calculations with
checked_sub and unwrap_or_else can be simplified using saturating_sub.

-    cached_sync_progress: Arc::new(RwLock::new((
-        SyncProgress::default(),
-        std::time::Instant::now()
-            .checked_sub(std::time::Duration::from_secs(60))
-            .unwrap_or_else(std::time::Instant::now),
-    ))),
-    cached_stats: Arc::new(RwLock::new((
-        SpvStats::default(),
-        std::time::Instant::now()
-            .checked_sub(std::time::Duration::from_secs(60))
-            .unwrap_or_else(std::time::Instant::now),
-    ))),
+    cached_sync_progress: Arc::new(RwLock::new((
+        SyncProgress::default(),
+        std::time::Instant::now() - std::time::Duration::from_secs(60),
+    ))),
+    cached_stats: Arc::new(RwLock::new((
+        SpvStats::default(),
+        std::time::Instant::now() - std::time::Duration::from_secs(60),
+    ))),
Instant::now()is monotonic, subtracting 60 seconds should be safe. If you need to handle the edge case where the system just started, consider documenting why the 60-second offset is needed.dash-spv-ffi/tests/test_platform_integration_safety.rs (1)
62-67: Incomplete double-free prevention testThe double-free prevention test is commented out and incomplete. This is an important safety check that should be implemented.
The test should verify that double-freeing a handle doesn't cause undefined behavior. Would you like me to help implement this test properly once a valid mock handle creation is available?
dash-spv/docs/TERMINAL_BLOCKS.md (2)
28-31: Add language specifier to code blockThe fenced code block should specify a language for proper syntax highlighting.
-``` +```text Request: Genesis (0) → Current (1,276,272) Diff size: ~500MB, covering 1.2M blocks--- `34-37`: **Add language specifier to code block** The fenced code block should specify a language for proper syntax highlighting. ```diff -``` +```text Request: Terminal Block (900,000) → Current (1,276,272) Diff size: ~100MB, covering 376K blocks</blockquote></details> <details> <summary>dash-spv/src/chain/fork_detector_test.rs (1)</summary><blockquote> `54-54`: **Consider using more realistic test hash generation** Using `hash_x11::Hash::hash` with a single-byte array might not produce realistic test hashes. Consider using a more robust test data generation approach. ```diff - let pre_checkpoint_hash = - BlockHash::from_raw_hash(dashcore_hashes::hash_x11::Hash::hash(&[99u8])); + // Use a more descriptive test hash generation + let pre_checkpoint_hash = BlockHash::from_raw_hash( + dashcore_hashes::hash_x11::Hash::hash(b"test_pre_checkpoint_block") + );And similarly for line 293:
- let phantom_hash = BlockHash::from_raw_hash(dashcore_hashes::hash_x11::Hash::hash(&[42u8])); + let phantom_hash = BlockHash::from_raw_hash( + dashcore_hashes::hash_x11::Hash::hash(b"test_phantom_block") + );Also applies to: 293-293
dash-spv/src/bloom/builder.rs (2)
29-40: Consider using a more restrictive default for bloom flags
BloomFlags::All allows updating the filter for all matched transactions, which might be overly permissive. Consider using BloomFlags::UpdateP2PubkeyOnly as a safer default that only updates for P2PKH outputs.

 pub fn new() -> Self {
     Self {
         elements: 100,
         false_positive_rate: 0.001,
         tweak: rand::random::<u32>(),
-        flags: BloomFlags::All,
+        flags: BloomFlags::UpdateP2PubkeyOnly,
         addresses: Vec::new(),
         outpoints: Vec::new(),
         data_elements: Vec::new(),
     }
 }
116-151: Consider more specific error type for bloom filter creation failures. The build logic is correct. Consider using a more specific error variant than
SpvError::General for bloom filter creation failures.

dash-spv/src/bloom/stats.rs (1)
156-160: Questionable bandwidth estimation logic. The bandwidth saved estimation (
tx_size * 10) appears arbitrary. This rough estimate could be misleading in statistics reports. Consider either:
- Using a more realistic estimation based on actual network traffic patterns
- Removing this estimation entirely
- Making the multiplier configurable with a documented rationale
-// Estimate bandwidth saved by not downloading unrelated transactions
-// Assume average transaction size if this was a true positive
-self.stats.network_impact.bandwidth_saved_bytes += (tx_size * 10) as u64;
-// Rough estimate
+// Track actual transaction size for true positives
+// Bandwidth saved is difficult to estimate accurately without knowing
+// what other transactions would have been received without filtering
+self.stats.network_impact.bandwidth_saved_bytes += tx_size as u64;

dash-spv/src/chain/orphan_pool.rs (1)
206-209: Remove unused variable _block_hash. The variable _block_hash at line 207 is computed but never used.

-// Remove these from the pool since we're processing them
-for header in &orphans {
-    let _block_hash = header.block_hash();
-    self.remove_orphan(&header.block_hash());
-}
+// Remove these from the pool since we're processing them
+for header in &orphans {
+    self.remove_orphan(&header.block_hash());
+}

dash-spv-ffi/src/client.rs (4)
58-71: Document cleanup requirements for CallbackInfo safety. The unsafe Send and Sync implementations correctly document the safety requirements, but there's no mechanism to enforce or verify these requirements at runtime. Consider adding debug assertions or runtime checks in debug builds to catch violations early.

Add debug assertions to verify callback safety:
 unsafe impl Send for CallbackInfo {}
+
+#[cfg(debug_assertions)]
+impl Drop for CallbackInfo {
+    fn drop(&mut self) {
+        // Log when callbacks are dropped to help debug lifetime issues
+        tracing::debug!("Dropping CallbackInfo");
+    }
+}
80-84: Add overflow protection for callback ID generation. The callback ID counter could theoretically overflow after 2^64 operations. While unlikely, this should be handled gracefully.
 fn register(&mut self, info: CallbackInfo) -> u64 {
-    let id = CALLBACK_ID_COUNTER.fetch_add(1, Ordering::Relaxed);
+    let id = CALLBACK_ID_COUNTER.fetch_add(1, Ordering::Relaxed);
+    if id == u64::MAX {
+        // Reset to 1 and clean up any stale entries
+        CALLBACK_ID_COUNTER.store(1, Ordering::Relaxed);
+        // Consider logging this rare event
+    }
     self.callbacks.insert(id, info);
     id
 }
654-712: Add shutdown signal checking in sync thread. The sync thread doesn't check the shutdown signal, which could prevent clean shutdown if the sync operation is long-running.
Add periodic shutdown checks:
 let sync_handle = std::thread::spawn(move || {
+    // Check shutdown before starting
+    if shutdown_signal_clone.load(Ordering::Relaxed) {
+        return;
+    }
+
     // Run monitoring loop
     let monitor_result = runtime_handle.block_on(async move {
         let mut guard = inner.lock().unwrap();
         if let Some(ref mut spv_client) = *guard {
-            spv_client.monitor_network().await
+            // Consider adding timeout or cancellation support
+            tokio::select! {
+                result = spv_client.monitor_network() => result,
+                _ = async {
+                    while !shutdown_signal_clone.load(Ordering::Relaxed) {
+                        tokio::time::sleep(Duration::from_millis(100)).await;
+                    }
+                } => Err(dash_spv::SpvError::Config("Shutdown requested".to_string()))
+            }
         } else {
1538-1582: Optimize get_total_balance for better performance. Iterating through all watched addresses and making individual balance queries could be inefficient for wallets with many addresses.
Consider maintaining a cached total balance that's updated on balance change events:
 pub struct FFIDashSpvClient {
     inner: Arc<Mutex<Option<DashSpvClient>>>,
     runtime: Arc<Runtime>,
     event_callbacks: Arc<Mutex<FFIEventCallbacks>>,
     active_threads: Arc<Mutex<Vec<std::thread::JoinHandle<()>>>>,
     sync_callbacks: Arc<Mutex<Option<SyncCallbackData>>>,
     shutdown_signal: Arc<AtomicBool>,
+    cached_total_balance: Arc<RwLock<Option<FFIBalance>>>,
 }

Then update the cache on balance events and return the cached value when fresh.
dash-spv/src/bloom/manager.rs (1)
122-128: Statistics counters could overflow. The statistics counters use u64 but increment without checking for overflow.
Use saturating arithmetic to prevent overflow:
 if self.config.enable_stats {
     let mut stats = self.stats.write().await;
-    stats.recreations += 1;
-    stats.items_added = total_elements as u64;
+    stats.recreations = stats.recreations.saturating_add(1);
+    stats.items_added = total_elements as u64;

And in other places:
-stats.items_added += 1;
+stats.items_added = stats.items_added.saturating_add(1);
-stats.queries += 1;
+stats.queries = stats.queries.saturating_add(1);
-stats.matches += 1;
+stats.matches = stats.matches.saturating_add(1);

Also applies to: 154-157, 180-183, 205-208, 224-228
dash-network-ffi/src/dash_network_ffi.swift (1)
1-2: Consider updating the UniFFI template to use professional language. The comment "autogenerated by some hot garbage" is unprofessional for production code. Consider updating the UniFFI template to use more appropriate language.
dash-network-ffi/src/dash_network_ffiFFI.h (1)
1-2: Unprofessional language in auto-generated header. The comment contains unprofessional language ("hot garbage"). This should be fixed in the UniFFI code generation templates.
dash-spv-ffi/src/platform_integration.rs (2)
63-114: Well-structured placeholder with comprehensive validation. The function has excellent parameter validation including null checks and buffer size validation. The placeholder implementation correctly returns a NotImplemented error. Would you like me to help implement the actual quorum public key retrieval logic or create an issue to track this TODO?
116-144: Clean placeholder implementation with proper validation. The function correctly validates all input parameters and returns an appropriate NotImplemented error as a placeholder. Would you like me to help implement the platform activation height retrieval logic or create an issue to track this TODO?
module dash_network_ffiFFI {
  header "dash_network_ffiFFI.h"
  export *
  use "Darwin"
  use "_Builtin_stdbool"
  use "_Builtin_stdint"
}
🛠️ Refactor suggestion
Modulemap portability: avoid referencing non-standard builtin modules
The use directives for _Builtin_stdbool/_Builtin_stdint are fragile and non-portable. Prefer having the header include <stdbool.h> and <stdint.h> directly, and drop the builtin module references. Consider also using requires objc for Darwin if needed.
module dash_network_ffiFFI {
header "dash_network_ffiFFI.h"
export *
- use "Darwin"
- use "_Builtin_stdbool"
- use "_Builtin_stdint"
+ use "Darwin"
+ // No need to 'use' builtin std* modules; include them in the header instead.
+ // If Obj-C is required, consider:
+ // requires objc
}

And ensure dash_network_ffiFFI.h includes:
#include <stdbool.h>
#include <stdint.h>

🤖 Prompt for AI Agents
In dash-network-ffi/src/dash_network_ffiFFI.modulemap around lines 1 to 7, the
modulemap references non-portable builtin modules
(_Builtin_stdbool/_Builtin_stdint) and should be simplified: remove the two use
"_Builtin_..." lines, keep the header "dash_network_ffiFFI.h", optionally add
requires objc for Darwin if the header needs Objective‑C, and ensure the header
file dash_network_ffiFFI.h itself includes the standard headers by adding
#include <stdbool.h> and #include <stdint.h> so the modulemap no longer depends
on fragile builtin module references.
if let Ok(txid_parsed) = txid.parse::<dashcore::Txid>() {
    callbacks.call_transaction(&txid_parsed, confirmed, amount as i64, addresses, block_height);
} else {
    tracing::error!("Failed to parse transaction ID: {}", txid);
}
🛠️ Refactor suggestion
Improve error handling for transaction and block ID parsing.
When parsing transaction IDs and block hashes from strings, errors are logged but the events are silently dropped. Consider maintaining error metrics or propagating errors to callbacks.
Consider passing parse errors to an error callback or maintaining error statistics:
if let Ok(txid_parsed) = txid.parse::<dashcore::Txid>() {
callbacks.call_transaction(&txid_parsed, confirmed, amount as i64, addresses, block_height);
} else {
tracing::error!("Failed to parse transaction ID: {}", txid);
+ // Consider: callbacks.call_parse_error("transaction", txid);
+ // Or: self.parse_error_count.fetch_add(1, Ordering::Relaxed);
}

Also applies to: 246-250
🤖 Prompt for AI Agents
In dash-spv-ffi/src/client.rs around lines 236-240 (and similarly 246-250),
transaction ID and block hash parse failures are only logged and the events
dropped; change the handling to propagate the parse error to the library caller
and/or increment error metrics: add or call an existing error callback on
callbacks (e.g., callbacks.call_error or callbacks.call_parse_error) passing the
offending string, the parse error details, and context (tx vs block) so the
caller can react; if no error callback exists, increment a counter on callbacks
(e.g., callbacks.increment_error("txid_parse") / "blockhash_parse") and include
the full error information in the call; apply this same pattern to both the txid
and block hash parsing sites so parse failures are surfaced rather than silently
discarded.
#[no_mangle]
pub unsafe extern "C" fn dash_spv_ffi_client_cancel_sync(client: *mut FFIDashSpvClient) -> i32 {
    null_check!(client);

    let client = &(*client);

    // Clear callbacks to stop progress updates and unregister from the registry
    let mut cb_guard = client.sync_callbacks.lock().unwrap();
    if let Some(ref callback_data) = *cb_guard {
        CALLBACK_REGISTRY.lock().unwrap().unregister(callback_data.callback_id);
    }
    *cb_guard = None;

    // TODO: Implement proper sync task cancellation using cancellation tokens or abort handles.
    // Currently, this only stops the client, but the sync task may continue running in the background.
    let inner = client.inner.clone();
    let result = client.runtime.block_on(async {
        let mut guard = inner.lock().unwrap();
        if let Some(ref mut spv_client) = *guard {
            spv_client.stop().await
        } else {
            Err(dash_spv::SpvError::Config("Client not initialized".to_string()))
        }
    });

    match result {
        Ok(_) => FFIErrorCode::Success as i32,
        Err(e) => {
            set_last_error(&e.to_string());
            FFIErrorCode::from(e) as i32
        }
    }
}
Complete the sync cancellation implementation.
The function acknowledges it doesn't fully cancel sync operations. This could lead to resource leaks or unexpected behavior.
The TODO indicates this needs proper implementation with cancellation tokens. Would you like me to help implement proper async cancellation using tokio::sync::watch or tokio_util::sync::CancellationToken?
🤖 Prompt for AI Agents
In dash-spv-ffi/src/client.rs around lines 732 to 764, the cancel function only
clears callbacks and calls spv_client.stop() but does not actually cancel an
in-progress async sync task; implement proper cancellation by adding a
CancellationToken and optional JoinHandle to the FFIDashSpvClient inner state
(or the SPV client wrapper), have the sync task be spawned with a clone of that
token and regularly check token.cancelled() or select! on it so it can exit
cleanly, and on cancel call token.cancel() then await the JoinHandle (with a
timeout if desired) before returning; also ensure you still unregister callbacks
and set last error on failure. Update struct definitions and places that start
the sync to create and store the token and handle, and update this function to
trigger the token, await task termination, handle errors, and free/unset the
stored token and handle.
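A minimal sketch of the cooperative-cancellation approach with tokio_util's CancellationToken (the surrounding function and field names are placeholders, not the existing dash-spv-ffi structures):

```rust
use tokio_util::sync::CancellationToken;

// Placeholder for the long-running sync loop owned by the client.
async fn run_sync_loop() {
    loop {
        tokio::time::sleep(std::time::Duration::from_millis(100)).await;
    }
}

// Spawn the sync work with a token so it can be cancelled from the FFI side.
async fn spawn_cancellable_sync(token: CancellationToken) {
    tokio::select! {
        _ = token.cancelled() => {
            // Clean up and return promptly once cancel() is called.
        }
        _ = run_sync_loop() => {
            // Sync finished on its own.
        }
    }
}

// On cancel (illustrative): store the token and JoinHandle when sync starts, then:
//   token.cancel();
//   let _ = runtime.block_on(join_handle); // optionally bounded by a timeout
```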
    let callbacks = unsafe { &*(user_data as *const TestCallbacks) };
    let mut count = callbacks.mempool_added_count.lock().unwrap();
    *count += 1;
}
Potential use-after-free in callback
The callback dereferences a raw pointer to TestCallbacks without verifying its validity. If the callback is invoked after the TestCallbacks is freed, this will cause undefined behavior.
Consider using a reference counting approach or ensuring callbacks are unregistered before freeing:
extern "C" fn test_mempool_added(
_txid: *const [u8; 32],
_amount: i64,
_addresses: *const c_char,
_is_instant_send: bool,
user_data: *mut c_void,
) {
+ if user_data.is_null() {
+ return;
+ }
let callbacks = unsafe { &*(user_data as *const TestCallbacks) };
let mut count = callbacks.mempool_added_count.lock().unwrap();
*count += 1;
}

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In dash-spv-ffi/tests/test_mempool_tracking.rs around lines 25 to 28, the
callback unsafely dereferences a raw pointer to TestCallbacks which can lead to
a use-after-free if the callback fires after TestCallbacks is dropped; fix by
converting TestCallbacks to an Arc and pass a stable reference (e.g., store Arc
in the user_data and clone it in the callback) or ensure callbacks are
explicitly unregistered before freeing TestCallbacks; update setup to create and
pass a heap-stable reference (Arc/Box->into_raw) and update teardown to
drop/unregister the callback or convert stored raw pointers back to Arc via
Arc::from_raw only when ownership transfer is correct so the callback holds a
strong reference while it can be invoked.
let test_callbacks = Box::new(TestCallbacks::default());
let test_callbacks_ptr = Box::into_raw(test_callbacks);

let callbacks = FFIEventCallbacks {
    on_block: None,
    on_transaction: None,
    on_balance_update: None,
    on_mempool_transaction_added: Some(test_mempool_added),
    on_mempool_transaction_confirmed: Some(test_mempool_confirmed),
    on_mempool_transaction_removed: Some(test_mempool_removed),
    user_data: test_callbacks_ptr as *mut c_void,
};

let result = dash_spv_ffi_client_set_event_callbacks(client, callbacks);
assert_eq!(result, 0);

// Clean up
let _ = Box::from_raw(test_callbacks_ptr);
dash_spv_ffi_client_destroy(client);
dash_spv_ffi_config_destroy(config);
}
Memory management issue with callbacks
The test creates TestCallbacks, passes it as user_data, but then immediately frees it after setting callbacks. This creates a dangling pointer if the callbacks are invoked later.
The TestCallbacks should be kept alive for the duration of the test:
// Set up test callbacks
let test_callbacks = Box::new(TestCallbacks::default());
let test_callbacks_ptr = Box::into_raw(test_callbacks);
let callbacks = FFIEventCallbacks {
on_block: None,
on_transaction: None,
on_balance_update: None,
on_mempool_transaction_added: Some(test_mempool_added),
on_mempool_transaction_confirmed: Some(test_mempool_confirmed),
on_mempool_transaction_removed: Some(test_mempool_removed),
user_data: test_callbacks_ptr as *mut c_void,
};
let result = dash_spv_ffi_client_set_event_callbacks(client, callbacks);
assert_eq!(result, 0);
+ // TODO: Perform operations that trigger callbacks here
+ // For example: start client, wait for events, etc.
+
// Clean up
+ // Unregister callbacks first to prevent use-after-free
+ // dash_spv_ffi_client_clear_event_callbacks(client);
let _ = Box::from_raw(test_callbacks_ptr);
dash_spv_ffi_client_destroy(client);
 dash_spv_ffi_config_destroy(config);
🤖 Prompt for AI Agents
In dash-spv-ffi/tests/test_mempool_tracking.rs around lines 115 to 135, the
TestCallbacks Box is converted to a raw pointer and passed as user_data but then
immediately freed, creating a dangling pointer if callbacks fire; keep the Box
alive for the duration of the test by not calling Box::from_raw immediately —
instead either store the Box in a local variable (so it remains on the stack
until after you destroy the client and config) or call Box::leak to
intentionally leak it and then reclaim (Box::from_raw) only after
dash_spv_ffi_client_destroy(client) and dash_spv_ffi_config_destroy(config);
ensure the pointer remains valid until all callbacks are unregistered or the
client is destroyed.
pub headers_request_rate_limit: Option<f64>,

/// Rate limit for masternode list requests per second (default: 5.0).
pub mnlist_request_rate_limit: Option<f64>,

/// Rate limit for CF header requests per second (default: 10.0).
pub cfheaders_request_rate_limit: Option<f64>,

/// Rate limit for filter requests per second (default: 50.0).
pub filters_request_rate_limit: Option<f64>,

/// Rate limit for block requests per second (default: 10.0).
pub blocks_request_rate_limit: Option<f64>,
🛠️ Refactor suggestion
Consider validating rate limit values.
The rate limit fields accept Option<f64> but there's no validation to ensure positive values when set. Consider adding validation in the builder methods or validate() function to ensure rate limits are positive when Some.
pub fn validate(&self) -> Result<(), String> {
// ... existing validation ...
+
+ // Validate rate limits if set
+ if let Some(rate) = self.headers_request_rate_limit {
+ if rate <= 0.0 {
+ return Err("headers_request_rate_limit must be > 0".to_string());
+ }
+ }
+ // ... repeat for other rate limits📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| pub headers_request_rate_limit: Option<f64>, | |
| /// Rate limit for masternode list requests per second (default: 5.0). | |
| pub mnlist_request_rate_limit: Option<f64>, | |
| /// Rate limit for CF header requests per second (default: 10.0). | |
| pub cfheaders_request_rate_limit: Option<f64>, | |
| /// Rate limit for filter requests per second (default: 50.0). | |
| pub filters_request_rate_limit: Option<f64>, | |
| /// Rate limit for block requests per second (default: 10.0). | |
| pub blocks_request_rate_limit: Option<f64>, | |
| pub fn validate(&self) -> Result<(), String> { | |
| // ... existing validation ... | |
| // Validate rate limits if set | |
| if let Some(rate) = self.headers_request_rate_limit { | |
| if rate <= 0.0 { | |
| return Err("headers_request_rate_limit must be > 0".to_string()); | |
| } | |
| } | |
| // ... repeat for other rate limits | |
| Ok(()) | |
| } |
🤖 Prompt for AI Agents
In dash-spv/src/client/config.rs around lines 149 to 161, the Option<f64> rate
limit fields (headers_request_rate_limit, mnlist_request_rate_limit,
cfheaders_request_rate_limit, filters_request_rate_limit,
blocks_request_rate_limit) are not validated for positive values; add validation
so any Some(value) must be > 0.0. Update the builder methods or the existing
validate() function to check each rate limit field when present and
return/propagate a clear error (or Result) if a non-positive value is provided;
ensure tests or callers handle the Result and include helpful error messages
referencing the field name and invalid value.
fn create_test_address() -> Address {
    Address::from_str("XeNTGz5bVjPNZVPpwTRz6SnLbZGxLqJUg4").unwrap().assume_checked()
}
🛠️ Refactor suggestion
Add error handling for address parsing
Using unwrap() on address parsing could panic if the address format changes or is invalid. Consider using proper error handling or a constant.
fn create_test_address() -> Address {
- Address::from_str("XeNTGz5bVjPNZVPpwTRz6SnLbZGxLqJUg4").unwrap().assume_checked()
+ Address::from_str("XeNTGz5bVjPNZVPpwTRz6SnLbZGxLqJUg4")
+ .expect("Test address should be valid")
+ .assume_checked()
 }
🤖 Prompt for AI Agents
In dash-spv/src/client/consistency_test.rs around lines 18 to 20, the test
helper uses Address::from_str(...).unwrap() which can panic; change the helper
to return a Result<Address, Error> (e.g., -> Result<Address, anyhow::Error>) and
replace unwrap() with propagating the parse error
(Address::from_str(...).map_err(...)?), then update callers to handle the Result
(use ? in tests) or alternatively define the address as a validated constant
(parsed once in a test setup) to avoid runtime unwraps.
watch_items.write().await.insert(WatchItem::address(address.clone()));
wallet.read().await.add_watched_address(address).await.unwrap();
Fix incorrect lock usage when adding watched address
Using read().await to call add_watched_address is incorrect since it modifies state. This should use write().await.
watch_items.write().await.insert(WatchItem::address(address.clone()));
-wallet.read().await.add_watched_address(address).await.unwrap();
+wallet.write().await.add_watched_address(address).await.unwrap();
+ wallet.write().await.add_watched_address(address).await.unwrap();📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| watch_items.write().await.insert(WatchItem::address(address.clone())); | |
| wallet.read().await.add_watched_address(address).await.unwrap(); | |
| watch_items.write().await.insert(WatchItem::address(address.clone())); | |
| wallet.write().await.add_watched_address(address).await.unwrap(); |
🤖 Prompt for AI Agents
In dash-spv/src/client/consistency_test.rs around lines 71 to 72, the test
incorrectly uses wallet.read().await to call add_watched_address (which mutates
wallet state); change that to wallet.write().await so a write lock is acquired
before calling add_watched_address().await.unwrap(); ensure the awaited call
still uses the cloned/moved address as before.
watch_items.write().await.insert(labeled_item);
wallet.read().await.add_watched_address(address).await.unwrap();
Fix incorrect lock usage when adding watched address
Same issue as earlier - using read().await to modify state.
watch_items.write().await.insert(labeled_item);
-wallet.read().await.add_watched_address(address).await.unwrap();
+wallet.write().await.add_watched_address(address).await.unwrap();
+ wallet.write().await.add_watched_address(address).await.unwrap();🤖 Prompt for AI Agents
In dash-spv/src/client/consistency_test.rs around lines 298 to 299, the code
uses wallet.read().await.add_watched_address(...) which wrongly acquires a read
lock while mutating state; replace the read() lock with a write() lock
(wallet.write().await) when calling add_watched_address so the method can modify
the wallet safely, and keep the .await.unwrap() behavior as before.
// Create a message that might cause an error in sync manager
// For example, Headers2 with invalid data
let headers2 = dashcore::network::message_headers2::Headers2Message {
    headers: vec![], // Empty headers might cause validation error
};
let message = NetworkMessage::Headers2(headers2);

// Handle the message - error should be propagated
let result = handler.handle_network_message(message).await;
// The result depends on sync manager validation
assert!(result.is_ok() || result.is_err());
}
🛠️ Refactor suggestion
Improve error test assertion
The error propagation test has a weak assertion that accepts both success and failure. This reduces the test's effectiveness in catching regressions.
// Handle the message - error should be propagated
let result = handler.handle_network_message(message).await;
- // The result depends on sync manager validation
- assert!(result.is_ok() || result.is_err());
+ // Empty headers should be handled gracefully (either accepted or rejected consistently)
+ // Document the expected behavior
+ match result {
+ Ok(_) => {
+ // Verify that empty headers are accepted (if that's the intended behavior)
+ // You might want to check side effects here
+ }
+ Err(e) => {
+ // Verify the error type is what we expect for empty headers
+ // For example: assert!(e.to_string().contains("empty headers"));
+ }
+ }
🤖 Prompt for AI Agents
In dash-spv/src/client/message_handler_test.rs around lines 584 to 595, the
test's assertion is too permissive (assert!(result.is_ok() || result.is_err())),
so change it to assert a deterministic outcome: either make the constructed
Headers2 message provably invalid so the handler must return Err and assert
result.is_err() (or use result.expect_err("expected validation error")), or
alternatively stub/mock the sync manager to return a specific error and assert
result.matches the expected Err variant (e.g.,
assert_eq!(result.unwrap_err().to_string(), "expected error message")). Ensure
the test verifies a concrete error case rather than accepting both success and
failure.
Review continued from previous batch...
 * Cancels the sync operation.
 *
 * **Note**: This function currently only stops the SPV client and clears sync callbacks,
 * but does not fully abort the ongoing sync process. The sync operation may continue
 * running in the background until it completes naturally. Full sync cancellation with
 * proper task abortion is not yet implemented.
 *
 * # Safety
 * The client pointer must be valid and non-null.
 *
 * # Returns
 * Returns 0 on success, or an error code on failure.
 */
int32_t dash_spv_ffi_client_cancel_sync(struct FFIDashSpvClient *client);
Sync cancellation is not fully implemented
The documentation states that dash_spv_ffi_client_cancel_sync doesn't actually abort the ongoing sync process - it only stops the client and clears callbacks. The sync may continue in the background.
This could lead to:
- Resource leaks if background tasks continue
- Unexpected behavior when users think sync is cancelled
- Race conditions if a new sync is started while the old one continues
Consider implementing proper task cancellation or clearly documenting this limitation in user-facing documentation.
🤖 Prompt for AI Agents
In dash-spv-ffi/include/dash_spv_ffi.h around lines 343 to 356, the comment for
dash_spv_ffi_client_cancel_sync admits the function only stops the client and
clears callbacks but does not abort ongoing sync tasks, which can leak resources
or cause races; update the implementation (not just docs) to perform proper
cancellation: track the running sync task handle(s) in the FFIDashSpvClient
struct, expose a cancellation token or atomic flag that the sync loop checks,
signal that token and join/await the background task(s) from
dash_spv_ffi_client_cancel_sync, clear callbacks only after the task has
terminated, and return an appropriate error code if shutdown times out;
alternatively, if you cannot implement cancellation now, make the header comment
explicit about the limitation and add a new function or flag to expose the
inability to cancel so callers can handle it.
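A minimal sketch of the cooperative-cancellation approach described above, assuming the FFI client keeps a handle to the background tokio sync task; the `SyncHandle` type and its fields are hypothetical, not part of the current `FFIDashSpvClient`:

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use tokio::task::JoinHandle;

/// Hypothetical per-client state for a cancellable sync.
struct SyncHandle {
    cancel_flag: Arc<AtomicBool>, // checked by the sync loop between steps
    task: Option<JoinHandle<()>>, // background sync task spawned at start
}

impl SyncHandle {
    /// Signal cancellation, then wait for the task to stop before returning,
    /// so callbacks can be cleared without racing the still-running sync.
    async fn cancel(&mut self) {
        self.cancel_flag.store(true, Ordering::SeqCst);
        if let Some(task) = self.task.take() {
            task.abort(); // fallback if the loop never observes the flag
            let _ = task.await; // JoinError from abort is expected here
        }
    }
}
```

With something like this, dash_spv_ffi_client_cancel_sync would only return 0 after `cancel()` completes, which removes the race between clearing callbacks and the background task.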
let msg = CString::new("Sync completed successfully")
    .unwrap_or_else(|_| {
        CString::new("Sync completed")
            .expect("hardcoded string is safe")
    });
// SAFETY: The callback and user_data are safely managed through the registry
// The registry ensures proper lifetime management and thread safety
callback(true, msg.as_ptr(), user_data);
}
Handle CString creation failures more robustly.
CString creation can fail if the string contains interior null bytes. While unlikely with the hardcoded strings, dynamic error messages could contain nulls.
-let msg = CString::new("Sync completed successfully")
- .unwrap_or_else(|_| {
- CString::new("Sync completed")
- .expect("hardcoded string is safe")
- });
+let msg = CString::new("Sync completed successfully")
+ .unwrap_or_else(|_| {
+ // Use a guaranteed-safe fallback
+ CString::new("Sync completed").unwrap()
+ });
For error messages:
-let msg = match CString::new(format!("Sync failed: {}", e)) {
- Ok(s) => s,
- Err(_) => CString::new("Sync failed")
- .expect("hardcoded string is safe"),
-};
+let error_str = e.to_string().replace('\0', "\\0"); // Escape nulls
+let msg = CString::new(format!("Sync failed: {}", error_str))
+ .unwrap_or_else(|_| CString::new("Sync failed").unwrap());
Also applies to: 445-452
| println!("Starting test sync..."); | ||
|
|
||
| // Get initial height | ||
| let start_height = match spv_client.sync_progress().await { | ||
| Ok(progress) => progress.header_height, | ||
| Err(e) => { | ||
| eprintln!("Failed to get initial height: {}", e); | ||
| return Err(e); | ||
| } | ||
| }; | ||
| println!("Initial height: {}", start_height); | ||
|
|
||
| // Start sync | ||
| match spv_client.sync_to_tip().await { | ||
| Ok(_) => println!("Sync started successfully"), | ||
| Err(e) => { | ||
| eprintln!("Failed to start sync: {}", e); | ||
| return Err(e); | ||
| } | ||
| } | ||
|
|
||
| // Wait a bit for headers to download | ||
| tokio::time::sleep(Duration::from_secs(10)).await; | ||
|
|
||
| // Check if headers increased | ||
| let end_height = match spv_client.sync_progress().await { | ||
| Ok(progress) => progress.header_height, | ||
| Err(e) => { | ||
| eprintln!("Failed to get final height: {}", e); | ||
| return Err(e); | ||
| } | ||
| }; | ||
| println!("Final height: {}", end_height); | ||
|
|
||
| if end_height > start_height { | ||
| println!("✅ Sync working! Downloaded {} headers", end_height - start_height); | ||
| Ok(()) | ||
| } else { | ||
| let err = dash_spv::SpvError::Storage(dash_spv::StorageError::NotFound( | ||
| "Client not initialized".to_string(), | ||
| )); | ||
| callbacks.call_completion(false, Some(&err.to_string())); | ||
| Err(err) | ||
| let msg = "No headers downloaded".to_string(); | ||
| eprintln!("❌ {}", msg); | ||
| Err(dash_spv::SpvError::Sync(dash_spv::SyncError::SyncFailed(msg))) |
🛠️ Refactor suggestion
Replace println! with proper logging.
Using println! in library code bypasses the logging infrastructure and can't be controlled by users.
-println!("Starting test sync...");
+tracing::info!("Starting test sync...");
-println!("Initial height: {}", start_height);
+tracing::info!("Initial height: {}", start_height);
-println!("Sync started successfully")
+tracing::info!("Sync started successfully")
-eprintln!("Failed to start sync: {}", e);
+tracing::error!("Failed to start sync: {}", e);
-println!("Final height: {}", end_height);
+tracing::info!("Final height: {}", end_height);
-println!("✅ Sync working! Downloaded {} headers", end_height - start_height);
+tracing::info!("✅ Sync working! Downloaded {} headers", end_height - start_height);
-eprintln!("❌ {}", msg);
+tracing::error!("❌ {}", msg);
🤖 Prompt for AI Agents
In dash-spv-ffi/src/client.rs around lines 495 to 535, replace all println! and
eprintln! calls with the project's logging macros (e.g., tracing::info!,
tracing::error! or log::info!/error! to match the repo convention) so output
goes through the logging infrastructure; keep the same messages and variables
(including success/failure details and header heights), and ensure errors return
after logging as before.
pub struct CoreSDKHandle {
    pub client: *mut FFIDashSpvClient,
}
🛠️ Refactor suggestion
Consider making the client field private to prevent external modification
The client field in CoreSDKHandle is public and contains a raw pointer. This allows external code to modify the pointer directly, which could lead to undefined behavior. Consider making this field private and providing controlled access methods if needed.
#[repr(C)]
pub struct CoreSDKHandle {
- pub client: *mut FFIDashSpvClient,
+ client: *mut FFIDashSpvClient,
}
If external access is required, consider adding a getter method that returns an immutable reference or performs necessary safety checks.
🤖 Prompt for AI Agents
In dash-spv-ffi/src/platform_integration.rs around lines 7 to 9, the
CoreSDKHandle struct exposes a public raw pointer field `client`, which allows
external code to mutate the pointer and cause UB; change the field to private
(e.g., `client: *mut FFIDashSpvClient` without pub) and add controlled accessors
instead: a safe getter that returns an Option<&FFIDashSpvClient> (and/or
Option<&mut FFIDashSpvClient>) after performing null checks and using unsafe
internally, plus explicit methods to set or replace the pointer if mutation is
required (performing necessary safety validation, ownership comments, and using
NonNull/Option to represent presence). Ensure documentation comments outline
safety guarantees and keep raw pointer manipulation confined to the impl block.
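A sketch of the controlled-access idea from the prompt above, assuming the pointer becomes private; the accessor name is illustrative and not part of the existing FFI surface:

```rust
#[repr(C)]
pub struct CoreSDKHandle {
    client: *mut FFIDashSpvClient, // no longer publicly mutable
}

impl CoreSDKHandle {
    /// Returns a shared reference to the client if the pointer is non-null.
    ///
    /// # Safety
    /// The caller must guarantee that the pointer, when non-null, points to a
    /// live FFIDashSpvClient for the duration of the returned borrow.
    pub unsafe fn client(&self) -> Option<&FFIDashSpvClient> {
        if self.client.is_null() {
            None
        } else {
            Some(&*self.client)
        }
    }
}
```

Raw-pointer manipulation then stays confined to this impl block, as the prompt suggests.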
impl FFIResult {
    fn error(code: FFIErrorCode, message: &str) -> Self {
        set_last_error(message);
        FFIResult {
            error_code: code as i32,
            error_message: crate::dash_spv_ffi_get_last_error(),
        }
    }
}
Potential dangling pointer in FFIResult::error_message
The error_message field stores a pointer obtained from dash_spv_ffi_get_last_error(), which points to data in the global LAST_ERROR. If another error is set after this FFIResult is created but before it's consumed, the pointer could become invalid, leading to undefined behavior.
Consider either:
- Documenting that FFIResult must be consumed immediately
- Using a more robust error handling mechanism that ensures pointer validity
- Copying the error message instead of storing a pointer
impl FFIResult {
fn error(code: FFIErrorCode, message: &str) -> Self {
set_last_error(message);
+ // WARNING: The returned FFIResult must be consumed immediately
+ // as the error_message pointer may become invalid if another error is set
FFIResult {
error_code: code as i32,
error_message: crate::dash_spv_ffi_get_last_error(),
}
}
}
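For the third option (copying the message instead of pointing into the global buffer), a sketch of what an owned variant could look like; the struct and free function here are hypothetical and would require a matching change to the C header:

```rust
use std::ffi::CString;
use std::os::raw::c_char;

#[repr(C)]
pub struct OwnedFFIResult {
    pub error_code: i32,
    /// Heap-allocated copy owned by this result; freed with owned_ffi_result_free,
    /// so later calls to set_last_error cannot invalidate it.
    pub error_message: *mut c_char,
}

fn owned_error(code: i32, message: &str) -> OwnedFFIResult {
    // Strip interior NULs so CString::new cannot fail on dynamic messages.
    let sanitized: String = message.chars().filter(|&c| c != '\0').collect();
    let msg = CString::new(sanitized).unwrap_or_default();
    OwnedFFIResult {
        error_code: code,
        error_message: msg.into_raw(),
    }
}

/// # Safety
/// `msg` must have been produced by owned_error (i.e. by CString::into_raw)
/// and must not be freed twice.
pub unsafe fn owned_ffi_result_free(msg: *mut c_char) {
    if !msg.is_null() {
        drop(CString::from_raw(msg));
    }
}
```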
pub async fn create_filter(&self) -> Result<FilterLoad, SpvError> {
    let addresses = self.addresses.read().await;
    let outpoints = self.outpoints.read().await;
    let data_elements = self.data_elements.read().await;

    // Calculate total elements
    let total_elements =
        addresses.len() as u32 + outpoints.len() as u32 + data_elements.len() as u32;

    let elements = std::cmp::max(self.config.elements, total_elements);

    // Create new filter
    let mut new_filter = BloomFilter::new(
        elements,
        self.config.false_positive_rate,
        self.config.tweak,
        self.config.flags,
    )
    .map_err(|e| SpvError::General(format!("Failed to create bloom filter: {:?}", e)))?;

    // Add all watched elements
    for address in addresses.iter() {
        self.add_address_to_filter(&mut new_filter, address)?;
    }

    for outpoint in outpoints.iter() {
        new_filter.insert(&outpoint_to_bytes(outpoint));
    }

    for data in data_elements.iter() {
        new_filter.insert(data);
    }

    // Update stats
    if self.config.enable_stats {
        let mut stats = self.stats.write().await;
        stats.recreations += 1;
        stats.items_added = total_elements as u64;
        stats.current_false_positive_rate =
            new_filter.estimate_false_positive_rate(total_elements);
    }

    // Store the new filter
    let filter_load = FilterLoad::from_bloom_filter(&new_filter);
    *self.filter.write().await = Some(new_filter);

    Ok(filter_load)
}
🛠️ Refactor suggestion
Potential deadlock from multiple lock acquisitions.
The create_filter method acquires multiple read locks simultaneously. While these are read locks, if another operation tries to acquire write locks in a different order, it could deadlock.
Consider acquiring all needed data in a single critical section:
pub async fn create_filter(&self) -> Result<FilterLoad, SpvError> {
- let addresses = self.addresses.read().await;
- let outpoints = self.outpoints.read().await;
- let data_elements = self.data_elements.read().await;
+ // Clone the data to avoid holding locks during filter creation
+ let (addresses, outpoints, data_elements) = {
+ let addr = self.addresses.read().await.clone();
+ let outp = self.outpoints.read().await.clone();
+ let data = self.data_elements.read().await.clone();
+ (addr, outp, data)
+ };
🤖 Prompt for AI Agents
In dash-spv/src/bloom/manager.rs around lines 88 to 135, avoid holding multiple
read locks across the entire filter-creation path: acquire each lock only to
clone the minimal data you need (e.g., addresses.clone() or
addresses.iter().cloned().collect(), outpoints.clone(), data_elements.clone()),
drop the locks immediately, then build the bloom filter from those local copies;
when updating shared state later, acquire write locks only briefly and in a
consistent order (e.g., stats write lock then filter write lock) to update stats
and replace the stored filter.
let entry = ChainLockEntry {
    chain_lock: chain_lock.clone(),
    received_at: std::time::SystemTime::now(),
    validated: true,
};
ChainLocks loaded from storage are assumed to be validated
ChainLocks loaded from storage are automatically marked as validated: true without checking if they were actually validated before storage. This could be problematic if:
- ChainLocks were stored before validation
- Masternode data has changed since storage
- Validation rules have changed
Consider storing the validation status with the ChainLock or re-validating on load:
let entry = ChainLockEntry {
chain_lock: chain_lock.clone(),
received_at: std::time::SystemTime::now(),
- validated: true,
+ validated: false, // Mark as unvalidated and queue for validation
};
🤖 Prompt for AI Agents
In dash-spv/src/chain/chainlock_manager.rs around lines 398 to 402, the code
unconditionally sets ChainLockEntry.validated = true for ChainLocks loaded from
storage; instead, either persist the validation status with each stored
ChainLock and restore that flag here, or call the ChainLock validation routine
on load using current masternode/quorum state and set validated based on that
result; update the ChainLockEntry struct/storage schema if you choose to persist
the flag, and if re-validation fails mark validated = false (and optionally
enqueue the ChainLock for re-validation or rejection) so the runtime never
assumes stored ChainLocks are valid without verification.
// Check reorg depth - account for checkpoint sync
let reorg_depth = if let Some(state) = chain_state {
    if state.synced_from_checkpoint && state.sync_base_height > 0 {
        // During checkpoint sync, both current_tip.height and fork.fork_height
        // should be interpreted relative to sync_base_height

        // For checkpoint sync:
        // - current_tip.height is absolute blockchain height
        // - fork.fork_height might be from genesis-based headers
        // We need to compare relative depths only

        // If the fork is from headers that started at genesis,
        // we shouldn't compare against the full checkpoint height
        if fork.fork_height < state.sync_base_height {
            // This fork is from before our checkpoint - likely from genesis-based headers
            // This scenario should be rejected at header validation level, not here
            tracing::warn!(
                "Fork detected from height {} which is before checkpoint base height {}. \
                 This suggests headers from genesis were received during checkpoint sync.",
                fork.fork_height,
                state.sync_base_height
            );

            // For now, reject forks that would reorg past the checkpoint
            return Err(format!(
                "Cannot reorg past checkpoint: fork height {} < checkpoint base {}",
                fork.fork_height, state.sync_base_height
            ));
        } else {
            // Normal case: both heights are relative to checkpoint
            current_tip.height.saturating_sub(fork.fork_height)
        }
    } else {
        // Normal sync mode
        current_tip.height.saturating_sub(fork.fork_height)
    }
} else {
    // Fallback to original logic when no chain state provided
    current_tip.height.saturating_sub(fork.fork_height)
};
💡 Verification agent
🧩 Analysis chain
Complex checkpoint sync logic needs additional validation.
The checkpoint-aware reorg depth calculation is complex and handles multiple edge cases. The logic for determining whether a fork is from genesis-based headers vs checkpoint-based headers could be error-prone.
Consider adding more defensive checks and unit tests for edge cases:
🏁 Script executed:
#!/bin/bash
# Search for tests covering checkpoint sync scenarios
rg -A 10 "checkpoint.*sync|sync.*checkpoint" --type rust --glob "*test*.rs"
# Look for existing test coverage of the reorg depth calculation
ast-grep --pattern 'fn $TEST_NAME($$$) {
$$$
checkpoint
$$$
reorg_depth
$$$
}'
Length of output: 6316
🏁 Script executed:
#!/bin/bash
# Show the checkpoint‐sync fork detector test to verify coverage of error vs. normal reorg_depth cases
rg -n -A 30 "fn test_fork_detection_with_checkpoint_sync" dash-spv/src/chain/fork_detector_test.rs
Length of output: 1711
Add direct unit tests for reorg_depth paths in chain/reorg.rs
While test_fork_detection_with_checkpoint_sync in fork_detector_test.rs covers the error case when fork_height < sync_base_height, we still lack direct, focused tests for the branches in dash-spv/src/chain/reorg.rs that compute reorg_depth. Please add tests to cover:
- The fallback path when chain_state is None
- The normal sync path (synced_from_checkpoint == false)
- The valid checkpoint sync path (synced_from_checkpoint == true && fork_height ≥ sync_base_height)
- Boundary conditions (e.g. fork_height == sync_base_height, very large heights)

Suggested locations and names:
- New file dash-spv/src/chain/reorg_test.rs with tests such as:
  - fn test_reorg_depth_no_state()
  - fn test_reorg_depth_normal_sync()
  - fn test_reorg_depth_checkpoint_sync_valid()
  - fn test_reorg_depth_checkpoint_sync_boundary()
You may also add a defensive debug_assert! in the checkpoint branch to ensure state.sync_base_height > 0, catching mis-configurations early.
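A sketch of what the boundary and fallback tests could look like; `reorg_depth_for` below is a stand-in for however the branch logic ends up being exposed (it would likely need to be factored out of the fork-handling code first):

```rust
#[cfg(test)]
mod reorg_depth_tests {
    /// Stand-in for the checkpoint-aware depth calculation described above.
    fn reorg_depth_for(
        tip_height: u32,
        fork_height: u32,
        sync_base_height: Option<u32>, // Some(base) when synced_from_checkpoint
    ) -> Result<u32, String> {
        match sync_base_height {
            Some(base) if fork_height < base => Err(format!(
                "Cannot reorg past checkpoint: fork height {} < checkpoint base {}",
                fork_height, base
            )),
            _ => Ok(tip_height.saturating_sub(fork_height)),
        }
    }

    #[test]
    fn test_reorg_depth_checkpoint_sync_boundary() {
        // fork_height == sync_base_height is the shallowest fork still accepted
        assert_eq!(reorg_depth_for(2_000_100, 2_000_000, Some(2_000_000)), Ok(100));
        // one block below the checkpoint base must be rejected
        assert!(reorg_depth_for(2_000_100, 1_999_999, Some(2_000_000)).is_err());
    }

    #[test]
    fn test_reorg_depth_no_state() {
        // fallback path: plain saturating subtraction
        assert_eq!(reorg_depth_for(150, 100, None), Ok(50));
        assert_eq!(reorg_depth_for(100, 150, None), Ok(0));
    }
}
```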
) -> Result<ReorgEvent, String> {
    // Create a checkpoint of the current chain state before making any changes
    let chain_state_checkpoint = chain_state.clone();

    // Track headers that were successfully stored for potential rollback
    let mut stored_headers: Vec<BlockHeader> = Vec::new();

    // Perform all operations in a single atomic-like block
    let result = async {
        // Step 1: Rollback wallet state if UTXO rollback is available
        if wallet_state.rollback_manager().is_some() {
            wallet_state
                .rollback_to_height(reorg_data.common_height, storage_manager)
                .await
                .map_err(|e| format!("Failed to rollback wallet state: {:?}", e))?;
        }

        // Step 2: Disconnect blocks from the old chain
        for header in &reorg_data.disconnected_headers {
            // Mark transactions as unconfirmed if rollback manager not available
            if wallet_state.rollback_manager().is_none() {
                for txid in &reorg_data.affected_tx_ids {
                    wallet_state.mark_transaction_unconfirmed(txid);
                }
            }

            // Remove header from chain state
            chain_state.remove_tip();
        }

        // Step 3: Connect blocks from the new chain and store them
        let mut current_height = reorg_data.common_height;
        for header in &fork.headers {
            current_height += 1;

            // Add header to chain state
            chain_state.add_header(*header);

            // Store the header - if this fails, we need to rollback everything
            storage_manager.store_headers(&[*header]).await.map_err(|e| {
                format!("Failed to store header at height {}: {:?}", current_height, e)
            })?;

            // Only record successfully stored headers
            stored_headers.push(*header);
        }

        Ok::<ReorgEvent, String>(ReorgEvent {
            common_ancestor: reorg_data.common_ancestor,
            common_height: reorg_data.common_height,
            disconnected_headers: reorg_data.disconnected_headers,
            connected_headers: fork.headers.clone(),
            affected_transactions: reorg_data.affected_transactions,
        })
    }
    .await;

    // If any operation failed, attempt to restore the chain state
    match result {
        Ok(event) => Ok(event),
        Err(e) => {
            // Restore the chain state to its original state
            *chain_state = chain_state_checkpoint;

            // Log the rollback attempt
            tracing::error!(
                "Reorg failed, restored chain state. Error: {}. \
                 Successfully stored {} headers before failure.",
                e,
                stored_headers.len()
            );

            // Note: We cannot easily rollback the wallet state or storage operations
            // that have already been committed. This is a limitation of not having
            // true database transactions. The error message will indicate this partial
            // state to the caller.
            Err(format!(
                "Reorg failed after partial application. Chain state restored, \
                 but wallet/storage may be in inconsistent state. Error: {}. \
                 Consider resyncing from a checkpoint.",
                e
            ))
        }
    }
Partial rollback could leave system in inconsistent state.
The code correctly identifies that without true database transactions, a failed reorg could leave the system partially updated. This is a significant issue that should be addressed.
Consider implementing a transaction-like mechanism:
- Batch all storage operations and apply them atomically
- Implement a write-ahead log (WAL) for crash recovery
- Add a "reorg in progress" flag to prevent concurrent operations
- Consider using a database that supports transactions (like SQLite) for critical state
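As a rough illustration of the third bullet, a "reorg in progress" guard can be added without any storage changes; the `reorg_in_progress` field referenced in the usage comment is hypothetical:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

/// Guard acquired before any wallet or storage mutation during a reorg;
/// the flag is cleared on drop even if the reorg returns early with an error.
struct ReorgGuard<'a> {
    flag: &'a AtomicBool,
}

impl<'a> ReorgGuard<'a> {
    fn try_acquire(flag: &'a AtomicBool) -> Option<Self> {
        // Only one reorg may run at a time; concurrent callers get None.
        flag.compare_exchange(false, true, Ordering::SeqCst, Ordering::SeqCst)
            .ok()
            .map(|_| ReorgGuard { flag })
    }
}

impl Drop for ReorgGuard<'_> {
    fn drop(&mut self) {
        self.flag.store(false, Ordering::SeqCst);
    }
}

// Sketch of use inside the reorg path:
// let _guard = ReorgGuard::try_acquire(&self.reorg_in_progress)
//     .ok_or("another reorg is already in progress")?;
// // 1. compute the full set of header writes in memory,
// // 2. roll back the wallet,
// // 3. persist the headers in one batched storage call so a failure
// //    leaves storage either fully old or fully new.
```

Batching the header writes (the first bullet) would still need a storage-level API; the guard only prevents concurrent reorgs from interleaving.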
let mut iterations = 0;
const MAX_ITERATIONS: u32 = 1_000_000; // Reasonable limit for chain traversal

loop {
    if let Ok(Some(height)) = storage.get_header_height(&current_hash) {
        // Found it in our chain
        return Ok((current_hash, height));
    }

    // Get the previous block
    if let Ok(Some(header)) = storage.get_header(&current_hash) {
        current_hash = header.prev_blockhash;

        // Safety check: don't go back too far
        if current_hash == BlockHash::all_zeros() {
            return Err("Reached genesis without finding common ancestor".to_string());
        }

        // Prevent infinite loops in case of corrupted chain
        iterations += 1;
        if iterations > MAX_ITERATIONS {
            return Err(format!("Exceeded maximum iterations ({}) while searching for common ancestor - possible corrupted chain", MAX_ITERATIONS));
        }
🛠️ Refactor suggestion
MAX_ITERATIONS value is extremely high.
Using 1,000,000 as MAX_ITERATIONS could cause performance issues. This would mean traversing up to 1 million blocks, which is excessive.
Make the limit configurable and use a more reasonable default:
fn find_common_ancestor(
&self,
_chain_state: &ChainState,
fork_point: &BlockHash,
storage: &dyn ChainStorage,
) -> Result<(BlockHash, u32), String> {
let mut current_hash = *fork_point;
let mut iterations = 0;
- const MAX_ITERATIONS: u32 = 1_000_000; // Reasonable limit for chain traversal
+ // Use max_reorg_depth as the limit since we shouldn't go back further anyway
+ let max_iterations = self.max_reorg_depth * 2; // Some buffer for safety
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In dash-spv/src/chain/reorg.rs around lines 401 to 423 the hardcoded
MAX_ITERATIONS = 1_000_000 is too large; make the limit configurable and use a
reasonable default. Replace the module-level const with a parameter or config
field (e.g., max_iterations: u32) passed into this function (or read from a
runtime config struct), set a sensible default such as 10_000, validate/clamp
the value to a safe range, and use that parameter in the iterations check and
error message so callers can tune traversal limits without risking OOM/CPU
storms.
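A small sketch of the configurable limit suggested in the prompt; the field name and default are illustrative:

```rust
pub struct ReorgConfig {
    /// Upper bound on ancestor-search iterations during fork resolution.
    pub max_ancestor_search: u32,
}

impl Default for ReorgConfig {
    fn default() -> Self {
        Self { max_ancestor_search: 10_000 }
    }
}

impl ReorgConfig {
    /// Clamp to a sane range so a bad config cannot reintroduce the
    /// million-iteration traversal.
    fn effective_limit(&self) -> u32 {
        self.max_ancestor_search.clamp(100, 100_000)
    }
}
```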
This reverts commit 90084d5.
- Removed DashSPVFFI target from Package.swift
- DashSPVFFI module now provided by unified SDK in dashpay-ios
- Updated SwiftDashCoreSDK to have no dependencies
- Added comments explaining standalone build limitations

SwiftDashCoreSDK now relies on the unified SDK's DashSPVFFI module, which is available when used as a dependency in dashpay-ios but not for standalone builds.

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>
When no peers support compact filters, the sequential sync manager now properly transitions to the next phase instead of getting stuck. This fixes the issue where masternode lists weren't being synced to the chain tip.

Changes:
- Check return value of start_sync_headers and transition if it returns false
- Add current_phase_needs_execution to detect phases that need execution after transition
- Modified check_timeout to execute pending phases before checking for timeouts

This ensures Platform SDK can fetch quorum public keys at recent heights by keeping masternode lists synced to the chain tip.

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>
- Move diff count increment to success case only to ensure accurate tracking
- Add debug logging for diff completion checks and state tracking
- Handle storage height exceeding tip gracefully by adjusting to available data
- Fix phase completion logic to verify target height is actually reached
- Re-start masternode sync if it reports complete but hasn't reached target
- Add masternode engine state logging after applying diffs

This improves sync reliability when dealing with partial syncs, phase transitions, and edge cases where the requested height exceeds available data.

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>
- Add test-utils to workspace members
- Add dashcore-test-utils dependency to dash-spv-ffi for testing
- Add log crate to dash for improved debugging

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>
- Add DKGWindow struct to represent DKG mining windows
- Implement get_dkg_window_for_height() to calculate mining windows
- Implement get_dkg_windows_in_range() for efficient batch processing
- Add NetworkLLMQExt trait for network-specific LLMQ operations
- Fix LLMQ_100_67 interval from 2 to 24 (platform consensus quorum)
- Add proper platform activation height checks for mainnet/testnet

This enables smart masternode list fetching by calculating exactly which blocks can contain quorum commitments based on DKG intervals.

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>
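To illustrate the window calculation this commit describes, a simplified model is shown below; the struct, function, and parameter values are illustrative, not the actual consensus parameters or the API added by the commit:

```rust
/// Simplified model of a DKG mining window: the commitment for the quorum
/// whose DKG cycle contains `height` can only appear inside this block range.
struct DkgWindow {
    start: u32,
    end: u32,
}

/// Compute the mining window of the DKG cycle containing `height`, given the
/// cycle interval and the mining-window offsets within a cycle.
fn dkg_window_for_height(height: u32, interval: u32, win_start: u32, win_end: u32) -> DkgWindow {
    let cycle_start = (height / interval) * interval;
    DkgWindow {
        start: cycle_start + win_start,
        end: cycle_start + win_end,
    }
}

// Example: with interval = 24 and a mining window at offsets 10..=18,
// height 1_000_000 belongs to the cycle starting at 999_984, so only
// blocks 999_994..=1_000_002 can carry that cycle's commitment.
```

Restricting masternode-list fetches to these windows is what lets the sync skip blocks that cannot contain new quorum commitments.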
- Add ffiClientHandle property to expose FFI client pointer
- Make client property @ObservationIgnored to prevent observation issues
- Add debug logging for detailed sync progress callbacks
- Document that handle is nil until start() is called

This enables Platform SDK to access Core chain data through the unified SDK for proof verification and other platform operations.

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>
- Add Copy, Clone, Debug, PartialEq, Eq traits to FFIErrorCode
- Enables better error handling and testing patterns
- Improves ergonomics when working with error codes

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>
- Add minimal platform integration test for basic null checks
- Add comprehensive safety test suite covering:
  - Null pointer handling for all FFI functions
  - Buffer overflow prevention
  - Memory safety (double-free, use-after-free)
  - Thread safety with concurrent access
  - Error propagation and thread-local storage
- Add CLAUDE.md with FFI build and integration instructions
- Add PLAN.md documenting smart quorum fetching algorithm design
- Add integration tests for smart fetch algorithm
- Add test script for validating smart fetch behavior

These tests ensure FFI safety and document the smart fetch implementation for future maintenance and development.

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>
…ames

Remove underscore prefixes from quorum_type and core_chain_locked_height parameters in ffi_dash_spv_get_quorum_public_key to follow standard naming conventions. The underscores were likely added to suppress unused parameter warnings but are no longer needed.

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>
Add TODO comments clarifying height parameter usage in storage APIs:
- get_header() method currently expects storage-relative heights (0-based from sync_base_height)
- Document confusion between storage indexes vs blockchain heights
- Suggest future refactor to use absolute blockchain heights for better UX
- Add comments to both trait definition and disk implementation

This addresses a common confusion point where blockchain operations expect absolute heights but storage APIs use relative indexing.

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>
fc254c5 to cbed4a1
Summary by CodeRabbit
New Features
Improvements
Documentation
Tests
Chores