Feat/mempool bloom filters chain management wip 2 #88
Conversation
Remove hardcoded localhost addresses from the mainnet and testnet seed lists. These were causing unnecessary connection attempts during initial sync.
- Add DetailedSyncProgress struct with performance metrics
  - Headers per second, bytes per second, ETA calculation
  - Sync stage tracking (Connecting, Querying, Downloading, etc.)
- Add SyncStage enum for granular progress states
- Add filter_sync_available field to track peer capabilities
- Add supports_compact_filters() helper to PeerInfo
- Add progress channel to DashSpvClient for real-time updates
- Add is_filter_sync_available() method to check peer support

This enables better progress reporting and performance monitoring for SPV sync operations.
- Add intermediate handshake states for better debugging:
  - VersionReceivedVerackSent: Version received, verack sent
  - VerackReceived: Verack received from peer
- Add tracking flags for version_received, verack_received, version_sent
- Improve logging throughout handshake process
- Better error messages with handshake state information

This makes handshake debugging easier and provides clearer state transitions during peer connection establishment.
- Reduce header sync timeout from 10s to 4s for faster failure detection
- Change status update interval from 5s to 500ms for smoother progress
- Add detailed logging for sync process including locator info
- Improve empty locator handling for genesis sync

These changes provide more responsive sync behavior and better real-time progress feedback.
- Enhanced status display with better formatting and metrics
- Improved network connection handling and error recovery
- Updated network constants for better peer management
- Enhanced multi-peer connection logic with better peer selection
- Improved filter sync with better error handling
- General sync module improvements for reliability

These changes collectively improve the stability and performance of the SPV client's network and synchronization operations.
Change header sync timeout from 4 seconds to 500 milliseconds for more responsive timeout detection during sync operations.
- Add Swift SDK package with comprehensive Dash Core functionality
- Implement SPV client FFI bindings with async/await support
- Add HD wallet integration via key-wallet-ffi
- Create example iOS app demonstrating wallet and SPV features
- Include persistent wallet storage and transaction management
- Add network FFI bindings for Swift integration
- Update CLAUDE.md with build instructions and project structure
- Include build scripts for iOS targets (arm64, x86_64)
- Add comprehensive documentation and implementation plans

🤖 Generated with [Claude Code](https://claude.ai/code)
Co-Authored-By: Claude <[email protected]>
- Add env_logger dependency for better debugging
- Add FFIDetailedSyncProgress struct with comprehensive sync metrics
- Add FFISyncStage enum for granular sync state tracking
- Add sync-specific callbacks for progress and completion events
- Add test_sync function for verifying sync functionality
- Update cbindgen configuration to exclude internal callback types
- Enhance FFI client with sync progress channels and callbacks
- Update tests to handle new sync progress features

This enables FFI consumers to track detailed sync progress including headers per second, ETA, sync stages, and real-time updates.
- Add detailed project overview and structure
- Include build commands for Rust and FFI targets
- Add test commands and environment variables
- Document development commands (linting, formatting, docs)
- Include key features (Dash-specific and architecture)
- Add code style guidelines and constraints
- Document Git workflow and current status
- Add security considerations and API stability notes
- Include known limitations

This provides comprehensive guidance for Claude Code when working with the rust-dashcore repository.
- Simplify DashSPVFFI.swift by removing empty namespace enum
- Update dash_spv_ffi.h with new FFI types matching Rust implementation
  - Add FFISyncStage, FFIDetailedSyncProgress types
  - Update function signatures for enhanced sync tracking
- Add invalidArgument error case to DashSDKError
  - Provides better error handling for invalid inputs
  - Includes helpful error messages and recovery suggestions

These changes align the Swift SDK with the updated Rust FFI layer and improve error handling capabilities.
Add documentation file to provide Claude Code with context and guidance when working with the Swift SDK package. This helps maintain consistency and understanding of the SDK structure.
- Add EnhancedSyncProgressView for detailed sync progress display
- Add SettingsView for app configuration
- Add ModelContainerHelper for SwiftData management
- Update project configuration and build scripts
- Improve wallet services with better error handling
- Enhanced UI with platform-specific colors and layouts
- Add comprehensive documentation:
  - DEBUG_SUMMARY.md for debugging tips
  - IOS_APP_SETUP_GUIDE.md for setup instructions
  - LINKING_FIX.md for resolving linking issues
  - SPM_LINKING_SOLUTION.md for Swift Package Manager setup
- Add DashSPVFFI.xcframework support
- Update build phase script for better library management
- Improve wallet models with better persistence support

This provides a more complete example app demonstrating HD wallet functionality with SPV sync capabilities and improved user experience.
Remove libdash_spv_ffi.a from version control. Binary artifacts should be built locally and not committed to the repository.
- Build and copy key_wallet_ffi libraries for both simulator and device
- Create symlinks for both dash_spv_ffi and key_wallet_ffi libraries
- Default symlinks point to simulator versions for development
- Improve output messages with detailed library information

This enables the iOS example app to use both SPV and wallet FFI functionality with proper library management.
- Add mempool_filter module for transaction filtering
- Implement mempool transaction storage and callbacks
- Add mempool transaction types and event handling
- Update storage layer to support mempool transactions
- Add FFI bindings for mempool events
- Add Swift types for mempool transactions

This enables SPV clients to track and filter mempool transactions, providing real-time transaction updates before blockchain confirmation.
- Enhance block processor for better balance updates
- Update FFI wallet bindings with improved error handling
- Add immature balance tracking to Swift Balance model
- Improve persistent wallet management with better error handling
- Add comprehensive balance calculations in WalletManager
- Add SPV client verification methods and retry management
- Implement comprehensive mempool support in Swift SDK
- Add WatchStatusView for monitoring address watch status
- Enhance WalletService with improved SPV integration
- Update FFI bridge with new callback handlers
- Improve error handling and state management
- Add support for mempool transactions in example app
- Update build script with improved error handling
- Update root CLAUDE.md with recent development notes
- Add mempool implementation details to dash-spv CLAUDE.md
- Update dash-spv README with mempool features
- Enhance Swift SDK documentation with new features
- Update BUILD.md with improved build instructions
- Enhance address derivation with better error handling
- Update BIP32 implementation for improved compatibility
- Refactor DIP9 address derivation methods
- Update Dash network constants and checkpoint data
- Improve examples and test coverage
- Add mempool configuration options to client config
- Enhance consistency checks and filter sync
- Improve network connection and message handling
- Add comprehensive chainlock and instantlock validation
- Refactor sync module for better modularity
- Update multi-peer connection management
- Add new error types for mempool operations
- Update dependencies and test configurations

This improves the overall reliability and performance of the SPV client with better error handling and validation mechanisms.
Add terminal block data to support deterministic masternode list synchronization. Terminal blocks contain masternode list snapshots at specific heights, enabling efficient sync without downloading all intermediate blocks.

- Add mainnet terminal blocks (heights 1088640-2000000)
- Add testnet terminal blocks (heights 387480-900000)
- Implement terminal block loading and validation
- Add tests and examples for terminal block usage
- Include script for fetching terminal block data from nodes

This enables SPV clients to quickly sync masternode lists by jumping to known terminal blocks instead of processing every block from genesis.
Add common library file extensions to prevent build artifacts from being committed:
- *.a (static libraries)
- *.so (Linux shared libraries)
- *.dylib (macOS dynamic libraries)
- *.dll (Windows dynamic libraries)
Add simple example that verifies genesis block generation for mainnet and testnet networks. This helps ensure the genesis block constants are correctly defined.
- Enhanced sync managers for headers and masternodes
- Improved error handling and logging in sync components
- Added support for sequential sync coordination
- Better timeout and retry logic for sync operations
- Move implementation documentation to organized structure
- Add bloom filter specification
- Document sequential sync design
- Include peer reputation system documentation
- Document fixes and implementation summaries
- Add multi-peer connection management
- Improve handshake protocol handling
- Enhanced message routing and error handling
- Better connection resilience and retry logic
- Add peer reputation tracking foundation
…ntegration

- Add mempool transaction tracking with configurable strategies
- Implement balance calculations including unconfirmed transactions
- Enhanced wallet address management and UTXO tracking
- Improved error handling with specific error types
- Add transaction relevance checking for watched addresses
- Add mempool tracking configuration to FFI
- Implement mempool-specific event callbacks
- Add balance queries including mempool transactions
- Enhanced error handling with specific error codes
- Support for recording sent transactions
- Add BIP37 bloom filter message type definitions
- Define bloom filter constants (max size, hash functions)
- Add bloom filter update flags enum
- Update network protocol with bloom filter support
- Prepare for future bloom filter implementation
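The constants and update flags the commit refers to come directly from BIP37; the identifiers below are illustrative and may differ from the crate's actual names, but the values are fixed by the specification (36,000-byte maximum filter size, 50 hash functions, three update modes):

```rust
// Values defined by BIP37; names here are illustrative placeholders.
const MAX_BLOOM_FILTER_SIZE: usize = 36_000; // maximum filter length in bytes
const MAX_HASH_FUNCS: u32 = 50; // maximum number of hash functions

// nFlags field of the filterload message: controls how the remote node
// auto-updates the filter when a transaction matches.
#[derive(Debug, Clone, Copy, PartialEq)]
enum BloomFlags {
    None = 0,       // never update the filter on a match
    All = 1,        // add any matching outpoint to the filter
    PubkeyOnly = 2, // add outpoints only for P2PK / bare multisig outputs
}

fn main() {
    assert!(MAX_BLOOM_FILTER_SIZE == 36_000);
    println!("{:?} = {}", BloomFlags::PubkeyOnly, BloomFlags::PubkeyOnly as u32);
}
```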
- Add mempool transaction tracking to Swift SDK
- Implement mempool event types and callbacks
- Enhanced balance calculations with unconfirmed transactions
- Improved transaction model with mempool state
- Add error handling types for better debugging
- Update example app to demonstrate mempool features
- Created private async helper `calculate_filter_header_height()` in StatusDisplay
- Replaced duplicated filter header height calculation logic in:
  - get_sync_progress() method (lines 78-103)
  - update_status_display() method with terminal UI (lines 159-163)
  - update_status_display() method without terminal UI (lines 251-255)
- This refactoring improves code clarity and maintainability by eliminating duplication

Fixes CodeRabbitAI review comment from PR dashpay#80
Extract the duplicated header height calculation logic from sync_progress() and update_status_display() into a private helper method calculate_header_height_with_logging(). This improves maintainability and reduces code repetition as identified by CodeRabbitAI in PR dashpay#80. The helper method supports optional debug logging to preserve the existing logging behavior in update_status_display().
…sing

Replace the panic-inducing expect() call when parsing start_height_str with proper error handling using map_err() and the ? operator. This ensures the program gracefully exits with a descriptive error message instead of panicking when an invalid start height is provided.

Fixes CodeRabbitAI review comment from PR dashpay#80
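The map_err()/`?` pattern the commit describes can be sketched as follows; the function name and error type are hypothetical stand-ins for the actual CLI code:

```rust
// Hypothetical helper illustrating expect() -> map_err() + `?` conversion.
// Instead of `start_height_str.parse::<u32>().expect("bad height")`, which
// panics, we map the parse error into a descriptive message the caller can
// surface before exiting gracefully.
fn parse_start_height(start_height_str: &str) -> Result<u32, String> {
    start_height_str
        .parse::<u32>()
        .map_err(|e| format!("invalid start height '{}': {}", start_height_str, e))
}

fn main() -> Result<(), String> {
    // The `?` operator propagates the error instead of panicking.
    let height = parse_start_height("1088640")?;
    println!("starting sync from height {}", height);
    Ok(())
}
```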
Implemented comprehensive tests to verify actual reentrancy behavior and thread safety in the FFI callback system as requested by CodeRabbitAI:

- Enhanced test_callback_reentrancy to:
  - Actually trigger callbacks through real FFI operations
  - Test reentrancy by attempting to call FFI functions from within callbacks
  - Detect potential deadlocks with timeout monitoring
  - Track and verify reentrancy protection mechanisms
- Added test_callback_thread_safety to:
  - Invoke callbacks from multiple threads concurrently
  - Monitor concurrent callback executions
  - Detect race conditions through shared state access
  - Verify thread-safe behavior without deadlocks or data races

These tests ensure the FFI layer properly handles nested/concurrent callback invocations without issues.
Changed valid_count update logic in dash-spv/src/storage/disk.rs to only increment when offset equals current valid_count. This ensures valid_count always reflects the count of contiguous valid headers without gaps, preventing sentinel headers from being incorrectly counted as valid when headers are stored non-sequentially.
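The contiguous-count rule can be sketched in miniature (the struct and method here are simplified stand-ins, not the actual disk storage code):

```rust
// Simplified sketch of the valid_count rule: the counter only advances when
// a header is written exactly at the end of the contiguous run, so a write
// past a gap (e.g. a sentinel header) is never counted.
struct HeaderIndex {
    valid_count: u64,
}

impl HeaderIndex {
    fn record_store(&mut self, offset: u64) {
        if offset == self.valid_count {
            self.valid_count += 1;
        }
        // Writes at any other offset leave valid_count unchanged; the
        // contiguous prefix is extended only one header at a time.
    }
}

fn main() {
    let mut idx = HeaderIndex { valid_count: 0 };
    for offset in [0, 1, 5, 2] {
        idx.record_store(offset);
    }
    println!("contiguous headers: {}", idx.valid_count); // the gap at 5 was skipped
}
```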
Previously, get_mempool_balance relied solely on the sign of net_amount to determine transaction effects on an address, which could lead to incorrect balance calculations.

This fix:
- Examines transaction outputs directly to calculate actual balance changes
- Validates that calculated changes are consistent with transaction metadata
- Handles edge cases like zero amounts and conflicting signs
- Adds comprehensive unit tests covering various scenarios

The implementation now:
1. Calculates incoming funds by checking outputs to the address
2. Notes that outgoing funds would require UTXO knowledge (commented)
3. Applies the validated balance change rather than trusting net_amount alone
4. Ensures no negative balances are returned
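The output-scan step (point 1 above) can be sketched as follows; the `TxOut` shape and function name are hypothetical simplifications of the real transaction types:

```rust
// Hypothetical sketch of computing incoming funds by scanning outputs paying
// the watched script, rather than trusting a precomputed net_amount sign.
struct TxOut {
    value: u64,             // amount in duffs
    script_pubkey: Vec<u8>, // locking script
}

fn incoming_to_script(outputs: &[TxOut], watched_script: &[u8]) -> u64 {
    outputs
        .iter()
        .filter(|o| o.script_pubkey == watched_script)
        .map(|o| o.value)
        .sum()
    // Outgoing funds are NOT computed here: that would require knowing which
    // of our UTXOs the inputs spend, as the commit message notes.
}

fn main() {
    let outs = vec![
        TxOut { value: 5_000, script_pubkey: vec![0xAA] },
        TxOut { value: 700, script_pubkey: vec![0xBB] },
    ];
    println!("incoming: {}", incoming_to_script(&outs, &[0xAA]));
}
```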
…iring new ones

Fixed potential deadlock issue in BloomFilterManager by ensuring write locks are explicitly released before acquiring subsequent locks. This addresses the CodeRabbitAI review comment from PR dashpay#80 about nested write locks in the bloom manager.

Changes:
- Wrapped write lock acquisitions in explicit scopes in add_address(), add_outpoint(), and add_data() methods to ensure locks are dropped before acquiring filter lock
- Refactored clear() method to use explicit scopes for each lock acquisition

This ensures consistent lock ordering and prevents potential deadlocks when multiple threads access the bloom filter manager concurrently.
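The explicit-scope pattern looks like this in miniature; the struct and fields are illustrative, not the actual BloomFilterManager API:

```rust
use std::sync::RwLock;

// Illustrative sketch: the first write lock lives inside an explicit block
// so it is dropped before the second lock is taken, guaranteeing the two
// locks are never held simultaneously.
struct Manager {
    watched: RwLock<Vec<String>>,
    filter: RwLock<Vec<u8>>,
}

impl Manager {
    fn add_address(&self, addr: String) {
        {
            let mut watched = self.watched.write().unwrap();
            watched.push(addr);
        } // <- `watched` write guard dropped here, before the next acquisition

        // Safe to acquire the second lock: no nested write locks.
        let mut filter = self.filter.write().unwrap();
        filter.push(0xFF); // placeholder for "rebuild/update the filter"
    }
}

fn main() {
    let m = Manager { watched: RwLock::new(vec![]), filter: RwLock::new(vec![]) };
    m.add_address("XexampleAddr".to_string());
    println!("watched={} filter_bytes={}", m.watched.read().unwrap().len(), m.filter.read().unwrap().len());
}
```

Without the inner block, the `watched` guard would live until the end of `add_address`, overlapping the `filter` lock and creating a nested-lock window that a second code path acquiring the locks in the opposite order could deadlock against.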
Implement atomic-like behavior for wallet rollback and storage operations during chain reorganization to prevent inconsistent state.

Changes:
- Create chain state checkpoint before any modifications
- Wrap all reorg operations in a single async block
- Restore chain state on any failure
- Track successfully stored headers for better error reporting
- Add clear error messages indicating partial state issues

This addresses the CodeRabbitAI review comment about ensuring atomicity of operations during reorganization to maintain consistency.

Note: Due to StorageManager trait limitations, we cannot fully rollback wallet state or storage operations. The error messages guide users to resync from a checkpoint if needed.
…orum_public_key

Added size parameter to ffi_dash_spv_get_quorum_public_key function to prevent buffer overflow vulnerabilities. The function now validates that the output buffer is at least 48 bytes before writing to it.

- Added `out_pubkey_size` parameter to function signature
- Implemented size validation that returns InvalidArgument error if buffer is too small
- Updated function documentation to reflect new size requirements
- Added NotImplemented error code for unimplemented platform functions
- Created platform_integration module with placeholder implementations
- Added test to verify buffer size validation works correctly

This addresses the security issue identified by CodeRabbitAI in PR dashpay#80.
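The size guard can be sketched as below; the 48-byte BLS public key length and the InvalidArgument error follow the commit message, while the function signature is a simplified safe-Rust stand-in for the raw FFI pointer interface:

```rust
// Sketch of the buffer-size validation, with the raw `*mut u8` + size FFI
// parameters modeled as a safe slice for clarity.
const QUORUM_PUBKEY_LEN: usize = 48; // BLS public key length per the commit

fn write_quorum_pubkey(out: &mut [u8], pubkey: &[u8; QUORUM_PUBKEY_LEN]) -> Result<(), &'static str> {
    // Validate before writing: an undersized buffer must fail instead of
    // overflowing, mirroring the InvalidArgument error in the FFI layer.
    if out.len() < QUORUM_PUBKEY_LEN {
        return Err("InvalidArgument: output buffer must be at least 48 bytes");
    }
    out[..QUORUM_PUBKEY_LEN].copy_from_slice(pubkey);
    Ok(())
}

fn main() {
    let mut buf = [0u8; 64];
    write_quorum_pubkey(&mut buf, &[7u8; QUORUM_PUBKEY_LEN]).unwrap();
    println!("first byte written: {}", buf[0]);
}
```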
…t-aware reorg protection

This commit implements several key improvements to the Dash SPV client:

## Network & Peer Discovery
- Enable DNS-first peer discovery for immediate startup without hardcoded peers
- Remove hardcoded peer IPs in favor of automatic DNS seed discovery
- Fix network connection timeouts (increase from 15ms to 100ms) to prevent TCP partial reads
- Add comprehensive timeout documentation with historical context
- Change terminal UI from default-enabled to opt-in via --terminal-ui flag

## Chain Management & Sync
- Add checkpoint-aware reorganization protection to prevent reorgs past checkpoint base
- Improve fork detection during checkpoint sync with proper height calculations
- Use blockchain height instead of storage height for checkpoint-aware progress tracking
- Enhance headers locator logic for checkpoint sync (checkpoint-only, no genesis fallback)
- Add validation for headers received from before checkpoint base height

## FFI Safety & SDK
- Add buffer size validation to ffi_dash_spv_get_quorum_public_key for overflow protection
- Update Swift SDK FFI interface with out_pubkey_size parameter
- Minor formatting improvements in Swift configuration files

## Bug Fixes
- Fix TCP connection timeout issues that caused message corruption and checksum failures
- Resolve peer connection issues during startup when no explicit peers configured
- Improve test stability with corrected storage manager initialization

These changes enhance reliability, security, and performance of the SPV client, particularly for checkpoint sync scenarios and network connectivity edge cases.
When header sync timeout triggers re-sending of requests, the same headers were being processed twice, causing false fork detection warnings. Added duplicate header check using storage.get_header_height_by_hash() to skip already processed headers, preventing unnecessary fork warnings and reorg operations during normal sync.
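The duplicate check can be sketched with a map standing in for `storage.get_header_height_by_hash()` (the real lookup is async storage access; this illustrative version is synchronous):

```rust
use std::collections::HashMap;

// Illustrative stand-in: the stored-headers index maps header hash -> height,
// mirroring what storage.get_header_height_by_hash() consults. A re-sent
// batch is filtered down to only genuinely new headers, so re-processing a
// known header can never be mistaken for a fork.
fn filter_new_headers(
    stored: &HashMap<[u8; 32], u32>,
    batch: Vec<[u8; 32]>,
) -> Vec<[u8; 32]> {
    batch
        .into_iter()
        .filter(|hash| !stored.contains_key(hash))
        .collect()
}

fn main() {
    let mut stored = HashMap::new();
    stored.insert([1u8; 32], 100); // header already processed at height 100
    let resent_batch = vec![[1u8; 32], [2u8; 32]];
    let fresh = filter_new_headers(&stored, resent_batch);
    println!("new headers to process: {}", fresh.len());
}
```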
…idation

When fully synced, the SPV client was not requesting masternode list updates for new blocks, causing ChainLock validation to fail with "No masternode lists in engine" errors. ChainLocks require masternode lists from 8 blocks prior (per DIP24) to validate signatures.

Changes:
- Add masternode diff requests in handle_new_headers when masternodes enabled
- Implement handle_post_sync_mnlistdiff to process diffs in FullySynced phase
- Add gap detection and catch-up logic for missing masternode lists
- Enhanced logging to show required masternode list heights for debugging

This ensures ChainLocks can be validated immediately when they arrive for new blocks after the initial sync is complete.
- Replace get_header_by_height with get_header method calls
- Fix GetMnListDiff message struct naming
- Comment out unimplemented chain manager functionality
…onization

- Add FFI functions to configure start height and wallet creation time
- Implement store_headers_from_height in disk storage for checkpoint-based sync
- Improve header sync error handling for checkpoint scenarios with minimal headers
- Add Swift SDK support for enabling checkpoint sync
- Update terminal block data modules (formatting fixes)

This allows SPV clients to start syncing from predefined checkpoints instead of genesis, significantly reducing initial sync time.
- Update README with Unified SDK option for iOS integration
- Add UNIFIED_SDK.md documenting the unified architecture approach
- Update FFI crate READMEs with notes about Unified SDK availability
- Modify SwiftDashCoreSDK to use static library linking for unified framework
- Fix SPVClient type from OpaquePointer to UnsafeMutablePointer<FFIDashSpvClient>

The Unified SDK combines Core (dash-spv-ffi, key-wallet-ffi) and Platform (dash-sdk-ffi) functionality into a single optimized XCFramework. This provides a 79.4% size reduction while maintaining full API compatibility.

See UNIFIED_SDK.md for integration details and platform-ios/packages/rs-sdk-ffi for the build implementation.
- Add await keywords to async initializers in test setup
- Update Package.swift to include macOS library paths for testing
- Fix maxPeers assertion to match actual default value (12 instead of 8)
- Add aarch64-apple-darwin target paths for both DashSPVFFI and KeyWalletFFI

These changes allow the SwiftDashCoreSDK tests to compile and pass on macOS. All 21 tests now pass successfully.
- Bloom Filter Module: Added 40 tests covering BloomFilterBuilder, BloomFilterManager, BloomFilterStats, and utility functions
- Validation Module: Added 60 tests covering HeaderValidator with all ValidationModes, edge cases, and error scenarios
- Chain Module: Added tests for fork_detector, orphan_pool, and checkpoints covering edge cases and thread safety
- Client Module: Added tests for config, watch_manager, block_processor, consistency, and message_handler

These tests significantly improve test coverage for critical SPV functionality including:
- Bloom filter construction and management
- Header validation with different modes (None, Basic, Full)
- Fork detection and orphan block handling
- Configuration validation and builder pattern
- Watch item management and notifications
- Block processing and consistency checking
- transaction_processor_test.rs: Transaction relevance detection, output matching, and statistics
- utxo_test.rs: UTXO management, spendability rules, and serialization
- wallet_state_test.rs: State persistence, concurrent access, and rollback handling
- utxo_rollback_test.rs: Rollback manager, snapshot creation, and reorganization handling

Note: Tests are currently commented out in mod.rs due to API visibility issues that need resolution
…ests

- error_handling_test.rs: Network, storage, and validation error scenarios
- error_recovery_integration_test.rs: End-to-end error recovery scenarios
- error_types_test.rs: Error type conversions and formatting

Tests cover error propagation, recovery mechanisms, and graceful degradation across all layers.

Note: Some tests have compilation issues due to trait method mismatches that need resolution
Document the test enhancement effort including:
- 163 passing tests across bloom, validation, and chain modules
- Test implementation status for all modules
- Known issues with client and wallet module tests
- Recommendations for future improvements
Document that wallet test modules are provided but currently commented out due to API visibility issues that need resolution
Add explicit checks in validate_chain_basic and validate_chain_full to respect ValidationMode::None and always return Ok() without performing any validation checks
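The early-return guard can be sketched as follows; the enum variants follow the commit messages in this PR, while the function signature and header representation are simplified assumptions:

```rust
// Sketch of the ValidationMode::None guard: validation functions return Ok
// immediately before running any checks. Header data is modeled as raw u32s
// for brevity.
#[derive(Debug, PartialEq)]
enum ValidationMode {
    None,  // skip all validation
    Basic, // structural checks only
    Full,  // full consensus checks
}

fn validate_chain_basic(mode: &ValidationMode, headers: &[u32]) -> Result<(), String> {
    if *mode == ValidationMode::None {
        return Ok(()); // respect ValidationMode::None: no checks at all
    }
    // Placeholder for the real basic validation logic.
    if headers.is_empty() {
        return Err("empty header chain".to_string());
    }
    Ok(())
}

fn main() {
    // An empty chain passes under None but fails under Basic.
    println!("{:?}", validate_chain_basic(&ValidationMode::None, &[]));
    println!("{:?}", validate_chain_basic(&ValidationMode::Basic, &[]));
}
```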
…files

The data/mainnet/mod.rs file referenced 21 JSON files but only terminal_block_2000000.json exists. This caused compile-time errors with include_str! macros.

Fixed by:
- Removing references to non-existent files
- Adding unit tests to validate all JSON files parse correctly
- Note: This file appears to be auto-generated but unused in compilation

The actual terminal block loading happens in src/sync/terminal_block_data/mainnet.rs, which already correctly references only the existing file.
- Fix unsafe CString usage in test_client.rs by using CStr::from_ptr
- Add error handling for missing masternode fields in fetch_terminal_blocks.py
- Remove unnecessary f-string prefixes in fetch_terminal_blocks.py
- Clarify error message in chain_tip.rs for single active tip
- Add MAX_PENDING_CHAINLOCKS limit to prevent unbounded queue growth
- Remove unnecessary header clone in orphan_pool.rs
- Fix incorrectly prefixed storage parameter in filters.rs
Walkthrough

This update introduces a comprehensive suite of new features and modules to the Dash SPV client, including full support for bloom filters, chain state management with fork detection and reorganization, checkpoint and terminal block systems, mempool tracking, enhanced error handling, and extensive documentation and testing. The changes span core logic, FFI interfaces, configuration, event handling, and supporting utilities.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant CLI
    participant ClientConfig
    participant DashSpvClient
    participant NetworkManager
    participant SyncManager
    participant ChainState
    participant Wallet
    participant BloomFilterManager
    participant EventHandler
    User->>CLI: Launch with arguments
    CLI->>ClientConfig: Parse/configure (incl. mempool, start height)
    CLI->>DashSpvClient: Instantiate with config
    DashSpvClient->>NetworkManager: Connect to peers
    DashSpvClient->>SyncManager: Start sync (sequential phases)
    SyncManager->>ChainState: Update headers, detect forks, handle reorgs
    SyncManager->>Wallet: Update wallet state
    SyncManager->>BloomFilterManager: Build/update bloom filter
    SyncManager->>EventHandler: Emit events (block, tx, balance)
    EventHandler->>CLI: Display status/progress
    User->>CLI: Ctrl-C (shutdown)
    CLI->>DashSpvClient: Stop client
    DashSpvClient->>NetworkManager: Disconnect peers
```
Actionable comments posted: 21
🧹 Nitpick comments (43)
dash-spv/data/testnet/mod.rs (1)
1-110: Consider improving error handling in auto-generated terminal block loader.

While the repetitive pattern is understandable for auto-generated code, the silent error handling on deserialization failures could mask important issues. Consider logging errors instead of silently ignoring them.
For better error visibility, consider this pattern:
```diff
 if let Ok(state) = serde_json::from_str::<TerminalBlockMasternodeState>(data) {
     manager.add_state(state);
+} else {
+    log::warn!("Failed to deserialize terminal block data for height {}", height);
 }
```

Alternatively, if the auto-generation script could be modified to reduce repetition:
```rust
const TERMINAL_BLOCKS: &[u32] = &[387480, 400000, 450000, /* ... */];

pub fn load_testnet_terminal_blocks(manager: &mut TerminalBlockDataManager) {
    for &height in TERMINAL_BLOCKS {
        let data = match height {
            387480 => include_str!("terminal_block_387480.json"),
            400000 => include_str!("terminal_block_400000.json"),
            // ... other blocks
            _ => continue,
        };
        if let Ok(state) = serde_json::from_str::<TerminalBlockMasternodeState>(data) {
            manager.add_state(state);
        } else {
            log::warn!("Failed to deserialize terminal block data for height {}", height);
        }
    }
}
```

TEST_SUMMARY.md (1)
1-112: LGTM: Comprehensive and well-structured test coverage summary.

This document provides excellent visibility into the current test status with:
- Clear breakdown of 163 passing tests across critical modules
- Detailed coverage descriptions for each module
- Specific identification of API mismatch issues
- Concrete recommendations for future work
- Good organization with clear sections and status indicators
The summary effectively communicates both achievements (comprehensive coverage in bloom, validation, and chain modules) and remaining work (API fixes for client and wallet tests).
Consider these minor improvements:
- Add more specific details about the API mismatches to help with resolution
- Consider a table format for test statistics for better readability
- Add estimated effort or timeline for fixing the blocked tests
```diff
 ## Test Statistics
 - **Total Tests Written**: ~250+ tests
 - **Currently Passing**: 163 tests (40 bloom + 54 validation + 69 chain)
 - **Blocked by API Issues**: ~90+ tests (client and wallet modules)
+
+| Module | Tests Written | Tests Passing | Status |
+|--------|---------------|---------------|---------|
+| Bloom Filter | 40 | 40 | ✅ Complete |
+| Validation | 54 | 54 | ✅ Complete |
+| Chain | 69 | 69 | ✅ Complete |
+| Client | ~50 | 0 | ⚠️ API Mismatch |
+| Wallet | ~40 | 0 | ⚠️ API Mismatch |
```

dash-spv/examples/test_initial_sync.rs (4)
22-25: Consider making peer addresses configurable or using DNS seeds.

The hardcoded IP addresses may become stale over time, potentially causing test failures. Consider using DNS seeds or making these configurable via environment variables.

```diff
-    config.peers = vec![
-        "54.68.235.201:19999".parse().unwrap(),
-        "52.40.219.41:19999".parse().unwrap(),
-    ];
+    // Use DNS seeds or environment variables for peer discovery
+    if let Ok(peers_env) = std::env::var("DASH_TEST_PEERS") {
+        config.peers = peers_env.split(',').map(|p| p.parse().unwrap()).collect();
+    } else {
+        config.peers = vec![
+            "54.68.235.201:19999".parse().unwrap(),
+            "52.40.219.41:19999".parse().unwrap(),
+        ];
+    }
```
18-18: Potential race condition in temporary directory naming.

Using `std::process::id()` for temporary directory names may cause conflicts if multiple instances run simultaneously. Consider using a more unique identifier.

```diff
-    let data_dir = PathBuf::from(format!("/tmp/dash-spv-initial-sync-{}", std::process::id()));
+    let data_dir = PathBuf::from(format!(
+        "/tmp/dash-spv-initial-sync-{}-{}",
+        std::process::id(),
+        std::time::SystemTime::now()
+            .duration_since(std::time::UNIX_EPOCH)
+            .unwrap()
+            .as_nanos()
+    ));
```
37-37: Consider making sync timing configurable.

The hardcoded 10-second wait times may not be suitable for all environments. Consider making these configurable or using a more dynamic approach.

```diff
-    tokio::time::sleep(Duration::from_secs(10)).await;
+    let sync_timeout = std::env::var("DASH_SYNC_TIMEOUT")
+        .ok()
+        .and_then(|s| s.parse().ok())
+        .unwrap_or(10);
+    tokio::time::sleep(Duration::from_secs(sync_timeout)).await;
```

Also applies to: 48-48
54-54: Handle cleanup errors more gracefully.

The cleanup operation uses `let _ = std::fs::remove_dir_all(data_dir);`, which silently ignores errors. Consider logging cleanup failures.

```diff
-    let _ = std::fs::remove_dir_all(data_dir);
+    if let Err(e) = std::fs::remove_dir_all(&data_dir) {
+        eprintln!("Warning: Failed to clean up test directory {:?}: {}", data_dir, e);
+    }
```

dash-spv/src/bloom/utils.rs (2)
7-22: Consider supporting additional script types beyond P2PKH.

The function only handles P2PKH scripts. Consider extending support to other common script types like P2SH or P2WPKH, or documenting the limitation clearly.

```diff
-/// Extract pubkey hash from P2PKH script
-pub fn extract_pubkey_hash(script: &Script) -> Option<Vec<u8>> {
+/// Extract pubkey hash from P2PKH script
+/// Currently only supports P2PKH (Pay-to-Public-Key-Hash) scripts
+/// TODO: Add support for P2SH, P2WPKH, and other script types
+pub fn extract_pubkey_hash(script: &Script) -> Option<Vec<u8>> {
```
25-30: Optimize memory allocation for outpoint serialization.

The function pre-allocates capacity but could be more efficient by returning a fixed-size array instead of a heap-allocated vector.

```diff
-/// Convert outpoint to bytes for bloom filter
-pub fn outpoint_to_bytes(outpoint: &OutPoint) -> Vec<u8> {
-    let mut bytes = Vec::with_capacity(36);
-    bytes.extend_from_slice(&outpoint.txid[..]);
-    bytes.extend_from_slice(&outpoint.vout.to_le_bytes());
-    bytes
-}
+/// Convert outpoint to bytes for bloom filter
+pub fn outpoint_to_bytes(outpoint: &OutPoint) -> [u8; 36] {
+    let mut bytes = [0u8; 36];
+    bytes[..32].copy_from_slice(&outpoint.txid[..]);
+    bytes[32..].copy_from_slice(&outpoint.vout.to_le_bytes());
+    bytes
+}
```

This avoids heap allocation for a fixed-size result.
dash-spv/examples/test_header_count.rs (2)
19-19: Consider using a unique test directory name.

The hardcoded directory name "test-header-count-data" may conflict with concurrent test runs. Consider using a unique identifier.

```diff
-    let storage_dir = "test-header-count-data";
+    let storage_dir = format!("test-header-count-data-{}", std::process::id());
```
46-46: Consider making timing configurable.

As in the previous example, hardcoded timing may not be suitable for all environments.

```diff
-    tokio::time::sleep(Duration::from_secs(20)).await;
+    let sync_duration = std::env::var("DASH_SYNC_DURATION")
+        .ok()
+        .and_then(|s| s.parse().ok())
+        .unwrap_or(20);
+    tokio::time::sleep(Duration::from_secs(sync_duration)).await;
```

Also applies to: 77-77
dash-spv/src/chain/chainlock_test.rs (2)
17-21: Consider using more realistic test data.

The ChainLock signatures are all zeros or simple patterns. Consider using more realistic test data, or at least documenting that these are test signatures.

```diff
-    signature: dashcore::bls_sig_utils::BLSSignature::from([0; 96]),
+    signature: dashcore::bls_sig_utils::BLSSignature::from([0; 96]), // Test signature - not valid
```

Also applies to: 48-52, 86-92
96-96: Use consistent error message formatting.

The error message uses the `format!` macro but could be simplified and made more consistent.

```diff
-    .expect(&format!("ChainLock at height {} should process successfully", height));
+    .unwrap_or_else(|e| panic!("ChainLock at height {} should process successfully: {}", height, e));
```

dash-spv/src/network/message_handler.rs (1)
78-81: Address the TODO comment for GetData message handling.

The TODO comment indicates incomplete functionality for GetData messages, which could affect protocol compliance.
GetData message handling is important for peer protocol compliance. The current implementation just marks these as unhandled, which may cause issues with peers expecting responses.
Would you like me to help implement proper GetData message handling or open an issue to track this task?
dash-spv/src/client/config_test.rs (1)
129-131: Consider using a more realistic script for testing.

An empty script might not adequately exercise the watch script functionality. Consider using a realistic script such as P2PKH or P2SH.

```diff
-let script = dashcore::ScriptBuf::new();
+// Create a realistic P2PKH script for testing
+let pubkey_hash = [0u8; 20]; // Example hash
+let script = dashcore::ScriptBuf::new_p2pkh(&dashcore::PubkeyHash::from_byte_array(pubkey_hash));
```
64-69: Consider using the network genesis constant for clarity.

While the hardcoded genesis hash is correct for testnet, using the network's genesis block constant would be clearer and more maintainable.

```diff
-let genesis_hash = BlockHash::from_byte_array([
-    0x2c, 0xbc, 0xf8, 0x3b, 0x62, 0x91, 0x3d, 0x56,
-    0xf6, 0x05, 0xc0, 0xe5, 0x81, 0xa4, 0x88, 0x72,
-    0x83, 0x94, 0x28, 0xc9, 0x2e, 0x5e, 0xb7, 0x6c,
-    0xd7, 0xad, 0x94, 0xbc, 0xaf, 0x0b, 0x00, 0x00
-]);
+// Use the testnet genesis block hash
+let genesis_hash = dashcore::blockdata::constants::genesis_block(Network::Testnet).header.block_hash();
```

dash-spv/src/client/message_handler_test.rs (2)
25-72: Consider using a struct for test components.

The function returns a large tuple with 11 elements, which is error-prone and hard to maintain. Consider creating a `TestComponents` struct to improve readability and type safety:

```rust
struct TestComponents {
    network: Box<dyn NetworkManager>,
    storage: Box<dyn StorageManager>,
    sync_manager: SequentialSyncManager,
    config: ClientConfig,
    stats: Arc<RwLock<SpvStats>>,
    filter_processor: Option<FilterNotificationSender>,
    block_processor_tx: mpsc::UnboundedSender<BlockProcessingTask>,
    wallet: Arc<RwLock<Wallet>>,
    mempool_filter: Option<Arc<MempoolFilter>>,
    mempool_state: Arc<RwLock<MempoolState>>,
    event_tx: mpsc::UnboundedSender<SpvEvent>,
}
```
74-116: Consider testing with non-empty headers.

While the test correctly verifies error-free handling, it only tests with empty headers. Consider adding a test case with actual header data to ensure proper processing.
dash-spv/src/client/status_display.rs (1)
117-121: Multiple TODOs indicate incomplete implementation.

Several fields are hardcoded or marked as TODO. Consider tracking these in issues for proper implementation.
Would you like me to create GitHub issues to track these TODO items for proper implementation of peer count, sync status flags, and sync timing?
dash-spv/docs/TERMINAL_BLOCKS.md (2)
28-31: Add a language specifier to the code block (add `text` after the opening backticks):

```text
Request: Genesis (0) → Current (1,276,272)
Diff size: ~500MB, covering 1.2M blocks
```

34-37: Add a language specifier to the code block (add `text` after the opening backticks):

```text
Request: Terminal Block (900,000) → Current (1,276,272)
Diff size: ~100MB, covering 376K blocks
```

dash-spv/src/chain/orphan_pool.rs (2)

200-213: Remove the unused variable.

The variable `_block_hash` is declared but never used. Apply this diff to remove it:

```diff
 pub fn process_new_block(&mut self, block_hash: &BlockHash) -> Vec<BlockHeader> {
     let orphans = self.get_orphans_by_prev(block_hash);
     // Remove these from the pool since we're processing them
     for header in &orphans {
-        let _block_hash = header.block_hash();
         self.remove_orphan(&header.block_hash());
     }
     orphans
 }
```
234-370: Excellent test coverage!

The tests comprehensively cover all orphan pool functionality. Consider removing the trailing empty line at line 370 for consistency.
dash-spv/src/chain/chain_tip.rs (1)
195-327: Good test coverage with honest documentation.

The tests cover the main functionality well, and the comment about the difficulty of testing atomic failure scenarios is appreciated. Consider removing the trailing empty line at line 327.
dash-spv/src/bloom/manager.rs (1)
218-318: Well-implemented bloom filter operations.

The methods properly handle transaction processing, statistics tracking, and address extraction. Consider removing the trailing empty line at line 318.
dash-spv/src/network/mod.rs (1)
310-314: Track the Headers2 protocol fix.

Headers2 support is disabled with a TODO comment. This should be tracked properly to ensure it gets fixed.
Would you like me to create an issue to track the Headers2 decompression fix? This seems like an important protocol feature that should be re-enabled once the issues are resolved.
dash-spv/docs/utxo_rollback.md (1)
151-155: Consider adding example test output.

The test command is provided, but it would be helpful to show expected output or test results so developers can verify their implementation is working correctly.
Add example output after the test command:

```bash
cargo test utxo_rollback
```

Example output:

```text
running 6 tests
test tests::utxo_rollback::test_basic_rollback ... ok
test tests::utxo_rollback::test_transaction_status_tracking ... ok
test tests::utxo_rollback::test_complex_reorganization ... ok
test tests::utxo_rollback::test_snapshot_persistence ... ok
test tests::utxo_rollback::test_conflicting_transactions ... ok
test tests::utxo_rollback::test_consistency_validation ... ok

test result: ok. 6 passed; 0 failed; 0 ignored; 0 measured; 0 filtered out
```

dash-spv/src/mempool_filter.rs (2)

71-88: Consider optimizing address extraction.

The current implementation extracts all addresses from transaction outputs even when only checking specific watch items. For large transactions, this could be inefficient. Consider lazy evaluation with early returns:

```diff
 pub fn is_transaction_relevant(&self, tx: &Transaction, network: Network) -> bool {
     let txid = tx.txid();
-    // Check if any input or output affects our watched addresses
-    let mut addresses = HashSet::new();
-
-    // Extract addresses from outputs
-    for (idx, output) in tx.output.iter().enumerate() {
-        if let Ok(address) = Address::from_script(&output.script_pubkey, network) {
-            addresses.insert(address.clone());
-            tracing::trace!("Transaction {} output {} has address: {}", txid, idx, address);
-        }
-    }
-
-    tracing::debug!(
-        "Transaction {} has {} addresses from outputs, checking against {} watched items",
-        txid,
-        addresses.len(),
-        self.watch_items.len()
-    );
-
     // Check against watched items
     for item in &self.watch_items {
         match item {
             WatchItem::Address { address, .. } => {
-                tracing::trace!(
-                    "Checking if transaction {} contains watched address: {}",
-                    txid,
-                    address
-                );
-                if addresses.contains(address) {
-                    tracing::debug!(
-                        "Transaction {} is relevant: contains watched address {}",
-                        txid,
-                        address
-                    );
-                    return true;
+                // Check outputs for this specific address
+                for output in &tx.output {
+                    if let Ok(addr) = Address::from_script(&output.script_pubkey, network) {
+                        if addr == *address {
+                            tracing::debug!(
+                                "Transaction {} is relevant: contains watched address {}",
+                                txid,
+                                address
+                            );
+                            return true;
+                        }
+                    }
                 }
             }
```
574-574: Document integration test requirements.

Several tests are marked `#[ignore]` with the reason "requires real Wallet implementation". Consider adding more context about which specific Wallet features are needed. Update the ignore attributes with more specific information:

```diff
-    #[ignore = "requires real Wallet implementation"]
+    #[ignore = "requires real Wallet implementation with InstantSend lock checking and fee calculation"]
```

This helps developers understand what needs to be implemented to enable these tests.
Also applies to: 618-618, 652-652
dash-spv/src/error.rs (1)
71-84: Consider using structured error types instead of String.

While the new NetworkError variants are useful, using `String` for the error details in `AddressParse` and `SystemTime` loses type information. Consider more structured error types:

```diff
-    #[error("Address parse error: {0}")]
-    AddressParse(String),
+    #[error("Address parse error: {0}")]
+    AddressParse(#[from] std::net::AddrParseError),

-    #[error("System time error: {0}")]
-    SystemTime(String),
+    #[error("System time error: {0}")]
+    SystemTime(#[from] std::time::SystemTimeError),
```

dash-spv-ffi/tests/unit/test_async_operations.rs (1)
240-301: Comprehensive reentrancy detection logic.

The reentrancy test effectively uses atomic operations to detect concurrent callback invocations. The deadlock detection timeout (1 second) is reasonable for testing.
Minor suggestion: consider adding a comment explaining what `dash_spv_ffi_client_test_sync` does and why it is suitable for reentrancy testing.

dash-spv/src/chain/reorg.rs (1)
197-215: Consider the visibility of the collect_reorg_data method.

The conditional compilation makes `collect_reorg_data` public in test builds but private otherwise. This dual visibility can lead to confusion. Consider making it consistently private and exposing a test-only wrapper method instead:

```diff
-    #[cfg(test)]
-    pub async fn collect_reorg_data(
-        &self,
-        chain_state: &ChainState,
-        fork: &Fork,
-        storage_manager: &dyn StorageManager,
-    ) -> Result<ReorgData, String> {
-        self.collect_reorg_data_internal(chain_state, fork, storage_manager).await
-    }
-
-    #[cfg(not(test))]
     async fn collect_reorg_data(
         &self,
         chain_state: &ChainState,
         fork: &Fork,
         storage_manager: &dyn StorageManager,
     ) -> Result<ReorgData, String> {
         self.collect_reorg_data_internal(chain_state, fork, storage_manager).await
     }
+
+    #[cfg(test)]
+    pub async fn test_collect_reorg_data(
+        &self,
+        chain_state: &ChainState,
+        fork: &Fork,
+        storage_manager: &dyn StorageManager,
+    ) -> Result<ReorgData, String> {
+        self.collect_reorg_data(chain_state, fork, storage_manager).await
+    }
```

dash-spv/src/chain/chainlock_manager.rs (2)
278-294: Efficient cache eviction strategy.

The cache size enforcement uses IndexMap's ordered nature to remove the oldest entries first, and the batch removal approach is efficient.
Consider adding metrics/logging for cache evictions to monitor cache effectiveness.
```diff
 // Batch remove from both maps
 for (height, block_hash) in keys_to_remove {
     by_height.shift_remove(&height);
     by_hash.shift_remove(&block_hash);
 }
+debug!("Evicted {} ChainLock entries from cache", entries_to_remove);
```
421-452: Consider extracting the repeated error-handling pattern.

The get_stats method duplicates the lock-acquisition error handling. Consider extracting it into a helper:

```diff
+    /// Helper to create empty stats on lock error
+    fn empty_stats(&self) -> ChainLockStats {
+        ChainLockStats {
+            total_chain_locks: 0,
+            cached_by_height: 0,
+            cached_by_hash: 0,
+            highest_locked_height: None,
+            lowest_locked_height: None,
+            enforce_chain_locks: self.enforce_chain_locks,
+        }
+    }
+
     pub fn get_stats(&self) -> ChainLockStats {
         let by_height = match self.chain_locks_by_height.read() {
             Ok(guard) => guard,
-            Err(_) => return ChainLockStats {
-                total_chain_locks: 0,
-                cached_by_height: 0,
-                cached_by_hash: 0,
-                highest_locked_height: None,
-                lowest_locked_height: None,
-                enforce_chain_locks: self.enforce_chain_locks,
-            },
+            Err(_) => return self.empty_stats(),
         };
         let by_hash = match self.chain_locks_by_hash.read() {
             Ok(guard) => guard,
-            Err(_) => return ChainLockStats {
-                total_chain_locks: 0,
-                cached_by_height: 0,
-                cached_by_hash: 0,
-                highest_locked_height: None,
-                lowest_locked_height: None,
-                enforce_chain_locks: self.enforce_chain_locks,
-            },
+            Err(_) => return self.empty_stats(),
         };
```

dash-spv/src/bloom/stats.rs (1)
156-160: Consider making bandwidth estimation configurable.

The bandwidth-saved estimation uses a hardcoded multiplier of 10, which may not reflect actual network conditions. Consider making this configurable, or document the rationale:

```diff
+    /// Estimated average number of unrelated transactions per relevant transaction
+    const BANDWIDTH_MULTIPLIER: u64 = 10;
+
     pub fn record_transaction(&mut self, is_false_positive: bool, tx_size: usize) {
         self.stats.network_impact.transactions_received += 1;
         if is_false_positive {
             self.stats.network_impact.false_positive_transactions += 1;
         } else {
-            // Estimate bandwidth saved by not downloading unrelated transactions
-            // Assume average transaction size if this was a true positive
-            self.stats.network_impact.bandwidth_saved_bytes += (tx_size * 10) as u64;
-            // Rough estimate
+            // Estimate bandwidth saved by filtering out unrelated transactions
+            // Based on empirical observation that ~90% of transactions are unrelated
+            self.stats.network_impact.bandwidth_saved_bytes +=
+                (tx_size as u64) * Self::BANDWIDTH_MULTIPLIER;
         }
     }
```

dash-spv/src/network/connection.rs (1)
401-404: Consider using a standard logging format.

While enthusiasm for successful headers2 reception is understandable, the celebratory emoji may be too informal for production logs.

```diff
-    tracing::info!("🎉 Received Headers2 message from {}!", self.address);
+    tracing::info!("Received Headers2 message from {}", self.address);
```

dash-spv-ffi/src/callbacks.rs (1)
159-165: Consider standardizing the logging format.

While the detailed logging is helpful for debugging, the extensive use of emojis (🎯, ✅, ⚠️) may be too informal for production logs. Consider standard log prefixes instead:

```diff
-    tracing::info!("🎯 Calling block callback: height={}, hash={}", height, hash);
+    tracing::info!("Calling block callback: height={}, hash={}", height, hash);
-    tracing::info!("✅ Block callback completed");
+    tracing::info!("Block callback completed successfully");
-    tracing::warn!("⚠️ Block callback not set");
+    tracing::warn!("Block callback not set");
```

Also applies to: 176-199, 203-214, 225-242
dash-spv/src/chain/checkpoints.rs (1)
54-55: Handle protocol version parsing errors gracefully.

The protocol version extraction from the masternode list name could fail silently if the format is unexpected.
```diff
 self.masternode_list_name.as_ref().and_then(|name| {
     // Format: "ML{height}__{protocol_version}"
-    name.split("__").nth(1).and_then(|s| s.parse().ok())
+    name.split("__")
+        .nth(1)
+        .and_then(|s| s.parse().ok())
+        .or_else(|| {
+            tracing::warn!("Failed to parse protocol version from masternode list name: {}", name);
+            None
+        })
 })
```

dash-spv-ffi/src/client.rs (4)
81-81: Consider using stronger memory ordering for ID generation.

While `Relaxed` ordering works for unique ID generation, using `SeqCst` or `AcqRel` would provide stronger guarantees and better document the intent that IDs must be globally unique across all threads.

```diff
-    let id = CALLBACK_ID_COUNTER.fetch_add(1, Ordering::Relaxed);
+    let id = CALLBACK_ID_COUNTER.fetch_add(1, Ordering::SeqCst);
```
1833-1841: Simplify balance field calculations.

The balance calculations are correct but could be clearer without the line-wrapped addition:

```diff
 let balance = FFIBalance {
     confirmed: 0, // No confirmed balance in mempool
     pending: mempool_balance.pending.to_sat(),
     instantlocked: 0, // No confirmed instantlocked in mempool
-    mempool: mempool_balance.pending.to_sat()
-        + mempool_balance.pending_instant.to_sat(),
+    mempool: mempool_balance.pending.to_sat() + mempool_balance.pending_instant.to_sat(),
     mempool_instant: mempool_balance.pending_instant.to_sat(),
     total: mempool_balance.pending.to_sat() + mempool_balance.pending_instant.to_sat(),
 };
```
619-636: Potential lock contention during callback executionThe registry lock is held while executing callbacks, which could cause contention if callbacks are slow or attempt to register/unregister other callbacks (deadlock risk).
Consider cloning the callback info before execution to minimize lock duration:
```diff
-    let registry = CALLBACK_REGISTRY.lock().unwrap();
-    if let Some(CallbackInfo::Detailed {
-        progress_callback: Some(callback),
-        user_data,
-        ..
-    }) = registry.get(callback_data.callback_id)
-    {
-        callback(ffi_progress.as_ref(), *user_data);
-    }
+    let callback_info = {
+        let registry = CALLBACK_REGISTRY.lock().unwrap();
+        registry.get(callback_data.callback_id).cloned()
+    };
+
+    if let Some(CallbackInfo::Detailed {
+        progress_callback: Some(callback),
+        user_data,
+        ..
+    }) = callback_info
+    {
+        callback(ffi_progress.as_ref(), user_data);
+    }
```

Note: this would require implementing `Clone` for `CallbackInfo`, or extracting just the needed fields.
483-544: Test function included in production code.

This appears to be a diagnostic function with hardcoded delays and console output. Consider moving it to a test module or clearly documenting its purpose:

```diff
+/// Test function for diagnostic purposes only.
+/// Not intended for production use due to hardcoded delays.
+#[cfg(feature = "testing")]
 #[no_mangle]
 pub unsafe extern "C" fn dash_spv_ffi_client_test_sync(client: *mut FFIDashSpvClient) -> i32 {
```

dash-spv/src/chain/fork_detector.rs (1)
13-30: Consider using a proper error type instead of a static string.

While the validation is correct, returning `Result<Self, &'static str>` is unconventional. Consider defining a proper error type for better error-handling integration.
252-255: Consider using a constant for the local address.

Instead of parsing a hardcoded string at runtime, consider defining a constant or constructing the address directly:

```diff
-    // Parse the local address safely
-    let local_addr = "127.0.0.1:0"
-        .parse()
-        .map_err(|_| NetworkError::AddressParse("Failed to parse local address".to_string()))?;
+    // Use localhost address for SPV client
+    let local_addr = SocketAddr::from(([127, 0, 0, 1], 0));
```
```rust
dash_spv_ffi_string_destroy(FFIString { ptr: ptr::null_mut() });
dash_spv_ffi_array_destroy(FFIArray { data: ptr::null_mut(), len: 0 });
dash_spv_ffi_string_destroy(FFIString { ptr: ptr::null_mut(), length: 0 });
dash_spv_ffi_array_destroy(FFIArray { data: ptr::null_mut(), len: 0, capacity: 0 });
```
Fix incorrect FFIArray destruction call.
The dash_spv_ffi_array_destroy function expects a *mut FFIArray parameter, not an FFIArray by value. This will cause a compilation error.
Apply this fix:
- dash_spv_ffi_array_destroy(FFIArray { data: ptr::null_mut(), len: 0, capacity: 0 });
+ let null_array = Box::new(FFIArray { data: ptr::null_mut(), len: 0, capacity: 0 });
+ dash_spv_ffi_array_destroy(Box::into_raw(null_array));🤖 Prompt for AI Agents
In dash-spv-ffi/tests/security/test_security.rs at line 68, the call to
dash_spv_ffi_array_destroy incorrectly passes an FFIArray by value instead of a
pointer. Change the argument to pass a mutable pointer to the FFIArray instance,
ensuring the function receives a *mut FFIArray as required to fix the
compilation error.
```rust
#[test]
fn test_load_mainnet_terminal_blocks() {
    let mut manager = TerminalBlockDataManager::new();

    // Should start empty
    assert_eq!(manager.states.len(), 0);

    // Load terminal blocks
    load_mainnet_terminal_blocks(&mut manager);

    // Should have at least one block loaded
    assert!(manager.states.len() > 0, "No terminal blocks were loaded");

    // Verify the loaded block
    let state = manager.states.first().expect("Should have at least one state");
    assert_eq!(state.height, 2000000);
    assert!(!state.block_hash.is_empty());
}
```
💡 Verification agent
🧩 Analysis chain
Verify test assumptions about terminal block data.
The test assumes specific details about the loaded terminal block (height 2000000) and uses manager.states.first() to access the data. Ensure these assumptions remain valid if the terminal block data changes.
🏁 Script executed:
```bash
#!/bin/bash
# Description: Verify the structure of TerminalBlockDataManager and the terminal block data
# Expected: Find the states field structure and verify terminal block height

# Check the TerminalBlockDataManager structure
ast-grep --pattern 'struct TerminalBlockDataManager {
  $$$
}'

# Check for the states field and its type
ast-grep --pattern 'states: $_'

# Check the terminal block JSON content
cat dash-spv/data/mainnet/terminal_block_2000000.json | head -20
```

Length of output: 1127
Ensure test robustness for terminal block loading
Rather than relying on states.first(), which can break if more blocks are added or the load order changes, assert that any loaded state matches your expectations and then verify its fields.
Suggested update in dash-spv/data/mainnet/mod.rs (lines 69–87):
```diff
-// Verify the loaded block
-let state = manager.states.first().expect("Should have at least one state");
-assert_eq!(state.height, 2000000);
-assert!(!state.block_hash.is_empty());
+// Verify the expected terminal block is present
+assert!(
+    manager.states.iter().any(|s| s.height == 2000000),
+    "Expected a terminal block at height 2000000"
+);
+
+// Check its contents
+let state = manager
+    .states
+    .iter()
+    .find(|s| s.height == 2000000)
+    .unwrap();
+assert!(!state.block_hash.is_empty(), "Block hash should not be empty");
```

Tagging this as requiring a test update to guard against future data changes.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
#[test]
fn test_load_mainnet_terminal_blocks() {
    let mut manager = TerminalBlockDataManager::new();

    // Should start empty
    assert_eq!(manager.states.len(), 0);

    // Load terminal blocks
    load_mainnet_terminal_blocks(&mut manager);

    // Should have at least one block loaded
    assert!(manager.states.len() > 0, "No terminal blocks were loaded");

    // Verify the expected terminal block is present
    assert!(
        manager.states.iter().any(|s| s.height == 2000000),
        "Expected a terminal block at height 2000000"
    );

    // Check its contents
    let state = manager
        .states
        .iter()
        .find(|s| s.height == 2000000)
        .unwrap();
    assert!(!state.block_hash.is_empty(), "Block hash should not be empty");
}
```
🤖 Prompt for AI Agents
In dash-spv/data/mainnet/mod.rs around lines 69 to 87, the test currently checks
only the first loaded terminal block, which can fail if the order or number of
blocks changes. Update the test to assert that at least one loaded state matches
the expected height (e.g., 2000000) and then verify the fields of that matching
state instead of relying on the first element. This ensures the test remains
robust against future data changes.
```rust
#[test]
fn test_builder_pattern() {
    let path = PathBuf::from("/test/storage");
    let addr: SocketAddr = "1.2.3.4:9999".parse().unwrap();
```
Remove unused variable

The `addr` variable is declared but never used in this test.

```diff
-let addr: SocketAddr = "1.2.3.4:9999".parse().unwrap();
```
🤖 Prompt for AI Agents
In dash-spv/src/client/config_test.rs at line 65, the variable `addr` is
declared but never used. Remove the declaration of `addr` entirely to clean up
the code and eliminate the unused variable warning.
```rust
println!("\n🧪 Testing headers2 fix...\n");

let addr = "192.168.1.163:19999".parse().unwrap();
```
Replace hardcoded IP with configurable peer address
The hardcoded IP address 192.168.1.163:19999 appears to be a private network address and won't work for other users. Consider using a command-line argument or a well-known testnet peer.
```diff
-let addr = "192.168.1.163:19999".parse().unwrap();
+// Use a well-known testnet peer or make it configurable
+let addr = std::env::args().nth(1)
+    .unwrap_or_else(|| "testnet-seed.dashdot.io:19999".to_string())
+    .parse()
+    .expect("Invalid peer address");
```
🤖 Prompt for AI Agents
In dash-spv/examples/test_headers2_fix.rs at line 20, replace the hardcoded IP
address "192.168.1.163:19999" with a configurable peer address by accepting it
as a command-line argument or using a predefined testnet peer address. Modify
the code to parse the peer address from the input arguments or configuration to
make it flexible for different users and environments.
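One caveat with the suggested diff above: `SocketAddr`'s `FromStr` implementation only accepts literal `IP:port` pairs, so parsing a hostname fallback like `testnet-seed.dashdot.io:19999` would fail at runtime. A stdlib-only sketch of the configurable-peer idea that also handles hostnames via `ToSocketAddrs` (the `peer_addr` helper and the default address are illustrative, not part of the crate):

```rust
use std::env;
use std::net::{SocketAddr, ToSocketAddrs};

/// Resolve a peer address from the first CLI argument, falling back to a
/// default. `to_socket_addrs` performs DNS resolution, so hostnames work
/// as well as literal IP:port pairs.
fn peer_addr(default: &str) -> SocketAddr {
    let target = env::args().nth(1).unwrap_or_else(|| default.to_string());
    target
        .to_socket_addrs()
        .expect("invalid peer address")
        .next()
        .expect("address resolved to nothing")
}

fn main() {
    // A literal IP:port resolves without touching DNS.
    let addr = peer_addr("127.0.0.1:19999");
    println!("connecting to {}", addr);
}
```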
```rust
#[tokio::test]
async fn test_error_propagation() {
    let (
        mut network,
        mut storage,
        mut sync_manager,
        config,
        stats,
        filter_processor,
        block_processor_tx,
        wallet,
        mempool_filter,
        mempool_state,
        event_tx,
    ) = setup_test_components().await;

    let mut handler = MessageHandler::new(
        &mut sync_manager,
        &mut *storage,
        &mut *network,
        &config,
        &stats,
        &filter_processor,
        &block_processor_tx,
        &wallet,
        &mempool_filter,
        &mempool_state,
        &event_tx,
    );

    // Create a message that might cause an error in sync manager
    // For example, Headers2 with invalid data
    let headers2 = dashcore::network::message_headers2::Headers2Message {
        headers: vec![], // Empty headers might cause validation error
    };
    let message = NetworkMessage::Headers2(headers2);

    // Handle the message - error should be propagated
    let result = handler.handle_network_message(message).await;
    // The result depends on sync manager validation
    assert!(result.is_ok() || result.is_err());
}
}
```
🛠️ Refactor suggestion

Improve error propagation test effectiveness.

The test accepts both `Ok` and `Err` results: `assert!(result.is_ok() || result.is_err())` is a tautology that can never fail, so it verifies nothing about error propagation. Consider creating a specific scenario that guarantees an error to properly test error handling.

```diff
-    // The result depends on sync manager validation
-    assert!(result.is_ok() || result.is_err());
+    // For this test, we should verify that errors are properly propagated
+    // Consider creating a specific invalid message that will definitely fail
+    match result {
+        Ok(_) => {
+            // Empty headers might be valid in some cases
+        }
+        Err(e) => {
+            // Verify the error is the expected validation error
+            assert!(e.to_string().contains("validation") || e.to_string().contains("invalid"));
+        }
+    }
```

🤖 Prompt for AI Agents
In dash-spv/src/client/message_handler_test.rs around lines 547 to 589, the
test_error_propagation function currently accepts both Ok and Err results, which
does not effectively verify error propagation. Modify the test to create a
specific input scenario that is guaranteed to cause an error in the sync
manager, then assert that the result is an Err variant to properly validate
error handling behavior.
```rust
let runtime_handle = runtime.handle().clone();
let sync_callbacks_clone = sync_callbacks.clone();
let shutdown_signal_clone = client.shutdown_signal.clone();
let sync_handle = std::thread::spawn(move || {
    // Run monitoring loop
    let monitor_result = runtime_handle.block_on(async move {
        let mut guard = inner.lock().unwrap();
        if let Some(ref mut spv_client) = *guard {
            spv_client.monitor_network().await
        } else {
            Err(dash_spv::SpvError::Config("Client not initialized".to_string()))
        }
    });

    // Send completion callback and cleanup
    {
        let mut cb_guard = sync_callbacks_clone.lock().unwrap();
        if let Some(ref callback_data) = *cb_guard {
            let mut registry = CALLBACK_REGISTRY.lock().unwrap();
            if let Some(CallbackInfo::Detailed {
                completion_callback: Some(callback),
                user_data,
                ..
            }) = registry.unregister(callback_data.callback_id)
            {
                match monitor_result {
                    Ok(_) => {
                        let msg = CString::new("Sync completed successfully")
                            .unwrap_or_else(|_| CString::new("Sync completed").expect("hardcoded string is safe"));
                        // SAFETY: The callback and user_data are safely managed through the registry.
                        // The registry ensures proper lifetime management and thread safety.
                        // The string pointer is only valid for the duration of the callback.
                        callback(true, msg.as_ptr(), user_data);
                        // CString is automatically dropped here, which is safe because the callback
                        // should not store or use the pointer after it returns
                    }
                    Err(e) => {
                        let msg = match CString::new(format!("Sync failed: {}", e)) {
                            Ok(s) => s,
                            Err(_) => CString::new("Sync failed").expect("hardcoded string is safe"),
                        };
                        // SAFETY: Same as above
                        callback(false, msg.as_ptr(), user_data);
                        // CString is automatically dropped here, which is safe because the callback
                        // should not store or use the pointer after it returns
                    }
                }
            }
        }
        // Clear the callbacks after completion
        *cb_guard = None;
    }
});
```
Incomplete cancellation support for long-running sync operations

The sync operation uses `monitor_network()`, which may run indefinitely, but there's no proper cancellation mechanism. The TODO comment in `dash_spv_ffi_client_cancel_sync` confirms this limitation.

Consider implementing proper cancellation using tokio's cancellation tokens or abort handles:

```rust
// In FFIDashSpvClient struct, add:
sync_cancel_token: Arc<Mutex<Option<tokio_util::sync::CancellationToken>>>,

// In sync_to_tip_with_progress:
let cancel_token = tokio_util::sync::CancellationToken::new();
*client.sync_cancel_token.lock().unwrap() = Some(cancel_token.clone());

// Use select! to handle cancellation:
tokio::select! {
    result = spv_client.monitor_network() => { /* handle result */ }
    _ = cancel_token.cancelled() => { /* handle cancellation */ }
}
```

🤖 Prompt for AI Agents
In dash-spv-ffi/src/client.rs between lines 650 and 703, the sync operation
using monitor_network() lacks proper cancellation support, which can cause
indefinite blocking. To fix this, add a CancellationToken field
(sync_cancel_token) to the FFIDashSpvClient struct wrapped in
Arc<Mutex<Option<>>>, initialize a new CancellationToken before starting the
sync, and store it in this field. Then, modify the async block running
monitor_network() to use tokio::select! to await either the monitor_network()
future or the cancellation token's cancelled() future, handling cancellation by
returning early or cleaning up as needed. This will enable proper cancellation
of the long-running sync operation.
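The cooperative-cancellation shape suggested above can be exercised without tokio at all. A minimal stdlib-only sketch (the `monitor_loop` function is an illustrative stand-in for the real monitor, not the crate's API) in which the worker checks a shared flag between iterations and the canceller flips it and joins:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::thread;
use std::time::Duration;

/// Stand-in for the long-running monitor loop: does one "unit of work"
/// per iteration and exits promptly once the flag is set.
fn monitor_loop(cancel: &AtomicBool) -> u32 {
    let mut iterations = 0;
    while !cancel.load(Ordering::Relaxed) {
        iterations += 1;
        thread::sleep(Duration::from_millis(10));
    }
    iterations
}

fn main() {
    let cancel = Arc::new(AtomicBool::new(false));
    let worker_cancel = Arc::clone(&cancel);
    let handle = thread::spawn(move || monitor_loop(&worker_cancel));

    // The moral equivalent of dash_spv_ffi_client_cancel_sync:
    // set the flag, then join the worker instead of abandoning it.
    thread::sleep(Duration::from_millis(50));
    cancel.store(true, Ordering::Relaxed);
    let iterations = handle.join().expect("worker panicked");
    println!("worker stopped cleanly after {} iterations", iterations);
}
```

A `CancellationToken` plus `tokio::select!` gives the same guarantee for async code, with the added benefit that the cancelled branch can run cleanup before the task returns.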
```rust
if let Ok(txid_parsed) = txid.parse::<dashcore::Txid>() {
    callbacks.call_transaction(&txid_parsed, confirmed, amount as i64, addresses, block_height);
} else {
    tracing::error!("Failed to parse transaction ID: {}", txid);
}
```
🛠️ Refactor suggestion

Consider notifying callbacks of parsing errors

When transaction IDs or block hashes fail to parse, the errors are logged but callbacks are not notified. This could lead to missed events from the caller's perspective.

Consider adding an error callback or modifying existing callbacks to handle parsing errors:

```diff
-if let Ok(txid_parsed) = txid.parse::<dashcore::Txid>() {
-    callbacks.call_transaction(&txid_parsed, confirmed, amount as i64, addresses, block_height);
-} else {
-    tracing::error!("Failed to parse transaction ID: {}", txid);
-}
+match txid.parse::<dashcore::Txid>() {
+    Ok(txid_parsed) => {
+        callbacks.call_transaction(&txid_parsed, confirmed, amount as i64, addresses, block_height);
+    }
+    Err(e) => {
+        tracing::error!("Failed to parse transaction ID: {} - {}", txid, e);
+        // Consider calling an error callback here
+        // callbacks.call_error(&format!("Invalid transaction ID: {}", txid));
+    }
+}
```

Also applies to: 245-249
🤖 Prompt for AI Agents
In dash-spv-ffi/src/client.rs around lines 235 to 239 and similarly at 245 to
249, parsing errors for transaction IDs and block hashes are only logged but not
communicated to callbacks, which may cause missed event handling. Modify the
callback interface to include an error handling method or extend existing
callbacks to notify the caller when parsing fails. Then, invoke this error
callback with relevant error details whenever parsing of transaction IDs or
block hashes fails.
```rust
// Check if this extends a known fork
// Need to find a fork whose tip matches our prev_hash
let matching_fork = self
    .forks
    .iter()
    .find(|(_, fork)| fork.tip_hash == prev_hash)
    .map(|(_, fork)| fork.clone());

if let Some(mut fork) = matching_fork {
    // Remove the old entry (indexed by old tip)
    self.forks.remove(&fork.tip_hash);

    // Update the fork
    fork.headers.push(*header);
    fork.tip_hash = header_hash;
    fork.tip_height += 1;
    fork.chain_work = fork.chain_work.add_header(header);

    // Re-insert with new tip hash
    let result_fork = fork.clone();
    self.forks.insert(header_hash, fork);

    return ForkDetectionResult::ExtendsFork(result_fork);
}

// Check if this connects to the main chain (creates new fork)
if let Ok(Some(height)) = storage.get_header_height(&prev_hash) {
    // Check if this would create a fork from before our checkpoint
    if chain_state.synced_from_checkpoint && chain_state.sync_base_height > 0 {
        if height < chain_state.sync_base_height {
            tracing::warn!(
                "Rejecting header that would create fork from height {} (before checkpoint base {}). \
                 This likely indicates headers from genesis were received during checkpoint sync.",
                height, chain_state.sync_base_height
            );
            return ForkDetectionResult::Orphan;
        }
    }

    // Found connection point - this creates a new fork
    let fork_height = height;
    let fork = Fork {
        fork_point: prev_hash,
        fork_height,
        tip_hash: header_hash,
        tip_height: fork_height + 1,
        headers: vec![*header],
        chain_work: ChainWork::from_height_and_header(fork_height, header),
    };

    self.add_fork(fork.clone());
    return ForkDetectionResult::CreatesNewFork(fork);
}
```
💡 Verification agent

🧩 Analysis chain

Potential issue with chain work calculation.

The code uses `ChainWork::from_height_and_header(fork_height, header)`, which according to the relevant code snippet only returns the work from the current header, not cumulative work. This could lead to incorrect fork strength comparisons.

🏁 Script executed:

```bash
# Check how ChainWork is calculated and used in fork comparison
ast-grep --pattern 'ChainWork::from_height_and_header($_, $_)'
# Look for ChainWork comparison usage
rg -A 5 'chain_work.*cmp|chain_work.*max|chain_work.*min'
```

Length of output: 878

I'd like to confirm how `ChainWork::from_height_and_header` and `add_header` are implemented to see if they track cumulative work or only the single header's work. Can you run:

🏁 Script executed:

```bash
# Find where ChainWork is defined
rg 'pub struct ChainWork' -n dash-spv/src/chain
# Show impl of from_height_and_header
rg -C5 'fn from_height_and_header' -n dash-spv/src/chain
# Show impl of add_header
rg -C5 'fn add_header' -n dash-spv/src/chain
```

Length of output: 2007
Incorrect cumulative work for new forks

The calls to `ChainWork::from_height_and_header(fork_height, header)` (in `fork_detector.rs` at lines ~130 and ~160) only capture the proof-of-work of the single header, dropping all prior work up to the fork point. Since fork selection in `chain_tip.rs` sorts tips by `chain_work`, this means new forks are compared based on just one block's work, not the entire chain's cumulative work.

We need to refactor fork creation to include the previous cumulative work, for example:

- Fetch the cumulative `ChainWork` at the fork point (from storage or `chain_state`).
- Call `.add_header(header)` on that to yield the full chain work.

Locations to update:

- `dash-spv/src/chain/fork_detector.rs` around line 130 (new fork case)
- `dash-spv/src/chain/fork_detector.rs` around line 160 (reorg header case)

If there's no existing API to load cumulative work at a height, introduce one or change `from_height_and_header` to accept the previous `ChainWork`.
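Why seeding a fork with only its own header's work misranks it can be shown with a toy model (all names here are illustrative stand-ins, not the crate's API; real chain work sums `2^256 / (target + 1)` per header, while this sketch uses plain integers):

```rust
/// Toy cumulative chain work: each header contributes some work,
/// and the tip's value is the sum over every header back to genesis.
#[derive(Clone, Copy, Debug, PartialEq, Eq, PartialOrd, Ord)]
struct ChainWork(u128);

impl ChainWork {
    fn add_header(self, header_work: u128) -> ChainWork {
        ChainWork(self.0 + header_work)
    }
}

fn main() {
    // Main chain: 5 headers of work 100 each -> cumulative work 500.
    let main_tip = (0..5).fold(ChainWork(0), |w, _| w.add_header(100));

    // A competing fork off height 4, with one new header of work 120.
    let work_at_fork_point = ChainWork(400); // cumulative work of headers 0..4

    // Buggy construction: seed the fork with only its own header's work.
    let buggy_fork = ChainWork(0).add_header(120);
    // Correct construction: extend the cumulative work at the fork point.
    let correct_fork = work_at_fork_point.add_header(120);

    assert!(buggy_fork < main_tip);   // 120 < 500: fork looks hopeless
    assert!(correct_fork > main_tip); // 520 > 500: fork should actually win
    println!("buggy={:?} correct={:?} main={:?}", buggy_fork, correct_fork, main_tip);
}
```

The single-header seeding makes every new fork lose a `chain_work` comparison it might legitimately win, which is exactly the reorg-selection bug described above.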
```rust
pub fn new(s: &str) -> Self {
    let c_string = CString::new(s).unwrap_or_else(|_| CString::new("").unwrap());
    let length = s.len();
    FFIString {
        ptr: c_string.into_raw(),
        length,
    }
}
```
Fix length inconsistency when string contains null bytes.

The `length` field is set from the original string, but if the string contains null bytes, `CString::new` will fail and fall back to an empty string. This creates an inconsistency where `length` may be non-zero while `ptr` points to an empty string.

```diff
 pub fn new(s: &str) -> Self {
     let c_string = CString::new(s).unwrap_or_else(|_| CString::new("").unwrap());
-    let length = s.len();
+    let length = c_string.as_bytes().len();
     FFIString {
         ptr: c_string.into_raw(),
         length,
     }
 }
```
🤖 Prompt for AI Agents
In dash-spv-ffi/src/types.rs around lines 15 to 22, the length field is set from
the original string even when CString::new fails and falls back to an empty
string, causing inconsistency. Fix this by setting the length based on the
actual CString used, not the original string, so that length matches the content
pointed to by ptr.
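A self-contained sketch of the fixed constructor (the `FFIString` struct here is a local stand-in for the FFI type, not the crate's definition) showing that `length` now stays consistent with the bytes actually behind `ptr`, even for input containing an interior NUL:

```rust
use std::ffi::CString;
use std::os::raw::c_char;

struct FFIString {
    ptr: *mut c_char,
    length: usize,
}

impl FFIString {
    fn new(s: &str) -> Self {
        // Fall back to an empty CString if `s` contains an interior NUL byte.
        let c_string = CString::new(s).unwrap_or_else(|_| CString::new("").unwrap());
        // Measure the bytes actually stored, not the original input.
        let length = c_string.as_bytes().len();
        FFIString { ptr: c_string.into_raw(), length }
    }
}

impl Drop for FFIString {
    fn drop(&mut self) {
        // Reclaim the allocation handed out via into_raw().
        unsafe { drop(CString::from_raw(self.ptr)) };
    }
}

fn main() {
    let ok = FFIString::new("hello");
    assert_eq!(ok.length, 5);

    // Interior NUL: CString::new fails, the constructor falls back to "",
    // and length correctly reports 0 instead of the input's 7 bytes.
    let bad = FFIString::new("abc\0def");
    assert_eq!(bad.length, 0);
    println!("ok.length={} bad.length={}", ok.length, bad.length);
}
```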
```rust
/// associated with this struct.
#[repr(C)]
pub struct FFIUnconfirmedTransaction {
    pub txid: FFIString,
```
Use fixed-size array for transaction ID instead of FFIString.

Transaction IDs in Dash are always 32 bytes and should be passed as fixed-size byte arrays in FFI interfaces, not as strings, to avoid inefficiency and encoding issues.

```diff
 pub struct FFIUnconfirmedTransaction {
-    pub txid: FFIString,
+    pub txid: [u8; 32],
     pub raw_tx: *mut u8,
```
🤖 Prompt for AI Agents
In dash-spv-ffi/src/types.rs at line 425, replace the txid field type from
FFIString to a fixed-size byte array of length 32. This means changing the
declaration to use [u8; 32] instead of FFIString to represent the transaction ID
as a 32-byte array, ensuring efficient and correct handling in the FFI
interface.
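If `txid` becomes `[u8; 32]`, the FFI boundary still needs a hex round-trip for display and for existing string-based callers. A hedged stdlib-only sketch of that conversion (helper names are illustrative; note that txids are conventionally displayed byte-reversed, which this sketch ignores for simplicity):

```rust
/// Decode a 64-character hex string into a fixed 32-byte array.
/// Returns None on bad length or non-hex characters.
fn txid_from_hex(s: &str) -> Option<[u8; 32]> {
    if s.len() != 64 || !s.is_ascii() {
        return None;
    }
    let mut out = [0u8; 32];
    for (i, byte) in out.iter_mut().enumerate() {
        let pair = &s[2 * i..2 * i + 2];
        *byte = u8::from_str_radix(pair, 16).ok()?;
    }
    Some(out)
}

/// Encode a 32-byte txid back into lowercase hex for display.
fn txid_to_hex(txid: &[u8; 32]) -> String {
    txid.iter().map(|b| format!("{:02x}", b)).collect()
}

fn main() {
    let hex = "00000000000000000000000000000000000000000000000000000000000000ff";
    let txid = txid_from_hex(hex).expect("valid hex");
    assert_eq!(txid[31], 0xff);
    assert_eq!(txid_to_hex(&txid), hex);
    assert!(txid_from_hex("not a txid").is_none());
    println!("round-trip ok");
}
```

Passing the raw 32 bytes across the boundary and converting to hex only at the display edge avoids both the allocation per event and any ambiguity about string encoding.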
Summary by CodeRabbit

- New Features
- Bug Fixes
- Tests
- Documentation
- Chores