
Conversation


@QuantumExplorer QuantumExplorer commented Sep 9, 2025

Summary by CodeRabbit

  • New Features

    • Option to restrict connections to configured peers only.
    • Peer address input accepts IP-only forms (defaults to network port).
    • Background disk I/O worker and new store_headers_from_height API for checkpoint-style header storage.
    • Expanded FFI/SDK: opaque client type, FFIArray, client lifecycle, checkpoint queries, mempool controls, wallet access, memory destructors, and broad key-derivation helpers (standard, BLS, EdDSA).
  • Refactor

    • Devnet default P2P port changed to 29999.
    • DIP9-specific derivation removed in favor of index-based derivation APIs.
  • Documentation

    • Clarified address formats, safety/memory semantics, and required destroy/cleanup calls.


github-actions bot commented Sep 9, 2025

⚠️ FFI Documentation Update Required

The FFI API documentation is out of date. Please regenerate it by running:

For key-wallet-ffi:

cd key-wallet-ffi
make update-docs

For dash-spv-ffi:

cd dash-spv-ffi
make update-docs

Then commit the changes:

git add */FFI_API.md
git commit -m "docs: update FFI API documentation"

This ensures the documentation stays in sync with the actual FFI functions.


coderabbitai bot commented Sep 9, 2025

Walkthrough

Adds support for IP-only peer addresses (uses network default port), introduces a restrict-to-configured-peers flag through FFI and core config, updates multi-peer networking to honor exclusive mode, expands FFI/Swift headers and docs with a new config setter and other APIs, adds disk background saving/caching, updates Swift devnet port and tests, and adds a .gitignore rule.

Changes

Cohort / File(s) Summary of Changes
FFI docs & C headers
dash-spv-ffi/FFI_API.md, dash-spv-ffi/include/dash_spv_ffi.h, dash-spv-ffi/README.md, swift-dash-core-sdk/Sources/DashSPVFFI/include/dash_spv_ffi.h
Document config_add_peer accepts full socket addresses or IP-only strings (IP-only → default network P2P port). Add dash_spv_ffi_config_set_restrict_to_configured_peers(...). Expand FFI surface (new functions, destructors, ownership notes) and adjust docs/safety.
FFI config implementation & tests (Rust)
dash-spv-ffi/src/config.rs, dash-spv-ffi/tests/unit/test_configuration.rs
Enhance config_add_peer parsing: trim and UTF‑8 validate, then try SocketAddr → IpAddr + default port → DNS resolution; add network default ports and improved errors. Add dash_spv_ffi_config_set_restrict_to_configured_peers and update tests to include IP-only inputs and call the new setter.
Client config (Rust)
dash-spv/src/client/config.rs, dash-spv/src/network/tests.rs
Add restrict_to_configured_peers: bool to ClientConfig (default false) and builder with_restrict_to_configured_peers; update tests to initialize the flag.
Multi-peer network manager (Rust)
dash-spv/src/network/multi_peer.rs
Add exclusive_mode: bool (computed from `restrict_to_configured_peers || !peers.is_empty()`); when exclusive, skip peer-store load and DNS discovery and connect only to configured peers.
Disk storage (Rust)
dash-spv/src/storage/disk.rs
Add async background worker for saving segments/index, in-memory segment cache with states (Clean/Dirty/Saving) and metadata, eviction that saves dirty segments, worker lifecycle start/stop, sentinel handling, and public store_headers_from_height API; adapt header/filter storage to checkpoint-aware indexing.
Swift Key Wallet FFI header
swift-dash-core-sdk/Sources/KeyWalletFFI/include/key_wallet_ffi.h
Remove DIP9 derivation enum and dip9_derive_identity_key; add many at-index derivation APIs for standard, BLS, and EdDSA accounts (seed/mnemonic variants) and extended/WIF derivation helpers.
Swift network model & tests
swift-dash-core-sdk/Sources/SwiftDashCoreSDK/Models/Network.swift, swift-dash-core-sdk/Tests/SwiftDashCoreSDKTests/DashSDKTests.swift
Change DashNetwork.devnet.defaultPort to 29999 and update corresponding test assertion; minor EOF newline fix.
Misc / build
.gitignore
Ignore /dash-spv/peer_reputation.json.
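The config_add_peer fallback chain summarized above (full socket address first, then bare IP plus the network's default port, then DNS resolution) can be sketched roughly as follows. This is an illustrative standalone function, not the crate's actual API surface; the real implementation lives in dash-spv-ffi/src/config.rs and reports FFI error codes rather than returning Option:

```rust
use std::net::{IpAddr, SocketAddr, ToSocketAddrs};

// Illustrative sketch of the parsing order described above.
fn parse_peer_addr(input: &str, default_port: u16) -> Option<SocketAddr> {
    let s = input.trim();
    // 1. Full socket address, e.g. "192.168.1.1:9999" or "[::1]:19999".
    if let Ok(sock) = s.parse::<SocketAddr>() {
        return Some(sock);
    }
    // 2. Bare IP, e.g. "127.0.0.1" or "2001:db8::1" → default P2P port.
    if let Ok(ip) = s.parse::<IpAddr>() {
        return Some(SocketAddr::new(ip, default_port));
    }
    // 3. Hostname with explicit port → DNS resolution (first result).
    s.to_socket_addrs().ok().and_then(|mut iter| iter.next())
}
```

For example, `parse_peer_addr("127.0.0.1", 19999)` yields `127.0.0.1:19999` with a testnet-style default port, while an explicit `"192.168.1.1:9999"` keeps its own port.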

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor App
  participant FFI as "FFI Config/API"
  participant Client as "ClientConfig"
  participant Net as "MultiPeerNetworkManager"
  participant DNS as "DNS Resolver"
  participant Store as "Peer Store"

  App->>FFI: config_new + config_add_peer("127.0.0.1")
  FFI->>Client: parse addr → SocketAddr? no → IpAddr + default port → add peer

  App->>FFI: config_set_restrict_to_configured_peers(true)
  FFI->>Client: set restrict_to_configured_peers = true

  App->>Net: start(Client)
  Net->>Net: exclusive_mode = restrict_to_configured_peers || !peers.is_empty()
  alt exclusive_mode = true
    Note right of Net: Skip peer store & DNS discovery
    Net--xStore: (no load)
    Net--xDNS: (no query)
    Net->>Net: connect only to configured peers
  else exclusive_mode = false
    Net->>Store: load cached peers
    Net->>DNS: discover peers
    Net->>Net: connect to discovered peers
  end
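The mode selection in the diagram above reduces to a single boolean; a minimal sketch (function name hypothetical, mirroring the expression in step 4):

```rust
use std::net::SocketAddr;

// Exclusive mode is on when the caller either sets the restrict flag or
// supplies any peers explicitly; peer-store loading and DNS discovery
// run only when it is off.
fn exclusive_mode(restrict_to_configured_peers: bool, configured_peers: &[SocketAddr]) -> bool {
    restrict_to_configured_peers || !configured_peers.is_empty()
}
```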

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60–90 minutes

Possibly related PRs

Suggested reviewers

  • ogabrielides

Pre-merge checks (3 passed)

✅ Passed checks (3 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title Check ✅ Passed The title correctly references the FFI enhancement to peer-address handling by allowing local IP addresses but fails to capture the broader set of new API functions and configuration options introduced; it also misleadingly uses “localhost” even though hostnames without an explicit port remain unsupported.
Docstring Coverage ✅ Passed Docstring coverage is 83.33% which is sufficient. The required threshold is 80.00%.

Poem

I nibble code beneath the moon,
I hop to peers both near and soon.
No DNS tonight — just trusted lanes,
IPs and ports in tidy chains.
A carrot patch, the network blooms. 🥕

Tip

👮 Agentic pre-merge checks are now available in preview!

Pro plan users can now enable pre-merge checks in their settings to enforce checklists before merging PRs.

  • Built-in checks – Quickly apply ready-made checks to enforce title conventions, require pull request descriptions that follow templates, validate linked issues for compliance, and more.
  • Custom agentic checks – Define your own rules using CodeRabbit’s advanced agentic capabilities to enforce organization-specific policies and workflows. For example, you can instruct CodeRabbit’s agent to verify that API documentation is updated whenever API schema files are modified in a PR. Note: Up to 5 custom checks are currently allowed during the preview period. Pricing for this feature will be announced in a few weeks.

Please see the documentation for more information.

Example:

reviews:
  pre_merge_checks:
    custom_checks:
      - name: "Undocumented Breaking Changes"
        mode: "warning"
        instructions: |
          Pass/fail criteria: All breaking changes to public APIs, CLI flags, environment variables, configuration keys, database schemas, or HTTP/GraphQL endpoints must be documented in the "Breaking Change" section of the PR description and in CHANGELOG.md. Exclude purely internal or private changes (e.g., code not exported from package entry points or explicitly marked as internal).



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
dash-spv/src/network/multi_peer.rs (1)

931-944: Avoid peer-store writes on shutdown in exclusive mode to match policy.

Shutdown currently saves peers and reputation unconditionally; skip when exclusive to fully honor “no peer store”.

Apply:

-        // Save known peers before shutdown
-        let addresses = self.addrv2_handler.get_addresses_for_peer(MAX_ADDR_TO_STORE).await;
-        if !addresses.is_empty() {
-            if let Err(e) = self.peer_store.save_peers(&addresses).await {
-                log::warn!("Failed to save peers on shutdown: {}", e);
-            }
-        }
-
-        // Save reputation data before shutdown
-        let reputation_path = self.data_dir.join("peer_reputation.json");
-        if let Err(e) = self.reputation_manager.save_to_storage(&reputation_path).await {
-            log::warn!("Failed to save reputation data on shutdown: {}", e);
-        }
+        // In exclusive mode, do not persist peer-store or reputation data
+        if !self.exclusive_mode {
+            // Save known peers before shutdown
+            let addresses = self.addrv2_handler.get_addresses_for_peer(MAX_ADDR_TO_STORE).await;
+            if !addresses.is_empty() {
+                if let Err(e) = self.peer_store.save_peers(&addresses).await {
+                    log::warn!("Failed to save peers on shutdown: {}", e);
+                }
+            }
+            // Save reputation data before shutdown
+            let reputation_path = self.data_dir.join("peer_reputation.json");
+            if let Err(e) = self.reputation_manager.save_to_storage(&reputation_path).await {
+                log::warn!("Failed to save reputation data on shutdown: {}", e);
+            }
+        } else {
+            log::info!("Exclusive mode active: skipping peer-store and reputation persistence on shutdown");
+        }
🧹 Nitpick comments (31)
swift-dash-core-sdk/Tests/SwiftDashCoreSDKTests/DashSDKTests.swift (1)

282-282: Trailing newline/brace fix.

Minor formatting improvement acknowledged.

dash-spv-ffi/README.md (1)

85-85: Clarify that hostnames (e.g., “localhost”) are accepted, not only IPs.

The PR title mentions localhost; the doc currently mentions IP-only. Suggest expanding wording and adding examples.

-- `dash_spv_ffi_config_add_peer(config, addr)` - Add a peer address. Accepts `"ip:port"`, `[ipv6]:port`, or IP-only (defaults to the network port).
+- `dash_spv_ffi_config_add_peer(config, addr)` - Add a peer address. Accepts `"host:port"`, `"ip:port"`, `[ipv6]:port`, or host/IP-only (defaults to the network port).
+  Examples:
+  - `"localhost"` → defaults to the network’s P2P port
+  - `"127.0.0.1"` → defaults to the network’s P2P port
+  - `"[::1]:19999"` → explicit IPv6 with port
swift-dash-core-sdk/Sources/DashSPVFFI/include/dash_spv_ffi.h (12)

31-47: FFIArray ownership contract is good; add post-destroy expectations.

Consider stating that after destroy, the caller must not use arr->data and that the function may zero len/cap/data.

  * - Calling `dash_spv_ffi_array_destroy` to properly deallocate the memory
  * - Ensuring the data, len, and capacity fields remain consistent
+ *
+ * After calling `dash_spv_ffi_array_destroy(arr)`, `arr->data` must not be used.
+ * The implementation may set `arr->data = NULL` and zero `len/cap`.

196-238: Checkpoint APIs: document error retrieval on non-zero return.

Add note to call dash_spv_ffi_get_last_error() when return != 0 for consistency with the rest of the FFI.

  * - `out_hash` must point to at least 32 writable bytes.
+ * - On non-zero return, call `dash_spv_ffi_get_last_error()` for details.

(Apply the same sentence to all three checkpoint functions.)


248-258: Update-config: clarify network-change behavior.

State that changing network at runtime is unsupported and returns an error; specify that details are available via dash_spv_ffi_get_last_error().

- * - The network in `config` must match the client's network; changing networks at runtime is not supported.
+ * - The network in `config` must match the client's network; changing networks at runtime is not supported and returns an error (see `dash_spv_ffi_get_last_error()`).

366-372: Add destructor guidance for FFISyncProgress.

Mention the paired destroy function to prevent leaks.

  * - `client` must be a valid, non-null pointer.
+ * - The returned pointer must be freed with `dash_spv_ffi_sync_progress_destroy`.

374-380: Add destructor guidance for FFISpvStats.

Same rationale as above.

  * - `client` must be a valid, non-null pointer.
+ * - The returned pointer must be freed with `dash_spv_ffi_spv_stats_destroy`.

390-396: Event callbacks: callout that callbacks may fire on any thread.

This helps C/Swift callers ensure user_data synchronization.

- * - `client` must be a valid, non-null pointer.
+ * - `client` must be a valid, non-null pointer.
+ * - Callbacks may be invoked from arbitrary threads; `user_data` must be thread-safe.

423-431: Rescan not implemented: document expected error code.

Clarify that the function returns a non-zero NotImplemented error (and to read it via dash_spv_ffi_get_last_error()).

- * Request a rescan of the blockchain from a given height (not yet implemented).
+ * Request a rescan of the blockchain from a given height (not yet implemented).
+ * Returns a non-zero NotImplemented error; use `dash_spv_ffi_get_last_error()` for details.

441-448: Txid as C string: add a 32-byte variant to avoid hex parsing overhead.

To align with our FFI guidance, keep this API for compatibility but add a bytes variant that takes a 32-byte array.

 int32_t dash_spv_ffi_client_record_send(struct FFIDashSpvClient *client, const char *txid);
+/**
+ * Record that we attempted to send a transaction by its txid (binary form).
+ *
+ * # Safety
+ * - `client` must be valid and non-null.
+ * - `txid32` must point to exactly 32 bytes (little-endian txid).
+ */
+int32_t dash_spv_ffi_client_record_send_bytes(struct FFIDashSpvClient *client,
+                                              const uint8_t (*txid32)[32]);

If you want, I can follow up with the Rust/C implementations.


509-523: Add hostnames (e.g., “localhost”) to accepted addr formats.

Matches the PR objective and reduces ambiguity.

- * Accepts either a full socket address (e.g., "192.168.1.1:9999" or "[::1]:19999")
- * or an IP-only string (e.g., "127.0.0.1" or "2001:db8::1"). When an IP-only
- * string is given, the default P2P port for the configured network is used.
+ * Accepts either a full socket address (e.g., "192.168.1.1:9999" or "[::1]:19999"),
+ * a hostname (e.g., "localhost"), or an IP-only string (e.g., "127.0.0.1" or "2001:db8::1").
+ * When no port is specified, the default P2P port for the configured network is used.

555-563: Restrict-to-configured-peers: specify behavior when no peers are configured.

Avoids surprises in “exclusive” mode startup.

- * Restrict connections strictly to configured peers (disable DNS discovery and peer store)
+ * Restrict connections strictly to configured peers (disables DNS discovery and peer store).
+ * If enabled with an empty peer list, the client will not connect to any peers and startup may fail.

757-763: dash_spv_ffi_string_destroy: document NULL tolerance.

Matches implementation that no-ops on null pointers.

- * - It must not be used after this call.
+ * - It must not be used after this call.
+ * - Safe to call with `s.ptr == NULL` (no-op).

823-830: init_logging: list accepted levels and default.

Helps callers avoid silent fallbacks.

- * - If non-null, the pointer must remain valid for the duration of this call.
+ * - If non-null, the pointer must remain valid for the duration of this call.
+ * - Accepted levels: "trace", "debug", "info", "warn", "error" (case-insensitive). Defaults to "info".
swift-dash-core-sdk/Sources/KeyWalletFFI/include/key_wallet_ffi.h (2)

1497-1524: Fix doc: “chain/index” is misleading; API accepts only index

Both account_derive_private_key_at and account_derive_private_key_as_wif_at take just an index. Drop “chain/” and mirror the chain-agnostic note (errors on accounts with ext/int split).

Example patch (update the source Rust doc so cbindgen regenerates this):

- Derive a private key (secp256k1) from an account at a given chain/index, using the provided master xpriv.
+ Derive a private key (secp256k1) from an account at a given index, using the provided master xpriv.
+ 
+ Notes:
+ - This is chain-agnostic. For accounts with internal/external chains, this returns an error.

Also applies to: 1512-1524


1416-1421: Consider documenting handling of sensitive return values (hex-encoded private keys)

BLS/EdDSA derivations return private keys as heap-allocated hex strings. Clarify in the header that callers should immediately zeroize and free secrets (or provide a secure free variant that zeroizes before deallocation).

Have we implemented string_free to zeroize buffers for clearly sensitive outputs? If not, do you want a proposal for string_secure_free that wipes memory before free?

Also applies to: 1441-1446, 1489-1494

dash-spv-ffi/include/dash_spv_ffi.h (2)

512-519: Clarify accepted address forms and examples

Docs look good; consider adding that hostnames like “localhost” are accepted (if supported), and explicitly note bracket requirements for IPv6 when a port is present. Optionally list default ports per network to avoid ambiguity.


555-563: Document behavior note and add symmetric getter

  • Specify in dash-spv-ffi/include/dash_spv_ffi.h that when restrict=true and no peers are configured, the client makes no outbound connections.
  • Add a dash_spv_ffi_config_get_restrict_to_configured_peers(FFIClientConfig *config) function for symmetry.

Implementation in MultiPeerNetworkManager correctly honors restrict_to_configured_peers via its exclusive_mode flag (only configured peers are used, DNS discovery and peer persistence are disabled).

dash-spv/src/client/config.rs (1)

403-444: Optional: warn when restricting with no peers

Validate currently allows restrict_to_configured_peers=true with peers.is_empty(), resulting in no outbound connections. Consider logging a warning here (or during startup) to aid diagnostics.

dash-spv-ffi/FFI_API.md (1)

536-549: Document exact effects of “restrict_to_configured_peers”.

State that this disables both DNS discovery and any peer-store usage (load and save). Current implementation still writes peers/reputation on shutdown; either document that nuance or change the code to avoid peer-store writes when restricted (see suggestion in multi_peer.rs).

dash-spv-ffi/tests/unit/test_configuration.rs (4)

61-61: PR title vs. test expectation: plain “localhost” without port marked invalid.

The PR title suggests “allow localhost on FFI”. If the intent is to accept “localhost” (no port) using the network default, this assertion should move to the valid set. If the intent is only IP-only defaults (no hostnames), please update the PR title/docs to avoid confusion and add a positive test for “localhost:19999” on testnet.

Apply one of:

-                "localhost",       // hostname without port should be rejected
+                "localhost",       // accepted if hostname-only is supported; otherwise keep invalid and clarify docs

And, if hostnames-with-port are supported, also add in the valid list:

+                "localhost:19999",

77-85: Use Dash testnet ports to reduce confusion; optionally add localhost:19999.

You’re creating a testnet config but include 8333 (Bitcoin). Prefer Dash ports to make intent clear, and (optionally) add localhost with port if supported.

Apply:

-                "192.168.1.1:8333",
+                "192.168.1.1:19999",
-                "[2001:db8::1]:8333",
+                "[2001:db8::1]:19999",
+                // If hostnames-with-port are accepted:
+                "localhost:19999",

211-214: Good: covers new FFI API. Add null-handling case for completeness.

Add a NullPointer assertion for this new setter to match other null tests.

You can append in test_config_null_handling():

             assert_eq!(
                 dash_spv_ffi_config_set_filter_load(std::ptr::null_mut(), false),
                 FFIErrorCode::NullPointer as i32
             );
+
+            assert_eq!(
+                dash_spv_ffi_config_set_restrict_to_configured_peers(std::ptr::null_mut(), true),
+                FFIErrorCode::NullPointer as i32
+            );

103-116: Large loop (1000 peers) may inflate test time with little added value.

Consider trimming to a lower count (e.g., 200) unless this stresses a known boundary.

dash-spv-ffi/src/config.rs (7)

119-122: Doc: clarify hostname and IPv6 bracket behavior; confirm “localhost” intent.

Current text allows IP-only (default port) but requires a port for hostnames. If the PR goal includes bare “localhost” (no port), either document that it’s rejected or accept it with the default port. Also mention bracketed IPv6 without port (“[::1]”) behavior.

Do you want bare “localhost”/“[::1]” to work without an explicit port?

Also applies to: 125-125


135-143: Avoid drift: centralize default port mapping (verify regtest).

Hardcoding ports risks divergence from chainparams. Prefer a single helper or using existing params if available. Also please double-check the regtest port.

Apply within-range change to call a helper:

-    let default_port = match cfg.network {
-        dashcore::Network::Dash => 9999,
-        dashcore::Network::Testnet => 19999,
-        dashcore::Network::Regtest => 19899,
-        dashcore::Network::Devnet => 29999,
-        _ => 9999,
-    };
+    let default_port = default_p2p_port_for(cfg.network);

Then add this helper (outside the changed lines, e.g., near the bottom of the module):

#[inline]
fn default_p2p_port_for(net: dashcore::Network) -> u16 {
    match net {
        dashcore::Network::Dash => 9999,
        dashcore::Network::Testnet => 19999,
        dashcore::Network::Regtest => 19899, // verify
        dashcore::Network::Devnet => 29999,
        _ => 9999,
    }
}

152-156: Reject explicit port 0.

Port 0 is not connectable; fail fast with a clear error.

Apply this diff:

-    if let Ok(sock) = addr_str.parse::<SocketAddr>() {
-        cfg.peers.push(sock);
-        return FFIErrorCode::Success as i32;
-    }
+    if let Ok(sock) = addr_str.parse::<SocketAddr>() {
+        if sock.port() == 0 {
+            set_last_error("Port 0 is not allowed");
+            return FFIErrorCode::InvalidArgument as i32;
+        }
+        cfg.peers.push(sock);
+        return FFIErrorCode::Success as i32;
+    }

158-163: Support “[ipv6]” without port; reject unspecified IPs.

Helpful UX and avoids accepting 0.0.0.0/::.

Apply this diff:

-    if let Ok(ip) = addr_str.parse::<IpAddr>() {
-        let sock = SocketAddr::new(ip, default_port);
-        cfg.peers.push(sock);
-        return FFIErrorCode::Success as i32;
-    }
+    // Try bare IP; if bracketed IPv6 without port, strip brackets.
+    let ip_opt = addr_str
+        .parse::<IpAddr>()
+        .ok()
+        .or_else(|| {
+            if addr_str.starts_with('[') && addr_str.ends_with(']') {
+                addr_str[1..addr_str.len() - 1].parse::<IpAddr>().ok()
+            } else {
+                None
+            }
+        });
+    if let Some(ip) = ip_opt {
+        if ip.is_unspecified() {
+            set_last_error("Unspecified IP is not allowed (0.0.0.0/::)");
+            return FFIErrorCode::InvalidArgument as i32;
+        }
+        let sock = SocketAddr::new(ip, default_port);
+        cfg.peers.push(sock);
+        return FFIErrorCode::Success as i32;
+    }

165-170: Optional: accept bare “localhost” with default port.

Matches the PR’s intent and common dev workflows.

Apply this diff:

-    if !addr_str.contains(':') {
-        set_last_error("Missing port for hostname; supply 'host:port' or IP only");
-        return FFIErrorCode::InvalidArgument as i32;
-    }
+    if !addr_str.contains(':') {
+        if addr_str.eq_ignore_ascii_case("localhost") {
+            let sock = SocketAddr::new(IpAddr::from([127, 0, 0, 1]), default_port);
+            cfg.peers.push(sock);
+            return FFIErrorCode::Success as i32;
+        }
+        set_last_error("Missing port for hostname; supply 'host:port' or IP only");
+        return FFIErrorCode::InvalidArgument as i32;
+    }

Confirm that accepting bare “localhost” aligns with expected API behavior across SDKs.


171-186: Push all resolved sockets, not just the first.

Improves connectivity and mirrors multi-address DNS records.

Apply this diff:

-    match addr_str.to_socket_addrs() {
-        Ok(mut iter) => match iter.next() {
-            Some(sock) => {
-                cfg.peers.push(sock);
-                FFIErrorCode::Success as i32
-            }
-            None => {
-                set_last_error(&format!("Failed to resolve address: {}", addr_str));
-                FFIErrorCode::InvalidArgument as i32
-            }
-        },
+    match addr_str.to_socket_addrs() {
+        Ok(iter) => {
+            let mut any = false;
+            for sock in iter {
+                cfg.peers.push(sock);
+                any = true;
+            }
+            if any {
+                FFIErrorCode::Success as i32
+            } else {
+                set_last_error(&format!("Failed to resolve address: {}", addr_str));
+                FFIErrorCode::InvalidArgument as i32
+            }
+        }
         Err(e) => {
             set_last_error(&format!("Invalid address {} ({})", addr_str, e));
             FFIErrorCode::InvalidArgument as i32
         }
     }

252-266: Setter LGTM; clarify interaction with default peers.

When restricting to configured peers, do we still include the default peers seeded by ClientConfig::new()? If not, expose an FFI to clear peers.

I can add dash_spv_ffi_config_clear_peers(config: *mut FFIClientConfig) if desired.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cb46f84 and ac085b2.

📒 Files selected for processing (11)
  • dash-spv-ffi/FFI_API.md (5 hunks)
  • dash-spv-ffi/README.md (2 hunks)
  • dash-spv-ffi/include/dash_spv_ffi.h (2 hunks)
  • dash-spv-ffi/src/config.rs (4 hunks)
  • dash-spv-ffi/tests/unit/test_configuration.rs (4 hunks)
  • dash-spv/src/client/config.rs (4 hunks)
  • dash-spv/src/network/multi_peer.rs (7 hunks)
  • swift-dash-core-sdk/Sources/DashSPVFFI/include/dash_spv_ffi.h (10 hunks)
  • swift-dash-core-sdk/Sources/KeyWalletFFI/include/key_wallet_ffi.h (1 hunks)
  • swift-dash-core-sdk/Sources/SwiftDashCoreSDK/Models/Network.swift (2 hunks)
  • swift-dash-core-sdk/Tests/SwiftDashCoreSDKTests/DashSDKTests.swift (2 hunks)
🧰 Additional context used
📓 Path-based instructions (14)
*-ffi/**

📄 CodeRabbit inference engine (AGENTS.md)

FFI bindings live in *-ffi crates

Files:

  • dash-spv-ffi/README.md
  • dash-spv-ffi/src/config.rs
  • dash-spv-ffi/include/dash_spv_ffi.h
  • dash-spv-ffi/tests/unit/test_configuration.rs
  • dash-spv-ffi/FFI_API.md
swift-dash-core-sdk/Sources/**/*.swift

📄 CodeRabbit inference engine (swift-dash-core-sdk/CLAUDE.md)

swift-dash-core-sdk/Sources/**/*.swift: Use actors for state management to maintain thread safety in SDK components
Use @available checks for features that require newer OS versions (e.g., SwiftData on iOS 17+)

Files:

  • swift-dash-core-sdk/Sources/SwiftDashCoreSDK/Models/Network.swift
{dash-network-ffi,dash-spv-ffi,key-wallet-ffi,swift-dash-core-sdk}/**/*.{rs,c,h,swift}

📄 CodeRabbit inference engine (CLAUDE.md)

Be careful with memory management at FFI boundaries (C/Swift interop)

Files:

  • swift-dash-core-sdk/Sources/SwiftDashCoreSDK/Models/Network.swift
  • swift-dash-core-sdk/Tests/SwiftDashCoreSDKTests/DashSDKTests.swift
  • dash-spv-ffi/src/config.rs
  • dash-spv-ffi/include/dash_spv_ffi.h
  • dash-spv-ffi/tests/unit/test_configuration.rs
  • swift-dash-core-sdk/Sources/KeyWalletFFI/include/key_wallet_ffi.h
  • swift-dash-core-sdk/Sources/DashSPVFFI/include/dash_spv_ffi.h
{key-wallet,swift-dash-core-sdk}/**/*.{rs,swift}

📄 CodeRabbit inference engine (CLAUDE.md)

Never log or expose private keys

Files:

  • swift-dash-core-sdk/Sources/SwiftDashCoreSDK/Models/Network.swift
  • swift-dash-core-sdk/Tests/SwiftDashCoreSDKTests/DashSDKTests.swift
swift-dash-core-sdk/**

📄 CodeRabbit inference engine (AGENTS.md)

Mobile SDK code resides in swift-dash-core-sdk/

Files:

  • swift-dash-core-sdk/Sources/SwiftDashCoreSDK/Models/Network.swift
  • swift-dash-core-sdk/Tests/SwiftDashCoreSDKTests/DashSDKTests.swift
  • swift-dash-core-sdk/Sources/KeyWalletFFI/include/key_wallet_ffi.h
  • swift-dash-core-sdk/Sources/DashSPVFFI/include/dash_spv_ffi.h
dash-spv-ffi/src/**/*.rs

📄 CodeRabbit inference engine (dash-spv-ffi/CLAUDE.md)

dash-spv-ffi/src/**/*.rs: FFI functions must be declared with #[no_mangle] extern "C"
Expose complex Rust types to C as opaque pointers (e.g., FFIDashSpvClient*)
Provide corresponding _destroy() functions for all FFI-owned types to enforce explicit memory management
Return Rust strings to C as *const c_char; allocate so the caller can free with dash_string_free
Accept input strings from C as *const c_char (borrowed; do not free inputs)
Functions returning pointers transfer ownership to the caller; functions taking pointers only borrow
Add cbindgen annotations for complex types so headers are generated correctly

Files:

  • dash-spv-ffi/src/config.rs
**/*.rs

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.rs: Use proper error types with thiserror and propagate errors appropriately
Use the tokio runtime for async operations in Rust
Use conditional compilation feature flags for optional features (#[cfg(feature = ...)])
Format Rust code with cargo fmt (and enforce via cargo fmt --check)
Run clippy with -D warnings and fix all lints
Adhere to MSRV Rust 1.89 (avoid features requiring newer compiler)

**/*.rs: Format Rust code with rustfmt (per rustfmt.toml); run cargo fmt --all before commits
Lint with clippy; treat warnings as errors in CI
Follow Rust naming: snake_case for functions/variables, UpperCamelCase for types/traits, SCREAMING_SNAKE_CASE for consts
Prefer async using tokio where applicable

Files:

  • dash-spv-ffi/src/config.rs
  • dash-spv-ffi/tests/unit/test_configuration.rs
  • dash-spv/src/network/multi_peer.rs
  • dash-spv/src/client/config.rs
**/src/**/*.rs

📄 CodeRabbit inference engine (AGENTS.md)

**/src/**/*.rs: Each crate keeps sources in src/
Avoid unwrap()/expect() in library code; use proper error types (e.g., thiserror)
Place unit tests alongside code with #[cfg(test)]

Files:

  • dash-spv-ffi/src/config.rs
  • dash-spv/src/network/multi_peer.rs
  • dash-spv/src/client/config.rs
dash-spv-ffi/include/dash_spv_ffi.h

📄 CodeRabbit inference engine (dash-spv-ffi/CLAUDE.md)

The public C header is generated; ensure it reflects FFI changes after builds

Files:

  • dash-spv-ffi/include/dash_spv_ffi.h
dash-spv-ffi/tests/unit/**/*.rs

📄 CodeRabbit inference engine (dash-spv-ffi/CLAUDE.md)

Add a corresponding Rust unit test for each new FFI function

Files:

  • dash-spv-ffi/tests/unit/test_configuration.rs
**/tests/**/*.rs

📄 CodeRabbit inference engine (CLAUDE.md)

Use proptest for property-based testing where appropriate

**/tests/**/*.rs: Integration tests must live under tests/ (e.g., rpc-integration-test)
Use descriptive test names (e.g., test_parse_address_mainnet) for integration tests
Mark network-dependent or long-running tests with #[ignore] and run with -- --ignored
Add regression tests for fixes; consider property tests (e.g., proptest) where valuable
Keep test vectors deterministic

Files:

  • dash-spv-ffi/tests/unit/test_configuration.rs
dash-spv/**/*.rs

📄 CodeRabbit inference engine (dash-spv/CLAUDE.md)

dash-spv/**/*.rs: Enforce Rust formatting via cargo fmt --check on all Rust source files
All code must be clippy-clean: run cargo clippy --all-targets --all-features -- -D warnings

Files:

  • dash-spv/src/network/multi_peer.rs
  • dash-spv/src/client/config.rs
dash-spv/src/network/**/*.rs

📄 CodeRabbit inference engine (dash-spv/CLAUDE.md)

Implement DNS-first peer discovery with the specified seeds, immediate startup (no initial discovery delay), and exclusive peer mode when explicit peers are provided

Files:

  • dash-spv/src/network/multi_peer.rs
swift-dash-core-sdk/Sources/DashSPVFFI/include/dash_spv_ffi.h

📄 CodeRabbit inference engine (swift-dash-core-sdk/CLAUDE.md)

After updating dash-spv-ffi, synchronize the generated header into the Swift SDK using ./sync-headers.sh so Sources/DashSPVFFI/include/dash_spv_ffi.h stays up to date

Files:

  • swift-dash-core-sdk/Sources/DashSPVFFI/include/dash_spv_ffi.h
🧠 Learnings (31)
📓 Common learnings
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: dash-spv/CLAUDE.md:0-0
Timestamp: 2025-08-25T17:15:59.361Z
Learning: Applies to dash-spv/src/network/**/*.rs : Implement DNS-first peer discovery with the specified seeds, immediate startup (no initial discovery delay), and exclusive peer mode when explicit peers are provided
📚 Learning: 2025-08-16T04:14:07.718Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: dash-spv-ffi/CLAUDE.md:0-0
Timestamp: 2025-08-16T04:14:07.718Z
Learning: Applies to dash-spv-ffi/include/dash_spv_ffi.h : The public C header is generated; ensure it reflects FFI changes after builds

Applied to files:

  • dash-spv-ffi/README.md
  • dash-spv-ffi/include/dash_spv_ffi.h
  • dash-spv-ffi/FFI_API.md
  • swift-dash-core-sdk/Sources/DashSPVFFI/include/dash_spv_ffi.h
📚 Learning: 2025-08-16T04:15:57.225Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: swift-dash-core-sdk/Examples/DashHDWalletExample/CLAUDE.md:0-0
Timestamp: 2025-08-16T04:15:57.225Z
Learning: Applies to swift-dash-core-sdk/Examples/DashHDWalletExample/**/*.swift : When accessing SPV functionality, use DashSDK public APIs instead of direct client access; add public wrapper methods to DashSDK if needed

Applied to files:

  • swift-dash-core-sdk/Tests/SwiftDashCoreSDKTests/DashSDKTests.swift
  • swift-dash-core-sdk/Sources/KeyWalletFFI/include/key_wallet_ffi.h
  • swift-dash-core-sdk/Sources/DashSPVFFI/include/dash_spv_ffi.h
📚 Learning: 2025-08-29T04:00:14.154Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-29T04:00:14.154Z
Learning: Applies to {dash-spv,rpc-integration-test}/tests/**/*.rs : Provide integration tests for network operations

Applied to files:

  • swift-dash-core-sdk/Tests/SwiftDashCoreSDKTests/DashSDKTests.swift
  • dash-spv-ffi/src/config.rs
  • dash-spv-ffi/tests/unit/test_configuration.rs
📚 Learning: 2025-08-25T17:15:59.361Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: dash-spv/CLAUDE.md:0-0
Timestamp: 2025-08-25T17:15:59.361Z
Learning: Applies to dash-spv/src/network/**/*.rs : Implement DNS-first peer discovery with the specified seeds, immediate startup (no initial discovery delay), and exclusive peer mode when explicit peers are provided

Applied to files:

  • dash-spv-ffi/src/config.rs
  • dash-spv-ffi/tests/unit/test_configuration.rs
  • dash-spv/src/network/multi_peer.rs
  • dash-spv/src/client/config.rs
📚 Learning: 2025-08-29T04:01:18.111Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: key-wallet/CLAUDE.md:0-0
Timestamp: 2025-08-29T04:01:18.111Z
Learning: Applies to key-wallet/src/ffi/**/*.rs : Expose safe C FFI with #[no_mangle] extern "C" functions only for vetted APIs

Applied to files:

  • dash-spv-ffi/src/config.rs
📚 Learning: 2025-08-29T04:01:18.111Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: key-wallet/CLAUDE.md:0-0
Timestamp: 2025-08-29T04:01:18.111Z
Learning: Applies to key-wallet/src/**/*.rs : Always validate network consistency; never mix mainnet and testnet operations

Applied to files:

  • dash-spv-ffi/src/config.rs
📚 Learning: 2025-08-16T04:15:29.335Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: swift-dash-core-sdk/CLAUDE.md:0-0
Timestamp: 2025-08-16T04:15:29.335Z
Learning: Applies to swift-dash-core-sdk/Sources/DashSPVFFI/include/dash_spv_ffi.h : After updating dash-spv-ffi, synchronize the generated header into the Swift SDK using ./sync-headers.sh so Sources/DashSPVFFI/include/dash_spv_ffi.h stays up to date

Applied to files:

  • dash-spv-ffi/include/dash_spv_ffi.h
  • swift-dash-core-sdk/Sources/DashSPVFFI/include/dash_spv_ffi.h
📚 Learning: 2025-08-25T17:15:59.361Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: dash-spv/CLAUDE.md:0-0
Timestamp: 2025-08-25T17:15:59.361Z
Learning: Applies to dash-spv/tests/integration_real_node_test.rs : Integration tests that connect to a live Dash Core node must gracefully skip when the node at 127.0.0.1:9999 is unavailable

Applied to files:

  • dash-spv-ffi/tests/unit/test_configuration.rs
📚 Learning: 2025-08-29T04:01:18.111Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: key-wallet/CLAUDE.md:0-0
Timestamp: 2025-08-29T04:01:18.111Z
Learning: Applies to key-wallet/tests/address_tests.rs : Place address generation and validation tests in tests/address_tests.rs

Applied to files:

  • dash-spv-ffi/tests/unit/test_configuration.rs
📚 Learning: 2025-09-01T14:44:53.683Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: AGENTS.md:0-0
Timestamp: 2025-09-01T14:44:53.683Z
Learning: Applies to **/tests/**/*.rs : Use descriptive test names (e.g., test_parse_address_mainnet) for integration tests

Applied to files:

  • dash-spv-ffi/tests/unit/test_configuration.rs
📚 Learning: 2025-08-16T04:14:07.718Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: dash-spv-ffi/CLAUDE.md:0-0
Timestamp: 2025-08-16T04:14:07.718Z
Learning: Applies to dash-spv-ffi/tests/unit/**/*.rs : Add a corresponding Rust unit test for each new FFI function

Applied to files:

  • dash-spv-ffi/tests/unit/test_configuration.rs
  • dash-spv-ffi/FFI_API.md
  • swift-dash-core-sdk/Sources/DashSPVFFI/include/dash_spv_ffi.h
📚 Learning: 2025-08-29T04:01:18.111Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: key-wallet/CLAUDE.md:0-0
Timestamp: 2025-08-29T04:01:18.111Z
Learning: Applies to key-wallet/tests/**/*.rs : Keep test vectors in sync with Dash Core releases

Applied to files:

  • dash-spv-ffi/tests/unit/test_configuration.rs
📚 Learning: 2025-08-25T17:15:59.361Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: dash-spv/CLAUDE.md:0-0
Timestamp: 2025-08-25T17:15:59.361Z
Learning: Applies to dash-spv/tests/**/*.rs : Place integration tests under the `tests/` directory

Applied to files:

  • dash-spv-ffi/tests/unit/test_configuration.rs
📚 Learning: 2025-08-16T04:14:07.718Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: dash-spv-ffi/CLAUDE.md:0-0
Timestamp: 2025-08-16T04:14:07.718Z
Learning: Applies to dash-spv-ffi/src/**/*.rs : FFI functions must be declared with #[no_mangle] extern "C"

Applied to files:

  • dash-spv-ffi/FFI_API.md
  • swift-dash-core-sdk/Sources/DashSPVFFI/include/dash_spv_ffi.h
📚 Learning: 2025-08-16T04:14:07.718Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: dash-spv-ffi/CLAUDE.md:0-0
Timestamp: 2025-08-16T04:14:07.718Z
Learning: Applies to dash-spv-ffi/src/**/*.rs : Expose complex Rust types to C as opaque pointers (e.g., FFIDashSpvClient*)

Applied to files:

  • dash-spv-ffi/FFI_API.md
  • swift-dash-core-sdk/Sources/DashSPVFFI/include/dash_spv_ffi.h
📚 Learning: 2025-08-29T04:00:14.154Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-29T04:00:14.154Z
Learning: Applies to {dash-network-ffi,dash-spv-ffi,key-wallet-ffi,swift-dash-core-sdk}/**/*.{rs,c,h,swift} : Be careful with memory management at FFI boundaries (C/Swift interop)

Applied to files:

  • dash-spv-ffi/FFI_API.md
  • swift-dash-core-sdk/Sources/KeyWalletFFI/include/key_wallet_ffi.h
  • swift-dash-core-sdk/Sources/DashSPVFFI/include/dash_spv_ffi.h
📚 Learning: 2025-08-29T04:01:18.111Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: key-wallet/CLAUDE.md:0-0
Timestamp: 2025-08-29T04:01:18.111Z
Learning: Applies to key-wallet/src/{bip32.rs,dip9.rs} : Keep Dash-specific derivation paths up to date (e.g., DIP-9) in bip32.rs and dip9.rs

Applied to files:

  • swift-dash-core-sdk/Sources/KeyWalletFFI/include/key_wallet_ffi.h
📚 Learning: 2025-08-16T04:15:29.335Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: swift-dash-core-sdk/CLAUDE.md:0-0
Timestamp: 2025-08-16T04:15:29.335Z
Learning: Applies to swift-dash-core-sdk/Sources/**/FFIBridge.swift : Add a Swift wrapper in FFIBridge.swift for each new FFI function, handling type conversions and cross-language error mapping

Applied to files:

  • swift-dash-core-sdk/Sources/KeyWalletFFI/include/key_wallet_ffi.h
📚 Learning: 2025-08-16T04:15:57.225Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: swift-dash-core-sdk/Examples/DashHDWalletExample/CLAUDE.md:0-0
Timestamp: 2025-08-16T04:15:57.225Z
Learning: Applies to swift-dash-core-sdk/Examples/DashHDWalletExample/**/SPVClient.swift : Do not place type definitions in SPVClient.swift; keep types in dedicated files instead

Applied to files:

  • swift-dash-core-sdk/Sources/DashSPVFFI/include/dash_spv_ffi.h
📚 Learning: 2025-08-16T04:14:07.718Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: dash-spv-ffi/CLAUDE.md:0-0
Timestamp: 2025-08-16T04:14:07.718Z
Learning: Applies to dash-spv-ffi/src/**/*.rs : Provide corresponding _destroy() functions for all FFI-owned types to enforce explicit memory management

Applied to files:

  • swift-dash-core-sdk/Sources/DashSPVFFI/include/dash_spv_ffi.h
📚 Learning: 2025-06-26T16:01:37.609Z
Learnt from: DCG-Claude
PR: dashpay/rust-dashcore#0
File: :0-0
Timestamp: 2025-06-26T16:01:37.609Z
Learning: The mempool tracking infrastructure (UnconfirmedTransaction, MempoolState, configuration, and mempool_filter.rs) is fully implemented and integrated in the Dash SPV client as of this PR, including client logic, FFI APIs, and tests.

Applied to files:

  • swift-dash-core-sdk/Sources/DashSPVFFI/include/dash_spv_ffi.h
📚 Learning: 2025-08-21T05:01:58.949Z
Learnt from: QuantumExplorer
PR: dashpay/rust-dashcore#108
File: key-wallet-ffi/src/wallet_manager.rs:270-318
Timestamp: 2025-08-21T05:01:58.949Z
Learning: In the key-wallet-ffi design, wallets retrieved from the wallet manager via lookup functions should return const pointers (*const FFIWallet) to enforce read-only access and prevent unintended modifications. The wallet manager should control wallet lifecycle and mutations through specific APIs rather than allowing external mutation of retrieved wallet references.

Applied to files:

  • swift-dash-core-sdk/Sources/DashSPVFFI/include/dash_spv_ffi.h
📚 Learning: 2025-08-16T04:14:07.718Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: dash-spv-ffi/CLAUDE.md:0-0
Timestamp: 2025-08-16T04:14:07.718Z
Learning: Applies to dash-spv-ffi/src/**/*.rs : Return Rust strings to C as *const c_char; allocate so the caller can free with dash_string_free

Applied to files:

  • swift-dash-core-sdk/Sources/DashSPVFFI/include/dash_spv_ffi.h
📚 Learning: 2025-06-26T15:47:37.438Z
Learnt from: DCG-Claude
PR: dashpay/rust-dashcore#0
File: :0-0
Timestamp: 2025-06-26T15:47:37.438Z
Learning: Transaction IDs (txids) and block hashes in Dash are always 32 bytes and should be passed as fixed-size byte arrays in FFI interfaces, not as C strings, to avoid inefficiency and encoding issues.

Applied to files:

  • swift-dash-core-sdk/Sources/DashSPVFFI/include/dash_spv_ffi.h
📚 Learning: 2025-06-26T15:46:56.854Z
Learnt from: DCG-Claude
PR: dashpay/rust-dashcore#0
File: :0-0
Timestamp: 2025-06-26T15:46:56.854Z
Learning: Transaction IDs (txids) in Dash are always 32 bytes and should be passed as fixed-size byte arrays in FFI interfaces, not as C strings, to avoid inefficiency and encoding issues.

Applied to files:

  • swift-dash-core-sdk/Sources/DashSPVFFI/include/dash_spv_ffi.h
📚 Learning: 2025-08-16T04:14:07.718Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: dash-spv-ffi/CLAUDE.md:0-0
Timestamp: 2025-08-16T04:14:07.718Z
Learning: Applies to dash-spv-ffi/src/**/*.rs : Functions returning pointers transfer ownership to the caller; functions taking pointers only borrow

Applied to files:

  • swift-dash-core-sdk/Sources/DashSPVFFI/include/dash_spv_ffi.h
📚 Learning: 2025-08-16T04:14:07.718Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: dash-spv-ffi/CLAUDE.md:0-0
Timestamp: 2025-08-16T04:14:07.718Z
Learning: Applies to dash-spv-ffi/src/callbacks.rs : Use C function pointers for async progress/completion callbacks; callbacks may be invoked from any thread

Applied to files:

  • swift-dash-core-sdk/Sources/DashSPVFFI/include/dash_spv_ffi.h
📚 Learning: 2025-08-16T04:14:07.718Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: dash-spv-ffi/CLAUDE.md:0-0
Timestamp: 2025-08-16T04:14:07.718Z
Learning: Applies to dash-spv-ffi/tests/c_tests/**/*.c : In C code, free all strings returned by the FFI using dash_string_free

Applied to files:

  • swift-dash-core-sdk/Sources/DashSPVFFI/include/dash_spv_ffi.h
📚 Learning: 2025-08-16T04:14:07.718Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: dash-spv-ffi/CLAUDE.md:0-0
Timestamp: 2025-08-16T04:14:07.718Z
Learning: Applies to dash-spv-ffi/src/**/*.rs : Accept input strings from C as *const c_char (borrowed; do not free inputs)

Applied to files:

  • swift-dash-core-sdk/Sources/DashSPVFFI/include/dash_spv_ffi.h
📚 Learning: 2025-08-25T17:15:59.361Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: dash-spv/CLAUDE.md:0-0
Timestamp: 2025-08-25T17:15:59.361Z
Learning: Applies to dash-spv/**/*.rs : All code must be clippy-clean: run `cargo clippy --all-targets --all-features -- -D warnings`

Applied to files:

  • swift-dash-core-sdk/Sources/DashSPVFFI/include/dash_spv_ffi.h
🧬 Code graph analysis (6)
swift-dash-core-sdk/Tests/SwiftDashCoreSDKTests/DashSDKTests.swift (2)
swift-dash-core-sdk/Sources/SwiftDashCoreSDK/Core/SPVClientConfiguration.swift (1)
  • devnet (159-163)
swift-dash-core-sdk/Sources/SwiftDashCoreSDK/DashSDK.swift (1)
  • devnet (283-286)
dash-spv-ffi/src/config.rs (2)
dash-spv-ffi/src/error.rs (1)
  • set_last_error (26-31)
dash-spv/src/client/config.rs (1)
  • new (250-257)
dash-spv-ffi/include/dash_spv_ffi.h (1)
dash-spv-ffi/src/config.rs (1)
  • dash_spv_ffi_config_set_restrict_to_configured_peers (257-266)
dash-spv-ffi/tests/unit/test_configuration.rs (1)
dash-spv-ffi/src/config.rs (1)
  • dash_spv_ffi_config_set_restrict_to_configured_peers (257-266)
swift-dash-core-sdk/Sources/KeyWalletFFI/include/key_wallet_ffi.h (1)
key-wallet-ffi/src/account_derivation.rs (11)
  • account_derive_extended_private_key_at (31-68)
  • bls_account_derive_private_key_from_seed (87-136)
  • bls_account_derive_private_key_from_mnemonic (155-225)
  • eddsa_account_derive_private_key_from_seed (244-285)
  • eddsa_account_derive_private_key_from_mnemonic (303-373)
  • account_derive_private_key_at (382-419)
  • account_derive_private_key_as_wif_at (428-483)
  • account_derive_extended_private_key_from_seed (493-522)
  • account_derive_private_key_from_seed (532-561)
  • account_derive_extended_private_key_from_mnemonic (572-631)
  • account_derive_private_key_from_mnemonic (642-701)
swift-dash-core-sdk/Sources/DashSPVFFI/include/dash_spv_ffi.h (4)
dash-spv-ffi/src/checkpoints.rs (4)
  • dash_spv_ffi_checkpoint_latest (29-54)
  • dash_spv_ffi_checkpoint_before_height (62-88)
  • dash_spv_ffi_checkpoint_before_timestamp (96-122)
  • dash_spv_ffi_checkpoints_between_heights (129-162)
dash-spv-ffi/src/client.rs (12)
  • dash_spv_ffi_client_new (128-183)
  • dash_spv_ffi_client_update_config (351-388)
  • dash_spv_ffi_client_start (395-430)
  • dash_spv_ffi_client_get_sync_progress (904-937)
  • dash_spv_ffi_client_get_stats (944-977)
  • dash_spv_ffi_client_is_filter_sync_available (984-1005)
  • dash_spv_ffi_client_set_event_callbacks (1012-1040)
  • dash_spv_ffi_client_destroy (1047-1085)
  • dash_spv_ffi_sync_progress_destroy (1092-1096)
  • dash_spv_ffi_spv_stats_destroy (1103-1107)
  • dash_spv_ffi_client_rescan_blockchain (1116-1144)
  • dash_spv_ffi_client_enable_mempool_tracking (1151-1187)
dash-spv-ffi/src/config.rs (2)
  • dash_spv_ffi_config_set_restrict_to_configured_peers (257-266)
  • dash_spv_ffi_config_set_masternode_sync_enabled (274-283)
dash-spv-ffi/src/types.rs (1)
  • dash_spv_ffi_string_destroy (307-311)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (11)
  • GitHub Check: fuzz (hashes_sha256)
  • GitHub Check: fuzz (hashes_cbor)
  • GitHub Check: fuzz (hashes_ripemd160)
  • GitHub Check: fuzz (hashes_sha512)
  • GitHub Check: fuzz (dash_deserialize_amount)
  • GitHub Check: fuzz (dash_deser_net_msg)
  • GitHub Check: RPC Tests (stable, true)
  • GitHub Check: Key Wallet Components Tests
  • GitHub Check: Core Components Tests
  • GitHub Check: Strict Warnings and Clippy Checks
  • GitHub Check: Clippy (Non-strict)
🔇 Additional comments (20)
swift-dash-core-sdk/Sources/SwiftDashCoreSDK/Models/Network.swift (1)

19-19: Approve devnet default port update. Default port 29999 is consistently used across code and tests, with no remaining 19799 references found.

swift-dash-core-sdk/Tests/SwiftDashCoreSDKTests/DashSDKTests.swift (1)

52-52: Test updated for devnet port 29999 — LGTM.

dash-spv-ffi/README.md (1)

126-126: EOF newline — OK.

swift-dash-core-sdk/Sources/DashSPVFFI/include/dash_spv_ffi.h (5)

28-29: Opaque FFIDashSpvClient is the right FFI pattern.

Hides internals and reduces ABI coupling.


589-591: Get data dir: destructor note — good.


764-770: dash_spv_ffi_array_destroy: contract is clear — LGTM.


778-783: dash_spv_ffi_string_array_destroy: contract is clear — LGTM.


246-246: Headers are in sync between dash-spv-ffi and Swift SDK. Diff verification passed; no action required.

swift-dash-core-sdk/Sources/KeyWalletFFI/include/key_wallet_ffi.h (1)

1378-1397: New xpriv-at-index API looks correct

Signature, safety notes, and free-function reference align with the Rust implementation. No issues.

dash-spv/src/client/config.rs (4)

33-39: Good addition: explicit “restrict_to_configured_peers”

Clear docs and appropriate placement in the config struct. Matches the networking intent.


188-195: Default remains non-restrictive (false) — sensible

Keeps current behavior by default; avoids surprise opt-in to exclusive mode.


248-256: new() initializes the flag explicitly — good

Explicit init alongside default_peers_for_network maintains clarity.


280-284: Builder method LGTM

Signature follows existing builder style; no API smell.

dash-spv-ffi/FFI_API.md (1)

7-7: Verify FFI API docs and exports sync

  • Update the Total Functions count and TOC in dash-spv-ffi/FFI_API.md to match the actual exported functions in the generated C header after adding dash_spv_ffi_config_set_restrict_to_configured_peers.
  • Confirm that dash_spv_ffi_config_set_restrict_to_configured_peers is declared with #[no_mangle] pub unsafe extern "C" fn in dash-spv-ffi/src and appears in dash_spv_ffi.h (and is synced into the Swift SDK header).
dash-spv/src/network/multi_peer.rs (4)

71-73: Field addition LGTM.

Clear docstring; encapsulates mode cleanly.


102-104: Exclusive mode semantics align with prior guidance.

OR with non-empty peers matches “exclusive when explicit peers are provided”.
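The reviewed semantics reduce to a one-line predicate. A minimal sketch (function and parameter names here are illustrative, not the crate's actual fields):

```rust
use std::net::SocketAddr;

/// Exclusive peer mode: either explicitly requested via the config flag,
/// or implied by the presence of explicitly configured peers.
fn exclusive_mode(restrict_to_configured_peers: bool, configured_peers: &[SocketAddr]) -> bool {
    restrict_to_configured_peers || !configured_peers.is_empty()
}
```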


135-164: Startup path correctly honors exclusive mode.

Skips disk/DNS and sizes connections appropriately.

Also applies to: 166-174


696-710: Nice: peer-store writes are skipped during maintenance in exclusive mode.

Consistent with “no peer store” semantics.

dash-spv-ffi/src/config.rs (2)

6-6: Imports LGTM.

Brings in IpAddr/SocketAddr/ToSocketAddrs needed for the new parsing flow.


144-151: UTF-8/trim handling LGTM.

| Function | Description | Category |
|----------|-------------|--------|
| `dash_spv_ffi_client_update_config` | Update the running client's configuration | client |
| `dash_spv_ffi_config_add_peer` | Adds a peer address to the configuration # Safety - `config` must be a valid... | config |
| `dash_spv_ffi_config_add_peer` | Adds a peer address to the configuration Accepts either a full socket addres... | config |

⚠️ Potential issue

Clarify address parsing and fix typo (“addres” → “address”); state hostname policy explicitly.

The docs say “full socket addres” and mention “IP-only string” but don’t state whether plain hostnames (e.g., “localhost”) without a port are accepted. Given PR title implies allowing localhost, please make this explicit: either document that hostnames are accepted (with or without port and what default is used) or that only IP literals are accepted unless a port is provided. Also correct “addres” → “address”.

Since this file is auto-generated, adjust the Rust doc comments at the source and re-generate.

Also applies to: 251-255
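The fallback behavior the docs should spell out can be sketched in plain Rust using only `std::net` (this is an illustrative helper, not the crate's actual parsing code; the default-port value would come from the configured network):

```rust
use std::net::{IpAddr, SocketAddr};

/// Parse a peer string: accept a full "ip:port" socket address as-is,
/// or an IP-only literal with the network's default port filled in.
/// Plain hostnames are rejected here; they would need DNS resolution
/// (e.g. via ToSocketAddrs) if the policy allows them.
fn parse_peer(input: &str, default_port: u16) -> Option<SocketAddr> {
    let trimmed = input.trim();
    // Full socket address first, e.g. "127.0.0.1:9999" or "[::1]:9999".
    if let Ok(addr) = trimmed.parse::<SocketAddr>() {
        return Some(addr);
    }
    // IP literal without a port, e.g. "127.0.0.1" or "::1".
    if let Ok(ip) = trimmed.parse::<IpAddr>() {
        return Some(SocketAddr::new(ip, default_port));
    }
    None
}
```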

Comment on lines +1378 to +1594
/*
Derive an extended private key from an account at a given index, using the provided master xpriv.
Returns an opaque FFIExtendedPrivateKey pointer that must be freed with `extended_private_key_free`.
Notes:
- This is chain-agnostic. For accounts with internal/external chains, this returns an error.
- For hardened-only account types (e.g., EdDSA), a hardened index is used.
# Safety
- `account` and `master_xpriv` must be valid, non-null pointers allocated by this library.
- `error` must be a valid pointer to an FFIError or null.
- The caller must free the returned pointer with `extended_private_key_free`.
*/

FFIExtendedPrivateKey *account_derive_extended_private_key_at(const FFIAccount *account,
                                                              const FFIExtendedPrivateKey *master_xpriv,
                                                              unsigned int index,
                                                              FFIError *error);

/*
Derive a BLS private key from a raw seed buffer at the given index.
Returns a newly allocated hex string of the 32-byte private key. The caller must free
it with `string_free`.
Notes:
- Uses the account's network for master key creation.
- Chain-agnostic; may return an error for accounts with internal/external chains.
# Safety
- `account` must be a valid, non-null pointer to an `FFIBLSAccount` (only when `bls` feature is enabled).
- `seed` must point to a readable buffer of length `seed_len` (1..=64 bytes expected).
- `error` must be a valid pointer to an FFIError or null.
- Returned string must be freed with `string_free`.
*/

char *bls_account_derive_private_key_from_seed(const FFIBLSAccount *account,
                                               const uint8_t *seed,
                                               size_t seed_len,
                                               unsigned int index,
                                               FFIError *error);

/*
Derive a BLS private key from a mnemonic + optional passphrase at the given index.
Returns a newly allocated hex string of the 32-byte private key. The caller must free
it with `string_free`.
Notes:
- Uses the English wordlist for parsing the mnemonic.
- Chain-agnostic; may return an error for accounts with internal/external chains.
# Safety
- `account` must be a valid, non-null pointer to an `FFIBLSAccount` (only when `bls` feature is enabled).
- `mnemonic` must be a valid, null-terminated UTF-8 C string.
- `passphrase` may be null; if not null, must be a valid UTF-8 C string.
- `error` must be a valid pointer to an FFIError or null.
- Returned string must be freed with `string_free`.
*/

char *bls_account_derive_private_key_from_mnemonic(const FFIBLSAccount *account,
                                                   const char *mnemonic,
                                                   const char *passphrase,
                                                   unsigned int index,
                                                   FFIError *error);

/*
Derive an EdDSA (ed25519) private key from a raw seed buffer at the given index.
Returns a newly allocated hex string of the 32-byte private key. The caller must free
it with `string_free`.
Notes:
- EdDSA only supports hardened derivation; the index will be used accordingly.
- Chain-agnostic; EdDSA accounts typically do not have internal/external split.
# Safety
- `account` must be a valid, non-null pointer to an `FFIEdDSAAccount` (only when `eddsa` feature is enabled).
- `seed` must point to a readable buffer of length `seed_len` (1..=64 bytes expected).
- `error` must be a valid pointer to an FFIError or null.
- Returned string must be freed with `string_free`.
*/

char *eddsa_account_derive_private_key_from_seed(const FFIEdDSAAccount *account,
                                                 const uint8_t *seed,
                                                 size_t seed_len,
                                                 unsigned int index,
                                                 FFIError *error);

/*
Derive an EdDSA (ed25519) private key from a mnemonic + optional passphrase at the given index.
Returns a newly allocated hex string of the 32-byte private key. The caller must free
it with `string_free`.
Notes:
- Uses the English wordlist for parsing the mnemonic.
# Safety
- `account` must be a valid, non-null pointer to an `FFIEdDSAAccount` (only when `eddsa` feature is enabled).
- `mnemonic` must be a valid, null-terminated UTF-8 C string.
- `passphrase` may be null; if not null, must be a valid UTF-8 C string.
- `error` must be a valid pointer to an FFIError or null.
- Returned string must be freed with `string_free`.
*/

char *eddsa_account_derive_private_key_from_mnemonic(const FFIEdDSAAccount *account,
                                                     const char *mnemonic,
                                                     const char *passphrase,
                                                     unsigned int index,
                                                     FFIError *error);

/*
Derive a private key (secp256k1) from an account at a given chain/index, using the provided master xpriv.
Returns an opaque FFIPrivateKey pointer that must be freed with `private_key_free`.
# Safety
- `account` and `master_xpriv` must be valid pointers allocated by this library
- `error` must be a valid pointer to an FFIError or null
*/

FFIPrivateKey *account_derive_private_key_at(const FFIAccount *account,
                                             const FFIExtendedPrivateKey *master_xpriv,
                                             unsigned int index,
                                             FFIError *error);

/*
Derive a private key from an account at a given chain/index and return as WIF string.
Caller must free the returned string with `string_free`.
# Safety
- `account` and `master_xpriv` must be valid pointers allocated by this library
- `error` must be a valid pointer to an FFIError or null
*/

char *account_derive_private_key_as_wif_at(const FFIAccount *account,
                                           const FFIExtendedPrivateKey *master_xpriv,
                                           unsigned int index,
                                           FFIError *error);

/*
Derive an extended private key from a raw seed buffer at the given index.
Returns an opaque FFIExtendedPrivateKey pointer that must be freed with `extended_private_key_free`.
# Safety
- `account` must be a valid pointer to an FFIAccount
- `seed` must point to a valid buffer of length `seed_len`
- `error` must be a valid pointer to an FFIError or null
*/

FFIExtendedPrivateKey *account_derive_extended_private_key_from_seed(const FFIAccount *account,
                                                                     const uint8_t *seed,
                                                                     size_t seed_len,
                                                                     unsigned int index,
                                                                     FFIError *error);

/*
Derive a private key from a raw seed buffer at the given index.
Returns an opaque FFIPrivateKey pointer that must be freed with `private_key_free`.
# Safety
- `account` must be a valid pointer to an FFIAccount
- `seed` must point to a valid buffer of length `seed_len`
- `error` must be a valid pointer to an FFIError or null
*/

FFIPrivateKey *account_derive_private_key_from_seed(const FFIAccount *account,
                                                    const uint8_t *seed,
                                                    size_t seed_len,
                                                    unsigned int index,
                                                    FFIError *error);

/*
Derive an extended private key from a mnemonic + optional passphrase at the given index.
Returns an opaque FFIExtendedPrivateKey pointer that must be freed with `extended_private_key_free`.
# Safety
- `account` must be a valid pointer to an FFIAccount
- `mnemonic` must be a valid, null-terminated C string
- `passphrase` may be null; if not null, must be a valid C string
- `error` must be a valid pointer to an FFIError or null
*/

FFIExtendedPrivateKey *account_derive_extended_private_key_from_mnemonic(const FFIAccount *account,
                                                                         const char *mnemonic,
                                                                         const char *passphrase,
                                                                         unsigned int index,
                                                                         FFIError *error);

/*
Derive a private key from a mnemonic + optional passphrase at the given index.
Returns an opaque FFIPrivateKey pointer that must be freed with `private_key_free`.
# Safety
- `account` must be a valid pointer to an FFIAccount
- `mnemonic` must be a valid, null-terminated C string
- `passphrase` may be null; if not null, must be a valid C string
- `error` must be a valid pointer to an FFIError or null
*/

FFIPrivateKey *account_derive_private_key_from_mnemonic(const FFIAccount *account,
                                                        const char *mnemonic,
                                                        const char *passphrase,
                                                        unsigned int index,
                                                        FFIError *error);

💡 Verification agent

🧩 Analysis chain

Add Swift wrappers for new FFI entries

Per project conventions, each new FFI must be bridged in FFIBridge.swift with proper conversions and error mapping.

Run to verify wrappers exist for all newly added symbols:


🏁 Script executed:

#!/bin/bash
rg -nP 'bls_account_derive_private_key_from_(seed|mnemonic)|eddsa_account_derive_private_key_from_(seed|mnemonic)|account_derive_(extended_)?private_key_(at|from_(seed|mnemonic))' swift-dash-core-sdk/Sources -C2

Length of output: 6045


Add missing Swift wrappers for new FFI functions in FFIBridge.swift
Implement bridging methods in swift-dash-core-sdk/Sources/FFIBridge.swift with proper type conversions and DashSDKError mapping for all of the following FFI entries:

  • bls_account_derive_private_key_from_seed
  • bls_account_derive_private_key_from_mnemonic
  • eddsa_account_derive_private_key_from_seed
  • eddsa_account_derive_private_key_from_mnemonic
  • account_derive_extended_private_key_at
  • account_derive_private_key_at
  • account_derive_extended_private_key_from_seed
  • account_derive_private_key_from_seed
  • account_derive_extended_private_key_from_mnemonic
  • account_derive_private_key_from_mnemonic
🤖 Prompt for AI Agents
In swift-dash-core-sdk/Sources/KeyWalletFFI/include/key_wallet_ffi.h lines
1378-1594, new FFI functions were added but FFIBridge.swift lacks Swift wrappers
for them; implement Swift bridging methods in
swift-dash-core-sdk/Sources/FFIBridge.swift for the following FFI symbols:
bls_account_derive_private_key_from_seed,
bls_account_derive_private_key_from_mnemonic,
eddsa_account_derive_private_key_from_seed,
eddsa_account_derive_private_key_from_mnemonic,
account_derive_extended_private_key_at, account_derive_private_key_at,
account_derive_extended_private_key_from_seed,
account_derive_private_key_from_seed,
account_derive_extended_private_key_from_mnemonic, and
account_derive_private_key_from_mnemonic — each wrapper should convert Swift
types to the expected C pointers/lengths, call the FFI function, convert
returned C strings/opaque pointers into Swift types (or typed opaque wrapper
objects), map and throw DashSDKError on FFI error using the existing error
utilities, and ensure proper ownership and memory management (free returned C
strings and use the library free functions for opaque pointers when needed);
follow existing FFIBridge patterns for signatures, optional passphrase handling,
hardened-index logic where applicable, and add unit tests mirroring other
wrapper tests.

Comment on lines +1465 to +1470
char *eddsa_account_derive_private_key_from_seed(const FFIEdDSAAccount *account,
                                                 const uint8_t *seed,
                                                 size_t seed_len,
                                                 unsigned int index,
                                                 FFIError *error);

🛠️ Refactor suggestion

Add seed length validation for EdDSA seed derivation

Unlike the BLS variant, eddsa_account_derive_private_key_from_seed doesn’t validate seed_len. Add a bounds check at the FFI boundary to avoid undefined behavior and early-return consistent error codes.

Rust-side fix (key-wallet-ffi/src/account_derivation.rs):

 pub unsafe extern "C" fn eddsa_account_derive_private_key_from_seed(
     account: *const FFIEdDSAAccount,
     seed: *const u8,
     seed_len: usize,
     index: c_uint,
     error: *mut FFIError,
 ) -> *mut c_char {
     if account.is_null() || seed.is_null() {
         FFIError::set_error(error, FFIErrorCode::InvalidInput, "Null pointer provided".to_string());
         return ptr::null_mut();
     }
     let account = &*account;
-    let seed_slice = std::slice::from_raw_parts(seed, seed_len);
+    if seed_len == 0 || seed_len > 64 {
+        FFIError::set_error(
+            error,
+            FFIErrorCode::InvalidInput,
+            "Seed length must be between 1 and 64 bytes".to_string(),
+        );
+        return ptr::null_mut();
+    }
+    let seed_slice = std::slice::from_raw_parts(seed, seed_len);

Comment on lines +1536 to +1541
FFIExtendedPrivateKey *account_derive_extended_private_key_from_seed(const FFIAccount *account,
const uint8_t *seed,
size_t seed_len,
unsigned int index,
FFIError *error)
;
🛠️ Refactor suggestion

Validate seed length for xpriv-from-seed

account_derive_extended_private_key_from_seed accepts any seed_len. BIP32 seeds are typically 16..=64 bytes—validate and return InvalidInput consistently.

Rust-side fix:

 pub unsafe extern "C" fn account_derive_extended_private_key_from_seed(
     account: *const FFIAccount,
     seed: *const u8,
     seed_len: usize,
     index: c_uint,
     error: *mut FFIError,
 ) -> *mut FFIExtendedPrivateKey {
-    if account.is_null() || seed.is_null() {
+    if account.is_null() || seed.is_null() {
         FFIError::set_error(error, FFIErrorCode::InvalidInput, "Null pointer provided".to_string());
         return ptr::null_mut();
     }
 
     let account = &*account;
-    let seed_slice = std::slice::from_raw_parts(seed, seed_len);
+    if seed_len < 16 || seed_len > 64 {
+        FFIError::set_error(
+            error,
+            FFIErrorCode::InvalidInput,
+            "Seed length must be between 16 and 64 bytes".to_string(),
+        );
+        return ptr::null_mut();
+    }
+    let seed_slice = std::slice::from_raw_parts(seed, seed_len);

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In swift-dash-core-sdk/Sources/KeyWalletFFI/include/key_wallet_ffi.h around
lines 1536 to 1541, the function account_derive_extended_private_key_from_seed
currently accepts any seed_len; add validation to require seed_len between 16
and 64 bytes (inclusive) before performing derivation, and if outside that range
set the FFIError to InvalidInput and return NULL (or the equivalent error return
for this API) so callers get a consistent InvalidInput error; implement this
check early in the function on the Rust side and ensure the FFI error
code/message is populated exactly as other InvalidInput cases.

Comment on lines +1553 to +1558
FFIPrivateKey *account_derive_private_key_from_seed(const FFIAccount *account,
const uint8_t *seed,
size_t seed_len,
unsigned int index,
FFIError *error)
;
🛠️ Refactor suggestion

Validate seed length for privkey-from-seed

Mirror the same BIP32 seed length checks for account_derive_private_key_from_seed for parity and safety.

Rust-side fix:

 pub unsafe extern "C" fn account_derive_private_key_from_seed(
     account: *const FFIAccount,
     seed: *const u8,
     seed_len: usize,
     index: c_uint,
     error: *mut FFIError,
 ) -> *mut FFIPrivateKey {
-    if account.is_null() || seed.is_null() {
+    if account.is_null() || seed.is_null() {
         FFIError::set_error(error, FFIErrorCode::InvalidInput, "Null pointer provided".to_string());
         return ptr::null_mut();
     }
 
     let account = &*account;
-    let seed_slice = std::slice::from_raw_parts(seed, seed_len);
+    if seed_len < 16 || seed_len > 64 {
+        FFIError::set_error(
+            error,
+            FFIErrorCode::InvalidInput,
+            "Seed length must be between 16 and 64 bytes".to_string(),
+        );
+        return ptr::null_mut();
+    }
+    let seed_slice = std::slice::from_raw_parts(seed, seed_len);

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
swift-dash-core-sdk/Sources/KeyWalletFFI/include/key_wallet_ffi.h around lines
1553-1558: the account_derive_private_key_from_seed FFI entry lacks BIP32 seed
length validation; add checks (and a null-pointer check for seed) at the start
of the function to mirror the other BIP32-safe functions: if seed is NULL or
seed_len is outside allowed range (e.g., < 16 or > 64 bytes) set the provided
FFIError with an appropriate error code/message (e.g., InvalidSeedLength) and
return NULL without attempting derivation; ensure the same error-setting pattern
used by other FFI functions is followed so the Rust side receives consistent
errors.
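The three seed-derivation comments above converge on the same bounds check. As a self-contained sketch (the helper name and error type are illustrative; the real FFI functions additionally populate FFIError and return null, and the EdDSA comment uses a 1..=64 range instead):

```rust
// Illustrative BIP32 seed-length guard; the FFI variants would also
// set FFIError::InvalidInput before returning a null pointer.
fn validate_bip32_seed_len(seed_len: usize) -> Result<(), &'static str> {
    if !(16..=64).contains(&seed_len) {
        return Err("Seed length must be between 16 and 64 bytes");
    }
    Ok(())
}
```

Keeping the range check in one helper makes it easy to apply the same validation consistently across all three entry points.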

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (1)
dash-spv-ffi/tests/unit/test_configuration.rs (1)

97-99: Remove stray #[test]/#[serial] attributes (compile error).

These attributes are orphaned and will fail to compile.

-    #[test]
-    #[serial]
-    // removed: host_port API tests (consolidated into add_peer behavior)
+    // removed: host_port API tests (consolidated into add_peer behavior)
🧹 Nitpick comments (4)
dash-spv-ffi/tests/unit/test_configuration.rs (3)

61-61: Reconcile 'localhost' with PR intent ("allow localhost on ffi").

If localhost without port is now accepted (port inferred), it shouldn’t be in invalid_addrs. Swap it for a generic hostname and add localhost to the valid list below.

-                "localhost",       // hostname without port should be rejected
+                "example.com",     // generic hostname without port should be rejected

77-85: Use Dash testnet ports (19999) and include localhost as valid; clarify inference.

Avoid Bitcoin’s 8333 and mixed 9999 while using testnet config; also add localhost to exercise the new allowance.

-            // Test valid addresses including IP-only forms (port inferred from network)
+            // Test valid addresses including host/IP-only forms (port inferred from network)
             let valid_addrs = [
-                "127.0.0.1:9999",
-                "192.168.1.1:8333",
-                "[::1]:9999",
-                "[2001:db8::1]:8333",
+                "127.0.0.1:19999",
+                "192.168.1.1:19999",
+                "[::1]:19999",
+                "[2001:db8::1]:19999",
                 "127.0.0.1",   // IP-only v4
                 "2001:db8::1", // IP-only v6
+                "localhost",   // hostname-only
             ];

210-213: Good: exercising new FFI setter; add null-handling + (if available) getter check.

Add a NullPointer assertion for this API in test_config_null_handling and, if a getter exists, assert the flag actually changes.

Add to test_config_null_handling:

assert_eq!(
    dash_spv_ffi_config_set_restrict_to_configured_peers(std::ptr::null_mut(), false),
    FFIErrorCode::NullPointer as i32
);

If there is a dash_spv_ffi_config_get_restrict_to_configured_peers, also assert it returns true after setting.

dash-spv/src/network/tests.rs (1)

127-134: Cover both exclusive-mode branches in tests
This test's config (peers = vec![...], restrict_to_configured_peers = false) always yields exclusive_mode = true via the non-empty peers branch, so the non-exclusive path is never exercised. To cover it, set peers: vec![] here; to cover the "restrict-only" exclusive branch, add a companion test with peers: vec![] and restrict_to_configured_peers: true.
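For reference, the decision being tested reduces to a small pure function (the name and signature are illustrative, not the actual code):

```rust
// Illustrative reduction of the exclusive-mode decision:
// explicitly configured peers OR the restrict flag force exclusive mode.
fn exclusive_mode(has_configured_peers: bool, restrict_to_configured_peers: bool) -> bool {
    has_configured_peers || restrict_to_configured_peers
}
```

Of the four input combinations, only (false, false) yields non-exclusive mode, which is why both flags must be cleared to reach that branch.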

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ac085b2 and 0622f8c.

📒 Files selected for processing (3)
  • dash-spv-ffi/src/config.rs (4 hunks)
  • dash-spv-ffi/tests/unit/test_configuration.rs (4 hunks)
  • dash-spv/src/network/tests.rs (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • dash-spv-ffi/src/config.rs
🧰 Additional context used
📓 Path-based instructions (8)
dash-spv/**/*.rs

📄 CodeRabbit inference engine (dash-spv/CLAUDE.md)

dash-spv/**/*.rs: Enforce Rust formatting via cargo fmt --check on all Rust source files
All code must be clippy-clean: run cargo clippy --all-targets --all-features -- -D warnings

Files:

  • dash-spv/src/network/tests.rs
dash-spv/src/network/**/*.rs

📄 CodeRabbit inference engine (dash-spv/CLAUDE.md)

Implement DNS-first peer discovery with the specified seeds, immediate startup (no initial discovery delay), and exclusive peer mode when explicit peers are provided

Files:

  • dash-spv/src/network/tests.rs
**/*.rs

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.rs: Use proper error types with thiserror and propagate errors appropriately
Use the tokio runtime for async operations in Rust
Use conditional compilation feature flags for optional features (#[cfg(feature = ...)])
Format Rust code with cargo fmt (and enforce via cargo fmt --check)
Run clippy with -D warnings and fix all lints
Adhere to MSRV Rust 1.89 (avoid features requiring newer compiler)

**/*.rs: Format Rust code with rustfmt (per rustfmt.toml); run cargo fmt --all before commits
Lint with clippy; treat warnings as errors in CI
Follow Rust naming: snake_case for functions/variables, UpperCamelCase for types/traits, SCREAMING_SNAKE_CASE for consts
Prefer async using tokio where applicable

Files:

  • dash-spv/src/network/tests.rs
  • dash-spv-ffi/tests/unit/test_configuration.rs
**/src/**/*.rs

📄 CodeRabbit inference engine (AGENTS.md)

**/src/**/*.rs: Each crate keeps sources in src/
Avoid unwrap()/expect() in library code; use proper error types (e.g., thiserror)
Place unit tests alongside code with #[cfg(test)]

Files:

  • dash-spv/src/network/tests.rs
dash-spv-ffi/tests/unit/**/*.rs

📄 CodeRabbit inference engine (dash-spv-ffi/CLAUDE.md)

Add a corresponding Rust unit test for each new FFI function

Files:

  • dash-spv-ffi/tests/unit/test_configuration.rs
{dash-network-ffi,dash-spv-ffi,key-wallet-ffi,swift-dash-core-sdk}/**/*.{rs,c,h,swift}

📄 CodeRabbit inference engine (CLAUDE.md)

Be careful with memory management at FFI boundaries (C/Swift interop)

Files:

  • dash-spv-ffi/tests/unit/test_configuration.rs
**/tests/**/*.rs

📄 CodeRabbit inference engine (CLAUDE.md)

Use proptest for property-based testing where appropriate

**/tests/**/*.rs: Integration tests must live under tests/ (e.g., rpc-integration-test)
Use descriptive test names (e.g., test_parse_address_mainnet) for integration tests
Mark network-dependent or long-running tests with #[ignore] and run with -- --ignored
Add regression tests for fixes; consider property tests (e.g., proptest) where valuable
Keep test vectors deterministic

Files:

  • dash-spv-ffi/tests/unit/test_configuration.rs
*-ffi/**

📄 CodeRabbit inference engine (AGENTS.md)

FFI bindings live in *-ffi crates

Files:

  • dash-spv-ffi/tests/unit/test_configuration.rs
🧠 Learnings (10)
📓 Common learnings
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: dash-spv/CLAUDE.md:0-0
Timestamp: 2025-08-25T17:15:59.361Z
Learning: Applies to dash-spv/src/network/**/*.rs : Implement DNS-first peer discovery with the specified seeds, immediate startup (no initial discovery delay), and exclusive peer mode when explicit peers are provided
📚 Learning: 2025-08-29T04:00:14.154Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: CLAUDE.md:0-0
Timestamp: 2025-08-29T04:00:14.154Z
Learning: Applies to {dash-spv,rpc-integration-test}/tests/**/*.rs : Provide integration tests for network operations

Applied to files:

  • dash-spv/src/network/tests.rs
  • dash-spv-ffi/tests/unit/test_configuration.rs
📚 Learning: 2025-08-16T04:14:07.718Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: dash-spv-ffi/CLAUDE.md:0-0
Timestamp: 2025-08-16T04:14:07.718Z
Learning: Applies to dash-spv-ffi/tests/unit/**/*.rs : Add a corresponding Rust unit test for each new FFI function

Applied to files:

  • dash-spv-ffi/tests/unit/test_configuration.rs
📚 Learning: 2025-08-25T17:15:59.361Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: dash-spv/CLAUDE.md:0-0
Timestamp: 2025-08-25T17:15:59.361Z
Learning: Applies to dash-spv/tests/integration_real_node_test.rs : Integration tests that connect to a live Dash Core node must gracefully skip when the node at 127.0.0.1:9999 is unavailable

Applied to files:

  • dash-spv-ffi/tests/unit/test_configuration.rs
📚 Learning: 2025-08-29T04:01:18.111Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: key-wallet/CLAUDE.md:0-0
Timestamp: 2025-08-29T04:01:18.111Z
Learning: Applies to key-wallet/tests/address_tests.rs : Place address generation and validation tests in tests/address_tests.rs

Applied to files:

  • dash-spv-ffi/tests/unit/test_configuration.rs
📚 Learning: 2025-09-01T14:44:53.683Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: AGENTS.md:0-0
Timestamp: 2025-09-01T14:44:53.683Z
Learning: Applies to **/tests/**/*.rs : Use descriptive test names (e.g., test_parse_address_mainnet) for integration tests

Applied to files:

  • dash-spv-ffi/tests/unit/test_configuration.rs
📚 Learning: 2025-08-29T04:01:18.111Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: key-wallet/CLAUDE.md:0-0
Timestamp: 2025-08-29T04:01:18.111Z
Learning: Applies to key-wallet/tests/**/*.rs : Keep test vectors in sync with Dash Core releases

Applied to files:

  • dash-spv-ffi/tests/unit/test_configuration.rs
📚 Learning: 2025-08-25T17:15:59.361Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: dash-spv/CLAUDE.md:0-0
Timestamp: 2025-08-25T17:15:59.361Z
Learning: Applies to dash-spv/tests/**/*.rs : Place integration tests under the `tests/` directory

Applied to files:

  • dash-spv-ffi/tests/unit/test_configuration.rs
📚 Learning: 2025-08-25T17:15:59.361Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: dash-spv/CLAUDE.md:0-0
Timestamp: 2025-08-25T17:15:59.361Z
Learning: Applies to dash-spv/src/network/**/*.rs : Implement DNS-first peer discovery with the specified seeds, immediate startup (no initial discovery delay), and exclusive peer mode when explicit peers are provided

Applied to files:

  • dash-spv-ffi/tests/unit/test_configuration.rs
📚 Learning: 2025-08-29T04:01:18.111Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: key-wallet/CLAUDE.md:0-0
Timestamp: 2025-08-29T04:01:18.111Z
Learning: Applies to key-wallet/tests/psbt.rs : Place PSBT serialization/deserialization tests in tests/psbt.rs

Applied to files:

  • dash-spv-ffi/tests/unit/test_configuration.rs
🧬 Code graph analysis (1)
dash-spv-ffi/tests/unit/test_configuration.rs (1)
dash-spv-ffi/src/config.rs (1)
  • dash_spv_ffi_config_set_restrict_to_configured_peers (257-266)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
  • GitHub Check: Key Wallet Components Tests
  • GitHub Check: fuzz (hashes_sha512_256)
  • GitHub Check: fuzz (dash_deserialize_amount)
  • GitHub Check: fuzz (dash_deserialize_block)
  • GitHub Check: fuzz (dash_outpoint_string)
  • GitHub Check: fuzz (dash_deser_net_msg)
  • GitHub Check: fuzz (dash_deserialize_address)

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

🧹 Nitpick comments (1)
dash-spv-ffi/tests/unit/test_configuration.rs (1)

77-85: Add a valid localhost case; avoid Bitcoin port literals.

  • Include "localhost:9999" in valid_addrs to exercise the new allowance.
  • Optional: swap 8333 (Bitcoin) for a Dash-typical port to avoid confusion.

Suggested change:

-            let valid_addrs = [
+            let valid_addrs = [
                 "127.0.0.1:9999",
-                "192.168.1.1:8333",
+                "192.168.1.1:9999",
                 "[::1]:9999",
-                "[2001:db8::1]:8333",
+                "[2001:db8::1]:9999",
                 "127.0.0.1",   // IP-only v4
                 "2001:db8::1", // IP-only v6
+                "localhost:9999", // hostname with explicit port
             ];
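As an aside, the IP-only fallback these tests exercise can be sketched with std::net alone; the function name is hypothetical and 19999 merely stands in for a network default port:

```rust
use std::net::{IpAddr, SocketAddr};

// Parse "ip:port" directly, or fall back to an IP-only form
// and attach the network's default port.
fn parse_peer_addr(input: &str, default_port: u16) -> Option<SocketAddr> {
    if let Ok(addr) = input.parse::<SocketAddr>() {
        return Some(addr);
    }
    input.parse::<IpAddr>().ok().map(|ip| SocketAddr::new(ip, default_port))
}
```

Bare hostnames such as "localhost" still fail to parse here, which matches the tests treating hostname forms separately from IP-only forms.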
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0622f8c and ca5f517.

📒 Files selected for processing (1)
  • dash-spv-ffi/tests/unit/test_configuration.rs (3 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (15)
  • GitHub Check: Integration Tests (stable, 22.1.3)
  • GitHub Check: fuzz (hashes_sha256)
  • GitHub Check: SPV Components Tests
  • GitHub Check: Core Components Tests
  • GitHub Check: fuzz (hashes_sha512)
  • GitHub Check: fuzz (hashes_json)
  • GitHub Check: fuzz (hashes_cbor)
  • GitHub Check: fuzz (dash_deserialize_address)
  • GitHub Check: fuzz (dash_deser_net_msg)
  • GitHub Check: fuzz (hashes_sha512_256)
  • GitHub Check: fuzz (hashes_ripemd160)
  • GitHub Check: fuzz (dash_deserialize_witness)
  • GitHub Check: fuzz (dash_deserialize_block)
  • GitHub Check: fuzz (dash_deserialize_amount)
  • GitHub Check: fuzz (dash_deserialize_script)
🔇 Additional comments (2)
dash-spv-ffi/tests/unit/test_configuration.rs (2)

61-61: Clarify localhost acceptance and add a positive case.

PR title mentions allowing localhost via FFI. Rejecting bare "localhost" is fine (no port), but please add an explicit valid case (e.g., "localhost:9999") to prevent regressions and make intent clear.


207-210: LGTM: exercises new FFI setter.

Good to see this covered in tests. No further action from my side.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 3

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (5)
dash-spv/src/storage/disk.rs (5)

229-297: Remove duplicate worker initialization in new()

This entire block duplicates the worker initialization logic that should be handled by start_worker().

Replace the duplicate code with a call to start_worker():

-        // Create background worker channels
-        let (worker_tx, mut worker_rx) = mpsc::channel::<WorkerCommand>(100);
-        let (notification_tx, notification_rx) = mpsc::channel::<WorkerNotification>(100);
-
-        // Start background worker
-        let worker_base_path = base_path.clone();
-        let worker_notification_tx = notification_tx.clone();
-        let worker_handle = tokio::spawn(async move {
-            // ... duplicate code ...
-        });

         let mut storage = Self {
             base_path,
             active_segments: Arc::new(RwLock::new(HashMap::new())),
             active_filter_segments: Arc::new(RwLock::new(HashMap::new())),
             header_hash_index: Arc::new(RwLock::new(HashMap::new())),
-            worker_tx: Some(worker_tx),
-            worker_handle: Some(worker_handle),
-            notification_rx: Arc::new(RwLock::new(notification_rx)),
+            worker_tx: None,
+            worker_handle: None,
+            notification_rx: Arc::new(RwLock::new(mpsc::channel(1).1)), // Placeholder
             cached_tip_height: Arc::new(RwLock::new(None)),
             cached_filter_tip_height: Arc::new(RwLock::new(None)),
             sync_base_height: Arc::new(RwLock::new(0)),
             mempool_transactions: Arc::new(RwLock::new(HashMap::new())),
             mempool_state: Arc::new(RwLock::new(None)),
         };
+
+        // Start the background worker
+        storage.start_worker().await;

516-546: Potential deadlock in evict_oldest_segment

Calling save_segment_to_disk synchronously while holding the write lock on segments could cause deadlocks if other operations need to acquire the same lock.

Move the synchronous save outside of the critical section:

     async fn evict_oldest_segment(
         &self,
         segments: &mut HashMap<u32, SegmentCache>,
     ) -> StorageResult<()> {
         if let Some(oldest_id) =
             segments.iter().min_by_key(|(_, s)| s.last_accessed).map(|(id, _)| *id)
         {
-            // Get the segment to check if it needs saving
-            if let Some(oldest_segment) = segments.get(&oldest_id) {
-                // Save if dirty or saving before evicting - do it synchronously to ensure data consistency
-                if oldest_segment.state != SegmentState::Clean {
-                    tracing::debug!(
-                        "Synchronously saving segment {} before eviction (state: {:?})",
-                        oldest_segment.segment_id,
-                        oldest_segment.state
-                    );
-                    let segment_path = self
-                        .base_path
-                        .join(format!("headers/segment_{:04}.dat", oldest_segment.segment_id));
-                    save_segment_to_disk(&segment_path, &oldest_segment.headers).await?;
-                    tracing::debug!(
-                        "Successfully saved segment {} to disk",
-                        oldest_segment.segment_id
-                    );
-                }
-            }
-
-            segments.remove(&oldest_id);
+            // Extract the segment if it needs saving
+            let segment_to_save = segments.get(&oldest_id)
+                .filter(|s| s.state != SegmentState::Clean)
+                .cloned();
+            
+            // Remove from cache first
+            segments.remove(&oldest_id);
+            
+            // Save outside the critical section if needed
+            if let Some(segment) = segment_to_save {
+                tracing::debug!(
+                    "Synchronously saving segment {} before eviction (state: {:?})",
+                    segment.segment_id,
+                    segment.state
+                );
+                let segment_path = self
+                    .base_path
+                    .join(format!("headers/segment_{:04}.dat", segment.segment_id));
+                save_segment_to_disk(&segment_path, &segment.headers).await?;
+                tracing::debug!(
+                    "Successfully saved segment {} to disk",
+                    segment.segment_id
+                );
+            }
         }
 
         Ok(())
     }

893-897: Incorrect valid_count update logic

The condition for updating valid_count only works correctly for sequential writes. If headers are written out of order or with gaps, this could lead to incorrect valid_count values.

Fix the valid_count update logic to handle all cases:

                     segment.headers[offset] = *header;
-                    // Only increment valid_count when offset equals the current valid_count
-                    // This ensures valid_count represents contiguous valid headers without gaps
-                    if offset == segment.valid_count {
-                        segment.valid_count += 1;
-                    }
+                    // Update valid_count to be the maximum offset + 1 of non-sentinel headers
+                    segment.valid_count = segment.valid_count.max(offset + 1);

837-948: Missing validation in store_headers_from_height

The function doesn't validate that headers are being stored at the correct blockchain height relative to existing headers.

Add validation to ensure headers are appended correctly:

     pub async fn store_headers_from_height(
         &mut self,
         headers: &[BlockHeader],
         start_height: u32,
     ) -> StorageResult<()> {
         // Early return if no headers to store
         if headers.is_empty() {
             tracing::trace!("DiskStorage: no headers to store");
             return Ok(());
         }
 
         // Acquire write locks for the entire operation to prevent race conditions
         let mut cached_tip = self.cached_tip_height.write().await;
         let mut reverse_index = self.header_hash_index.write().await;
+        
+        // Validate that we're not creating gaps in the blockchain
+        if let Some(current_tip) = *cached_tip {
+            let sync_base = *self.sync_base_height.read().await;
+            let expected_next_blockchain_height = sync_base + current_tip + 1;
+            if start_height != expected_next_blockchain_height {
+                return Err(StorageError::InvalidOperation(format!(
+                    "Cannot store headers at height {}, expected height {}",
+                    start_height, expected_next_blockchain_height
+                )));
+            }
+        }

674-752: The background save marking logic might skip segments

When collecting segments to save, the code acquires the segment locks at different points: a read lock to collect the dirty segments, then a separate write lock to mark them. This leaves a window where a segment can become dirty after being checked but before being marked as Saving.

Consolidate the segment collection and marking into a single critical section:

     async fn save_dirty_segments(&self) -> StorageResult<()> {
         if let Some(tx) = &self.worker_tx {
-            // Collect segments to save (only dirty ones)
-            let (segments_to_save, segment_ids_to_mark) = {
+            // Collect and mark header segments in one operation
+            let segments_to_save = {
-                let segments = self.active_segments.read().await;
-                let to_save: Vec<_> = segments
+                let mut segments = self.active_segments.write().await;
+                let mut to_save = Vec::new();
+                for segment in segments.values_mut() {
+                    if segment.state == SegmentState::Dirty {
+                        segment.state = SegmentState::Saving;
+                        segment.last_saved = Instant::now();
+                        to_save.push((segment.segment_id, segment.headers.clone()));
+                    }
+                }
-                    .values()
-                    .filter(|s| s.state == SegmentState::Dirty)
-                    .map(|s| (s.segment_id, s.headers.clone()))
-                    .collect();
-                let ids_to_mark: Vec<_> = to_save.iter().map(|(id, _)| *id).collect();
-                (to_save, ids_to_mark)
+                to_save
             };
 
             // Send header segments to worker
             for (segment_id, headers) in segments_to_save {
                 let _ = tx
                     .send(WorkerCommand::SaveHeaderSegment {
                         segment_id,
                         headers,
                     })
                     .await;
             }
-
-            // Mark ONLY the header segments we're actually saving as Saving
-            {
-                let mut segments = self.active_segments.write().await;
-                for segment_id in &segment_ids_to_mark {
-                    if let Some(segment) = segments.get_mut(segment_id) {
-                        segment.state = SegmentState::Saving;
-                        segment.last_saved = Instant::now();
-                    }
-                }
-            }

Apply the same pattern for filter segments.
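The filter-segment version of that critical section has the same shape. A minimal sketch with stub types standing in for the crate's `FilterSegmentCache` and `SegmentState` (field names here are assumptions):

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Debug)]
enum SegmentState {
    Clean,
    Dirty,
    Saving,
}

struct FilterSegmentCache {
    segment_id: u32,
    state: SegmentState,
    filter_headers: Vec<[u8; 32]>,
}

/// Flip Dirty -> Saving and collect the data to hand to the background
/// worker in one pass (done under a single write lock in the real code),
/// so no segment can change state between the scan and the marking.
fn collect_and_mark_dirty(
    segments: &mut HashMap<u32, FilterSegmentCache>,
) -> Vec<(u32, Vec<[u8; 32]>)> {
    let mut to_save = Vec::new();
    for segment in segments.values_mut() {
        if segment.state == SegmentState::Dirty {
            segment.state = SegmentState::Saving;
            to_save.push((segment.segment_id, segment.filter_headers.clone()));
        }
    }
    to_save
}
```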

♻️ Duplicate comments (1)
dash-spv/src/storage/disk.rs (1)

1093-1097: Duplicate valid_count update issue

Same issue as in store_headers_from_height - the valid_count update logic is incorrect for non-sequential writes.

Apply the same fix as suggested for line 893-897.

🧹 Nitpick comments (5)
dash-spv/src/storage/disk.rs (5)

593-620: Inefficient cloning in evict_oldest_filter_segment

The entire segment is cloned unnecessarily when only checking if it needs saving.

Optimize by extracting only necessary data:

     async fn evict_oldest_filter_segment(
         &self,
         segments: &mut HashMap<u32, FilterSegmentCache>,
     ) -> StorageResult<()> {
-        if let Some((oldest_id, oldest_segment)) =
-            segments.iter().min_by_key(|(_, s)| s.last_accessed).map(|(id, s)| (*id, s.clone()))
+        if let Some(oldest_id) =
+            segments.iter().min_by_key(|(_, s)| s.last_accessed).map(|(id, _)| *id)
         {
+            // Extract the segment if it needs saving
+            let segment_to_save = segments.get(&oldest_id)
+                .filter(|s| s.state != SegmentState::Clean)
+                .cloned();
+
             // Save if dirty or saving before evicting - do it synchronously to ensure data consistency
-            if oldest_segment.state != SegmentState::Clean {
+            if let Some(oldest_segment) = segment_to_save {
                 tracing::trace!(
                     "Synchronously saving filter segment {} before eviction (state: {:?})",
                     oldest_segment.segment_id,
                     oldest_segment.state
                 );
                 let segment_path = self
                     .base_path
                     .join(format!("filters/filter_segment_{:04}.dat", oldest_segment.segment_id));
                 save_filter_segment_to_disk(&segment_path, &oldest_segment.filter_headers).await?;
                 tracing::debug!(
                     "Successfully saved filter segment {} to disk",
                     oldest_segment.segment_id
                 );
             }
 
+            // Evict only after a successful save so a failed write cannot lose data
             segments.remove(&oldest_id);
         }
 
         Ok(())
     }

964-971: Sentinel header detection is fragile

The sentinel header detection uses multiple conditions that could accidentally match valid headers in edge cases.

Use a more robust sentinel detection:

+/// Check if a header is a sentinel (padding) header
+fn is_sentinel_header(header: &BlockHeader) -> bool {
+    header.version.to_consensus() == i32::MAX
+        && header.time == u32::MAX
+        && header.nonce == u32::MAX
+        && header.prev_blockhash == BlockHash::from_byte_array([0xFF; 32])
+        && header.bits.to_consensus() == 0xFFFFFFFF
+}

             // Only save actual headers, not sentinel headers
             for header in headers {
                 // Skip sentinel headers (used for padding)
-                if header.version.to_consensus() == i32::MAX
-                    && header.time == u32::MAX
-                    && header.nonce == u32::MAX
-                    && header.prev_blockhash == BlockHash::from_byte_array([0xFF; 32])
-                {
+                if is_sentinel_header(header) {
                     continue;
                 }

1200-1203: Confusing comment about height expectations

The TODO comment mentions that the method expects storage-relative heights, but the method name and usage suggest it should accept blockchain heights.

Either clarify the API or refactor to handle blockchain heights:

     async fn get_header(&self, height: u32) -> StorageResult<Option<BlockHeader>> {
-        // TODO: This method currently expects storage-relative heights (0-based from sync_base_height).
-        // Consider refactoring to accept blockchain heights and handle conversion internally for better UX.
+        // Note: This method expects storage-relative heights (0-based index into storage).
+        // For checkpoint sync, callers must convert blockchain heights to storage indices.

Or better yet, refactor to accept blockchain heights for consistency.
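A blockchain-height entry point could be a thin wrapper over the storage-relative lookup; `checked_sub` also rejects heights below the checkpoint for free. A sketch (the helper is hypothetical, not the crate's actual API):

```rust
/// Convert a blockchain height into the 0-based storage index used by
/// `get_header`, returning None for heights below the checkpoint
/// (sync_base_height), which have no stored header.
fn blockchain_to_storage_index(blockchain_height: u32, sync_base_height: u32) -> Option<u32> {
    blockchain_height.checked_sub(sync_base_height)
}
```

A `get_header_by_blockchain_height` wrapper could then translate once and delegate, keeping the storage-relative method as an internal detail.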


1273-1289: Complex and error-prone storage index calculation

The storage index calculation for filter headers with checkpoint sync is complex and could be simplified.

Extract the conversion logic into a helper method:

+    /// Convert blockchain height to storage index for checkpoint sync
+    fn blockchain_to_storage_index(&self, blockchain_height: u32, sync_base_height: u32) -> u32 {
+        if sync_base_height > 0 && blockchain_height >= sync_base_height {
+            blockchain_height - sync_base_height
+        } else {
+            blockchain_height
+        }
+    }

             // Convert blockchain height to storage index
-            let storage_index = if sync_base_height > 0 {
-                // For checkpoint sync, storage index is relative to sync_base_height
-                if next_blockchain_height >= sync_base_height {
-                    next_blockchain_height - sync_base_height
-                } else {
-                    // This shouldn't happen in normal operation
-                    tracing::warn!(
-                        "Attempting to store filter header at height {} below sync_base_height {}",
-                        next_blockchain_height,
-                        sync_base_height
-                    );
-                    next_blockchain_height
-                }
-            } else {
-                // For genesis sync, storage index equals blockchain height
-                next_blockchain_height
-            };
+            let storage_index = self.blockchain_to_storage_index(next_blockchain_height, sync_base_height);
+            if sync_base_height > 0 && next_blockchain_height < sync_base_height {
+                tracing::warn!(
+                    "Attempting to store filter header at height {} below sync_base_height {}",
+                    next_blockchain_height,
+                    sync_base_height
+                );
+            }

27-30: Consider making constants configurable

The hardcoded constants for segment size and cache size might not be optimal for all deployments.

Consider making these configurable through a builder pattern or configuration struct:

pub struct DiskStorageConfig {
    pub headers_per_segment: u32,
    pub max_active_segments: usize,
    pub base_path: PathBuf,
}

impl Default for DiskStorageConfig {
    fn default() -> Self {
        Self {
            headers_per_segment: 50_000,
            max_active_segments: 10,
            base_path: PathBuf::from("./storage"),
        }
    }
}

This would allow users to tune performance based on their specific requirements (e.g., memory-constrained vs. performance-optimized deployments).
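Usage could then override only the knobs that matter via struct update syntax, e.g. a memory-constrained profile (this mirrors the sketched struct above; it is not an existing API):

```rust
use std::path::PathBuf;

pub struct DiskStorageConfig {
    pub headers_per_segment: u32,
    pub max_active_segments: usize,
    pub base_path: PathBuf,
}

impl Default for DiskStorageConfig {
    fn default() -> Self {
        Self {
            headers_per_segment: 50_000,
            max_active_segments: 10,
            base_path: PathBuf::from("./storage"),
        }
    }
}

/// A memory-constrained profile: smaller segment cache, default segment size.
fn low_memory_config() -> DiskStorageConfig {
    DiskStorageConfig {
        max_active_segments: 4,
        ..Default::default()
    }
}
```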

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ca5f517 and 1f4bf2b.

📒 Files selected for processing (2)
  • .gitignore (1 hunks)
  • dash-spv/src/storage/disk.rs (2 hunks)
✅ Files skipped from review due to trivial changes (1)
  • .gitignore
🧰 Additional context used
📓 Path-based instructions (4)
dash-spv/**/*.rs

📄 CodeRabbit inference engine (dash-spv/CLAUDE.md)

dash-spv/**/*.rs: Enforce Rust formatting via cargo fmt --check on all Rust source files
All code must be clippy-clean: run cargo clippy --all-targets --all-features -- -D warnings

Files:

  • dash-spv/src/storage/disk.rs
dash-spv/src/storage/**/*.rs

📄 CodeRabbit inference engine (dash-spv/CLAUDE.md)

Provide both MemoryStorageManager and DiskStorageManager behind the StorageManager trait

Files:

  • dash-spv/src/storage/disk.rs
**/*.rs

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.rs: Use proper error types with thiserror and propagate errors appropriately
Use the tokio runtime for async operations in Rust
Use conditional compilation feature flags for optional features (#[cfg(feature = ...)])
Format Rust code with cargo fmt (and enforce via cargo fmt --check)
Run clippy with -D warnings and fix all lints
Adhere to MSRV Rust 1.89 (avoid features requiring newer compiler)

**/*.rs: Format Rust code with rustfmt (per rustfmt.toml); run cargo fmt --all before commits
Lint with clippy; treat warnings as errors in CI
Follow Rust naming: snake_case for functions/variables, UpperCamelCase for types/traits, SCREAMING_SNAKE_CASE for consts
Prefer async using tokio where applicable

Files:

  • dash-spv/src/storage/disk.rs
**/src/**/*.rs

📄 CodeRabbit inference engine (AGENTS.md)

**/src/**/*.rs: Each crate keeps sources in src/
Avoid unwrap()/expect() in library code; use proper error types (e.g., thiserror)
Place unit tests alongside code with #[cfg(test)]

Files:

  • dash-spv/src/storage/disk.rs
🧠 Learnings (2)
📓 Common learnings
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: dash-spv/CLAUDE.md:0-0
Timestamp: 2025-08-25T17:15:59.361Z
Learning: Applies to dash-spv/src/network/**/*.rs : Implement DNS-first peer discovery with the specified seeds, immediate startup (no initial discovery delay), and exclusive peer mode when explicit peers are provided
📚 Learning: 2025-08-25T17:15:59.361Z
Learnt from: CR
PR: dashpay/rust-dashcore#0
File: dash-spv/CLAUDE.md:0-0
Timestamp: 2025-08-25T17:15:59.361Z
Learning: Applies to dash-spv/src/storage/**/*.rs : Provide both `MemoryStorageManager` and `DiskStorageManager` behind the `StorageManager` trait

Applied to files:

  • dash-spv/src/storage/disk.rs
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (19)
  • GitHub Check: fuzz (hashes_sha512_256)
  • GitHub Check: fuzz (hashes_sha256)
  • GitHub Check: fuzz (hashes_sha1)
  • GitHub Check: fuzz (hashes_cbor)
  • GitHub Check: fuzz (hashes_ripemd160)
  • GitHub Check: fuzz (hashes_json)
  • GitHub Check: fuzz (dash_outpoint_string)
  • GitHub Check: fuzz (dash_deserialize_block)
  • GitHub Check: fuzz (dash_deser_net_msg)
  • GitHub Check: fuzz (dash_deserialize_script)
  • GitHub Check: fuzz (dash_deserialize_address)
  • GitHub Check: fuzz (dash_deserialize_witness)
  • GitHub Check: fuzz (dash_deserialize_amount)
  • GitHub Check: RPC Tests (stable, true)
  • GitHub Check: Strict Warnings and Clippy Checks
  • GitHub Check: Integration Tests (stable, 22.1.3)
  • GitHub Check: Core Components Tests
  • GitHub Check: Key Wallet Components Tests
  • GitHub Check: SPV Components Tests

Comment on lines +136 to +198
async fn start_worker(&mut self) {
let (worker_tx, mut worker_rx) = mpsc::channel::<WorkerCommand>(100);
let (notification_tx, notification_rx) = mpsc::channel::<WorkerNotification>(100);

let worker_base_path = self.base_path.clone();
let worker_notification_tx = notification_tx.clone();
let worker_handle = tokio::spawn(async move {
while let Some(cmd) = worker_rx.recv().await {
match cmd {
WorkerCommand::SaveHeaderSegment {
segment_id,
headers,
} => {
let path =
worker_base_path.join(format!("headers/segment_{:04}.dat", segment_id));
if let Err(e) = save_segment_to_disk(&path, &headers).await {
eprintln!("Failed to save segment {}: {}", segment_id, e);
} else {
let _ = worker_notification_tx
.send(WorkerNotification::HeaderSegmentSaved {
segment_id,
})
.await;
}
}
WorkerCommand::SaveFilterSegment {
segment_id,
filter_headers,
} => {
let path = worker_base_path
.join(format!("filters/filter_segment_{:04}.dat", segment_id));
if let Err(e) = save_filter_segment_to_disk(&path, &filter_headers).await {
eprintln!("Failed to save filter segment {}: {}", segment_id, e);
} else {
let _ = worker_notification_tx
.send(WorkerNotification::FilterSegmentSaved {
segment_id,
})
.await;
}
}
WorkerCommand::SaveIndex {
index,
} => {
let path = worker_base_path.join("headers/index.dat");
if let Err(e) = save_index_to_disk(&path, &index).await {
eprintln!("Failed to save index: {}", e);
} else {
let _ =
worker_notification_tx.send(WorkerNotification::IndexSaved).await;
}
}
WorkerCommand::Shutdown => {
break;
}
}
}
});

self.worker_tx = Some(worker_tx);
self.worker_handle = Some(worker_handle);
self.notification_rx = Arc::new(RwLock::new(notification_rx));
}

🛠️ Refactor suggestion

Duplicated background worker initialization code

The background worker initialization code is duplicated between start_worker() and new(). This violates the DRY principle and could lead to maintenance issues if the worker logic needs to be updated.

Extract the worker spawning logic into a reusable function:

+    /// Spawn the background worker task
+    fn spawn_worker(
+        base_path: PathBuf,
+        mut worker_rx: mpsc::Receiver<WorkerCommand>,
+        notification_tx: mpsc::Sender<WorkerNotification>,
+    ) -> tokio::task::JoinHandle<()> {
+        tokio::spawn(async move {
+            while let Some(cmd) = worker_rx.recv().await {
+                match cmd {
+                    WorkerCommand::SaveHeaderSegment {
+                        segment_id,
+                        headers,
+                    } => {
+                        let path =
+                            base_path.join(format!("headers/segment_{:04}.dat", segment_id));
+                        if let Err(e) = save_segment_to_disk(&path, &headers).await {
+                            eprintln!("Failed to save segment {}: {}", segment_id, e);
+                        } else {
+                            tracing::trace!(
+                                "Background worker completed saving header segment {}",
+                                segment_id
+                            );
+                            let _ = notification_tx
+                                .send(WorkerNotification::HeaderSegmentSaved {
+                                    segment_id,
+                                })
+                                .await;
+                        }
+                    }
+                    WorkerCommand::SaveFilterSegment {
+                        segment_id,
+                        filter_headers,
+                    } => {
+                        let path = base_path
+                            .join(format!("filters/filter_segment_{:04}.dat", segment_id));
+                        if let Err(e) = save_filter_segment_to_disk(&path, &filter_headers).await {
+                            eprintln!("Failed to save filter segment {}: {}", segment_id, e);
+                        } else {
+                            tracing::trace!(
+                                "Background worker completed saving filter segment {}",
+                                segment_id
+                            );
+                            let _ = notification_tx
+                                .send(WorkerNotification::FilterSegmentSaved {
+                                    segment_id,
+                                })
+                                .await;
+                        }
+                    }
+                    WorkerCommand::SaveIndex {
+                        index,
+                    } => {
+                        let path = base_path.join("headers/index.dat");
+                        if let Err(e) = save_index_to_disk(&path, &index).await {
+                            eprintln!("Failed to save index: {}", e);
+                        } else {
+                            tracing::trace!("Background worker completed saving index");
+                            let _ =
+                                notification_tx.send(WorkerNotification::IndexSaved).await;
+                        }
+                    }
+                    WorkerCommand::Shutdown => {
+                        break;
+                    }
+                }
+            }
+        })
+    }

     /// Start the background worker and notification channel.
     async fn start_worker(&mut self) {
         let (worker_tx, mut worker_rx) = mpsc::channel::<WorkerCommand>(100);
         let (notification_tx, notification_rx) = mpsc::channel::<WorkerNotification>(100);
 
-        let worker_base_path = self.base_path.clone();
-        let worker_notification_tx = notification_tx.clone();
-        let worker_handle = tokio::spawn(async move {
-            // ... duplicate code ...
-        });
+        let worker_handle = Self::spawn_worker(self.base_path.clone(), worker_rx, notification_tx);
 
         self.worker_tx = Some(worker_tx);
         self.worker_handle = Some(worker_handle);
         self.notification_rx = Arc::new(RwLock::new(notification_rx));
     }

Then update the new() method to use the same function.
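The shape of a shared spawner can be illustrated with std channels and a thread standing in for the tokio task (command variants are simplified here; the real code returns tokio channel and task types):

```rust
use std::sync::mpsc;
use std::thread;

enum WorkerCommand {
    SaveSegment(u32),
    Shutdown,
}

#[derive(Debug, PartialEq)]
enum WorkerNotification {
    SegmentSaved(u32),
}

/// Single spawner used by both constructors, so the command-handling loop
/// exists in exactly one place.
fn spawn_worker() -> (
    mpsc::Sender<WorkerCommand>,
    mpsc::Receiver<WorkerNotification>,
    thread::JoinHandle<()>,
) {
    let (cmd_tx, cmd_rx) = mpsc::channel();
    let (note_tx, note_rx) = mpsc::channel();
    let handle = thread::spawn(move || {
        while let Ok(cmd) = cmd_rx.recv() {
            match cmd {
                WorkerCommand::SaveSegment(id) => {
                    // ... persist the segment to disk here, then notify ...
                    let _ = note_tx.send(WorkerNotification::SegmentSaved(id));
                }
                WorkerCommand::Shutdown => break,
            }
        }
    });
    (cmd_tx, note_rx, handle)
}
```

Both `new()` and `start_worker()` would call this helper and store the returned sender, handle, and notification receiver on `self`.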

🤖 Prompt for AI Agents
In dash-spv/src/storage/disk.rs around lines 136 to 198, the worker spawn logic
inside start_worker() is duplicated with new(); extract that block into a
private helper function (e.g., spawn_disk_worker) that takes a PathBuf (or
&Path) for base_path and returns (mpsc::Sender<WorkerCommand>,
tokio::task::JoinHandle<()>, mpsc::Receiver<WorkerNotification>) or the already
wrapped types you need; move the tokio::spawn closure into that function,
cloning base_path and notification sender inside it, and preserve the exact
command handling and notification sends; then simplify start_worker() to call
this helper, assign self.worker_tx, self.worker_handle, and self.notification_rx
(wrapping the returned notification receiver in Arc<RwLock<...>> as before), and
update new() to call the same helper so both paths reuse the same
worker-spawning implementation.

Comment on lines +152 to +153
eprintln!("Failed to save segment {}: {}", segment_id, e);
} else {

🛠️ Refactor suggestion

Use structured logging instead of eprintln!

The code uses eprintln! for error messages, which bypasses the structured logging system and makes it harder to filter or redirect logs appropriately.

Replace eprintln! with proper logging:

-                        if let Err(e) = save_segment_to_disk(&path, &headers).await {
-                            eprintln!("Failed to save segment {}: {}", segment_id, e);
+                        if let Err(e) = save_segment_to_disk(&path, &headers).await {
+                            tracing::error!("Failed to save segment {}: {}", segment_id, e);

Apply this change to all occurrences of eprintln! in the worker code.

Also applies to: 168-169, 182-183, 246-247, 266-267, 284-285

🤖 Prompt for AI Agents
In dash-spv/src/storage/disk.rs around lines 152-153 (and similarly at 168-169,
182-183, 246-247, 266-267, 284-285), replace the eprintln! calls with structured
logging (e.g., log::error! or the project's logger macro) so errors go through
the logging system; change each eprintln! invocation to an error-level log call
that includes the same contextual fields (segment_id, error) and ensure the
module imports the logging crate/macro (or uses the existing logger) so the
messages are recorded in structured form.

Comment on lines 1563 to 1577
// Best-effort removal; if concurrent files appear, retry once
match tokio::fs::remove_dir_all(&self.base_path).await {
Ok(_) => {}
Err(e) => {
// Retry once after a short delay to handle transient races
if e.kind() == std::io::ErrorKind::Other
|| e.kind() == std::io::ErrorKind::DirectoryNotEmpty
{
tokio::time::sleep(std::time::Duration::from_millis(50)).await;
tokio::fs::remove_dir_all(&self.base_path).await?;
} else {
return Err(StorageError::Io(e).into());
}
}
}

🛠️ Refactor suggestion

Race condition in directory removal and retry logic

The retry logic for directory removal could still fail if files are created between retries. The error handling also doesn't distinguish between actual permission errors and race conditions.

Improve the retry logic and error handling:

         // Remove all files and directories under base_path
         if self.base_path.exists() {
             // Best-effort removal; if concurrent files appear, retry once
-            match tokio::fs::remove_dir_all(&self.base_path).await {
-                Ok(_) => {}
-                Err(e) => {
-                    // Retry once after a short delay to handle transient races
-                    if e.kind() == std::io::ErrorKind::Other
-                        || e.kind() == std::io::ErrorKind::DirectoryNotEmpty
-                    {
-                        tokio::time::sleep(std::time::Duration::from_millis(50)).await;
-                        tokio::fs::remove_dir_all(&self.base_path).await?;
-                    } else {
-                        return Err(StorageError::Io(e).into());
-                    }
-                }
-            }
+            let mut retries = 2;
+            while retries > 0 {
+                match tokio::fs::remove_dir_all(&self.base_path).await {
+                    Ok(_) => break,
+                    Err(e) if retries > 1 => {
+                        match e.kind() {
+                            std::io::ErrorKind::Other | 
+                            std::io::ErrorKind::DirectoryNotEmpty => {
+                                tracing::debug!("Directory removal failed, retrying: {}", e);
+                                tokio::time::sleep(std::time::Duration::from_millis(100)).await;
+                                retries -= 1;
+                            }
+                            _ => return Err(StorageError::Io(e).into()),
+                        }
+                    }
+                    Err(e) => {
+                        tracing::error!("Failed to clear storage directory after retries: {}", e);
+                        return Err(StorageError::Io(e).into());
+                    }
+                }
+            }
             tokio::fs::create_dir_all(&self.base_path).await?;
         }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// Best-effort removal; if concurrent files appear, retry once
match tokio::fs::remove_dir_all(&self.base_path).await {
Ok(_) => {}
Err(e) => {
// Retry once after a short delay to handle transient races
if e.kind() == std::io::ErrorKind::Other
|| e.kind() == std::io::ErrorKind::DirectoryNotEmpty
{
tokio::time::sleep(std::time::Duration::from_millis(50)).await;
tokio::fs::remove_dir_all(&self.base_path).await?;
} else {
return Err(StorageError::Io(e).into());
}
}
}
// Remove all files and directories under base_path
if self.base_path.exists() {
// Best-effort removal; if concurrent files appear, retry up to two times
let mut retries = 2;
while retries > 0 {
match tokio::fs::remove_dir_all(&self.base_path).await {
Ok(_) => break,
// On certain transient errors, wait and retry
Err(e) if retries > 1 => {
match e.kind() {
std::io::ErrorKind::Other |
std::io::ErrorKind::DirectoryNotEmpty => {
tracing::debug!("Directory removal failed, retrying ({} remaining): {}", retries - 1, e);
tokio::time::sleep(std::time::Duration::from_millis(100)).await;
retries -= 1;
}
// Any other error is fatal
_ => return Err(StorageError::Io(e).into()),
}
}
// After exhausting retries, surface the error
Err(e) => {
tracing::error!("Failed to clear storage directory after retries: {}", e);
return Err(StorageError::Io(e).into());
}
}
}
tokio::fs::create_dir_all(&self.base_path).await?;
}

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (1)
dash-spv/src/storage/disk.rs (1)

564-577: Potential race condition in directory removal

The retry logic for directory removal could still fail if files are created between retries. The error handling also doesn't distinguish between actual permission errors and race conditions properly.

Consider implementing a more robust retry mechanism with exponential backoff and better error classification to handle concurrent file operations more gracefully.
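One possible shape for that, using std primitives and the transient error kinds from the snippet above (the helper name and parameters are illustrative, not part of the crate):

```rust
use std::io;
use std::thread;
use std::time::Duration;

/// Retry a fallible operation with exponential backoff, retrying only on
/// error kinds that look like transient directory races; any other error
/// (e.g. a permission error) is surfaced immediately.
fn retry_with_backoff<F>(mut op: F, max_attempts: u32) -> io::Result<()>
where
    F: FnMut() -> io::Result<()>,
{
    assert!(max_attempts > 0);
    let mut delay = Duration::from_millis(50);
    let mut attempt = 1;
    loop {
        match op() {
            Ok(()) => return Ok(()),
            Err(e)
                if attempt < max_attempts
                    && matches!(
                        e.kind(),
                        io::ErrorKind::Other | io::ErrorKind::DirectoryNotEmpty
                    ) =>
            {
                thread::sleep(delay);
                delay *= 2; // back off exponentially between attempts
                attempt += 1;
            }
            Err(e) => return Err(e),
        }
    }
}
```

In the async storage code this would use `tokio::time::sleep` instead of `thread::sleep`, with `tokio::fs::remove_dir_all` as the operation.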

🧹 Nitpick comments (1)
dash-spv/src/storage/disk.rs (1)

1200-1203: Important TODO comment about height parameter semantics

The TODO comment indicates a significant UX issue where the method expects storage-relative heights but would be more intuitive with blockchain heights. This could lead to incorrect usage.

Would you like me to help refactor this method to accept blockchain heights and handle the conversion internally? This would improve the API usability and reduce potential confusion.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1f4bf2b and d60558c.

📒 Files selected for processing (1)
  • dash-spv/src/storage/disk.rs (2 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
  • GitHub Check: RPC Tests (stable, true)
  • GitHub Check: Core Components Tests
  • GitHub Check: Key Wallet Components Tests
  • GitHub Check: fuzz (hashes_ripemd160)
  • GitHub Check: fuzz (dash_script_bytes_to_asm_fmt)
  • GitHub Check: fuzz (dash_deserialize_amount)
🔇 Additional comments (3)
dash-spv/src/storage/disk.rs (3)

836-948: Excellent implementation of checkpoint-aware storage

This method correctly handles the dual indexing system (blockchain height vs storage index) and provides proper checkpoint sync functionality. The extensive logging helps with debugging and the atomic operations prevent race conditions.


1549-1588: Good implementation of worker lifecycle management in clear()

The implementation correctly stops the worker before file deletion and restarts it afterward. This prevents race conditions and ensures the background worker is available for future operations.


1040-1042: Ignore sync_base_height absence concern
sync_base_height is a non-optional u32 (defaulted to 0 via #[derive(Default)]) and load_chain_state() returns None only when the state file is missing; unwrap_or(0) therefore does not mask a missing field but simply handles the “no state file” case.

Likely an incorrect or invalid review comment.

@QuantumExplorer QuantumExplorer merged commit be06e94 into v0.40-dev Sep 10, 2025
27 of 28 checks passed
@QuantumExplorer QuantumExplorer deleted the feat/AllowLocalhostOnFFI branch September 10, 2025 09:45