
backport: bitcoin#18933, #23123, #23147, #23549, #26074, #26207, #26307, #26341, #26352, #26656, #26886, #26996, #27035 #7192

Open
knst wants to merge 13 commits into dashpay:develop from knst:bp-v25-p8

Conversation


@knst knst commented Mar 2, 2026

What was done?

Backports from Bitcoin Core v25, mostly related to `-rescan`, `rescanblockchain`, and the new `scanblocks` RPC

How Has This Been Tested?

Ran unit & functional tests.

Here's a patch that brute-forces a false-positive scriptPubKey for rpc_scanblocks.py:

diff --git a/test/functional/rpc_scanblocks.py b/test/functional/rpc_scanblocks.py
index 3bb54a9cd8..1fb080c6a5 100755
--- a/test/functional/rpc_scanblocks.py
+++ b/test/functional/rpc_scanblocks.py
@@ -82,11 +84,14 @@ class ScanblocksTest(BitcoinTestFramework):
         genesis_spks = bip158_relevant_scriptpubkeys(node, genesis_blockhash)
         assert_equal(len(genesis_spks), 1)
         genesis_coinbase_spk = list(genesis_spks)[0]
-        false_positive_spk = bytes.fromhex("001400000000000000000000000000000000000cadcb")
-
         genesis_coinbase_hash = bip158_basic_element_hash(genesis_coinbase_spk, 1, genesis_blockhash)
-        false_positive_hash = bip158_basic_element_hash(false_positive_spk, 1, genesis_blockhash)
-        assert_equal(genesis_coinbase_hash, false_positive_hash)
+        iter = 0x1400000000000000000000000000000000000ad23b
+        while True:
+            iter = iter + 1
+            false_positive_spk = bytes.fromhex(format(iter, "x"))
+            false_positive_hash = bip158_basic_element_hash(false_positive_spk, 1, genesis_blockhash)
+            if genesis_coinbase_hash == false_positive_hash:
+                self.log.info(f"iter: {hex(iter)} hashes {genesis_coinbase_hash} {false_positive_hash}")
+                break
 
         assert(genesis_blockhash in node.scanblocks(
             "start", [{"desc": f"raw({genesis_coinbase_spk.hex()})"}], 0, 0)['relevant_blocks'])
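The brute-force in this patch works because BIP158 basic filters map each element into a range of size F = N·M via a 64-bit hash followed by a multiply-and-shift, so a collision can be found by incrementing a candidate scriptPubKey until it lands in the same slot. A minimal sketch of that search pattern, under an explicit assumption: the real filter hash is SipHash-2-4 keyed with the first 16 bytes of the block hash, while this sketch substitutes keyed blake2b purely to illustrate the shape of the mapping.

```python
import hashlib

M = 784931  # BIP158 basic-filter parameter: ~1/M false-positive rate per element

def hash64(data: bytes, key: bytes) -> int:
    # Stand-in for SipHash-2-4; blake2b is NOT the real BIP158 hash,
    # it only produces a 64-bit keyed value of the same shape.
    return int.from_bytes(hashlib.blake2b(data, key=key, digest_size=8).digest(), "big")

def hash_to_range(element: bytes, f: int, key: bytes) -> int:
    # Map a 64-bit hash uniformly onto [0, F) via multiply-and-shift,
    # as BIP158 does with F = N * M.
    return (hash64(element, key) * f) >> 64

def find_collision(target: bytes, f: int, key: bytes, start: int,
                   max_iters: int = 10_000_000) -> bytes:
    # Mirrors the loop in the patch above: increment a candidate value
    # until its serialization hashes into the same slot as `target`.
    want = hash_to_range(target, f, key)
    candidate = start
    for _ in range(max_iters):
        candidate += 1
        spk = candidate.to_bytes((candidate.bit_length() + 7) // 8, "big")
        if hash_to_range(spk, f, key) == want:
            return spk
    raise RuntimeError("no collision found within max_iters")
```

With a single filter element (N = 1) the range is F = M ≈ 785k, so the expected number of candidates tried before a collision is on that order, which is why the patch's loop terminates in reasonable time.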

Breaking Changes

Rescan startup parameter removed

The -rescan startup parameter has been removed. Wallets which require rescanning due to corruption will still be rescanned on startup.
Otherwise, please use the rescanblockchain RPC to trigger a rescan.
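With the startup flag gone, a rescan has to be requested over RPC. A minimal sketch of constructing the JSON-RPC body that `rescanblockchain [start_height [stop_height]]` expects; the helper name is illustrative, and the transport (HTTP POST with basic auth to the node's RPC port) is not shown.

```python
import json
from typing import Optional

def rescan_request(start_height: int = 0, stop_height: Optional[int] = None,
                   rpc_id: str = "rescan") -> bytes:
    # Build the JSON-RPC 1.0 body, matching what dash-cli would send
    # for `rescanblockchain [start_height [stop_height]]`.
    params = [start_height]
    if stop_height is not None:
        params.append(stop_height)
    return json.dumps({"jsonrpc": "1.0", "id": rpc_id,
                       "method": "rescanblockchain", "params": params}).encode()
```

The node replies with the `start_height` and `stop_height` actually scanned.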

Checklist:

  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have added or updated relevant unit/integration/functional/e2e tests
  • I have made corresponding changes to the documentation
  • I have assigned this pull request to a milestone (for repository code-owners and collaborators only)

MacroFake and others added 12 commits March 2, 2026 03:49
…ated initializers

Missing changes for `sendall` rpc, depends on dashpay#7131
fa2c72d rpc: Set RPCArg options with designated initializers (MacroFake)

Pull request description:

  For optional constructor arguments, use a new struct. This comes with two benefits:
  * Earlier unused optional arguments can be omitted
  * Designated initializers can be used

ACKs for top commit:
  stickies-v:
    re-ACK fa2c72d

Tree-SHA512: 2a0619548187cc7437fee2466ac4780746490622f202659f53641be01bc2a1fea4416d1a77f3e963bf7c4cce62899b61fab0b9683440cf82f68be44f63826658
626b7c8 fuzz: add scanblocks as safe for fuzzing (James O'Beirne)
94fe545 test: rpc: add scanblocks functional test (Jonas Schnelli)
6ef2566 rpc: add scanblocks - scan for relevant blocks with descriptors (Jonas Schnelli)
a4258f6 rpc: move-only: consolidate blockchain scan args (James O'Beirne)

Pull request description:

  Revives bitcoin#20664. All feedback from the previous PR has either been responded to inline or incorporated here.

  ---

  Major changes from Jonas' PR:
  - consolidated arguments for scantxoutset/scanblocks
  - substantial cleanup of the functional test

  Here's the range-diff (`git range-diff master jonasschnelli/2020/12/filterblocks_rpc jamesob/2021-11-scanblocks`): https://gist.github.com/jamesob/aa4a975344209f0316444b8de2ec1d18

  ### Original PR description

  > The `scanblocks` RPC call allows one to get relevant blockhashes from a set of descriptors by scanning all blockfilters in a given range.
  >
  > **Example:**
  >
  > `scanblocks start '["addr(<bitcoin_address>)"]' 661000` (returns relevant blockhashes for `<bitcoin_address>` from blockrange 661000->tip)
  >
  > ## Why is this useful?
  > **Fast wallet rescans**: get the relevant blocks and only rescan those via `rescanblockchain getblockheader(<hash>)[height] getblockheader(<hash>)[height]`. A future PR may add an option to allow providing an array of blockhashes to `rescanblockchain`.
  >
  > **prune wallet rescans**: (_needs additional changes_): together with a call to fetch blocks from the p2p network if they have been pruned, it would allow to rescan wallets back to the genesis block in pruned mode (relevant bitcoin#15946).
  >
  > **SPV mode** (_needs additional changes_): it would be possible to build the blockfilterindex from the p2p network (rather than deriving them from the blocks) and thus allow some sort of hybrid-SPV mode with moderate bandwidth consumption (related bitcoin#9483)

ACKs for top commit:
  furszy:
    diff re-ACK 626b7c8

Tree-SHA512: f84e4dcb851b122b39e9700c58fbc31e899cdcf9b587df9505eaf1f45578cc4253e89ce2a45d1ff21bd213e31ddeedbbcad2c80810f46755b30acc17b07e2873
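The fast-rescan flow quoted above (find candidate blocks with `scanblocks`, then rescan only the spanned heights) can be sketched as follows; the `node` object standing in for the RPC interface is hypothetical, modeled on how the functional-test framework exposes RPCs as methods.

```python
def fast_rescan(node, descriptors, start_height=0):
    # Ask the block filter index for blocks relevant to the descriptors.
    result = node.scanblocks("start", descriptors, start_height)
    relevant = result["relevant_blocks"]
    if not relevant:
        return None  # nothing to rescan
    # Translate blockhashes into heights and rescan only that range.
    heights = [node.getblockheader(blockhash)["height"] for blockhash in relevant]
    return node.rescanblockchain(min(heights), max(heights))
```

This trades one pass over the (much smaller) filter index for a full pass over the block data, which is the speedup the PR description is after.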
ae3626e test: use MiniWallet for rpc_scanblocks.py (Sebastian Falbesoner)

Pull request description:

  This is a small follow-up for bitcoin#23549 which introduced `scanblocks`. Since that RPC doesn't need the wallet, we can switch the functional test to use MiniWallet.

ACKs for top commit:
  MarcoFalke:
    review ACK ae3626e

Tree-SHA512: e0b0088103e059b29719299c58fd5173b1cff58cb73025a9a33ad493cd0ac50ba25a5f790e471c00a09b4dca80f3c011174ca4e0c8f34746c39831f5823dc8ba
dc3ec74 Add rescan removal release note (Samuel Dobson)
bccd1d9 Remove -rescan startup parameter (Samuel Dobson)
f963b0f Corrupt wallet tx shouldn't trigger rescan of all wallets (Samuel Dobson)
6c00649 Remove outdated dummy wallet -salvagewallet arg (Samuel Dobson)

Pull request description:

  Remove the `-rescan` startup parameter.

  Rescans can be run with the `rescanblockchain` RPC.

  Rescans are still done on wallet-load if needed due to corruption, for example.

ACKs for top commit:
  achow101:
    ACK dc3ec74
  laanwj:
    re-ACK dc3ec74

Tree-SHA512: 608360d0e7d73737fd3ef408b01b33d97a75eebccd70c6d1b47a32fecb99b9105b520b111b225beb10611c09aa840a2b6d2b6e6e54be5d0362829e757289de5c
…ED_RESCAN

8615507 scripted-diff: rename DBErrors::RESCAN_REQUIRED to NEED_RESCAN (Samuel Dobson)

Pull request description:

  Suggested here as a trivial follow-up: bitcoin#23123 (comment)

  Makes RESCAN_REQUIRED consistent with NEED_REWRITE

ACKs for top commit:
  achow101:
    ACK 8615507
  jonatack:
    ACK 8615507

Tree-SHA512: 82d057c45e192cd6dd8a47675b52699e6cbc82272609a971e6e5d6796aad14a941a70e40d3913dbb611f79c8eadff8030c60ea6f203f2edc3720c0e78c166b97
…n `--enable-debug`

BACKPORT NOTE:
missing changes for feature_taproot.py

1647a11 tests: Reorder longer running tests in test_runner (Andrew Chow)
ff6c9fe tests: Whitelist test p2p connection in rpc_packages (Andrew Chow)
8c20796 tests: Use waitfornewblock for work queue test in interface_rpc (Andrew Chow)
6c872d5 tests: Initialize sigops draining script with bytes in feature_taproot (Andrew Chow)
544cbf7 tests: Use batched RPC in feature_fee_estimation (Andrew Chow)
4ad7272 tests: reduce number of generated blocks for wallet_import_rescan (Andrew Chow)

Pull request description:

  When configured with `--enable-debug`, many tests become dramatically slower. These slowdowns are particularly noticeable in tests that generate a lot of blocks in separate calls, make a lot of RPC calls, or send a lot of data from the test framework's P2P connection. This PR aims to improve the runtime of some of the slower tests and improve the overall runtime of the test runner. This has improved the runtime of the test runner from ~400s to ~140s on my computer.

  The slowest test by far was `wallet_import_rescan.py`. This was taking ~320s. Most of that time was spent waiting for blocks to be mined and then synced to the other nodes. It was generating a new block for every new transaction it was creating in a setup loop. However it is not necessary to have one tx per block. By mining a block only every 10 txs, the runtime is improved to ~61s.

  The second slowest test was `feature_fee_estimation.py`. This test spends most of its time waiting for RPCs to respond. I was able to improve its runtime by batching RPC requests. This has improved the runtime from ~201s to ~140s.

  In `feature_taproot.py`, the test was constructing a Python `CScript` using a very large list of `OP_CHECKSIG`s. The constructor for the Python implementation of `CScript` was iterating this list in order to create a `bytes` from it even though a `bytes` could be created from it without iterating. By making the `bytes` before passing it into the constructor, we are able to improve this test's runtime from ~131s to ~106s.

  Although `interface_rpc.py` was not typically a slow test, I found that it would occasionally have a super long runtime. It typically takes ~7s, but I have observed it taking >400s to run on occasion. This longer runtime occurs more often when configured with `--enable-debug`. This long runtime was caused by the "exceeding work queue" test, which is really just trying to trigger a race condition. In this test, it would create a few threads and try an RPC in a loop in the hopes that eventually one of the RPCs would be added to the work queue while another was processing. It used `getrpcinfo` for this, but this function is fairly fast. I believe what was happening was that with `--enable-debug`, all of the code for receiving the RPC would often take longer to run than the RPC itself, so the majority of the requests would succeed, until we got lucky after tens of thousands of requests. By changing this to use a slow RPC, the race condition can be triggered more reliably, and much sooner as well. I've used `waitfornewblock` with a 500ms timeout. This improves the runtime to ~3s consistently.
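The work-queue race described here can be reproduced outside the node with a bounded queue and a deliberately slow job standing in for `waitfornewblock`. This is an illustration of the mechanism (a request must arrive while all workers are busy and the queue is full), not the actual HTTP server code.

```python
import queue
import threading
import time

def run_server(work_queue_depth=2, n_workers=1):
    # The bounded queue models the RPC work queue; put_nowait raising
    # queue.Full models the "work queue depth exceeded" error.
    q = queue.Queue(maxsize=work_queue_depth)
    def worker():
        while True:
            job = q.get()
            if job is None:  # shutdown sentinel
                return
            job()            # a slow RPC keeps the worker occupied
    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    return q, threads

def trigger_overflow():
    q, threads = run_server()
    def slow():
        time.sleep(0.5)  # stands in for waitfornewblock's timeout
    rejected = 0
    for _ in range(10):
        try:
            q.put_nowait(slow)
        except queue.Full:
            rejected += 1
    q.put(None)  # blocks until a slot frees, then stops the worker
    for t in threads:
        t.join()
    return rejected
```

With a fast job, most submissions drain before the queue fills and the overflow is rare; with a slow job, rejection becomes near-certain, which is the same effect switching `getrpcinfo` to `waitfornewblock` has in the test.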

  The last test I've changed was `rpc_packages.py`. This test was one of the higher runtime variability tests. The main source of this variation appears to be waiting for the test node to relay a transaction to the test framework's P2P connection. By whitelisting that peer, the variability is reduced to nearly 0.

  Lastly, I've reordered the tests in `test_runner.py` to account for the slower runtimes when configured with `--enable-debug`. Some of the slow tests I've looked at were listed as being fast, which was extending the overall `test_runner.py` runtime. This change makes the test runner's runtime bounded by the slowest test (currently `feature_fee_estimation.py` with my usual config, `-j 60`).

ACKs for top commit:
  willcl-ark:
    ACK 1647a11

Tree-SHA512: 529e0da4bc51f12c78a40d6d70b3a492b97723c96a3526148c46943d923c118737b32d2aec23d246392e50ab48013891ef19fe6205bf538b61b70d4f16a203eb
…ck.py by using MiniWallet

dee8549 test: simplify and speedup mempool_updatefromblock.py by using MiniWallet (Sebastian Falbesoner)

Pull request description:

  This PR simplifies the functional test mempool_updatefromblock.py by using MiniWallet in order to avoid manual low-level tx creation (signing, outputs selection, fee calculation). Most of the tedious work is done by the method `MiniWallet.send_self_transfer_multi` (calling `create_self_transfer_multi` internally) which supports spending a given set of UTXOs and creating a certain number of outputs.

  As a nice side-effect, the test's performance increases significantly (~3.5x on my system):

  ```
  master
      1m56.80s real     1m50.10s user     0m06.36s system

  PR
      0m32.34s real     0m30.26s user     0m01.41s system
  ```

  The arguments `start_input_txid` and `end_address` have been removed from the `transaction_graph_test` method, as they are currently unused and I don't see them being needed for future tests.

ACKs for top commit:
  brunoerg:
    crACK dee8549
  MarcoFalke:
    lgtm ACK dee8549 🚏

Tree-SHA512: 9f6da634bdc8c272f9a2af1cddaa364ee371d4e95554463a066249eecebb668d8c6cb123ec8a5404c41b3291010c0c8806a8a01dd227733cec03e73aa93b0103
…et's initialization

Missing changes are in the files:
        deleted by us:   test/functional/feature_txindex_compatibility.py
        deleted by us:   test/functional/mempool_dust.py
        both modified:   test/functional/mempool_expiry.py
        both modified:   test/functional/mempool_resurrect.py
        both modified:   test/functional/mining_getblocktemplate_longpoll.py
        both modified:   test/functional/p2p_leak_tx.py
        deleted by us:   test/functional/p2p_tx_privacy.py
        both modified:   test/functional/rpc_net.py
        both modified:   test/functional/rpc_txoutproof.py

6bd098a test: simplify tests by using the pre-mined chain (kouloumos)
42029a7 test: remove redundant blocks generation logic (kouloumos)
0377d6b test: add `rescan_utxos` in MiniWallet's initialization (kouloumos)

Pull request description:

  When a pre-mined blockchain is used (default behavior), it [contains coinbase outputs in blocks 76-100](https://github.com/bitcoin/bitcoin/blob/07c54de550035c3441f34ef6c34209666267eb38/test/functional/test_framework/test_framework.py#L809-L813) to [the MiniWallet's default address](https://github.com/bitcoin/bitcoin/blob/07c54de550035c3441f34ef6c34209666267eb38/test/functional/test_framework/wallet.py#L99-L101). That's why we always* `rescan_utxos()` after initializing the MiniWallet, so that the MiniWallet accounts for those mature UTXOs.

  > The tests following this usage pattern can be seen with:
  > ```git grep -n "MiniWallet(" $(git grep -le "rescan_utxos()" $(git grep -Le "self.setup_clean_chain = True"))```

  **This PR adds `rescan_utxos()` inside MiniWallet's initialization to simplify usage when the MiniWallet is used with a pre-mined chain.**

  ### secondary changes

  - *There are a few tests that use the pre-mined blockchain but do not `rescan_utxos()`, they instead generate new blocks to create mature UTXOs.

    > Those were written before the `rescan_utxos()` method was introduced with bitcoin#22955 (fac66d0) and can be seen with:
    > `git grep -n "MiniWallet(" $(git grep -Le "rescan_utxos()" $(git grep -Le "self.setup_clean_chain = True"))`
    >

    After including `rescan_utxos()` inside MiniWallet's initialization, this block-generation logic is no longer needed, as the MiniWallet already accounts for enough mature UTXOs to perform the tests. **Therefore the now-redundant block-generation logic is removed from those tests in the second commit.**

  - The rest of the MiniWallet tests use a clean chain (`self.setup_clean_chain = True`)  and can be seen with
    `git grep -n "MiniWallet(" $(git grep -le "self.setup_clean_chain = True")`

    From those, there are a few that start from a clean chain and then create enough mature UTXOs for the MiniWallet with this kind of logic:
   https://github.com/bitcoin/bitcoin/blob/07c54de550035c3441f34ef6c34209666267eb38/test/functional/mempool_expiry.py#L36-L40

    **Those tests are simplified in the third commit to instead utilize the mature UTXOs of the pre-mined chain.**

ACKs for top commit:
  MarcoFalke:
    ACK 6bd098a 🕷
  theStack:
    re-ACK 6bd098a

Tree-SHA512: 7f9361e36910e4000c33a32efdde4449f4a8a763bb42df96de826fcde469f9362f701b8c99e2a2c482d2d5a42a83ae5ae3844fdbed187ed0ff231f386c222493
… fee in longpoll

fa0abcd test: Flatten miniwallet array and remove random fee in longpoll (MarcoFalke)

Pull request description:

  * Using a single MiniWallet is enough.
  * A random fee isn't needed either.

ACKs for top commit:
  theStack:
    re-ACK fa0abcd

Tree-SHA512: 77b99885b3f0d325d067838122114be57ec999ebc82912de6a22c33e2ba28a341c5e053c5bbc424b9922c2616562289a57c7156bd3b431d779182c2e472da59c
fa18504 rpc: Add submit option to generateblock (MarcoFalke)
fab9a08 refactor: Replace block_hash with block_out (MarcoFalke)

Pull request description:

  When submit is turned off, a block can be generated and returned as hex, to be used for further tests. For example, it can be submitted on a different node, on a different interface (like p2p), or just never submitted and be used for other testing purposes.

ACKs for top commit:
  instagibbs:
    ACK fa18504
  TheCharlatan:
    tACK fa18504

Tree-SHA512: 1b2ab6b71bb7e155c6482d75f5373f4e77de6446cb16bc2dfd19e7a4075b3a6ad87d7ad7a049a9eed934cb71574acfd27202f54c8bb3b03fac869f2e95db7ee5
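A sketch of how the new `submit` option might be used from a test: mine on one node without processing the block, then hand the returned hex to other nodes. The wrapper, the positional argument order, and the stubbed node objects are assumptions for illustration.

```python
def generate_and_relay(miner, receivers, output_descriptor, txs=None):
    # With submit=False (assumed third positional argument), generateblock
    # returns the raw block hex instead of processing it locally.
    res = miner.generateblock(output_descriptor, txs or [], False)
    block_hex = res["hex"]
    # The raw block can now be submitted elsewhere -- on another node,
    # over a different interface, or not at all.
    for node in receivers:
        node.submitblock(block_hex)
    return block_hex
```

This is the pattern the commit description motivates: generate once, decide separately where (or whether) the block enters a chain.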
…ams for mempool/contents

1ff5d61 doc: add mempool/contents rest verbose and mempool_sequence args (Andrew Toth)
52a31dc tests: mempool/contents verbose and mempool_sequence query params tests (Andrew Toth)
a518fff rest: add verbose and mempool_sequence query params for mempool/contents (Andrew Toth)

Pull request description:

  The verbose mempool json response can get very large. This adds an option to return the non-verbose response of just the txids. It is identical to the rpc response so the diff here is minimal. This also adds the mempool_sequence parameter for rpc consistency. Verbose defaults to true to remain backwards compatible.

  It uses query parameters to be compatible with the efforts in bitcoin#25752.
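The query-string behavior described above (verbose defaulting to true, mempool_sequence to false, and strict true/false validation) can be sketched with stdlib URL handling; the helper names are illustrative, not the REST handler's actual code.

```python
from urllib.parse import parse_qs, urlencode, urlparse

def mempool_contents_url(base: str, verbose: bool = True,
                         mempool_sequence: bool = False) -> str:
    # Mirrors the documented defaults: verbose=true, mempool_sequence=false.
    query = urlencode({"verbose": str(verbose).lower(),
                       "mempool_sequence": str(mempool_sequence).lower()})
    return f"{base}/rest/mempool/contents.json?{query}"

def parse_bool_param(url: str, name: str, default: bool) -> bool:
    # The handler rejects anything other than "true"/"false"; this
    # parser applies the same rule and falls back to the default when
    # the parameter is absent.
    values = parse_qs(urlparse(url).query).get(name)
    if values is None:
        return default
    if values[0] not in ("true", "false"):
        raise ValueError(f"value {values[0]!r} for {name} is not a boolean")
    return values[0] == "true"
```

So `/rest/mempool/contents.json` alone keeps the old verbose behavior, while `?verbose=false` returns just the txids, matching the non-verbose RPC response.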

ACKs for top commit:
  achow101:
    ACK 1ff5d61
  stickies-v:
    re-ACK [1ff5d61](bitcoin@1ff5d61)
  pablomartin4btc:
    tested ACK 1ff5d61.

Tree-SHA512: 1bf08a7ffde2e7db14dc746e421feedf17d84c4b3f1141e79e36feb6014811dfde80e1d8dbc476c15ff705de2d3c967b3081dcd80536d76b7edf888f1a92e9d1
ff138f9 doc: add `scanblocks` to list of descriptor RPCs (Sebastian Falbesoner)

Pull request description:

  This is a tiny documentation follow-up to bitcoin#23549.

ACKs for top commit:
  aureleoules:
    ACK ff138f9
  shaavan:
    ACK ff138f9

Tree-SHA512: cc45b78c13ec4aa5bac688648f8e83f04a9ae54376e67371b280428f0253e2585cf0575fa51e3835d4c13c940559bfcdd88d7504bf97a81b2a73bb34a0db7577
@knst knst added this to the 24 milestone Mar 2, 2026

github-actions bot commented Mar 2, 2026

⚠️ Potential Merge Conflicts Detected

This PR has potential conflicts with the following open PRs:

Please coordinate with the authors of these PRs to avoid merge conflicts.


coderabbitai bot commented Mar 2, 2026

Walkthrough

This PR:

  • adds a new scanblocks RPC (with start/status/abort actions) that scans blockfilters for descriptors and reports progress, backed by global scan-progress atomics and a BlockFiltersScanReserver for concurrency;
  • updates scantxoutset to reuse the new scan descriptors;
  • extends REST /mempool/contents to accept and validate two query parameters (verbose, default true; mempool_sequence, default false), widening the MempoolToJSON signature to take the two booleans;
  • reworks wallet rescan handling: a NEED_RESCAN DB error is introduced and propagated (the -rescan startup option declarations are removed), CWallet::AttachChain gains a rescan_required parameter, and wallet load paths record and defer rescan requirements;
  • changes the GenerateBlock/generateblock flow to produce a block_out and a process_new_block flag, supporting optional submission and returning hex when the block is not submitted;
  • refactors RPC argument metadata into an RPCArgOptions struct used across many RPC help/arg declarations;
  • updates tests and docs accordingly, including a new functional test for scanblocks.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant RPCHandler as "RPC Handler\n(scanblocks)"
    participant FilterIndex as "BlockFilterIndex"
    participant GlobalState as "Global State\n(g_scanfilter_*)"
    participant Collector as "Result Collector"

    Client->>RPCHandler: start(action=start, descriptors, start_height, stop_height)
    activate RPCHandler
    RPCHandler->>GlobalState: set in_progress = true
    RPCHandler->>RPCHandler: build script set from descriptors
    loop for each block in range
        RPCHandler->>FilterIndex: request filter for block
        FilterIndex-->>RPCHandler: filter (or not found)
        RPCHandler->>RPCHandler: match filter against script set
        alt match
            RPCHandler->>Collector: add block to relevant_blocks
        end
        RPCHandler->>GlobalState: update progress, progress_height
    end
    RPCHandler->>GlobalState: set in_progress = false
    RPCHandler-->>Client: return {from_height, to_height, relevant_blocks}
    deactivate RPCHandler

    Client->>RPCHandler: status(action=status)
    activate RPCHandler
    RPCHandler->>GlobalState: read in_progress, progress, progress_height
    RPCHandler-->>Client: return status
    deactivate RPCHandler

    Client->>RPCHandler: abort(action=abort)
    activate RPCHandler
    RPCHandler->>GlobalState: attempt reserve & set should_abort
    alt reserved
        RPCHandler-->>Client: {aborted: true}
    else not reserved
        RPCHandler-->>Client: {aborted: false}
    end
    deactivate RPCHandler
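The single-scan guard from the walkthrough (the C++ BlockFiltersScanReserver over an atomic flag) can be approximated in Python, with a non-blocking lock acquire standing in for the atomic compare-exchange; this is a sketch of the RAII idea, not the actual implementation.

```python
import threading

class ScanReserver:
    # Shared flag: at most one scan may hold the reservation at a time,
    # analogous to the global g_scanfilter_in_progress atomic.
    _in_progress = threading.Lock()

    def __init__(self):
        self.reserved = False

    def reserve(self) -> bool:
        # Non-blocking acquire models the atomic test-and-set: returns
        # False immediately if another scan is already running.
        self.reserved = ScanReserver._in_progress.acquire(blocking=False)
        return self.reserved

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        # Context-manager exit plays the role of the C++ destructor,
        # releasing the reservation even if the scan raised.
        if self.reserved:
            ScanReserver._in_progress.release()
            self.reserved = False
```

The RPC handler would call `reserve()` at the start of each action and return a "scan already in progress" error when it fails, which is how concurrent `start` requests are rejected.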

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Possibly related PRs

Suggested reviewers

  • UdjinM6
  • PastaPastaPasta
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: docstring coverage is 14.49%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.
✅ Passed checks (2 passed)
  • Title check ✅ Passed: the title accurately summarizes the PR as a backport of multiple Bitcoin Core PRs focused on rescan functionality and removal of the -rescan startup parameter.
  • Description check ✅ Passed: the description explains that the backport is from Bitcoin Core v25, relates to -rescan and rescanblockchain, mentions breaking changes, and provides testing information.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
src/wallet/walletdb.cpp (1)

866-877: ⚠️ Potential issue | 🟠 Major

tx_corrupt path still bypasses the new NEED_RESCAN recovery flow.

When corrupt tx records are detected, this path sets DBErrors::CORRUPT, which prevents the later rescan_required -> NEED_RESCAN mapping from applying to this recoverable case.

Proposed fix
-                } else if (wss.tx_corrupt) {
+                } else if (wss.tx_corrupt) {
                     pwallet->WalletLogPrintf("Error: Corrupt transaction found. This can be fixed by removing transactions from wallet and rescanning.\n");
                     // Set tx_corrupt back to false so that the error is only printed once (per corrupt tx)
                     wss.tx_corrupt = false;
-                    result = DBErrors::CORRUPT;
+                    fNoncriticalErrors = true;
+                    rescan_required = true;
                 } else {
src/wallet/wallet.cpp (1)

3403-3414: ⚠️ Potential issue | 🟠 Major

Replace stale -rescan gating with rescan_required in startup rescan logic.

Line 3406 still controls “full rescan” behavior via -rescan==2, but startup -rescan is being removed in this PR. This leaves a removed-flag dependency in the critical attach/rescan path.

✅ Direct fix
         // No need to read and scan block if block was created before
         // our wallet birthday (as adjusted for block time variability)
-        // unless a full rescan was requested
-        if (gArgs.GetIntArg("-rescan", 0) != 2) {
+        // unless a full rescan is required
+        if (!rescan_required) {
             std::optional<int64_t> time_first_key;
             for (auto spk_man : walletInstance->GetAllScriptPubKeyMans()) {
                 int64_t time = spk_man->GetTimeFirstKey();
                 if (!time_first_key || time < *time_first_key) time_first_key = time;
             }
🧹 Nitpick comments (7)
test/functional/feature_notifications.py (1)

97-97: Consider removing redundant reconnect call.

At Line 97, self.connect_nodes(0, 1) looks leftover from the old restart-based flow. Since this path no longer restarts/disconnects node 1, this adds unnecessary connection churn.

♻️ Proposed cleanup
-            self.connect_nodes(0, 1)
test/functional/wallet_import_rescan.py (1)

174-181: Avoid mining empty blocks at batch boundaries (i == 0) and on empty final flushes.

Line 174 and Line 223 trigger block generation on the first iteration, and Line 189 always mines a final block even when no variants remain. This adds avoidable blocks and test time.

Proposed cleanup
@@
-        for i, variant in enumerate(IMPORT_VARIANTS):
-            if i % 10 == 0:
+        for i, variant in enumerate(IMPORT_VARIANTS):
+            if i > 0 and i % 10 == 0:
                 blockhash = self.generate(self.nodes[0], 1)[0]
                 conf_height = self.nodes[0].getblockcount()
                 timestamp = self.nodes[0].getblockheader(blockhash)["time"]
                 for var in last_variants:
                     var.confirmation_height = conf_height
                     var.timestamp = timestamp
                 last_variants.clear()
@@
-        blockhash = self.generate(self.nodes[0], 1)[0]
-        conf_height = self.nodes[0].getblockcount()
-        timestamp = self.nodes[0].getblockheader(blockhash)["time"]
-        for var in last_variants:
-            var.confirmation_height = conf_height
-            var.timestamp = timestamp
-        last_variants.clear()
+        if last_variants:
+            blockhash = self.generate(self.nodes[0], 1)[0]
+            conf_height = self.nodes[0].getblockcount()
+            timestamp = self.nodes[0].getblockheader(blockhash)["time"]
+            for var in last_variants:
+                var.confirmation_height = conf_height
+                var.timestamp = timestamp
+            last_variants.clear()
@@
-        for i, variant in enumerate(IMPORT_VARIANTS):
-            if i % 10 == 0:
-                blockhash = self.generate(self.nodes[0], 1)[0]
-                conf_height = self.nodes[0].getblockcount() + 1
+        conf_height = self.nodes[0].getblockcount() + 1
+        for i, variant in enumerate(IMPORT_VARIANTS):
+            if i > 0 and i % 10 == 0:
+                self.generate(self.nodes[0], 1)
+                conf_height = self.nodes[0].getblockcount() + 1
             variant.sent_amount = get_rand_amount()
             variant.sent_txid = self.nodes[0].sendtoaddress(variant.address["address"], variant.sent_amount)
             variant.confirmation_height = conf_height

Also applies to: 189-195, 223-226

test/functional/wallet_upgradetohd.py (2)

185-185: Use node alias for consistency with the rest of the file.

The file defines node = self.nodes[0] at line 49 and uses node.rescanblockchain() everywhere else (lines 97, 124, 139, 153, 176, 196, 223). This line should follow the same pattern.

Suggested fix
-        self.nodes[0].rescanblockchain()
+        node.rescanblockchain()

206-206: Use node alias and verify rescan placement.

Same style inconsistency as line 185—should use node.rescanblockchain().

Additionally, this rescan is placed after encryptwallet() but before upgradetohd() (line 209). Since encryption doesn't change wallet keys, rescanning here won't discover new coins. The actual recovery happens via the rescan at line 223 after the HD upgrade and keypoolrefill(). Consider whether this rescan is needed at all, or if it was simply carried over from the previous restart logic.

Suggested fix for consistency
-        self.nodes[0].rescanblockchain()
+        node.rescanblockchain()
test/functional/mempool_updatefromblock.py (1)

50-53: Unused loop variable tx.

The loop variable tx is not used within the loop body. Consider using underscore to indicate it's intentionally unused.

♻️ Suggested fix
-                for j, tx in enumerate(tx_id[0:i]):
+                for j, _ in enumerate(tx_id[0:i]):

Alternatively, use range(i) since you only need the index:

-                for j, tx in enumerate(tx_id[0:i]):
+                for j in range(i):
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/functional/mempool_updatefromblock.py` around lines 50 - 53, The loop in
mempool_updatefromblock.py uses an unused variable `tx` in "for j, tx in
enumerate(tx_id[0:i]):"; change the loop to avoid the unused variable (either
"for j, _ in enumerate(tx_id[0:i]):" or use an index-only form like "for j in
range(i):") and keep the body that computes vout and calls
inputs.append(wallet.get_utxo(txid=tx_id[j], vout=vout)) unchanged.
src/rpc/util.h (1)

143-147: Align RPCArgOptions member defaults with project header convention.

Use an explicit in-class default only for the POD member, and rely on the default constructors for std::string / std::vector.

♻️ Suggested style-only cleanup
 struct RPCArgOptions {
-    std::string oneline_description{};   //!< Should be empty unless it is supposed to override the auto-generated summary line
-    std::vector<std::string> type_str{}; //!< Should be empty unless it is supposed to override the auto-generated type strings. Vector length is either 0 or 2, m_opts.type_str.at(0) will override the type of the value in a key-value pair, m_opts.type_str.at(1) will override the type in the argument description.
-    bool hidden{false};                  //!< For testing only
+    std::string oneline_description;     //!< Should be empty unless it is supposed to override the auto-generated summary line
+    std::vector<std::string> type_str;   //!< Should be empty unless it is supposed to override the auto-generated type strings. Vector length is either 0 or 2, m_opts.type_str.at(0) will override the type of the value in a key-value pair, m_opts.type_str.at(1) will override the type in the argument description.
+    bool hidden = false;                 //!< For testing only
 };
Based on learnings: In header files, prefer in-class initialization for POD types and avoid brace initialization for non-POD members that already have default constructors.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/rpc/util.h` around lines 143 - 147, The struct RPCArgOptions currently
uses brace-init for non-POD members; remove the explicit initializers for
std::string oneline_description and std::vector<std::string> type_str and keep
an in-class default only for the POD member hidden (e.g., initialize hidden to
false using the project's preferred style), so update RPCArgOptions to rely on
the members' default constructors for oneline_description and type_str and
explicit default for hidden; locate the struct by name RPCArgOptions to make the
change.
src/rpc/blockchain.cpp (1)

2698-2704: Add explicit scanobjects validation for start.

start currently relies on get_array() to fail later with a generic type error. A direct check gives clearer RPC feedback.

💡 Proposed fix
     else if (request.params[0].get_str() == "start") {
         BlockFiltersScanReserver reserver;
         if (!reserver.reserve()) {
             throw JSONRPCError(RPC_INVALID_PARAMETER, "Scan already in progress, use action \"abort\" or \"status\"");
         }
+        if (request.params.size() < 2 || request.params[1].isNull()) {
+            throw JSONRPCError(RPC_MISC_ERROR, "scanobjects argument is required for the start action");
+        }
         const std::string filtertype_name{request.params[4].isNull() ? "basic" : request.params[4].get_str()};

Also applies to: 2743-2743

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/rpc/blockchain.cpp` around lines 2698 - 2704, In the "start" branch (else
if (request.params[0].get_str() == "start")) add an explicit validation that the
scanobjects parameter is present and is an array (e.g. check
request.params[1].isArray() or !isNull() + isArray()) before calling
get_array(), and if the check fails throw JSONRPCError(RPC_INVALID_PARAMETER,
"scanobjects must be an array"); place this check immediately after reserving
BlockFiltersScanReserver and before reading filtertype_name, and apply the same
explicit validation in the other start-handling site referenced around the
second spot (~2743) so both code paths return a clear RPC error instead of a
generic type error.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/rpc/blockchain.cpp`:
- Around line 2710-2713: The code currently grabs the BlockFilterIndex via
GetBlockFilterIndex(filtertype) but does not verify the index is fully synced
before beginning the scan, which can yield incomplete results; update the logic
after obtaining BlockFilterIndex* index to check the index's sync state (e.g.
call index->IsSynced() or index->BlocksSynced()) and either block until sync
completes (using an available helper like index->BlockUntilSynced() if present)
or return a JSONRPCError indicating the index is still syncing and the caller
should retry later; reference GetBlockFilterIndex and the BlockFilterIndex
instance used by the start scanning code so the sync check happens before any
scan/iteration begins.
- Around line 2753-2786: The chunked scan loop mishandles failed lookups and
reuses state across iterations: ensure LookupFilterRange's boolean failure is
handled (log the error via LogPrintf/LogPrint and abort the RPC loop or return
an error to avoid returning partial results) rather than silently continuing;
clear the local std::vector<BlockFilter> filters before each chunk fetch so
filters from prior chunks aren't reused; and update start_index to the next
block after the processed range (use the previously-computed next pointer or
start_index = next) instead of setting start_index = block to avoid
overlap/reprocessing and keep g_scanfilter_progress_height/progress consistent
with the actual processed height. Also ensure blocks/results are only appended
when LookupFilterRange succeeds.

In `@test/functional/feature_fee_estimation.py`:
- Around line 63-65: The test currently appends outputs to unconflist and
memutxo before verifying the batch RPC sendrawtransaction results; change the
logic in the batch handling flow (where batch() is called for
from_node.sendrawtransaction requests) to inspect each batch response for an
error object and only append to unconflist/memutxo when the corresponding
sendrawtransaction succeeded, and if any entry has an error, fail the test or
handle the error path (e.g., raise/assert) instead of silently continuing;
update the code around the sendrawtransaction batch processing and any loop that
later uses memutxo/unconflist so it validates per-request results before
trusting memutxo state.

In `@test/functional/rpc_scanblocks.py`:
- Line 35: The literal tpub value assigned to variable parent_key in
test/functional/rpc_scanblocks.py is a test-only extended public key that
triggers secret-scan false positives; either replace the hard-coded string with
a runtime-derived test key (e.g., generate from a fixed test mnemonic or use a
helper function like generate_test_xpub()) or mark the literal as an allowed
test value for the scanner by adding the project’s approved suppression
tag/comment next to parent_key so the scanner treats it as a benign test
fixture.

---

Outside diff comments:
In `@src/wallet/wallet.cpp`:
- Around line 3403-3414: The startup rescan gating still checks
gArgs.GetIntArg("-rescan", 0) != 2; replace that removed-flag check with the
rescan_required boolean so the "skip reading/scanning blocks older than wallet
birthday" optimization runs only when a full rescan is NOT required. Concretely,
in the block surrounding walletInstance->GetAllScriptPubKeyMans(), change the if
condition from gArgs.GetIntArg("-rescan", 0) != 2 to if (!rescan_required),
keeping the time_first_key calculation and the
chain.findFirstBlockWithTimeAndHeight(*time_first_key - TIMESTAMP_WINDOW,
rescan_height, FoundBlock().height(rescan_height)) call unchanged.

In `@src/wallet/walletdb.cpp`:
- Around line 866-877: The tx_corrupt branch currently sets result =
DBErrors::CORRUPT which prevents the later rescan_required -> NEED_RESCAN
recovery from being applied; change the wss.tx_corrupt handling in walletdb.cpp
so that after logging and clearing wss.tx_corrupt it also sets rescan_required =
true (and does not set result = DBErrors::CORRUPT), or otherwise leave result as
a non-fatal status so the later mapping can convert rescan_required into
NEED_RESCAN; update the block around wss.tx_corrupt, pwallet->WalletLogPrintf,
and result to implement this behavior.

---

Nitpick comments:
In `@src/rpc/blockchain.cpp`:
- Around line 2698-2704: In the "start" branch (else if
(request.params[0].get_str() == "start")) add an explicit validation that the
scanobjects parameter is present and is an array (e.g. check
request.params[1].isArray() or !isNull() + isArray()) before calling
get_array(), and if the check fails throw JSONRPCError(RPC_INVALID_PARAMETER,
"scanobjects must be an array"); place this check immediately after reserving
BlockFiltersScanReserver and before reading filtertype_name, and apply the same
explicit validation in the other start-handling site referenced around the
second spot (~2743) so both code paths return a clear RPC error instead of a
generic type error.

In `@src/rpc/util.h`:
- Around line 143-147: The struct RPCArgOptions currently uses brace-init for
non-POD members; remove the explicit initializers for std::string
oneline_description and std::vector<std::string> type_str and keep an in-class
default only for the POD member hidden (e.g., initialize hidden to false using
the project's preferred style), so update RPCArgOptions to rely on the members'
default constructors for oneline_description and type_str and explicit default
for hidden; locate the struct by name RPCArgOptions to make the change.

In `@test/functional/feature_notifications.py`:
- Line 97: Remove the redundant reconnection call self.connect_nodes(0, 1) from
the test path that no longer restarts or disconnects node 1; locate the
invocation of self.connect_nodes(0, 1) in the feature_notifications test and
delete it, ensuring there are no subsequent assumptions that node 1 was
disconnected (adjust any assertions if they incorrectly expect a reconnect), so
the test uses the existing live connection without churn.

In `@test/functional/mempool_updatefromblock.py`:
- Around line 50-53: The loop in mempool_updatefromblock.py uses an unused
variable `tx` in "for j, tx in enumerate(tx_id[0:i]):"; change the loop to avoid
the unused variable (either "for j, _ in enumerate(tx_id[0:i]):" or use an
index-only form like "for j in range(i):") and keep the body that computes vout
and calls inputs.append(wallet.get_utxo(txid=tx_id[j], vout=vout)) unchanged.

In `@test/functional/wallet_import_rescan.py`:
- Around line 174-181: The test is mining unnecessary empty blocks at batch
boundaries and at the final flush; change the block-generation guards so
generate(self.nodes[0], 1) is only called when there are variants to confirm and
not on the very first iteration: update the condition in the loop from "if i %
10 == 0:" to something like "if i % 10 == 0 and i != 0 and last_variants:" (or
equivalent truthy check on last_variants) and similarly guard the final flush
that always mines a block (the block generation around the final last_variants
handling) so it only mines when last_variants is non-empty before updating
confirmation_height/timestamp and calling last_variants.clear().

In `@test/functional/wallet_upgradetohd.py`:
- Line 185: Replace the direct use of self.nodes[0].rescanblockchain() with the
existing node alias to match the rest of the file; locate the test method where
rescanblockchain() is called (currently using self.nodes[0]) and change it to
node.rescanblockchain() — the alias node is defined earlier (node =
self.nodes[0]) and is used throughout functions like the test routines that call
rescanblockchain().
- Line 206: Replace the direct self.nodes[0].rescanblockchain() call with the
node alias (node.rescanblockchain()) for consistency, and either remove this
rescan (it is redundant because encryptwallet() does not add keys) or move it to
after upgradetohd() and keypoolrefill() so the rescan actually recovers coins
created by the HD upgrade; update the call site near
encryptwallet()/upgradetohd()/keypoolrefill() accordingly (functions:
encryptwallet(), upgradetohd(), keypoolrefill(), node.rescanblockchain()).

ℹ️ Review info

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 2e6a225 and cfbdbe6.

📒 Files selected for processing (59)
  • doc/REST-interface.md
  • doc/descriptors.md
  • doc/release-notes-remove-rescan.md
  • src/dummywallet.cpp
  • src/init.cpp
  • src/rest.cpp
  • src/rpc/blockchain.cpp
  • src/rpc/client.cpp
  • src/rpc/mining.cpp
  • src/rpc/server.cpp
  • src/rpc/util.cpp
  • src/rpc/util.h
  • src/test/fuzz/rpc.cpp
  • src/wallet/init.cpp
  • src/wallet/rpc/backup.cpp
  • src/wallet/rpc/coins.cpp
  • src/wallet/rpc/spend.cpp
  • src/wallet/rpc/wallet.cpp
  • src/wallet/salvage.cpp
  • src/wallet/test/wallet_tests.cpp
  • src/wallet/wallet.cpp
  • src/wallet/wallet.h
  • src/wallet/walletdb.cpp
  • src/wallet/walletdb.h
  • src/wallet/wallettool.cpp
  • test/functional/feature_bip68_sequence.py
  • test/functional/feature_dbcrash.py
  • test/functional/feature_fee_estimation.py
  • test/functional/feature_notifications.py
  • test/functional/interface_rest.py
  • test/functional/interface_rpc.py
  • test/functional/interface_usdt_utxocache.py
  • test/functional/interface_zmq.py
  • test/functional/mempool_accept.py
  • test/functional/mempool_datacarrier.py
  • test/functional/mempool_package_onemore.py
  • test/functional/mempool_persist.py
  • test/functional/mempool_reorg.py
  • test/functional/mempool_spend_coinbase.py
  • test/functional/mempool_unbroadcast.py
  • test/functional/mempool_updatefromblock.py
  • test/functional/mining_getblocktemplate_longpoll.py
  • test/functional/mining_prioritisetransaction.py
  • test/functional/p2p_blocksonly.py
  • test/functional/p2p_filter.py
  • test/functional/p2p_permissions.py
  • test/functional/rpc_generate.py
  • test/functional/rpc_mempool_info.py
  • test/functional/rpc_packages.py
  • test/functional/rpc_rawtransaction.py
  • test/functional/rpc_scanblocks.py
  • test/functional/rpc_scantxoutset.py
  • test/functional/test_framework/wallet.py
  • test/functional/test_runner.py
  • test/functional/wallet_basic.py
  • test/functional/wallet_fast_rescan.py
  • test/functional/wallet_hd.py
  • test/functional/wallet_import_rescan.py
  • test/functional/wallet_upgradetohd.py
💤 Files with no reviewable changes (21)
  • test/functional/rpc_mempool_info.py
  • test/functional/p2p_blocksonly.py
  • test/functional/rpc_rawtransaction.py
  • test/functional/mempool_unbroadcast.py
  • src/dummywallet.cpp
  • test/functional/interface_zmq.py
  • test/functional/mempool_datacarrier.py
  • test/functional/feature_bip68_sequence.py
  • test/functional/mempool_persist.py
  • test/functional/mempool_package_onemore.py
  • src/wallet/init.cpp
  • test/functional/mining_prioritisetransaction.py
  • test/functional/p2p_filter.py
  • test/functional/mempool_spend_coinbase.py
  • test/functional/wallet_fast_rescan.py
  • test/functional/p2p_permissions.py
  • test/functional/rpc_scantxoutset.py
  • test/functional/mempool_reorg.py
  • test/functional/feature_dbcrash.py
  • test/functional/mempool_accept.py
  • test/functional/interface_usdt_utxocache.py

Comment on lines +2710 to +2713
        BlockFilterIndex* index = GetBlockFilterIndex(filtertype);
        if (!index) {
            throw JSONRPCError(RPC_MISC_ERROR, "Index is not enabled for filtertype " + filtertype_name);
        }

⚠️ Potential issue | 🟠 Major

Block filter index sync should be checked before scanning starts.

Without waiting for sync, start can run against a lagging index and return incomplete results.

💡 Proposed fix
         BlockFilterIndex* index = GetBlockFilterIndex(filtertype);
         if (!index) {
             throw JSONRPCError(RPC_MISC_ERROR, "Index is not enabled for filtertype " + filtertype_name);
         }
+        if (!index->BlockUntilSyncedToCurrentChain()) {
+            throw JSONRPCError(RPC_MISC_ERROR, "Block filters are still in the process of being indexed.");
+        }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/rpc/blockchain.cpp` around lines 2710 - 2713, The code currently grabs
the BlockFilterIndex via GetBlockFilterIndex(filtertype) but does not verify the
index is fully synced before beginning the scan, which can yield incomplete
results; update the logic after obtaining BlockFilterIndex* index to check the
index's sync state (e.g. call index->IsSynced() or index->BlocksSynced()) and
either block until sync completes (using an available helper like
index->BlockUntilSynced() if present) or return a JSONRPCError indicating the
index is still syncing and the caller should retry later; reference
GetBlockFilterIndex and the BlockFilterIndex instance used by the start scanning
code so the sync check happens before any scan/iteration begins.

Comment on lines +2753 to +2786
        std::vector<BlockFilter> filters;
        const CBlockIndex* start_block = block; // for progress reporting
        const int total_blocks_to_process = stop_block->nHeight - start_block->nHeight;

        g_scanfilter_should_abort_scan = false;
        g_scanfilter_progress = 0;
        g_scanfilter_progress_height = start_block->nHeight;

        while (block) {
            node.rpc_interruption_point(); // allow a clean shutdown
            if (g_scanfilter_should_abort_scan) {
                LogPrintf("scanblocks RPC aborted at height %d.\n", block->nHeight);
                break;
            }
            const CBlockIndex* next = nullptr;
            {
                LOCK(cs_main);
                CChain& active_chain = chainman.ActiveChain();
                next = active_chain.Next(block);
                if (block == stop_block) next = nullptr;
            }
            if (start_index->nHeight + amount_per_chunk == block->nHeight || next == nullptr) {
                LogPrint(BCLog::RPC, "Fetching blockfilters from height %d to height %d.\n", start_index->nHeight, block->nHeight);
                if (index->LookupFilterRange(start_index->nHeight, block, filters)) {
                    for (const BlockFilter& filter : filters) {
                        // compare the elements-set with each filter
                        if (filter.GetFilter().MatchAny(needle_set)) {
                            blocks.push_back(filter.GetBlockHash().GetHex());
                            LogPrint(BCLog::RPC, "scanblocks: found match in %s\n", filter.GetBlockHash().GetHex());
                        }
                    }
                }
                start_index = block;


⚠️ Potential issue | 🟠 Major

Chunk scan loop can silently degrade correctness and progress fidelity.

On line 2776, a failed LookupFilterRange is ignored, so the RPC may return partial data as if it had succeeded.
Also, reusing filters across chunks and setting start_index = block on line 2785 risks overlapping or reprocessing block ranges.

💡 Proposed fix
         std::vector<BlockFilter> filters;
@@
             if (start_index->nHeight + amount_per_chunk == block->nHeight || next == nullptr) {
                 LogPrint(BCLog::RPC, "Fetching blockfilters from height %d to height %d.\n", start_index->nHeight, block->nHeight);
-                if (index->LookupFilterRange(start_index->nHeight, block, filters)) {
-                    for (const BlockFilter& filter : filters) {
-                        // compare the elements-set with each filter
-                        if (filter.GetFilter().MatchAny(needle_set)) {
-                            blocks.push_back(filter.GetBlockHash().GetHex());
-                            LogPrint(BCLog::RPC, "scanblocks: found match in %s\n", filter.GetBlockHash().GetHex());
-                        }
-                    }
-                }
-                start_index = block;
+                filters.clear();
+                if (!index->LookupFilterRange(start_index->nHeight, block, filters)) {
+                    throw JSONRPCError(RPC_MISC_ERROR, "Failed to read block filters during scan");
+                }
+                for (const BlockFilter& filter : filters) {
+                    // compare the elements-set with each filter
+                    if (filter.GetFilter().MatchAny(needle_set)) {
+                        blocks.push_back(filter.GetBlockHash().GetHex());
+                        LogPrint(BCLog::RPC, "scanblocks: found match in %s\n", filter.GetBlockHash().GetHex());
+                    }
+                }
+                start_index = next;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/rpc/blockchain.cpp` around lines 2753 - 2786, The chunked scan loop
mishandles failed lookups and reuses state across iterations: ensure
LookupFilterRange's boolean failure is handled (log the error via
LogPrintf/LogPrint and abort the RPC loop or return an error to avoid returning
partial results) rather than silently continuing; clear the local
std::vector<BlockFilter> filters before each chunk fetch so filters from prior
chunks aren't reused; and update start_index to the next block after the
processed range (use the previously-computed next pointer or start_index = next)
instead of setting start_index = block to avoid overlap/reprocessing and keep
g_scanfilter_progress_height/progress consistent with the actual processed
height. Also ensure blocks/results are only appended when LookupFilterRange
succeeds.

Comment on lines +63 to 65
batch_reqs.append(from_node.sendrawtransaction.get_request(hexstring=tx_hex, maxfeerate=0))
unconflist.append({"txid": txid, "vout": 0, "value": total_in - amount - fee})
unconflist.append({"txid": txid, "vout": 1, "value": amount})

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# First, check the file exists and read the relevant sections
wc -l test/functional/feature_fee_estimation.py
echo "=== Lines 60-70 ==="
sed -n '60,70p' test/functional/feature_fee_estimation.py
echo
echo "=== Lines 165-175 ==="
sed -n '165,175p' test/functional/feature_fee_estimation.py

Repository: dashpay/dash

Length of output: 1037


🏁 Script executed:

# Find and examine the authproxy batch implementation
fd -a 'authproxy.py' 2>/dev/null | head -n1 | xargs -I {} bash -c 'echo "File: {}"; wc -l {}'

Repository: dashpay/dash

Length of output: 191


🏁 Script executed:

# Check how batch() is typically used in functional tests
rg -n '\.batch\(' test/functional --max-count=10 -B2 -A2

Repository: dashpay/dash

Length of output: 1422


🏁 Script executed:

# Examine authproxy.batch() implementation
cat -n /home/jailuser/git/test/functional/test_framework/authproxy.py | sed -n '1,206p' | tail -n 100

Repository: dashpay/dash

Length of output: 5676


🏁 Script executed:

# Also look for the batch method specifically
rg -n 'def batch' /home/jailuser/git/test/functional/test_framework/authproxy.py -A 10

Repository: dashpay/dash

Length of output: 643


🏁 Script executed:

# Check how unconflist/memutxo is used after the batch call
sed -n '50,85p' test/functional/feature_fee_estimation.py

Repository: dashpay/dash

Length of output: 1379


🏁 Script executed:

# Verify the complete context around mempool sync and utxo usage
sed -n '167,180p' test/functional/feature_fee_estimation.py

Repository: dashpay/dash

Length of output: 592


🏁 Script executed:

# Check what happens to unconflist/memutxo after population
grep -n "memutxo\|unconflist" test/functional/feature_fee_estimation.py | head -20

Repository: dashpay/dash

Length of output: 546


Check batch RPC results before trusting memutxo state.

At lines 64–65, outputs are added to unconflist before the submission outcome is known, and line 168 ignores all batch results. If any sendrawtransaction fails, the code later iterates over a memutxo list containing nonexistent UTXOs, breaking the fee estimation logic.

The batch() method returns a response array in which individual requests can carry error objects; currently those results are discarded without verification.

Fix
-            for node in self.nodes:
-                node.batch(batch_sendtx_reqs)
+            submit_results = self.nodes[0].batch(batch_sendtx_reqs)
+            for result in submit_results:
+                assert_equal(result["error"], None)

Also applies to: 64–65

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/functional/feature_fee_estimation.py` around lines 63 - 65, The test
currently appends outputs to unconflist and memutxo before verifying the batch
RPC sendrawtransaction results; change the logic in the batch handling flow
(where batch() is called for from_node.sendrawtransaction requests) to inspect
each batch response for an error object and only append to unconflist/memutxo
when the corresponding sendrawtransaction succeeded, and if any entry has an
error, fail the test or handle the error path (e.g., raise/assert) instead of
silently continuing; update the code around the sendrawtransaction batch
processing and any loop that later uses memutxo/unconflist so it validates
per-request results before trusting memutxo state.

_, spk_1, addr_1 = getnewdestination()
wallet.send_to(from_node=node, scriptPubKey=spk_1, amount=1 * COIN)

parent_key = "tpubD6NzVbkrYhZ4WaWSyoBvQwbpLkojyoTZPRsgXELWz3Popb3qkjcJyJUGLnL4qHHoQvao8ESaAstxYSnhyswJ76uZPStJRJCTKvosUCJZL5B"

⚠️ Potential issue | 🟡 Minor

Suppress secret-scan false positive for the test extended public key literal.

Line 35 is test data, but it is flagged as a generic key and can create high-noise security output. Mark it as an allowed test literal (or derive it at runtime) so real leaks remain high-signal.

🔧 Minimal suppression example
-parent_key = "tpubD6NzVbkrYhZ4WaWSyoBvQwbpLkojyoTZPRsgXELWz3Popb3qkjcJyJUGLnL4qHHoQvao8ESaAstxYSnhyswJ76uZPStJRJCTKvosUCJZL5B"
+parent_key = "tpubD6NzVbkrYhZ4WaWSyoBvQwbpLkojyoTZPRsgXELWz3Popb3qkjcJyJUGLnL4qHHoQvao8ESaAstxYSnhyswJ76uZPStJRJCTKvosUCJZL5B"  # gitleaks:allow
🧰 Tools
🪛 Gitleaks (8.30.0)

[high] 35-35: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/functional/rpc_scanblocks.py` at line 35, The literal tpub value
assigned to variable parent_key in test/functional/rpc_scanblocks.py is a
test-only extended public key that triggers secret-scan false positives; either
replace the hard-coded string with a runtime-derived test key (e.g., generate
from a fixed test mnemonic or use a helper function like generate_test_xpub())
or mark the literal as an allowed test value for the scanner by adding the
project’s approved suppression tag/comment next to parent_key so the scanner
treats it as a benign test fixture.

… rpc_scanblocks.py

fa54d30 test: check for false-positives in rpc_scanblocks.py (Sebastian Falbesoner)
3bca6cd test: add compact block filter (BIP158) helper routines (Sebastian Falbesoner)
25ee74d test: add SipHash implementation for generic data in Python (Sebastian Falbesoner)

Pull request description:

  This PR adds a fixed false-positive element check to the functional test rpc_scanblocks.py by using a pre-calculated scriptPubKey that collides with the regtest genesis block's coinbase output. Note that determining a BIP158 false-positive at runtime would also be possible, but would take too long (we'd need to create and check ~800k output scripts on average, which took at least 2 minutes on my machine).

  The introduced check is related to issue bitcoin#26322 and more concretely inspired by PR bitcoin#26325 which introduces an "accurate" mode that filters out these false-positives. The introduced cryptography routines (siphash for generic data) and helpers (BIP158 ranged hash calculation, relevant scriptPubKey per block determination) could potentially also be useful for more tests in the future that involve compact block filters.

ACKs for top commit:
  achow101:
    ACK fa54d30

Tree-SHA512: c6af50864146028228d197fb022ba2ff24d1ef48dc7d171bccfb21e62dd50ac80db5fae0c53f5d205edabd48b3493c7aa2040f628a223e68df086ec2243e5a93
@coderabbitai coderabbitai bot left a comment

♻️ Duplicate comments (1)
test/functional/rpc_scanblocks.py (1)

35-35: ⚠️ Potential issue | 🟡 Minor

Suppress secret-scan noise for the test xpub literal.

This test fixture still uses a hard-coded extended public key string and can keep triggering gitleaks noise. Please either derive it at runtime or add the approved inline suppression marker.

🔧 Minimal suppression
-parent_key = "tpubD6NzVbkrYhZ4WaWSyoBvQwbpLkojyoTZPRsgXELWz3Popb3qkjcJyJUGLnL4qHHoQvao8ESaAstxYSnhyswJ76uZPStJRJCTKvosUCJZL5B"
+parent_key = "tpubD6NzVbkrYhZ4WaWSyoBvQwbpLkojyoTZPRsgXELWz3Popb3qkjcJyJUGLnL4qHHoQvao8ESaAstxYSnhyswJ76uZPStJRJCTKvosUCJZL5B"  # gitleaks:allow
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/functional/rpc_scanblocks.py` at line 35, The hard-coded extended public
key assigned to variable parent_key is causing gitleaks noise; either generate
the xpub at runtime (e.g., implement a helper like
derive_test_xpub(seed/mnemonic) and set parent_key = derive_test_xpub(...)) or
keep the literal but append the approved inline suppression marker to the
assignment (e.g., add the project's gitleaks allow comment like " #
gitleaks:allow" per CI rules) so the test fixture no longer triggers secret-scan
alerts.
🧹 Nitpick comments (1)
test/functional/rpc_scanblocks.py (1)

84-84: Use direct iterator access for single-element set extraction.

Prefer next(iter(genesis_spks)) over building a temporary list for one element.

♻️ Small refactor
-genesis_coinbase_spk = list(genesis_spks)[0]
+genesis_coinbase_spk = next(iter(genesis_spks))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/functional/rpc_scanblocks.py` at line 84, The code builds a temporary
list to extract a single element from the set genesis_spks (line creating
genesis_coinbase_spk = list(genesis_spks)[0]); replace that with direct iterator
access using next(iter(genesis_spks)) to avoid allocating a list—update the
assignment to use next(iter(genesis_spks)) for genesis_coinbase_spk.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@test/functional/rpc_scanblocks.py`:
- Line 35: The hard-coded extended public key assigned to variable parent_key is
causing gitleaks noise; either generate the xpub at runtime (e.g., implement a
helper like derive_test_xpub(seed/mnemonic) and set parent_key =
derive_test_xpub(...)) or keep the literal but append the approved inline
suppression marker to the assignment (e.g., add the project's gitleaks allow
comment like " # gitleaks:allow" per CI rules) so the test fixture no longer
triggers secret-scan alerts.

---

Nitpick comments:
In `@test/functional/rpc_scanblocks.py`:
- Line 84: The code builds a temporary list to extract a single element from the
set genesis_spks (line creating genesis_coinbase_spk = list(genesis_spks)[0]);
replace that with direct iterator access using next(iter(genesis_spks)) to avoid
allocating a list—update the assignment to use next(iter(genesis_spks)) for
genesis_coinbase_spk.

ℹ️ Review info

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between cfbdbe6 and a7c4a44.

📒 Files selected for processing (1)
  • test/functional/rpc_scanblocks.py

@knst knst requested review from PastaPastaPasta, UdjinM6 and kwvg March 6, 2026 17:18
