ReducedData Temporary Softfork #234
base: 29.x-knots
Conversation
…to reduce data push size limit to 256 bytes (except for P2SH redeemScript push)
…TA (still unused)
…DUCED_DATA (still unused)
…ATA is active (still never)
…o cap output scripts at 83 bytes
…l never), limit non-OP_RETURN scripts to 34 bytes
…to expected memory usage
…r than exceed MAX_OP_RETURN_RELAY
…T_REDUCED_DATA is active (never yet)
…BLE_WITNESS_PROGRAM,UPGRADABLE_TAPROOT_VERSION,OP_SUCCESS}
…AM,UPGRADABLE_TAPROOT_VERSION,OP_SUCCESS} on blocks when DEPLOYMENT_REDUCED_DATA is active (never yet)
Github-Pull: bitcoin#10532 Rebased-From: cd74a23fcf9588199e196ab31bc64972400c2027
… risk Github-Pull: bitcoin#10532 Rebased-From: e42a2f6beb61df3e3a201804cf3bcce6b00c88ba
…imit; add reduced_data deployment name to allow regtest RPC access for testing
…DUCED_DATA); adapt 6 tests to NODE_BIP148 service flag; add assert_equal_without_usage helper for testmempoolaccept results
…eviction threshold

The test was failing because commit 58a329b changed gen_return_txouts() from using 1 large OP_RETURN output to 734 small OP_RETURN outputs (to comply with the new MAX_OUTPUT_SCRIPT_SIZE=34 consensus rule in bip444). This change altered how fill_mempool() fills the mempool, raising the eviction threshold from ~0.68 sat/vB to ~1.10 sat/vB. The test's create_package_2p1c() was using hardcoded feerates (1.0 and 2.0 sat/vB), causing parent1 to be below the new eviction threshold and get rejected.

Solution: calculate parent feerates dynamically based on the actual mempoolminfee after fill_mempool() runs. This makes the test robust to future changes in mempool dynamics.
- Store mempoolminfee in raise_network_minfee()
- Use 2x and 4x mempoolminfee for parent1 and parent2 feerates
- Add logging to show the calculated feerates

Test results with fix:
- mempoolminfee: 1.101 sat/vB
- parent1: 2.202 sat/vB (2x threshold) → accepted ✓
- parent2: 4.404 sat/vB (4x threshold) → accepted ✓
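A minimal sketch of the dynamic feerate logic described above (the helper pattern and the 2x/4x multipliers come from the commit message; the node handle and surrounding test plumbing are elided assumptions):

```python
# Sketch only: derive parent feerates from the live mempool minimum fee
# instead of hardcoding 1.0 / 2.0 sat/vB.
def choose_parent_feerates(node):
    # getmempoolinfo reports mempoolminfee in BTC/kvB; convert to sat/vB.
    minfee_btc_per_kvb = node.getmempoolinfo()["mempoolminfee"]
    minfee_sat_per_vb = float(minfee_btc_per_kvb) * 100_000_000 / 1000
    parent1_feerate = 2 * minfee_sat_per_vb  # comfortably above eviction threshold
    parent2_feerate = 4 * minfee_sat_per_vb
    return parent1_feerate, parent2_feerate
```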
The test's regex expected a services string ending in "nwl2?", but the string is now "nwl1" because NODE_BIP148 is advertised (the BIP148 service bit is rendered as "1" in the services string). Updated the regex pattern from "nwl2?" to "nwl[12]?" to accept both the BIP148 service bit ("1") and any other service bits rendered as "2".
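The change is easy to check in isolation (patterns and the example string taken from the paragraph above):

```python
import re

services = "nwl1"  # NODE_BIP148 rendered as "1"
assert not re.search(r"nwl2?$", services)  # old pattern no longer matches
assert re.search(r"nwl[12]?$", services)   # updated pattern accepts "1"
assert re.search(r"nwl[12]?$", "nwl2")     # ...and still accepts "2"
```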
…coding

The test was expecting addrv2 messages to be 187 bytes, but they're now 227 bytes due to the BIP148 service bit being added to P2P_SERVICES. P2P_SERVICES is now NODE_NETWORK | NODE_WITNESS | NODE_BIP148 = 0x08000009, which requires 5 bytes in CompactSize encoding (not 1 byte as before). Updated calc_addrv2_msg_size() to calculate the services field size properly using ser_compact_size() instead of assuming 1 byte.

Difference: 5 bytes − 1 byte = 4 bytes per address × 10 addresses = 40 bytes; 187 + 40 = 227 bytes ✓
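For reference, a standalone sketch of Bitcoin's CompactSize encoding showing why the new services value needs 5 bytes (this mirrors what the test framework's ser_compact_size() does):

```python
def compact_size(n: int) -> bytes:
    # Bitcoin CompactSize: 1, 3, 5, or 9 bytes depending on magnitude.
    if n < 0xFD:
        return n.to_bytes(1, "little")
    if n <= 0xFFFF:
        return b"\xfd" + n.to_bytes(2, "little")
    if n <= 0xFFFFFFFF:
        return b"\xfe" + n.to_bytes(4, "little")
    return b"\xff" + n.to_bytes(8, "little")

OLD = 0x09        # NODE_NETWORK | NODE_WITNESS
NEW = 0x08000009  # ... | NODE_BIP148 (bit 27)
assert len(compact_size(OLD)) == 1
assert len(compact_size(NEW)) == 5  # +4 bytes x 10 addresses = +40 bytes
```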
The addpeeraddress RPC was creating addresses with only NODE_NETWORK | NODE_WITNESS, but the node requires NODE_BIP148 for outbound connections (added in commit c684ff1 from 2017). ThreadOpenConnections filters addresses using HasAllDesirableServiceFlags, which requires NODE_NETWORK | NODE_WITNESS | NODE_BIP148. Addresses without NODE_BIP148 are skipped entirely, making addpeeraddress useless for its intended testing purpose.

This fix updates addpeeraddress to match production requirements, allowing test-added addresses to actually be used for outbound connections.

Fixes the p2p_seednode.py test, which was failing because addresses added via addpeeraddress were being filtered out, preventing "trying v1 connection" log messages from appearing.
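A condensed sketch of the filtering behaviour described above (bit positions taken from the commit messages; the real check is HasAllDesirableServiceFlags in C++):

```python
NODE_NETWORK = 1 << 0
NODE_WITNESS = 1 << 3
NODE_BIP148 = 1 << 27

DESIRABLE = NODE_NETWORK | NODE_WITNESS | NODE_BIP148

def has_all_desirable(services: int) -> bool:
    return services & DESIRABLE == DESIRABLE

# Before the fix, addpeeraddress-created entries lacked NODE_BIP148 and were skipped:
assert not has_all_desirable(NODE_NETWORK | NODE_WITNESS)
assert has_all_desirable(DESIRABLE)
```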
…TORY_VERIFY_FLAGS
…ation flags on a per-input basis
…gin from reduced_data script validation rules
…YMENT_REDUCED_DATA enforcement
Adapt unit tests to comply with REDUCED_DATA restrictions:
- Add REDUCED_DATA flag to mapFlagNames in transaction_tests
- Update witness test from 520-byte to 256-byte push limit
- Accept SCRIPT_ERR_PUSH_SIZE in miniscript satisfaction tests
- Update Taproot tree depth tests from 128 to 7 levels
- Fix descriptor error message to report correct nesting limit (7)

REDUCED_DATA enforces MAX_SCRIPT_ELEMENT_SIZE_REDUCED (256 bytes) and TAPROOT_CONTROL_MAX_NODE_COUNT_REDUCED (7 levels) at the policy level via STANDARD_SCRIPT_VERIFY_FLAGS.
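An illustrative check of the two reduced limits (constant values from the commit message; names mirror the C++ constants):

```python
MAX_SCRIPT_ELEMENT_SIZE = 520               # pre-existing consensus push limit
MAX_SCRIPT_ELEMENT_SIZE_REDUCED = 256       # REDUCED_DATA policy push limit
TAPROOT_CONTROL_MAX_NODE_COUNT_REDUCED = 7  # max Taproot Merkle path length

push = bytes(300)  # a 300-byte stack element
assert len(push) <= MAX_SCRIPT_ELEMENT_SIZE         # acceptable before
assert len(push) > MAX_SCRIPT_ELEMENT_SIZE_REDUCED  # now SCRIPT_ERR_PUSH_SIZE

# A Taproot control block for a script at depth d is 33 + 32*d bytes,
# so the reduced depth limit caps control blocks at 33 + 32*7 = 257 bytes.
assert 33 + 32 * TAPROOT_CONTROL_MAX_NODE_COUNT_REDUCED == 257
```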
Replace thresh(2,pk(...),s:pk(...),adv:older(42)) with and_v(v:pk(...),pk(...)) because thresh() uses OP_IF opcodes, which are completely forbidden in Tapscript when REDUCED_DATA is active (see src/script/interpreter.cpp:621-623). The and_v() construction provides equivalent 2-of-2 multisig functionality without conditional branching, making it compatible with REDUCED_DATA restrictions.

Also update the line 1010 test to expect a "tr() supports at most 7 nesting levels" error instead of the multi() error, as the test's 22 opening braces exceed REDUCED_DATA's 7-level limit before the parser can discover the multi() error.
Add NODE_BIP444 flag to GetDesirableServiceFlags assertions in peerman_tests and to service flags in denialofservice_tests and net_tests peer setup. NODE_BIP444 (bit 27) signals BIP444/REDUCED_DATA enforcement and is now included in desirable service flags alongside NODE_NETWORK and NODE_WITNESS for peer connections.
```diff
     ValidationCache& validation_cache,
-    std::vector<CScriptCheck>* pvChecks)
+    std::vector<CScriptCheck>* pvChecks,
+    const std::vector<unsigned int>& flags_per_input)
```
Cache uses base flags but this validation uses per-input flags. Verify this can't allow cached invalid scripts.
```cpp
flags_per_input.resize(tx.vin.size());
for (size_t j = 0; j < tx.vin.size(); j++) {
    prevheights[j] = view.AccessCoin(tx.vin[j].prevout).nHeight;
    flags_per_input[j] = (prevheights[j] < reduced_data_start_height) ? (flags & ~REDUCED_DATA_MANDATORY_VERIFY_FLAGS) : flags;
```
Is this off by one? UTXOs created at exactly activation height might be incorrectly exempted.
Why?
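For concreteness, a standalone sketch of the boundary behaviour of the `<` comparison in the hunk above (heights are illustrative):

```python
REDUCED_DATA_START_HEIGHT = 1000  # illustrative activation height

def exempt_from_reduced_data(prev_height: int) -> bool:
    # Mirrors: prevheights[j] < reduced_data_start_height
    return prev_height < REDUCED_DATA_START_HEIGHT

assert exempt_from_reduced_data(999)       # UTXO created before activation: exempt
assert not exempt_from_reduced_data(1000)  # created at activation height: full flags
```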
```cpp
    return;
}

constexpr uint256 bad_block_hash{"0000000000000000000000000000000000000000000000000000000000000000"};
```
Is this a placeholder? Specify an actual bad block or remove.
Indeed, this code only makes sense for the (now removed) reactive activation.
Instead, this PR should implement detecting that it's on an invalid chain at startup and rewinding.
Will remove.
```cpp
if (deployment.active_duration > 0) {
    const int activation_height = versionbitscache.StateSinceHeight(pindexPrev, params, dep);
    const int next_block_height = (pindexPrev == nullptr ? 0 : pindexPrev->nHeight + 1);
    if (next_block_height > activation_height + deployment.active_duration) {
```
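Read as a standalone rule (heights illustrative), the deployment stays enforced through block activation_height + active_duration inclusive:

```python
activation_height = 100_000  # illustrative
active_duration = 26_208     # illustrative block count

def rules_enforced(next_block_height: int) -> bool:
    # Mirrors: !(next_block_height > activation_height + active_duration)
    return next_block_height <= activation_height + active_duration

assert rules_enforced(activation_height + active_duration)          # last enforced block
assert not rules_enforced(activation_height + active_duration + 1)  # expired
```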
Note: nodes with different activation heights will disagree on the expiry block, which introduces chain split risk.
Nodes disagreeing on consensus rules would be a chain split risk regardless. But why would they disagree?
```cpp
std::vector<std::vector<unsigned char> > stack, stackCopy;
if (scriptPubKey.IsPayToScriptHash()) {
    // Disable SCRIPT_VERIFY_REDUCED_DATA for pushing the P2SH redeemScript
    if (!EvalScript(stack, scriptSig, flags & ~SCRIPT_VERIFY_REDUCED_DATA, checker, SigVersion::BASE, serror))
```
Verify nested P2SH doesn't create a bypass.
```diff
 int nMaxConnections;
 int available_fds;
-ServiceFlags g_local_services = ServiceFlags(NODE_NETWORK_LIMITED | NODE_WITNESS);
+ServiceFlags g_local_services = ServiceFlags(NODE_NETWORK_LIMITED | NODE_WITNESS | NODE_UASF_REDUCED_DATA);
```
By broadcasting the UASF flag by default, doesn't that risk network partition before sufficient adoption?
This should only be used for the 8 outgoing connections. Incoming connections from old nodes are accepted.
```cpp
    flags &= ~SCRIPT_VERIFY_DISCOURAGE_UPGRADABLE_NOPS;
}
if (ignore_rejects.count("non-mandatory-script-verify-flag-upgradable-witness_program")) {
    flags &= ~SCRIPT_VERIFY_DISCOURAGE_UPGRADABLE_WITNESS_PROGRAM;
```
Is this a breaking change, since it removes the ability to bypass DISCOURAGE_UPGRADABLE_* policy flags?
Should we support that? I think other code risks breaking if we let potentially-invalid transactions into our mempool...
```cpp
for (const auto& txout : tx.vout) {
    if (txout.scriptPubKey.empty()) continue;
    if (txout.scriptPubKey.size() > ((txout.scriptPubKey[0] == OP_RETURN) ? MAX_OUTPUT_DATA_SIZE : MAX_OUTPUT_SCRIPT_SIZE)) {
        return state.Invalid(TxValidationResult::TX_PREMATURE_SPEND, "bad-txns-vout-script-toolarge");
```
Nit: wherever it says toolarge, you may want to replace it with too-large.
Typically these are structured from parent to child, and toolarge is a logical single element.
luke-jr left a comment
Strongly disagree with reviving BIP9, especially giving miners decision-making authority over any protocol change. At the very least, there should be an end height where it activates unconditionally.
For merging, this also needs to demonstrate strong community acceptance. I understand there are bad actors and trolls who will fabricate objections, and the importance of activating this quickly even if support isn't as strong as might be needed for a new feature, but try to do what you can.
| return "Knots:" + CLIENT_BUILD.substr(pos + 6) + "/"; | ||
| }(); | ||
| ua += ua_knots; | ||
| ua += "UASF-ReducedData:0.1/"; |
This makes sense if you're going to have an activation client, but not so much for a merge into mainline Knots.
```cpp
        throw std::runtime_error(strprintf("Invalid active_duration (%s)", vDeploymentParams[4]));
    }
} else {
    vbparams.active_duration = 0;
```
std::numeric_limits<int>::max() would make more sense here than 0
```cpp
/** Block heights during which Reduced Data is active */
int ReducedDataHeightBegin{std::numeric_limits<int>::max()};
int ReducedDataHeightEnd{std::numeric_limits<int>::max()};
```
What is the purpose of this with the MASF variant?
```cpp
int DeploymentHeightEnd(BuriedDeployment dep) const
{
    switch (dep) {
    case DEPLOYMENT_HEIGHTINCB:
    case DEPLOYMENT_CLTV:
    case DEPLOYMENT_DERSIG:
    case DEPLOYMENT_CSV:
    case DEPLOYMENT_SEGWIT:
        // These are forever
        break;
    } // no default case, so the compiler can warn about missing cases
    return std::numeric_limits<int>::max();
}
```
Again, seems useless/wrong with your MASF concept.
```cpp
    aRules.push_back("!signet");
}
if (DeploymentActiveAfter(pindexPrev, consensusParams, Consensus::DEPLOYMENT_REDUCED_DATA, chainman.m_versionbitscache)) {
    aRules.push_back("reduced_data");
```
This shouldn't be needed with the MASF implementation?
```diff
@@ -0,0 +1,212 @@
+#!/usr/bin/env python3
+# Copyright (c) 2023 The Bitcoin Core developers
```
Is this really 2023 code? Surely there's at least something new from 2025...
```diff
@@ -0,0 +1,734 @@
+#!/usr/bin/env python3
+# Copyright (c) 2024 The Bitcoin Core developers
```
-2025?
```diff
-if (vDeploymentParams.size() < 3 || 4 < vDeploymentParams.size()) {
-    throw std::runtime_error("Version bits parameters malformed, expecting deployment:start:end[:min_activation_height]");
+if (vDeploymentParams.size() < 3 || 5 < vDeploymentParams.size()) {
+    throw std::runtime_error("Version bits parameters malformed, expecting deployment:start:end[:min_activation_height[:active_duration]]");
```
Would prefer if you split up "Convert DEPLOYMENT_REDUCED_DATA to temporary BIP9 MASF" into 3 commits:
- Adding expiry support to versionbit deployments
- Support for overriding active_duration on regtest
- Adding the new reduced_data deployment (drop the commits adding the buried version)
```python
# BIP9 constants for regtest
BIP9_PERIOD = 144  # blocks per period in regtest
BIP9_THRESHOLD = 108  # 75% of 144
```
Suggest using a threshold of only 55% for this. Miner signalling is for readiness, not voting. If the community supports this (required for any deployment), the only thing it should delay for is miner readiness, which 55% is sufficient for. Once the threshold is reached, the remaining miners still have 2 weeks of LOCKED_IN to upgrade.
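The worked arithmetic for a 144-block regtest period (75% is the current threshold from the hunk above; 55% is the reviewer's suggestion):

```python
import math

PERIOD = 144
assert math.ceil(0.75 * PERIOD) == 108  # current BIP9_THRESHOLD
assert math.ceil(0.55 * PERIOD) == 80   # suggested readiness threshold
```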
ReducedData Temporary Softfork, implemented as a MASF