CIP-???? | Conflict-Based Fee Priority in Mempool#1178
fallen-icarus wants to merge 4 commits into cardano-foundation:master
Conversation
Mass tagging people who might be interested 🙂 @lehins @WhatisRT @colll78 @Quantumplation @ch1bo @matteocoppola @theeldermillenial @perturbing @coot @will-break-it @Lupowilk @Crypto2099 @Cerkoryn
I don't have time to read the proposal right now, but happy to share some initial thoughts just based on your PR summary. To me, this doesn't materially alleviate the "bad" parts of contention; indeed, I believe your proposal makes both worse. Whether or not this is actually true, or whether it is worth it on other grounds, I will reserve judgement until I get time to read your proposal. To me, there are two closely intertwined "bad" properties of contention:
Thus, if protocols (like Sundae) want to maintain structural fairness and a nice UX for all users, they're pushed towards designs that avoid contention (perhaps even more strongly!). Now, maybe that's fine; those protocols continue to remain unchanged, and other protocols accept this structural imbalance, and at least the network benefits from it, but it doesn't materially lessen the "problems" of contention. And, at the very least (and maybe you mention this in your writeup), the mempool should be sophisticated enough to consider the chain of transactions it's replacing, not just an individual transaction, so that earning 0.1 extra ADA on a transaction doesn't cause the network to lose 20 ADA worth of fees from the whole chain 😅
I really like this concept overall. The mempool currently decides whether an incoming transaction is valid by looking at its own bespoke ledger state. This mempool ledger state is not the same as the ledger state of the blockchain tip itself, but rather the blockchain tip ledger state + mempool txns. This ledger state for the mempool exists as if all mempool transactions have already been applied. This is what allows us to do transaction chaining in the mempool for many protocols today. As such, if a new transaction arrives that gets rejected, how will we tell the difference between a malformed transaction, a truly "already spent utxo" transaction, and one where the only utxo conflict is another tx already in the mempool? Is there an algorithm we can add to this CIP to clarify the process a tx goes through today (rejected due to conflict), vs what process it will go through in the future (kicking out the existing conflicting tx if we paid a higher fee)? Great work on this CIP!
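To make the triage question concrete, here is a minimal sketch (a toy `Tx` model with invented names, not the node's actual data structures) of how a node could distinguish the three cases under the proposed policy: a truly spent or unknown input, a mempool-level conflict, and a cleanly admissible tx. It deliberately ignores chained inputs (outputs produced by other mempool txs), which the thread discusses later:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tx:
    name: str
    inputs: frozenset  # UTxO references this tx spends
    fee: int           # lovelace

def classify(tx, tip_utxos, mempool, delta):
    """Triage an incoming tx against the chain tip and the mempool.

    tip_utxos: UTxO references unspent at the blockchain tip.
    mempool:   already-admitted transactions.
    delta:     replacement increment (lovelace).
    """
    conflicts = [t for t in mempool if t.inputs & tx.inputs]
    conflict_inputs = {i for t in conflicts for i in t.inputs}
    # An input that neither exists at the tip nor is contested by a
    # mempool tx is truly spent (or never existed): hard reject.
    if any(i not in tip_utxos and i not in conflict_inputs for i in tx.inputs):
        return "reject-spent"
    if not conflicts:
        return "admit"
    # Proposed rule: the challenger must strictly outbid the sum of the
    # conflicting fees plus the replacement delta.
    if tx.fee > sum(t.fee for t in conflicts) + delta:
        return "replace"
    return "reject-outbid"
```

A real implementation would additionally have to treat inputs produced by mempool txs (chaining) as a fourth case rather than lumping them into "spent".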
Cerkoryn
left a comment
Largely a fan of this CIP, especially the economics of it. Left some comments on some specific sections mostly about the UX implications.
On a semi-related note, I am reminded of the determinism tradeoffs of the full version of Ouroboros Leios to which @Quantumplation gave multiple possible solutions here:
https://www.314pool.com/post/leios
It is largely a physics/causality problem, but I wonder if having a UTxO contention fee market like this could be another possible solution. I.e., if multiple people try to spend the same UTxO from different "light cones", then the one with the highest fee wins and the other is rolled back.
I'll need to give that another read though, it's been a minute since I worked through that.
> transaction fees. The loser's transaction fails cleanly — on eUTxO, a transaction that references a spent UTxO simply fails validation at no cost to the submitter. It does not execute with a worse outcome.
How does this work when a user's Tx passes validation, but then later is surpassed by a conflicting transaction with a higher fee after it has been accepted and propagated to many nodes?
The new tx still needs to be propagated to the rest of the nodes. It is possible the original tx still makes it into a block if the new tx doesn't propagate fast enough.
CIP-????/README.md (Outdated)
> Cardano's mempool does not allow spending unconfirmed outputs. Every transaction input must reference a UTxO that is already confirmed on-chain. There are no parent-child chains in the mempool, and therefore nothing to pin. The entire class of complexity that dominates Bitcoin's RBF design simply does not exist on Cardano.
What about transaction chaining or nested transactions? Don't both of these contain a similar parent-child dependency in different ways?
Yes, this is an issue. Building on top of mempool transactions exists on Cardano; it is partially covered later in the CIP with the consideration of multi-input conflict (it's a different problem, but may be solved with the same solution). I would suggest rewriting both parts to make it clearer.
Nested txs do not support cross-tx chaining. But for tx chaining, see this comment: #1178 (comment)
> This probabilistic nature is a feature, not a limitation. A deterministic guarantee would require a single global ordering authority — a centralizing force. Probabilistic resolution still enables rational decision-making: a participant who consistently pays more than their competitors will consistently win more contests, and can model their expected success rate based on network topology and competitor behavior. Over many contests, the law of large numbers ensures that the fee signal is reliable even though any individual outcome is uncertain.
Earlier you mentioned...
Transactions are fully deterministic: a user knows exactly what inputs will be consumed and what outputs will be produced before they sign.
Which I assume is in the sense of you either get exactly what you pay for or your transaction never happens. It is all or nothing. However, this seems to add a new way for transactions to "fail" in the event that their transaction passes validation and is accepted, but later is surpassed by a higher-paying transaction and subsequently dropped.
I think in practice this would look more like a rollback than non-determinism, but I still wonder how it would affect the user experience.
On second thought, I think this is largely solved by intents. Have a user sign an intent once and then it will eventually go through.
However the tradeoff here is that it may force some dApp designs to have to use intents (and thus the dreaded intermediary 😅 ). Maybe it would be possible for this CIP to be optional at the dApp-level somehow?
It is only in the mempool that the tx can be dropped. Once an SPO adds it to a block, it is too late to replace the tx.
On second thought, I think this is largely solved by intents. Have a user sign an intent once and then it will eventually go through.
You don't need to use an intent. See this comment: #1178 (comment)
CIP-????/README.md (Outdated)
> This CIP does not introduce any new attack vectors. It does facilitate competitive extraction of existing value — when bots bid against each other to capture a mispriced limit order, the surplus between the order price and the market price is a form of MEV. But this extraction is non-adversarial: it cannot worsen any user's execution, the limit order creator receives exactly the price they specified, and the bidding war converts extractable value into fee revenue for the network rather than leaving it as pure profit for whichever bot happened to propagate fastest. This is MEV redistribution, not MEV harm.
I'm envisioning a scenario where there is some kind of token launch or NFT mint where many users are scrambling to quickly get their transactions included. Could either a malicious actor or a greedy whale bypass the entire line by setting extremely high fees on their own transactions many times in quick succession?
The fees are of course a limiting factor here, but they would be paid to the SPO reward pool. The users themselves could be left with a bad experience. Perhaps this could be mitigated at the dApp/application level?
I could also see this as a feature instead of a bug though, but it probably requires careful design at the dApp level.
It would be a bad UX with the current model as well. It's something to handle at the dApp level (sharding, address tracking). The auction may actually add an extra guarantee to it, as you mentioned.
A token launch should rely on CIP-159 where the tokens are held in the account address. This way users can take from the address concurrently. This CIP is more for use cases that can't use another architecture to handle contention (e.g., an order book). Developers should use the right tool for the job.
> The metrics agree when competing transactions are roughly the same size — the transaction paying more in total also pays more per byte. They diverge when transaction sizes differ significantly:
>
> | Scenario | Tx A (incumbent) | Tx B (challenger) | Absolute fee winner | Fee rate winner |
> |:---------|:-----------------|:------------------|:--------------------|:----------------|
> | Similar size | 500 bytes, 0.5 ADA | 520 bytes, 0.8 ADA | B | B |
> | Different size | 2000 bytes, 1.0 ADA | 500 bytes, 0.6 ADA | A | B |
>
> In the second scenario, the two metrics give opposite answers. Absolute fee keeps the 1.0 ADA transaction. Fee rate keeps the 0.6 ADA transaction because its per-byte efficiency is higher.
At first I thought this would create additional denial-of-service attack vectors, but on second thought that should be already handled by the existing fee calculation, Fee = minFeeB + minFeeA * size(TX)
If we consider this to be an additional fee on top of that, then I think we can set all those concerns aside. However, instead of using an absolute fee winner (flat fee), or a fee rate winner (percent) for the replacement delta I wonder if it could be better served by using a multiplier of the conflicting transactions it would need to replace.
Consider your earlier example of...
> If `B` conflicts with multiple mempool transactions `A₁, A₂, ..., Aₙ`, then `B` must pay strictly more than the sum of all conflicting transactions' fees, plus the replacement delta:
>
> `fee(B) > fee(A₁) + fee(A₂) + ... + fee(Aₙ) + delta`
Instead of being just +1 lovelace above the sum of all conflicting transactions, what if it had to be double? Or 1.5x? Or some other number?
That would make the fee ramp up much quicker, which would also likely make the contention conflicts (and thus the UX issues of dropped transactions) less common, which could be a good thing.
(I had another reason that led me to this idea in the first place, but it slipped out of my head while I was trying to write it out 😅 . Will post again if I think of it)
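The two rules can be compared with a toy escalation model (illustrative numbers only; `threshold` and `bidding_war` are invented names, and real bidders would not rebid mechanically like this):

```python
def threshold(conflicting_fees, delta=0, multiplier=1.0):
    # Minimum total the challenger must strictly exceed.
    #   CIP draft rule:    sum(fees) + delta       (multiplier = 1.0)
    #   Suggested variant: sum(fees) * multiplier  (delta = 0)
    return sum(conflicting_fees) * multiplier + delta

def bidding_war(start_fee, rounds, delta=0, multiplier=1.0):
    # Each round, the current loser rebids 1 lovelace above the threshold.
    bids = [start_fee]
    for _ in range(rounds):
        bids.append(int(threshold([bids[-1]], delta, multiplier)) + 1)
    return bids

# Starting from a 0.5 ADA (500_000 lovelace) bid, five rounds with a flat
# +0.5 ADA delta reach ~3.0 ADA; five rounds with a 1.5x multiplier reach
# ~3.8 ADA and keep accelerating geometrically from there.
```

The flat delta ramps linearly, the multiplier geometrically, which is why the multiplier deters spam faster but also prices out legitimate rebidders sooner.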
The delta needs to be small enough to be useful but large enough to deter spam. The multiplier might be too large. Block production averages 20 seconds, so each auction should be fairly short-lived, which means there shouldn't be much of a DDoS risk here.
@fallen-icarus this is a well expressed proposal that should be assessed from all quarters of the ecosystem. The writing is perfect except for some minor format things; any technical question I might have raised has been better addressed already by the previous reviewers.
On track for introduction via Triage at next CIP meeting (https://hackmd.io/@cip-editors/132) and it would also help get the word out for a community-wide review if you can make it to the meeting to say a few words about it.
Tagging @jpraynaud in the meantime to confirm that what is proposed here would mainly impact the Network system (as currently categorised) and not have greater implications for Ledger (cc @lehins @WhatisRT already tagged: but again regarding this particular question).
> @@ -0,0 +1,407 @@
> ---
> CIP: ????
> Title: Conflict-Based Fee Priority in Mempool Policy
I'll update the PR title according to this CIP title which I think avoids the assumption that this would create a "fee market" — likely, but not certain — and also avoid the practically nonexistent term fee-market.
Suggested change:

> - Title: Conflict-Based Fee Priority in Mempool Policy
> + Title: Conflict-Based Fee Priority in Mempool
p.s. I'm testing the truncation of the term Policy since on a second reading the phrase Priority in Mempool Policy would seem (at my level) to achieve what people would think of as Priority in Mempool and therefore be more concise with better impact.
If there's a subtle difference then @fallen-icarus @Quantumplation @AndrewWestberg @Cerkoryn please advise & in any case we'll confirm a best title at the upcoming meeting.
The purpose of the word "policy" here is to emphasize that mempool behavior is not an enforceable property of the protocol, and so if a single node chooses different behavior, it will still follow the same tip as other nodes. What is being described here is really a convention or property that @fallen-icarus is recommending for the mempool implementations, rather than a protocol change.
That being said, that is true without the word "policy" as well, so to me it just reads as rhetorical emphasis, and doesn't actually change the meaning, so... 🤷
@Quantumplation Both of your concerns are directly addressed by the mechanics of a localized fee market, and I think the fear of endless bidding wars might be a bit overstated. Consider the following:
A retail user can take advantage of this characteristic for scenarios where they desperately need a contentious UTxO. If the user overpays by 5x-10x (so a standard 0.5 ADA fee becomes 2.5-5.0 ADA), they are very likely to win the "auction." It is like a real-world auction where everyone is incrementing by $1, and suddenly someone increments by $100. At that point, the room usually folds.

The eUTxO situation actually makes this even better because many of these contested resources are "continuing UTxOs" where the resource is immediately made available again after the current tx. When a bot sees a 5x retail bid, the rational, margin-optimizing choice isn't to start a bidding war—it's to let that UTxO go and just grab the next one in the subsequent block for a fraction of the cost. A retail user choosing to pay 2.5-5.0 ADA for guaranteed priority does not mean "only the rich can play."

A huge reason bidding wars get so toxic on other chains is collateral damage: normal transactions get caught in the crossfire and have to bid for global block space. This CIP's targeted fee market confines the bidding only to the specific resource in question, meaning there is no extraneous bidding pushing up the price for the rest of the network. Instead of one giant auction, you have many small auctions: one UTxO can have a 15 ADA bid while another one could have a 1.5 ADA bid.
berewt
left a comment
First review round. There are some approximations and minor challenges.
My fear is that in its current state, it may lead to a result as arbitrary as today's, though maybe with a small fee increase, which isn't that bad.
CIP-????/README.md (Outdated)
> When a node receives a new transaction `B`, and its mempool already contains a transaction `A` such that `A` and `B` conflict:
>
> - **If `fee(B) > fee(A) + delta`:** Evict `A` from the mempool. Admit `B`.
The problem I see here is that depending on delta, it introduces a new subtle FIFO problem. If B is higher than A but lower than A + delta, the order matters again.
We can consider it as a marginal tie breaker though.
It puts a lot of pressure on the definition of delta. Too low and you're open to tx spam; too high and you revert to the previous mechanism.
Agreed we need to find a balance. I erred on the side of caution by having delta be the full propagation cost of the new tx. IMHO this should be good enough. It means the minimum new bid is approximately +0.5 ADA over the current bid. At current prices, that is ~$0.15. I'd hardly call that an expensive jump 🙂
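Under this reading, delta can be sketched with the linear min-fee formula quoted earlier. The parameter values below are the current mainnet protocol parameters (they can change via governance), and treating delta as exactly the challenger's own minimum fee is an assumption of this sketch, not something the thread pins down:

```python
# Current Cardano mainnet protocol parameters (subject to change):
MIN_FEE_A = 44        # lovelace per byte
MIN_FEE_B = 155_381   # flat lovelace component

def min_fee(size_bytes):
    # Cardano's linear minimum-fee formula (script execution costs omitted):
    return MIN_FEE_A * size_bytes + MIN_FEE_B

def replacement_delta(challenger_size_bytes):
    # Assumed reading of the comment above: delta is the challenger's full
    # propagation cost, approximated by its own protocol minimum fee.
    return min_fee(challenger_size_bytes)

# A mid-sized 2 kB challenger: 44 * 2000 + 155_381 = 243_381 lovelace,
# i.e. roughly 0.24 ADA. The ~0.5 ADA figure in the comment would
# correspond to a larger tx or a stricter delta definition.
```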
> peers. The evicted transaction may still exist in other nodes' mempools across the network; whether it survives depends on whether the replacement reaches those nodes before the next block.
>
> From the user's perspective, detecting eviction is straightforward: monitor the UTxOs that the
I disagree with this. It's straightforward, yes, but way worse than having a direct feedback from the node you submitted to. We increase the odds of having a tx included in the mempool on submission and not included on-chain, which is generally a bad UX.
More importantly, it means that you can't know (without running your own mempool) whether your transaction has been evicted by a higher-fee one.
Ideally, nodes implementing it should have an off-chain way to communicate about transactions that have been evicted.
Having feedback directly from a node would be ideal, but that would add more work to deliver this CIP. Off-chain tooling is already capable of tracking this. So perhaps this CIP could be phased? Implement the policy now and update the node to give feedback for dropped mempool txs later?
It's more work, yes, but I do prefer more work over making the user experience worse. Off-chain tooling will notice rejection too late if you don't have access to the node. It creates a mix of a blind auction (for users without access to the mempool) with insiders (solutions with access to the mempool) that can run a standard auction.
I agree that this UX issue already exists in the current model, when we have divergent mempools, but it's already something people complain about, and we're making it worse.
@AndrewWestberg @Quantumplation @berewt I did not know Cardano supports tx chaining in the mempool. Thanks for pointing this out! However, I don't think it changes much because Cardano is structurally different than Bitcoin so many of the downsides shouldn't apply:
By taking advantage of these differences, we can come up with a simpler solution than on Bitcoin.
With this in mind, it should be enough to just include descendant tx fees in the replacement sum — i.e., B must pay more than the total fees of all transactions it would evict (the conflicting transaction plus its descendants). Please lmk if I'm misunderstanding something else about the mempool. Ideally, someone who understands the low-level details of the mempool can weigh in about the required algorithm.
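That rule, outbidding the conflicting transaction plus all of its mempool descendants, could be sketched as follows (a toy model; `eviction_set`, `replacement_sum`, and the `Tx` shape are invented for illustration and ignore performance, which the thread raises below):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tx:
    name: str
    inputs: frozenset   # UTxO references spent
    outputs: frozenset  # UTxO references created
    fee: int            # lovelace

def eviction_set(direct_conflicts, mempool):
    """The direct conflicts plus every mempool tx chained, transitively,
    on an output of an evicted tx."""
    evicted = set(direct_conflicts)
    while True:
        produced = {o for t in evicted for o in t.outputs}
        new = [t for t in mempool if t not in evicted and t.inputs & produced]
        if not new:
            return evicted
        evicted.update(new)

def replacement_sum(direct_conflicts, mempool):
    # B must pay strictly more than the total fees of everything it evicts.
    return sum(t.fee for t in eviction_set(direct_conflicts, mempool))
```

The fixed-point loop is quadratic in the worst case, which is exactly why large mempools (see the DripDropz comment below) make this costly without extra indexing.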
* Fixed tx pinning inaccuracy
* Added descendants to fee comparison
* Added section explaining retail isn't priced out
* Expanded MEV section
* Delta formula uses existing calculation instead of inlining it
* Added note on eviction notification
@fallen-icarus A Cardano mempool "default" is 2 blocks. This is configurable in cardano-node. Many projects such as DripDropz or the Eternl wallet run with a much larger mempool; in DripDropz's case, we use a 1 GB mempool. In the early days when we did the Sundae token drop this was absolutely required, and we had about a 4-hour backlog of valid chained transactions in our mempool that slowly confirmed on chain. In this scenario, you also have to handle rollbacks and re-inflation of the mempool, as the node doesn't handle that automatically. Any rollback kills the mempool, so it needs to have transactions replayed. All this to say, we can't assume small mempools, so an O(n^2) change still needs to be considered carefully.
Add to this that Leios will increase the size requirement. It would still be a few blocks, but with a higher chance of conflict and arbitrage.
I looked into Bitcoin's RBF approach to see how they handle tx chain eviction, but its mempool is structurally different from Cardano's. Efficiently handling these evictions on Cardano seems to require additional data structures alongside the current mempool. And we also need to consider CIP-159. This is turning out to be a much bigger CIP than I was hoping for 🫤

Also, this may be a controversial take, but IMHO tx chain fees should not be considered when deciding the auction winner. Auction theory seems pretty clear about this: bundling auctions reduces revenue. The whole point of this CIP is that each contested UTxO gets its own independent auction. Counting descendant fees undermines that by merging separate auctions into one.

Consider a continuing UTxO that cycles four times. Without chaining in the fee comparison, each cycle gets its own auction. If each attracts a 15 ADA bid, the network earns 60 ADA. With chaining, all four cycles are bundled under a single replacement threshold. A chain paying 20 ADA total across all four beats out individual 15 ADA bids on each — the network earns 20 ADA instead of 60. Separate auctions over independent resources yield at least as much total revenue as bundled ones. This is a standard result in auction theory, and continuing UTxOs are the dominant case for DeFi contention.

It also misprices demand. This fee market prices UTxO contention right now. A transaction chain spreads its fees across future blocks. Requiring a challenger to outbid the next few hours of activity is not pricing current demand; it is creating a market inefficiency by ignoring the time-value of money.

If we don't consider tx chain fees, then we need a way to protect against DDoS, since these txs are fully validated but dropped before their fees are collected. Bitcoin caps the max chain depth, but it doesn't have Plutus script validation.

So my current thought is a compromise: if Tx B conflicts directly with Tx A and Tx A has descendants, use the absolute fee for Tx A but only the required minimum fees for each descendant. The fee of Tx B only needs to cover the lost validation work of the descendants, not outbid them. This separates the auctions as much as possible while still protecting against DDoS.
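A sketch of that compromise rule (hypothetical names; here `min_fee` is each tx's protocol minimum for its size, distinct from its actual bid `fee`):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Tx:
    name: str
    inputs: frozenset
    outputs: frozenset
    fee: int      # actual fee bid (lovelace)
    min_fee: int  # protocol minimum fee for this tx's size

def descendants(roots, mempool):
    # Every mempool tx chained, transitively, on an output of a root.
    reached = set(roots)
    while True:
        produced = {o for t in reached for o in t.outputs}
        new = [t for t in mempool if t not in reached and t.inputs & produced]
        if not new:
            return reached - set(roots)
        reached.update(new)

def compromise_threshold(direct_conflicts, mempool):
    # Outbid the direct conflicts at their actual fees, but only cover the
    # descendants' minimum fees: this compensates the network's lost
    # validation work without bundling the descendants' auctions into one.
    return (sum(t.fee for t in direct_conflicts)
            + sum(t.min_fee for t in descendants(direct_conflicts, mempool)))
```

With a 15 ADA direct conflict whose descendant also bids 15 ADA but has a 0.2 ADA minimum fee, the challenger's threshold is 15.2 ADA rather than 30 ADA, keeping the two auctions separate.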
Transactions chained in the mempool are also "right now"; the mempool will include as many transactions as it can in the block, it doesn't force chained txs into consecutive blocks.

And I don't think your hypothetical is correct. Let's suppose I have A -> B -> C -> D, paying 1 ADA in fees each, and someone wants to replace A and is willing to pay 2 ADA for it. Replacing A invalidates B, C, and D, meaning the network goes from earning 4 ADA in fees to 2 ADA in fees, and you've worsened the UX for users B, C, and D. If B, C, and D are still valuable, then you might get them back to the table, and if you do, you've imposed additional computational load on the network. B, C, and D have already been validated by anyone who puts them in the mempool; that is computational work they should be paid for, and it only makes sense to accept an A' if the collected fee is greater than the cost to validate A, A', B, C, and D. On the other hand, if the minimum fee to replace A' is

I think, as a DDoS prevention mechanism, we should instead impose a minimum increment on the fee; that is, don't bother replacing for 1 lovelace more of income each time.

Also, all of this becomes that much more sensitive in a Leios world, because the mempool determines what suffix gets attached in a Leios endorsement block, so mempools are going to get much much larger, and the data structures needed to maintain them are going to be that much more sensitive to re-evaluating.
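The arithmetic of that A -> B -> C -> D example, comparing a naive single-tx replacement rule against an outbid-the-whole-chain rule (variable names are mine):

```python
ADA = 1_000_000  # lovelace

chain_fees = {"A": 1 * ADA, "B": 1 * ADA, "C": 1 * ADA, "D": 1 * ADA}
challenger = 2 * ADA  # A', willing to pay 2 ADA to replace A

# Naive rule: A' only has to beat A's own fee, so it is admitted,
# invalidating B, C, D and dropping mempool revenue from 4 ADA to 2 ADA.
naive_rule_admits = challenger > chain_fees["A"]
revenue_if_admitted = challenger
revenue_if_rejected = sum(chain_fees.values())

# Chain rule: A' must beat the whole chain's 4 ADA, so the 2 ADA bid
# is rejected and the chained fees are preserved.
chain_rule_admits = challenger > sum(chain_fees.values())
```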
This CIP proposes UTxO conflict-based fee priority as the default mempool policy — when two transactions conflict over the same UTxO, keep the one paying a higher fee. No hardforks are required.
This helps alleviate UTxO contention while benefiting Cardano's long-term financial sustainability at the same time. Critically, this introduces a targeted fee market; normal users are unaffected.
Rendered README