Askrene: prune and cap #8332
Conversation
Force-pushed from f866e48 to b7f1ae2.
Rebased on top of changes to #8299
Force-pushed from b7f1ae2 to 18f0bec.
plugins/askrene/mcf.c (outdated):

     * accuracy x 1M >= amount
     */
    params->accuracy = amount_msat_max(
            AMOUNT_MSAT(1), amount_msat_div_ceil(amount, 1000000));
The 1M precision here is arbitrary; it could be any number. And I think 1M is way too much: in fact, I will show that we get a negligible runtime improvement with 1M as opposed to other values (e.g. 1000, 100, 10).
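For concreteness, a sketch of how that divisor could be made tunable, reusing only the helpers visible in the hunk above; the GRANULARITY name is a hypothetical stand-in, not the PR's actual code:

    /* Hypothetical tunable replacing the hard-coded 1M, so that values
     * such as 1000, 100 or 10 can be benchmarked against each other. */
    #define GRANULARITY 1000000

    /* accuracy x GRANULARITY >= amount, clamped to at least 1 msat */
    params->accuracy = amount_msat_max(
            AMOUNT_MSAT(1), amount_msat_div_ceil(amount, GRANULARITY));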
I made some test runs on the same gossmap we have for reference in the tests. I notice that the fail rate is very high even for not-so-big payment amounts. Does this mean that most of the "big nodes"
Force-pushed from 5d72855 to 1b5e87a.
Minor nit:
Can you split 1b5e87a into the "prune small channels" change, and the "dynamic granularity based on < 1000 flows assumption" (which, FWIW, is a LOT of flows!).
They're logically separate improvements...
Hmm, so this is a lot of data. A summary of this in the commit message itself would be invaluable: remember, GitHub is transitory and will one day stop hosting us for free; git commits are forever. You graph precision amounts, but if I understand correctly, your precision varies with the total payment amount, so these graphs don't exactly apply to our overall performance?
That, or our canned gossip store is missing data? That is quite possible, and worth investigating: a more modern node, where there have been propagation fixes, may give a different result. However, it's still useful to have a standard corpus, of course.
It isn't about 1000 flows, it is the number of units into which we divide the payment amount.
Minor nit from me as well: I have realized that CI doesn't test with SLOW_MACHINE=1, therefore some of the
We also need a better commit message explaining what we are actually pruning.
Yes, that is subtle. I would love to see you paste exactly that sentence into the source code! Commit messages are about changes, comments are about existing code. Sometimes these are similar:

Commit message: ... speeds up these cases by N% [table].

Comment: /* If there are other arcs with lower cost between the same pair of nodes, with combined capacity to carry the entire flow, we can remove them from consideration. For small payments, this can be around X% of arcs! */
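To illustrate, a minimal self-contained sketch of the test that comment describes; struct arc and first_prunable are hypothetical names, not the PR's actual code, and it assumes the arcs derived from one channel arrive sorted by ascending unit cost, as the linearization produces them:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical arc: one linearized piece of a channel's cost function. */
    struct arc {
            uint64_t capacity;      /* flow units this piece can carry */
            uint64_t cost;          /* unit cost of this piece */
    };

    /* Return the index of the first arc that can be discarded: once the
     * cheaper arcs' combined capacity already covers the whole flow, every
     * more expensive sibling arc can never appear in an optimal solution. */
    static size_t first_prunable(const struct arc *arcs, size_t n,
                                 uint64_t total_flow)
    {
            uint64_t covered = 0;

            for (size_t i = 0; i < n; i++) {
                    if (covered >= total_flow)
                            return i;
                    covered += arcs[i].capacity;
            }
            return n;       /* nothing to prune */
    }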
Force-pushed from 1b5e87a to 060e870.
Force-pushed from 060e870 to 7c49092.
Changelog-None
Signed-off-by: Lagrang3 <[email protected]>
From the multiple arcs that derive from the same channel, we consider
only those with the smallest cost such that the payment amount and HTLC
max can fit in their combined capacity, i.e. we prune high-cost arcs
that will surely never be used by the optimal solution.
This reduces the number of arcs in the graph from approximately 8 per
channel to approximately 2 per channel.
No pruning:
amount:      100    1000   10000  100000 1000000
channels: 104741  106163  106607  106654  106666
arcs:     837928  849304  852856  853232  853328

Prune, limit the channel capacity by its HTLC max:
amount:      100    1000   10000  100000 1000000
channels: 104741  106163  106607  106654  106666
arcs:     255502  259314  260538  260676  260704

Prune, limit the channel capacity to the payment amount:
amount:      100    1000   10000  100000 1000000
channels: 104741  106163  106607  106654  106666
arcs:     209482  216270  228618  295450  432468

Prune, limit the channel capacity to the payment amount and its HTLC max:
amount:      100    1000   10000  100000 1000000
channels: 104741  106163  106607  106654  106666
arcs:     209480  212324  213242  215726  228018
This produces a slight speedup for MCF computations:

Amount (sats) | speedup
--------------|--------
          100 |    1.89
         1000 |    1.77
        10000 |    1.25
       100000 |    1.25
      1000000 |    1.18
Changelog-None
Signed-off-by: Lagrang3 <[email protected]>
Speed up getroutes by setting the granularity to 1000:

Amount (sats) | speedup
--------------|--------
          100 |    1.00
         1000 |    1.00
        10000 |    1.06
       100000 |    1.31
      1000000 |    2.64
Worst runtime of getroutes:

Amount (sats) | before (ms) | after (ms)
--------------|-------------|-----------
          100 |        1507 |        761
         1000 |        2129 |       1214
        10000 |        1632 |       1043
       100000 |        2004 |       1150
      1000000 |       27170 |       3289
Changelog-None
Signed-off-by: Lagrang3 <[email protected]>
Force-pushed from 7c49092 to 89a0cd1.
Trivial rebase now that the predecessor is merged!
Merged 420cff9 into ElementsProject:master.
This PR depends on #8299 and supersedes #8314.
With this PR I want to make a small improvement to the MCF solver in askrene.
First: I would like to constrain the number of flow units to 1000 by setting the accuracy of the solver
to the total payment amount divided by 1000. Some MCF algorithms like "successive shortest path" (SSP)
have theoretical complexity bounds that depend on that number.
Note: the number 1000 is arbitrary; the smaller it is, the more we may reduce the solver's runtime, but we lose accuracy.
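As a worked example of that trade-off, here is a standalone sketch of the accuracy rule using plain integers (the plugin itself uses the amount_msat helpers; accuracy_for is a hypothetical name):

    #include <stdio.h>

    /* One flow unit is the payment amount divided (rounding up) by the
     * granularity, clamped to at least 1 msat, so the solver never
     * handles more than `granularity` units. */
    static unsigned long long accuracy_for(unsigned long long amount_msat,
                                           unsigned long long granularity)
    {
            unsigned long long acc =
                    (amount_msat + granularity - 1) / granularity;
            return acc < 1 ? 1 : acc;
    }

    int main(void)
    {
            /* 1,000,000 sat = 1,000,000,000 msat: one unit is
             * 1,000,000 msat, i.e. at most 1000 flow units. */
            printf("%llu\n", accuracy_for(1000000000ULL, 1000));
            /* A tiny 500 msat payment clamps to 1 msat per unit. */
            printf("%llu\n", accuracy_for(500ULL, 1000));
            return 0;
    }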
Second: I would like to prune the set of arcs in the network. I can achieve this by capping the sum of the arc capacities that correspond to the same channel at U, the maximum number of flow units in the payment. Notice that, due to the piecewise linearization of the channel cost function, one channel becomes several arcs in the MCF network; therefore we can discard the higher-cost arcs of a channel's linearization if the lower-cost arcs already sum up to U in flow capacity, as in the sketch below.
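Under the same caveats as the earlier sketch (hypothetical names, one channel's arcs sorted by ascending cost), the cap-then-prune step could look like:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical arc: one linearized piece of a channel's cost function. */
    struct arc {
            uint64_t capacity;
            uint64_t cost;
    };

    /* Cap the combined capacity of one channel's arcs at U flow units:
     * keep the cheap arcs whole, truncate the arc that crosses the limit,
     * and drop everything more expensive.  Returns the number of arcs kept. */
    static size_t cap_channel_arcs(struct arc *arcs, size_t n, uint64_t U)
    {
            uint64_t sum = 0;

            for (size_t i = 0; i < n; i++) {
                    if (sum + arcs[i].capacity >= U) {
                            arcs[i].capacity = U - sum; /* truncate marginal arc */
                            return i + 1;
                    }
                    sum += arcs[i].capacity;
            }
            return n;       /* total capacity below U: keep everything */
    }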