
Commit fdb0172

Rebuild pending payments list before replaying pending claims/fails
On `ChannelManager` reload we rebuild the pending outbound payments list by looking for any missing payments in the `ChannelMonitor`s. However, in the same loop over the `ChannelMonitor`s, we also re-claim any pending payments for which we have a payment preimage. If we sent an MPP payment across different channels, each iteration of the loop may add a pending payment with only one known path and then claim/fail it, removing the pending payment (at least in the claim case). This may result in spurious extra events, or even both a `PaymentFailed` and a `PaymentSent` event on startup for the same payment.
1 parent af2bb1a
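
To make the failure mode concrete, here is a minimal, self-contained sketch of the two loop shapes. The `Monitor` struct, the `pending` map, and the event counting are toy stand-ins invented for illustration; they are not the actual rust-lightning types or replay logic:

    // Toy model of rebuilding pending payments from channel monitors on reload.
    use std::collections::HashMap;

    struct Monitor {
        payment_hash: u64,  // the MPP payment this monitor holds one path of
        has_preimage: bool, // whether this monitor saw the preimage on-chain
    }

    fn replay_single_pass(monitors: &[Monitor]) -> usize {
        // Buggy shape: rebuild and replay in the same loop. When the first
        // monitor is visited, the pending payment only knows one of its
        // paths, so the claim fires against an incomplete payment; a later
        // monitor re-adds the payment and fires again.
        let mut pending: HashMap<u64, Vec<usize>> = HashMap::new();
        let mut events = 0;
        for (idx, monitor) in monitors.iter().enumerate() {
            pending.entry(monitor.payment_hash).or_default().push(idx);
            if monitor.has_preimage {
                // Claiming removes the (partially rebuilt) pending payment...
                pending.remove(&monitor.payment_hash);
                events += 1; // ...and surfaces a PaymentSent-style event.
            }
        }
        events
    }

    fn replay_two_pass(monitors: &[Monitor]) -> usize {
        // Fixed shape: first fully rebuild the map, then replay claims once.
        let mut pending: HashMap<u64, Vec<usize>> = HashMap::new();
        for (idx, monitor) in monitors.iter().enumerate() {
            pending.entry(monitor.payment_hash).or_default().push(idx);
        }
        let mut events = 0;
        for monitor in monitors {
            if monitor.has_preimage && pending.remove(&monitor.payment_hash).is_some() {
                events += 1; // at most one event per payment
            }
        }
        events
    }

    fn main() {
        // One MPP payment split across two channels, preimage known in both monitors.
        let monitors = vec![
            Monitor { payment_hash: 42, has_preimage: true },
            Monitor { payment_hash: 42, has_preimage: true },
        ];
        assert_eq!(replay_single_pass(&monitors), 2); // spurious duplicate event
        assert_eq!(replay_two_pass(&monitors), 1);    // single event after the fix
    }

Run as written, the single-pass shape surfaces two events for one MPP payment whose preimage appears in both monitors, while the two-pass shape surfaces exactly one.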

1 file changed: lightning/src/ln/channelmanager.rs (+16, −0)

@@ -16489,6 +16489,10 @@ where
 		// payments which are still in-flight via their on-chain state.
 		// We only rebuild the pending payments map if we were most recently serialized by
 		// 0.0.102+
+		//
+		// First we rebuild the pending payments, and only once we do so we go through and
+		// re-claim and re-fail pending payments. This avoids edge-cases around MPP payments
+		// resulting in redundant actions.
 		for (channel_id, monitor) in args.channel_monitors.iter() {
 			let mut is_channel_closed = false;
 			let counterparty_node_id = monitor.get_counterparty_node_id();
@@ -16527,6 +16531,18 @@ where
 						);
 					}
 				}
+			}
+		}
+		for (channel_id, monitor) in args.channel_monitors.iter() {
+			let mut is_channel_closed = false;
+			let counterparty_node_id = monitor.get_counterparty_node_id();
+			if let Some(peer_state_mtx) = per_peer_state.get(&counterparty_node_id) {
+				let mut peer_state_lock = peer_state_mtx.lock().unwrap();
+				let peer_state = &mut *peer_state_lock;
+				is_channel_closed = !peer_state.channel_by_id.contains_key(channel_id);
+			}
+
+			if is_channel_closed {
 				for (htlc_source, (htlc, preimage_opt)) in
 					monitor.get_all_current_outbound_htlcs()
 				{
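
Note that the new second loop re-derives `is_channel_closed` (re-taking the per-peer lock) rather than carrying state over from the first pass, keeping each pass over the `ChannelMonitor`s self-contained. The replay of claims/fails is still gated on the channel being closed, exactly as before, but it now runs against a fully rebuilt pending-payments map, so all of an MPP payment's paths are known before any claim or fail is replayed.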
