Commit 3508047
Rebuild pending payments list before replaying pending claims/fails
On `ChannelManager` reload we rebuild the pending outbound payments list by looking for any missing payments in `ChannelMonitor`s. However, in the same loop over `ChannelMonitor`s, we also re-claim any pending payments for which we have a payment preimage. If we sent an MPP payment across different channels, each iteration of the loop may add a pending payment with only one known path, then claim/fail it and remove the pending payment (at least in the claim case). This can result in spurious extra events, or even both a `PaymentFailed` and a `PaymentSent` event on startup for the same payment.
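
To make the ordering hazard concrete, here is a minimal standalone sketch of the two-pass structure this commit adopts. The `Monitor` struct, payment ids, and printed "events" below are simplified, hypothetical stand-ins for illustration only, not the actual rust-lightning types or API:

use std::collections::HashMap;

// Simplified stand-in for a ChannelMonitor: it knows which outbound HTLCs
// were in flight on its channel, and whether we hold the payment preimage.
struct Monitor {
    channel_id: u32,
    outbound_htlcs: Vec<(u64 /* payment id */, bool /* have preimage */)>,
}

fn main() {
    // An MPP payment (id 7) split across two channels.
    let monitors = vec![
        Monitor { channel_id: 1, outbound_htlcs: vec![(7, true)] },
        Monitor { channel_id: 2, outbound_htlcs: vec![(7, false)] },
    ];

    // Pass 1: rebuild the full pending-payments map before acting on it, so
    // both paths of payment 7 are registered before any claim is replayed.
    let mut pending: HashMap<u64, Vec<u32>> = HashMap::new();
    for m in &monitors {
        for &(payment_id, _) in &m.outbound_htlcs {
            pending.entry(payment_id).or_default().push(m.channel_id);
        }
    }

    // Pass 2: only now replay claims. Had this run inside pass 1, payment 7
    // would have been claimed and removed after scanning monitor 1 alone,
    // then re-added with a single known path when monitor 2 was scanned,
    // potentially yielding spurious or contradictory startup events.
    for m in &monitors {
        for &(payment_id, have_preimage) in &m.outbound_htlcs {
            if have_preimage && pending.remove(&payment_id).is_some() {
                println!("payment {payment_id}: emit one PaymentSent");
            }
        }
    }
}

Splitting the single loop into two passes costs one extra iteration over the monitors but guarantees the pending-payments map is complete before any claim or fail is replayed, which is the design choice the diff below implements.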
1 parent af1bd1e

1 file changed: lightning/src/ln/channelmanager.rs (+16, -0)
@@ -16386,6 +16386,10 @@ where
 		// payments which are still in-flight via their on-chain state.
 		// We only rebuild the pending payments map if we were most recently serialized by
 		// 0.0.102+
+		//
+		// First we rebuild the pending payments, and only once we do so we go through and
+		// re-claim and re-fail pending payments. This avoids edge-cases around MPP payments
+		// resulting in redundant actions.
 		for (channel_id, monitor) in args.channel_monitors.iter() {
 			let mut is_channel_closed = false;
 			let counterparty_node_id = monitor.get_counterparty_node_id();
@@ -16424,6 +16428,18 @@
 						);
 					}
 				}
+			}
+		}
+		for (channel_id, monitor) in args.channel_monitors.iter() {
+			let mut is_channel_closed = false;
+			let counterparty_node_id = monitor.get_counterparty_node_id();
+			if let Some(peer_state_mtx) = per_peer_state.get(&counterparty_node_id) {
+				let mut peer_state_lock = peer_state_mtx.lock().unwrap();
+				let peer_state = &mut *peer_state_lock;
+				is_channel_closed = !peer_state.channel_by_id.contains_key(channel_id);
+			}
+
+			if is_channel_closed {
 				for (htlc_source, (htlc, preimage_opt)) in
 					monitor.get_all_current_outbound_htlcs()
 				{
