
Commit f95d24d

Support async get_per_commitment_point, release_commitment_secret
Apologies in advance for the hair-ball. Mostly I just wanted to get a 50,000-foot overview to see if things were headed in the right direction. If things look okay, I can take a step back and chop this into more modestly-sized PRs.

In general, this PR adds state to the channel context to allow a `ChannelSigner` implementation to respond asynchronously to the `get_per_commitment_point`, `release_commitment_secret`, and `sign_counterparty_commitment` methods, which are the main signer methods called during channel setup and normal operation. These changes seem to work as advertised during normal channel operation (creation and payment exchange), and do not obviously fail during channel re-establishment or across node restart. That said, there are a lot more test scenarios to evaluate here. The details are below.

Adds the `RevokeAndACK` and the RAA/commitment-update ordering to the `SignerResumeUpdates` struct that is returned from `signer_maybe_unblocked`.

Adds `signer_maybe_unblocked` for both the inbound and outbound unfunded channel states. We need these now because `get_per_commitment_point` is asynchronous -- and necessary for us to proceed out of the unfunded state into normal operation. Adds appropriate `SignerResumeUpdates` structs for both the inbound and outbound unfunded states.

Maintains `cur_holder_commitment_point` and `prev_holder_commitment_secret` on the channel context. By making these part of the context state, we can access them at any point we need them without requiring a signer call to regenerate them. They are updated appropriately throughout the channel state machine by calling a new context method, `update_holder_per_commitment`.

Adds several flags to indicate messages that may now be pending the remote signer unblocking:

- `signer_pending_revoke_and_ack`, set when we're waiting to send the revoke-and-ack to the counterparty. This might _not_ just be us waiting on the signer; for example, if the commitment order requires sending the RAA after the commitment update, then even though we _might_ be able to generate the RAA (e.g., because we have the secret), we will not do so while the commitment update is pending (e.g., because we're missing the latest point or do not have a signed counterparty commitment).
- `signer_pending_channel_ready`, set when we're waiting to send the channel-ready to the counterparty.
- `signer_pending_commitment_point`, set when we're waiting for the signer to return the commitment point for the current state.
- `signer_pending_released_secret`, set when we're waiting for the signer to release the commitment secret for the previous state.

This state (current commitment point, previous secret, flags) is persisted in the channel monitor and restored when the channel is deserialized.

When a monitor update completes, we may still be pending results from the remote signer. If that is the case, we ensure that we correctly maintain the above state flags before the channel state is resumed. For example, if the _monitor_ is pending a commitment signed, but we were not able to retrieve the commitment update from the signer, then we ensure that `signer_pending_commitment_update` is set.

When unblocking the signer, we need to ensure that we honor message ordering constraints. For the commitment update and revoke-and-ack, that means honoring the context's current `resend_order`. For example, assume we must send the RAA before the CU, and could potentially send the CU because we have the commitment point and a signed counterparty commitment transaction, _but_ we have not yet received the previous state's commitment secret. In this case, we ensure that no messages are emitted until the commitment secret is released, at which point the signer-unblocked call will emit both messages. A similar situation exists with the `channel_ready` message and the `funding_signed` / `funding_created` messages at channel startup: we make sure that we don't emit `channel_ready` before the funding message.

There is at least one unsolved problem here: during channel re-establishment, we need to request an _arbitrary_ commitment point from the signer in order to verify that the counterparty is giving us a legitimate secret. For the time being, I've simply commented out this check; however, this is not a viable solution. There are a few options to consider here:

1. We could require that an asynchronous signer _cache_ the previous commitment points, so that any such request must necessarily succeed synchronously.
2. We could refactor this method to admit an asynchronous response that would restart the channel re-establishment once the commitment point has been provided by the signer and the counterparty's secret can be verified.

The former places some additional (albeit minimal) burden on a remote signer and seems reasonable to me.

As for testing: to test asynchronous channel signing, this replaces the simple boolean ("everything is on, or everything is off") with flags that let us toggle individual signing methods on and off. This lets us (sort of) simulate the signer returning responses in an arbitrary order. It also adds a fully-exploded "send a one-hop payment" test to the async test suite. At each step, we simulate the async signer being unavailable and then unblocking, varying the order to check all possible orderings of `get_per_commitment_point`, `release_commitment_secret`, and `sign_counterparty_commitment` being provided by the signer.

But there is a lot more to be done here. Many of the odd-ball cases in this PR _aren't_ covered by unit tests and were instead uncovered by running the code in situ with an LND counterparty. So there are a lot more tests to write here.
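
To make the ordering gate concrete, here is a minimal, self-contained sketch of the rule described above. The types and the function are illustrative stand-ins of my own, not the actual LDK structures; the real logic lives in the channel state machine in lightning/src/ln/channel.rs.

// Illustrative stand-ins only; the real types and logic live in
// lightning/src/ln/channel.rs.
#[derive(Clone, Copy)]
enum RAACommitmentOrder {
    CommitmentFirst,
    RevokeAndACKFirst,
}

#[derive(Debug, PartialEq)]
enum OutboundMsg {
    CommitmentUpdate,
    RevokeAndACK,
}

// Decide what may be sent once the signer unblocks, given which pieces are
// available. If the resend order says "RAA first" but the previous commitment
// secret has not been released, nothing is emitted: sending the commitment
// update alone would violate the required ordering.
fn messages_to_send(
    order: RAACommitmentOrder,
    have_commitment_update: bool, // current point + signed counterparty commitment in hand
    have_released_secret: bool,   // previous state's secret released by the signer
) -> Vec<OutboundMsg> {
    let mut out = Vec::new();
    match order {
        RAACommitmentOrder::RevokeAndACKFirst => {
            if have_released_secret {
                out.push(OutboundMsg::RevokeAndACK);
                if have_commitment_update {
                    out.push(OutboundMsg::CommitmentUpdate);
                }
            } // else: hold both until the secret is released
        },
        RAACommitmentOrder::CommitmentFirst => {
            if have_commitment_update {
                out.push(OutboundMsg::CommitmentUpdate);
                if have_released_secret {
                    out.push(OutboundMsg::RevokeAndACK);
                }
            } // else: hold both until the commitment update is ready
        },
    }
    out
}

fn main() {
    // The scenario from the text: the RAA must precede the commitment update,
    // the update is ready, but the secret has not been released -- emit nothing.
    assert!(messages_to_send(RAACommitmentOrder::RevokeAndACKFirst, true, false).is_empty());
    // Once the secret is released, both messages go out in the required order.
    assert_eq!(
        messages_to_send(RAACommitmentOrder::RevokeAndACKFirst, true, true),
        vec![OutboundMsg::RevokeAndACK, OutboundMsg::CommitmentUpdate]
    );
    // With the opposite order, the commitment update may go out on its own.
    assert_eq!(
        messages_to_send(RAACommitmentOrder::CommitmentFirst, true, false),
        vec![OutboundMsg::CommitmentUpdate]
    );
}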
1 parent ebb155b commit f95d24d

File tree

10 files changed: +956 additions, -274 deletions


fuzz/src/chanmon_consistency.rs

Lines changed: 1 addition & 1 deletion
@@ -266,7 +266,7 @@ impl SignerProvider for KeyProvider {
 			inner,
 			state,
 			disable_revocation_policy_check: false,
-			available: Arc::new(Mutex::new(true)),
+			unavailable: Arc::new(Mutex::new(0)),
 		})
 	}

lightning/src/chain/channelmonitor.rs

Lines changed: 1 addition & 3 deletions
@@ -2832,9 +2832,7 @@ impl<Signer: WriteableEcdsaChannelSigner> ChannelMonitorImpl<Signer> {
 				},
 				commitment_txid: htlc.commitment_txid,
 				per_commitment_number: htlc.per_commitment_number,
-				per_commitment_point: self.onchain_tx_handler.signer.get_per_commitment_point(
-					htlc.per_commitment_number, &self.onchain_tx_handler.secp_ctx,
-				),
+				per_commitment_point: htlc.per_commitment_point,
 				htlc: htlc.htlc,
 				preimage: htlc.preimage,
 				counterparty_sig: htlc.counterparty_sig,

lightning/src/chain/onchaintx.rs

Lines changed: 8 additions & 0 deletions
@@ -179,6 +179,7 @@ pub(crate) struct ExternalHTLCClaim {
 	pub(crate) htlc: HTLCOutputInCommitment,
 	pub(crate) preimage: Option<PaymentPreimage>,
 	pub(crate) counterparty_sig: Signature,
+	pub(crate) per_commitment_point: bitcoin::secp256k1::PublicKey,
 }
 
 // Represents the different types of claims for which events are yielded externally to satisfy said
@@ -1192,9 +1193,16 @@ impl<ChannelSigner: WriteableEcdsaChannelSigner> OnchainTxHandler<ChannelSigner>
 			})
 			.map(|(htlc_idx, htlc)| {
 				let counterparty_htlc_sig = holder_commitment.counterparty_htlc_sigs[htlc_idx];
+
+				// TODO(waterson) fallible: move this somewhere!
+				let per_commitment_point = self.signer.get_per_commitment_point(
+					trusted_tx.commitment_number(), &self.secp_ctx,
+				).unwrap();
+
 				ExternalHTLCClaim {
 					commitment_txid: trusted_tx.txid(),
 					per_commitment_number: trusted_tx.commitment_number(),
+					per_commitment_point: per_commitment_point,
 					htlc: htlc.clone(),
 					preimage: *preimage,
 					counterparty_sig: counterparty_htlc_sig,

lightning/src/ln/async_signer_tests.rs

Lines changed: 238 additions & 43 deletions
Large diffs are not rendered by default.

lightning/src/ln/channel.rs

Lines changed: 509 additions & 146 deletions
Large diffs are not rendered by default.

lightning/src/ln/channelmanager.rs

Lines changed: 97 additions & 40 deletions
@@ -2372,7 +2372,7 @@ where
 			.ok_or_else(|| APIError::APIMisuseError{ err: format!("Not connected to node: {}", their_network_key) })?;
 
 		let mut peer_state = peer_state_mutex.lock().unwrap();
-		let channel = {
+		let mut channel = {
 			let outbound_scid_alias = self.create_and_insert_outbound_scid_alias();
 			let their_features = &peer_state.latest_features;
 			let config = if override_config.is_some() { override_config.as_ref().unwrap() } else { &self.default_configuration };
@@ -2387,8 +2387,11 @@
 				},
 			}
 		};
-		let res = channel.get_open_channel(self.genesis_hash.clone());
-
+		let opt_msg = channel.get_open_channel(self.genesis_hash.clone());
+		if opt_msg.is_none() {
+			channel.signer_pending_open_channel = true;
+		}
+
 		let temporary_channel_id = channel.context.channel_id();
 		match peer_state.channel_by_id.entry(temporary_channel_id) {
 			hash_map::Entry::Occupied(_) => {
@@ -2401,10 +2404,13 @@
 			hash_map::Entry::Vacant(entry) => { entry.insert(ChannelPhase::UnfundedOutboundV1(channel)); }
 		}
 
-		peer_state.pending_msg_events.push(events::MessageSendEvent::SendOpenChannel {
-			node_id: their_network_key,
-			msg: res,
-		});
+		if let Some(msg) = opt_msg {
+			peer_state.pending_msg_events.push(events::MessageSendEvent::SendOpenChannel {
+				node_id: their_network_key,
+				msg,
+			});
+		};
+
 		Ok(temporary_channel_id)
 	}
 
@@ -5659,6 +5665,12 @@
 			emit_channel_ready_event!(pending_events, channel);
 		}
 
+
+		log_debug!(self.logger, "Outgoing message queue is:");
+		for msg in pending_msg_events.iter() {
+			log_debug!(self.logger, "  {:?}", msg);
+		}
+
 		htlc_forwards
 	}
 
@@ -5809,10 +5821,14 @@
 		let outbound_scid_alias = self.create_and_insert_outbound_scid_alias();
 		channel.context.set_outbound_scid_alias(outbound_scid_alias);
 
-		peer_state.pending_msg_events.push(events::MessageSendEvent::SendAcceptChannel {
-			node_id: channel.context.get_counterparty_node_id(),
-			msg: channel.accept_inbound_channel(),
-		});
+		match channel.accept_inbound_channel() {
+			Some(msg) =>
+				peer_state.pending_msg_events.push(events::MessageSendEvent::SendAcceptChannel {
+					node_id: channel.context.get_counterparty_node_id(),
+					msg
+				}),
+			None => channel.signer_pending_accept_channel = true,
+		};
 
 		peer_state.channel_by_id.insert(temporary_channel_id.clone(), ChannelPhase::UnfundedInboundV1(channel));
 
@@ -5964,10 +5980,15 @@
 		let outbound_scid_alias = self.create_and_insert_outbound_scid_alias();
 		channel.context.set_outbound_scid_alias(outbound_scid_alias);
 
-		peer_state.pending_msg_events.push(events::MessageSendEvent::SendAcceptChannel {
-			node_id: counterparty_node_id.clone(),
-			msg: channel.accept_inbound_channel(),
-		});
+		match channel.accept_inbound_channel() {
+			Some(msg) =>
+				peer_state.pending_msg_events.push(events::MessageSendEvent::SendAcceptChannel {
+					node_id: channel.context.get_counterparty_node_id(),
+					msg
+				}),
+			None => channel.signer_pending_accept_channel = true,
+		};
+
 		peer_state.channel_by_id.insert(channel_id, ChannelPhase::UnfundedInboundV1(channel));
 		Ok(())
 	}
@@ -6027,6 +6048,12 @@
 			Some(ChannelPhase::UnfundedInboundV1(inbound_chan)) => {
 				match inbound_chan.funding_created(msg, best_block, &self.signer_provider, &self.logger) {
 					Ok(res) => res,
+					Err((inbound_chan, ChannelError::Ignore(_))) => {
+						// If we get an `Ignore` error then something transient went wrong. Put the channel
+						// back into the table and bail.
+						peer_state.channel_by_id.insert(msg.temporary_channel_id, ChannelPhase::UnfundedInboundV1(inbound_chan));
+						return Ok(());
+					},
 					Err((mut inbound_chan, err)) => {
 						// We've already removed this inbound channel from the map in `PeerState`
 						// above so at this point we just need to clean up any lingering entries
@@ -6147,6 +6174,7 @@
 		match peer_state.channel_by_id.entry(msg.channel_id) {
 			hash_map::Entry::Occupied(mut chan_phase_entry) => {
 				if let ChannelPhase::Funded(chan) = chan_phase_entry.get_mut() {
+					log_debug!(self.logger, "<== channel_ready");
 					let announcement_sigs_opt = try_chan_phase_entry!(self, chan.channel_ready(&msg, &self.node_signer,
 						self.genesis_hash.clone(), &self.default_configuration, &self.best_block.read().unwrap(), &self.logger), chan_phase_entry);
 					if let Some(announcement_sigs) = announcement_sigs_opt {
@@ -6367,6 +6395,7 @@
 						_ => pending_forward_info
 					}
 				};
+				log_debug!(self.logger, "<== update_add_htlc: htlc_id={} amount_msat={}", msg.htlc_id, msg.amount_msat);
 				try_chan_phase_entry!(self, chan.update_add_htlc(&msg, pending_forward_info, create_pending_htlc_status, &self.fee_estimator, &self.logger), chan_phase_entry);
 			} else {
 				return try_chan_phase_entry!(self, Err(ChannelError::Close(
@@ -6392,6 +6421,7 @@
 		match peer_state.channel_by_id.entry(msg.channel_id) {
 			hash_map::Entry::Occupied(mut chan_phase_entry) => {
 				if let ChannelPhase::Funded(chan) = chan_phase_entry.get_mut() {
+					log_debug!(self.logger, "<== update_fulfill_htlc: htlc_id={}", msg.htlc_id);
 					let res = try_chan_phase_entry!(self, chan.update_fulfill_htlc(&msg), chan_phase_entry);
 					if let HTLCSource::PreviousHopData(prev_hop) = &res.0 {
 						peer_state.actions_blocking_raa_monitor_updates.entry(msg.channel_id)
@@ -6485,6 +6515,7 @@
 			hash_map::Entry::Occupied(mut chan_phase_entry) => {
 				if let ChannelPhase::Funded(chan) = chan_phase_entry.get_mut() {
 					let funding_txo = chan.context.get_funding_txo();
+					log_debug!(self.logger, "<== commitment_signed: {} htlcs", msg.htlc_signatures.len());
 					let monitor_update_opt = try_chan_phase_entry!(self, chan.commitment_signed(&msg, &self.logger), chan_phase_entry);
 					if let Some(monitor_update) = monitor_update_opt {
 						handle_new_monitor_update!(self, funding_txo.unwrap(), monitor_update, peer_state_lock,
@@ -6656,6 +6687,7 @@
 						&peer_state.actions_blocking_raa_monitor_updates, funding_txo,
 						*counterparty_node_id)
 				} else { false };
+				log_debug!(self.logger, "<== revoke_and_ack");
 				let (htlcs_to_fail, monitor_update_opt) = try_chan_phase_entry!(self,
 					chan.revoke_and_ack(&msg, &self.fee_estimator, &self.logger, mon_update_blocked), chan_phase_entry);
 				if let Some(monitor_update) = monitor_update_opt {
@@ -6997,28 +7029,51 @@
 
 		let unblock_chan = |phase: &mut ChannelPhase<SP>, pending_msg_events: &mut Vec<MessageSendEvent>| {
 			let node_id = phase.context().get_counterparty_node_id();
-			if let ChannelPhase::Funded(chan) = phase {
-				let msgs = chan.signer_maybe_unblocked(&self.logger);
-				if let Some(updates) = msgs.commitment_update {
-					pending_msg_events.push(events::MessageSendEvent::UpdateHTLCs {
-						node_id,
-						updates,
-					});
-				}
-				if let Some(msg) = msgs.funding_signed {
-					pending_msg_events.push(events::MessageSendEvent::SendFundingSigned {
-						node_id,
-						msg,
-					});
+			match phase {
+				ChannelPhase::Funded(chan) => {
+					let msgs = chan.signer_maybe_unblocked(&self.logger);
+					match (msgs.commitment_update, msgs.raa) {
+						(Some(cu), Some(raa)) if msgs.order == RAACommitmentOrder::CommitmentFirst => {
+							pending_msg_events.push(events::MessageSendEvent::UpdateHTLCs { node_id, updates: cu });
+							pending_msg_events.push(events::MessageSendEvent::SendRevokeAndACK { node_id, msg: raa });
+						},
+						(Some(cu), Some(raa)) if msgs.order == RAACommitmentOrder::RevokeAndACKFirst => {
+							pending_msg_events.push(events::MessageSendEvent::SendRevokeAndACK { node_id, msg: raa });
+							pending_msg_events.push(events::MessageSendEvent::UpdateHTLCs { node_id, updates: cu });
+						},
+						(Some(cu), _) => pending_msg_events.push(events::MessageSendEvent::UpdateHTLCs { node_id, updates: cu }),
+						(_, Some(raa)) => pending_msg_events.push(events::MessageSendEvent::SendRevokeAndACK { node_id, msg: raa }),
+						(_, _) => (),
+					};
+					if let Some(msg) = msgs.funding_signed {
+						pending_msg_events.push(events::MessageSendEvent::SendFundingSigned {
+							node_id,
+							msg,
+						});
+					}
+					if let Some(msg) = msgs.funding_created {
+						pending_msg_events.push(events::MessageSendEvent::SendFundingCreated {
+							node_id,
+							msg,
+						});
+					}
+					if let Some(msg) = msgs.channel_ready {
+						send_channel_ready!(self, pending_msg_events, chan, msg);
+					}
 				}
-				if let Some(msg) = msgs.funding_created {
-					pending_msg_events.push(events::MessageSendEvent::SendFundingCreated {
-						node_id,
-						msg,
-					});
+				ChannelPhase::UnfundedInboundV1(chan) => {
+					let msgs = chan.signer_maybe_unblocked(&self.logger);
+					let node_id = phase.context().get_counterparty_node_id();
+					if let Some(msg) = msgs.accept_channel {
+						pending_msg_events.push(events::MessageSendEvent::SendAcceptChannel { node_id, msg });
+					}
 				}
-				if let Some(msg) = msgs.channel_ready {
-					send_channel_ready!(self, pending_msg_events, chan, msg);
+				ChannelPhase::UnfundedOutboundV1(chan) => {
+					let msgs = chan.signer_maybe_unblocked(&self.genesis_hash, &self.logger);
+					let node_id = phase.context().get_counterparty_node_id();
+					if let Some(msg) = msgs.open_channel {
+						pending_msg_events.push(events::MessageSendEvent::SendOpenChannel { node_id, msg });
+					}
 				}
 			}
 		};
@@ -8318,11 +8373,13 @@
 		let mut peer_state_lock = peer_state_mutex_opt.unwrap().lock().unwrap();
 		let peer_state = &mut *peer_state_lock;
 		if let Some(ChannelPhase::UnfundedOutboundV1(chan)) = peer_state.channel_by_id.get_mut(&msg.channel_id) {
-			if let Ok(msg) = chan.maybe_handle_error_without_close(self.genesis_hash, &self.fee_estimator) {
-				peer_state.pending_msg_events.push(events::MessageSendEvent::SendOpenChannel {
-					node_id: *counterparty_node_id,
-					msg,
-				});
+			if let Ok(opt_msg) = chan.maybe_handle_error_without_close(self.genesis_hash, &self.fee_estimator) {
+				if let Some(msg) = opt_msg {
+					peer_state.pending_msg_events.push(events::MessageSendEvent::SendOpenChannel {
+						node_id: *counterparty_node_id,
+						msg,
+					});
+				}
 				return;
 			}
 		}
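
For reference, here is the shape of the resume-updates structs implied by the closure above, sketched with placeholder message types. The field names match the `msgs.*` accesses in the diff; the struct names for the unfunded variants are my guesses, and the real definitions (using LDK's `msgs::*` types) live in lightning/src/ln/channel.rs.

#![allow(dead_code)]

// Placeholder message types for the sketch only.
struct CommitmentUpdate;
struct RevokeAndACK;
struct FundingSigned;
struct FundingCreated;
struct ChannelReady;
struct AcceptChannel;
struct OpenChannel;

enum RAACommitmentOrder {
    CommitmentFirst,
    RevokeAndACKFirst,
}

// Returned by a funded channel's `signer_maybe_unblocked`: each field is `Some`
// only if that message became sendable when the signer delivered its result,
// and `order` says which of the first two must be emitted first.
struct SignerResumeUpdates {
    commitment_update: Option<CommitmentUpdate>,
    raa: Option<RevokeAndACK>,
    order: RAACommitmentOrder,
    funding_signed: Option<FundingSigned>,
    funding_created: Option<FundingCreated>,
    channel_ready: Option<ChannelReady>,
}

// The unfunded inbound/outbound variants each have only one message to resume.
struct UnfundedInboundV1SignerResumeUpdates {
    accept_channel: Option<AcceptChannel>,
}

struct UnfundedOutboundV1SignerResumeUpdates {
    open_channel: Option<OpenChannel>,
}

fn main() {
    // e.g. only the RAA became available, to be sent after the (still pending)
    // commitment update according to the recorded order:
    let updates = SignerResumeUpdates {
        commitment_update: None,
        raa: Some(RevokeAndACK),
        order: RAACommitmentOrder::CommitmentFirst,
        funding_signed: None,
        funding_created: None,
        channel_ready: None,
    };
    assert!(updates.commitment_update.is_none() && updates.raa.is_some());
}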

lightning/src/ln/functional_test_utils.rs

Lines changed: 14 additions & 11 deletions
@@ -32,6 +32,8 @@ use crate::util::config::{UserConfig, MaxDustHTLCExposure};
 use crate::util::ser::{ReadableArgs, Writeable};
 #[cfg(test)]
 use crate::util::logger::Logger;
+#[cfg(test)]
+use crate::util::test_channel_signer::ops;
 
 use bitcoin::blockdata::block::{Block, BlockHeader};
 use bitcoin::blockdata::transaction::{Transaction, TxOut};
@@ -438,14 +440,14 @@ impl<'a, 'b, 'c> Node<'a, 'b, 'c> {
 	pub fn get_block_header(&self, height: u32) -> BlockHeader {
 		self.blocks.lock().unwrap()[height as usize].0.header
 	}
+
 	/// Changes the channel signer's availability for the specified peer and channel.
 	///
 	/// When `available` is set to `true`, the channel signer will behave normally. When set to
 	/// `false`, the channel signer will act like an off-line remote signer and will return `Err` for
-	/// several of the signing methods. Currently, only `get_per_commitment_point` and
-	/// `release_commitment_secret` are affected by this setting.
+	/// several of the signing methods.
 	#[cfg(test)]
-	pub fn set_channel_signer_available(&self, peer_id: &PublicKey, chan_id: &ChannelId, available: bool) {
+	pub fn set_channel_signer_ops_available(&self, peer_id: &PublicKey, chan_id: &ChannelId, mask: u32, available: bool) {
 		let per_peer_state = self.node.per_peer_state.read().unwrap();
 		let chan_lock = per_peer_state.get(peer_id).unwrap().lock().unwrap();
 		let signer = (|| {
@@ -454,8 +456,9 @@
 				None => panic!("Couldn't find a channel with id {}", chan_id),
 			}
 		})();
-		log_debug!(self.logger, "Setting channel signer for {} as available={}", chan_id, available);
-		signer.as_ecdsa().unwrap().set_available(available);
+		log_debug!(self.logger, "Setting channel signer for {} as {}available for {} (mask={})",
+			chan_id, if available { "" } else { "un" }, ops::string_from(mask), mask);
+		signer.as_ecdsa().unwrap().set_ops_available(mask, available);
 	}
 }
 
@@ -3206,15 +3209,15 @@ pub fn reconnect_nodes<'a, 'b, 'c, 'd>(args: ReconnectArgs<'a, 'b, 'c, 'd>) {
 				} else { panic!("Unexpected event! {:?}", announcement_event[0]); }
 			}
 		} else {
-			assert!(chan_msgs.0.is_none());
+			assert!(chan_msgs.0.is_none(), "did not expect to have a ChannelReady for node 1");
 		}
 		if pending_raa.0 {
 			assert!(chan_msgs.3 == RAACommitmentOrder::RevokeAndACKFirst);
 			node_a.node.handle_revoke_and_ack(&node_b.node.get_our_node_id(), &chan_msgs.1.unwrap());
 			assert!(node_a.node.get_and_clear_pending_msg_events().is_empty());
 			check_added_monitors!(node_a, 1);
 		} else {
-			assert!(chan_msgs.1.is_none());
+			assert!(chan_msgs.1.is_none(), "did not expect to have a RevokeAndACK for node 1");
 		}
 		if pending_htlc_adds.0 != 0 || pending_htlc_claims.0 != 0 || pending_htlc_fails.0 != 0 ||
 			pending_cell_htlc_claims.0 != 0 || pending_cell_htlc_fails.0 != 0 ||
@@ -3247,7 +3250,7 @@
 				check_added_monitors!(node_b, if pending_responding_commitment_signed_dup_monitor.0 { 0 } else { 1 });
 			}
 		} else {
-			assert!(chan_msgs.2.is_none());
+			assert!(chan_msgs.2.is_none(), "did not expect to have commitment updates for node 1");
 		}
 	}
 
@@ -3264,15 +3267,15 @@
 			}
 		}
 	} else {
-		assert!(chan_msgs.0.is_none());
+		assert!(chan_msgs.0.is_none(), "did not expect to have a ChannelReady for node 2");
 	}
 	if pending_raa.1 {
 		assert!(chan_msgs.3 == RAACommitmentOrder::RevokeAndACKFirst);
 		node_b.node.handle_revoke_and_ack(&node_a.node.get_our_node_id(), &chan_msgs.1.unwrap());
 		assert!(node_b.node.get_and_clear_pending_msg_events().is_empty());
 		check_added_monitors!(node_b, 1);
 	} else {
-		assert!(chan_msgs.1.is_none());
+		assert!(chan_msgs.1.is_none(), "did not expect to have a RevokeAndACK for node 2");
 	}
 	if pending_htlc_adds.1 != 0 || pending_htlc_claims.1 != 0 || pending_htlc_fails.1 != 0 ||
 		pending_cell_htlc_claims.1 != 0 || pending_cell_htlc_fails.1 != 0 ||
@@ -3305,7 +3308,7 @@
 			check_added_monitors!(node_a, if pending_responding_commitment_signed_dup_monitor.1 { 0 } else { 1 });
 		}
 	} else {
-		assert!(chan_msgs.2.is_none());
+		assert!(chan_msgs.2.is_none(), "did not expect to have commitment updates for node 2");
 	}
 	}
 }
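
The `mask: u32` plumbing above (`ops::string_from(mask)`, `set_ops_available(mask, available)`, and the fuzz harness's `unavailable: Arc::new(Mutex::new(0))`) suggests a simple per-operation bitmask. Below is a hypothetical sketch of how such a mask could look; the constant names are my own assumptions, not necessarily those in `test_channel_signer::ops`.

// Hypothetical sketch of a per-operation availability mask, in the spirit of the
// `ops` module referenced above. Constant names are illustrative assumptions.
mod ops {
    pub const GET_PER_COMMITMENT_POINT: u32 = 1 << 0;
    pub const RELEASE_COMMITMENT_SECRET: u32 = 1 << 1;
    pub const SIGN_COUNTERPARTY_COMMITMENT: u32 = 1 << 2;

    pub fn string_from(mask: u32) -> String {
        let mut names = Vec::new();
        if mask & GET_PER_COMMITMENT_POINT != 0 { names.push("get_per_commitment_point"); }
        if mask & RELEASE_COMMITMENT_SECRET != 0 { names.push("release_commitment_secret"); }
        if mask & SIGN_COUNTERPARTY_COMMITMENT != 0 { names.push("sign_counterparty_commitment"); }
        names.join(" | ")
    }
}

struct TestSigner {
    // Bits set here mark operations that should behave as "unavailable" and
    // return `Err` until the test turns them back on.
    unavailable: u32,
}

impl TestSigner {
    fn set_ops_available(&mut self, mask: u32, available: bool) {
        if available { self.unavailable &= !mask; } else { self.unavailable |= mask; }
    }

    fn is_available(&self, op: u32) -> bool {
        self.unavailable & op == 0
    }
}

fn main() {
    let mut signer = TestSigner { unavailable: 0 };
    // Simulate the signer not yet being able to produce a commitment point,
    // while the other operations keep working.
    signer.set_ops_available(ops::GET_PER_COMMITMENT_POINT, false);
    assert!(!signer.is_available(ops::GET_PER_COMMITMENT_POINT));
    assert!(signer.is_available(ops::RELEASE_COMMITMENT_SECRET));
    println!("unavailable: {}", ops::string_from(signer.unavailable));
    // Later, "unblock" the signer for that operation.
    signer.set_ops_available(ops::GET_PER_COMMITMENT_POINT, true);
    assert!(signer.is_available(ops::GET_PER_COMMITMENT_POINT));
}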
