ensure peer_connected is called before peer_disconnected #3110
Changes from 1 commit
```diff
@@ -1682,25 +1682,26 @@ impl<Descriptor: SocketDescriptor, CM: Deref, RM: Deref, OM: Deref, L: Deref, CM
 			peer_lock.sync_status = InitSyncTracker::ChannelsSyncing(0);
 		}

-		if let Err(()) = self.message_handler.route_handler.peer_connected(&their_node_id, &msg, peer_lock.inbound_connection) {
-			log_debug!(logger, "Route Handler decided we couldn't communicate with peer {}", log_pubkey!(their_node_id));
-			return Err(PeerHandleError { }.into());
-		}
-		if let Err(()) = self.message_handler.chan_handler.peer_connected(&their_node_id, &msg, peer_lock.inbound_connection) {
-			log_debug!(logger, "Channel Handler decided we couldn't communicate with peer {}", log_pubkey!(their_node_id));
-			return Err(PeerHandleError { }.into());
-		}
-		if let Err(()) = self.message_handler.onion_message_handler.peer_connected(&their_node_id, &msg, peer_lock.inbound_connection) {
-			log_debug!(logger, "Onion Message Handler decided we couldn't communicate with peer {}", log_pubkey!(their_node_id));
-			return Err(PeerHandleError { }.into());
+		let results = [
+			("Route Handler", self.message_handler.route_handler.peer_connected(&their_node_id, &msg, peer_lock.inbound_connection)),
+			("Channel Handler", self.message_handler.chan_handler.peer_connected(&their_node_id, &msg, peer_lock.inbound_connection)),
+			("Onion Handler", self.message_handler.onion_message_handler.peer_connected(&their_node_id, &msg, peer_lock.inbound_connection)),
+			("Custom Message Handler", self.message_handler.custom_message_handler.peer_connected(&their_node_id, &msg, peer_lock.inbound_connection))
+		];
+
+		for (handler_name, result) in &results {
+			if result.is_err() {
+				log_debug!(logger, "{} decided we couldn't communicate with peer {}", handler_name, log_pubkey!(their_node_id));
+			}
 		}
-		if let Err(()) = self.message_handler.custom_message_handler.peer_connected(&their_node_id, &msg, peer_lock.inbound_connection) {
-			log_debug!(logger, "Custom Message Handler decided we couldn't communicate with peer {}", log_pubkey!(their_node_id));

-		peer_lock.their_features = Some(msg.features);
```
Contributor

I'm a bit confused by this, wouldn't that lead to us falsely assuming the handshake succeeded even though one of our handlers rejected it? And there is a window between us dropping the lock and handling the disconnect event where we would deal with it in a 'normal' manner, e.g., accepting further messages, and potentially rebroadcasting etc.? (cc @TheBlueMatt as he requested this change)
Contributor (Author)

hm, if that's true then seems like we'll need to separate "handshake_completed" from "triggered peer_connected" with a new flag on the peer that we can use to decide whether or not to trigger peer_disconnected in
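A rough sketch of that idea (all names below are hypothetical and invented for illustration; they are not the actual per-peer fields in rust-lightning): track whether `peer_connected` has actually been delivered to the handlers separately from whether the handshake itself completed, and only fan out `peer_disconnected` when the former flag is set.

```rust
/// Hypothetical stand-in for the per-handler connect/disconnect callbacks.
trait ConnectionHooks {
	fn peer_disconnected(&mut self);
}

/// Hypothetical per-peer state; field names are invented for illustration.
struct PeerState {
	/// The Init exchange itself finished.
	handshake_completed: bool,
	/// `peer_connected` was actually invoked on every message handler.
	peer_connected_fired: bool,
}

impl PeerState {
	/// Only notify handlers of a disconnect if they were notified of the
	/// connect, keeping peer_connected/peer_disconnected calls balanced.
	fn notify_disconnect(&self, handlers: &mut [Box<dyn ConnectionHooks>]) {
		if self.peer_connected_fired {
			for handler in handlers.iter_mut() {
				handler.peer_disconnected();
			}
		}
	}
}
```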
Collaborator

Yea, kinda, we'll end up forwarding broadcasts, as you point out, which is maybe not ideal, but we shouldn't process any further messages - we're currently in a read processing call, and we require read processing calls for any given peer to be serial, so presumably when we return an error the read-processing pipeline for this peer will stall and we won't get any more reads. We could make that explicit in the docs, however.
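A generic illustration of that serial-read invariant (this is not rust-lightning's or lightning-net-tokio's actual code, just a sketch under the assumption that reads for a given peer are fed to the message pipeline one at a time): once a read-processing call returns an error, no further reads are fed for that peer, so the only remaining work is the disconnect.

```rust
/// Sketch of a per-peer read pump that feeds reads serially and stalls on error.
struct PeerReadPump {
	errored: bool,
}

impl PeerReadPump {
	fn new() -> Self { PeerReadPump { errored: false } }

	/// Feed one read to the message pipeline. After the first error, reads are
	/// dropped and the caller is expected to tear down the connection instead.
	fn push_read<F: FnOnce(&[u8]) -> Result<(), ()>>(&mut self, data: &[u8], process: F) {
		if self.errored {
			return;
		}
		if process(data).is_err() {
			self.errored = true;
		}
	}
}
```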
Contributor

Mhh, rather than introducing this race-y behavior in the first place, couldn't we just introduce a new
Collaborator

That's fine too.
```diff
+		if results.iter().any(|(_, result)| result.is_err()) {
 			return Err(PeerHandleError { }.into());
 		}

 		peer_lock.awaiting_pong_timer_tick_intervals = 0;
+		peer_lock.their_features = Some(msg.features);
 		return Ok(None);
 	} else if peer_lock.their_features.is_none() {
 		log_debug!(logger, "Peer {} sent non-Init first message", log_pubkey!(their_node_id));
```
Author

If you don't like this attempt to DRY up the handling then I'm fine just having separate results where I check them one by one with their own log messages.
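For reference, a sketch of that alternative (illustrative only, reusing the same handler calls and surrounding context as the diff above): each handler keeps its own result and log message, but the error return is still deferred until every handler has had `peer_connected` called, which is the ordering this PR is after.

```rust
// Illustrative only; assumes the same surrounding context (self, msg,
// peer_lock, logger) as the diff above.
let route_res = self.message_handler.route_handler.peer_connected(&their_node_id, &msg, peer_lock.inbound_connection);
if route_res.is_err() {
	log_debug!(logger, "Route Handler decided we couldn't communicate with peer {}", log_pubkey!(their_node_id));
}
let chan_res = self.message_handler.chan_handler.peer_connected(&their_node_id, &msg, peer_lock.inbound_connection);
if chan_res.is_err() {
	log_debug!(logger, "Channel Handler decided we couldn't communicate with peer {}", log_pubkey!(their_node_id));
}
let onion_res = self.message_handler.onion_message_handler.peer_connected(&their_node_id, &msg, peer_lock.inbound_connection);
if onion_res.is_err() {
	log_debug!(logger, "Onion Message Handler decided we couldn't communicate with peer {}", log_pubkey!(their_node_id));
}
let custom_res = self.message_handler.custom_message_handler.peer_connected(&their_node_id, &msg, peer_lock.inbound_connection);
if custom_res.is_err() {
	log_debug!(logger, "Custom Message Handler decided we couldn't communicate with peer {}", log_pubkey!(their_node_id));
}
// Every handler has now seen peer_connected, so a later peer_disconnected
// fan-out stays balanced; only now do we abort on any error.
if route_res.is_err() || chan_res.is_err() || onion_res.is_err() || custom_res.is_err() {
	return Err(PeerHandleError { }.into());
}
```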