
Commit cc028f4

Speed up remove_stale_channels_and_tracking nontrivially
During startup, the lightning protocol forces us to fetch a ton of gossip for channels where there is a `channel_update` in only one direction. We then have to wait a while before we can prune that data, because we don't know when the gossip sync has completed.

Sadly, doing a large prune via `remove_stale_channels_and_tracking` is somewhat slow: removing a large portion of our graph currently takes a bit more than 7.5 seconds on an i9-14900K, which can effectively hang a node with a few fewer GHz more or less indefinitely. The bulk of this time is spent in our `IndexedMap` removals, where we walk the entire `keys` `Vec` to find each entry, then shift it down after removing.

Here we switch to a bulk removal model when removing channels, doing a single `Vec` iterate + shift. This reduces the same test to around 1.38 seconds on the same hardware.
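The speedup described above comes from replacing one `Vec::remove`-style scan per deleted key (O(n) each, O(n·m) overall) with a single `retain` pass over the keys (O(n + m)). A minimal sketch of that difference, using hypothetical helper names rather than LDK's actual API:

```rust
use std::collections::HashSet;

// Per-element removal: each call scans `keys` to find the entry, then
// shifts the tail down -- O(n) per removal, O(n * m) for m removals.
fn remove_one_by_one(keys: &mut Vec<u64>, to_remove: &HashSet<u64>) {
    for &k in to_remove {
        if let Some(pos) = keys.iter().position(|&x| x == k) {
            keys.remove(pos); // shifts every later element down each time
        }
    }
}

// Bulk removal: one iterate-and-shift pass over `keys` -- O(n + m).
fn remove_bulk(keys: &mut Vec<u64>, to_remove: &HashSet<u64>) {
    keys.retain(|k| !to_remove.contains(k));
}

fn main() {
    let mut a: Vec<u64> = (0..10).collect();
    let mut b = a.clone();
    let stale: HashSet<u64> = [2, 5, 7].into_iter().collect();

    remove_one_by_one(&mut a, &stale);
    remove_bulk(&mut b, &stale);

    // Both approaches produce the same result; only the work done differs.
    assert_eq!(a, b);
    println!("{:?}", b); // [0, 1, 3, 4, 6, 8, 9]
}
```

The asymptotic gap explains why a prune that removes a large fraction of the graph dominates the measured runtime: with hundreds of thousands of channels, the per-element variant redoes the shift work for every removed key.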
1 parent c71334e · commit cc028f4

File tree

2 files changed: +20 −9 lines


lightning/src/routing/gossip.rs

Lines changed: 8 additions & 9 deletions
```diff
@@ -2356,9 +2356,7 @@ where
 			return;
 		}
 		let min_time_unix: u32 = (current_time_unix - STALE_CHANNEL_UPDATE_AGE_LIMIT_SECS) as u32;
-		// Sadly BTreeMap::retain was only stabilized in 1.53 so we can't switch to it for some
-		// time.
-		let mut scids_to_remove = Vec::new();
+		let mut scids_to_remove = new_hash_set();
 		for (scid, info) in channels.unordered_iter_mut() {
 			if info.one_to_two.is_some()
 				&& info.one_to_two.as_ref().unwrap().last_update < min_time_unix
@@ -2382,18 +2380,19 @@ where
 				if announcement_received_timestamp < min_time_unix as u64 {
 					log_gossip!(self.logger, "Removing channel {} because both directional updates are missing and its announcement timestamp {} being below {}",
 						scid, announcement_received_timestamp, min_time_unix);
-					scids_to_remove.push(*scid);
+					scids_to_remove.insert(*scid);
 				}
 			}
 		}
 		if !scids_to_remove.is_empty() {
 			let mut nodes = self.nodes.write().unwrap();
-			for scid in scids_to_remove {
-				let info = channels
-					.remove(&scid)
-					.expect("We just accessed this scid, it should be present");
+			let mut removed_channels_lck = self.removed_channels.lock().unwrap();
+
+			let channels_removed_bulk = channels.remove_fetch_bulk(&scids_to_remove);
+			removed_channels_lck.reserve(channels_removed_bulk.len());
+			for (scid, info) in channels_removed_bulk {
 				self.remove_channel_in_nodes(&mut nodes, &info, scid);
-				self.removed_channels.lock().unwrap().insert(scid, Some(current_time_unix));
+				removed_channels_lck.insert(scid, Some(current_time_unix));
 			}
 		}
```

lightning/src/util/indexed_map.rs

Lines changed: 12 additions & 0 deletions
```diff
@@ -72,6 +72,18 @@ impl<K: Clone + Hash + Ord, V> IndexedMap<K, V> {
 		ret
 	}
 
+	/// Removes elements with the given `keys` in bulk, returning the set of removed elements.
+	pub fn remove_fetch_bulk(&mut self, keys: &HashSet<K>) -> Vec<(K, V)> {
+		let mut res = Vec::with_capacity(keys.len());
+		for key in keys.iter() {
+			if let Some((k, v)) = self.map.remove_entry(key) {
+				res.push((k, v));
+			}
+		}
+		self.keys.retain(|k| !keys.contains(k));
+		res
+	}
+
 	/// Inserts the given `key`/`value` pair into the map, returning the element that was
 	/// previously stored at the given `key`, if one exists.
 	pub fn insert(&mut self, key: K, value: V) -> Option<V> {
```
