6 changes: 6 additions & 0 deletions protocols/kad/CHANGELOG.md
@@ -1,3 +1,9 @@
## Unreleased

- Make the number of closest peers returned in response to FIND_NODE requests configurable via `Config::set_num_closest_peers()`.
The default remains `K_VALUE` (20) for backward compatibility.
See [PR XXXX](https://github.com/libp2p/rust-libp2p/pull/6255)

## 0.49.0

- Remove no longer constructed GetRecordError::QuorumFailed.
26 changes: 19 additions & 7 deletions protocols/kad/src/behaviour.rs
@@ -437,6 +437,14 @@ impl Config {
self
}

/// Sets the number of closest peers to return in response to a request.
///
/// The default is `K_VALUE`.
pub fn set_num_closest_peers(&mut self, num_closest_peers: NonZeroUsize) -> &mut Self {
self.query_config.num_closest_peers = num_closest_peers;
self
}
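
The new setter follows the crate's chainable `&mut Self` builder style. A minimal std-only sketch of that pattern (`ConfigSketch` is a hypothetical name, not the libp2p API):

```rust
use std::num::NonZeroUsize;

// Illustration of a setter that mutates in place and returns `&mut Self`
// so several configuration calls can be chained on one value.
#[derive(Debug)]
struct ConfigSketch {
    num_closest_peers: NonZeroUsize,
}

impl ConfigSketch {
    fn new() -> Self {
        // Default mirrors K_VALUE (20).
        ConfigSketch {
            num_closest_peers: NonZeroUsize::new(20).expect("K_VALUE > 0"),
        }
    }

    fn set_num_closest_peers(&mut self, n: NonZeroUsize) -> &mut Self {
        self.num_closest_peers = n;
        self
    }
}

fn main() {
    let mut cfg = ConfigSketch::new();
    cfg.set_num_closest_peers(NonZeroUsize::new(30).unwrap());
    assert_eq!(cfg.num_closest_peers.get(), 30);
    println!("ok");
}
```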

/// Sets the time to wait before calling [`Behaviour::bootstrap`] after a new peer is inserted
/// in the routing table. This prevents cascading bootstrap requests when multiple peers are
/// inserted into the routing table "at the same time". This also allows waiting a little
@@ -729,24 +737,27 @@ where
where
K: Into<kbucket::Key<K>> + Into<Vec<u8>> + Clone,
{
self.get_closest_peers_inner(key, K_VALUE)
self.get_closest_peers_inner(key, self.queries.config().num_closest_peers)
}

/// Initiates an iterative query for the closest peers to the given key.
/// The expected number of responding peers is specified by `num_results`.
/// Note that the result is capped after exceeds K_VALUE
/// Note that the result is capped to the configured `num_closest_peers` (defaults to
/// `K_VALUE`).
///
/// The result of the query is delivered in a
/// [`Event::OutboundQueryProgressed`] with `result` [`QueryResult::GetClosestPeers`].
pub fn get_n_closest_peers<K>(&mut self, key: K, num_results: NonZeroUsize) -> QueryId
where
K: Into<kbucket::Key<K>> + Into<Vec<u8>> + Clone,
{
// The inner code never expect higher than K_VALUE results to be returned.
// And removing such cap will be tricky,
// The inner code never expects more than the configured num_closest_peers results to be
// returned, and removing this cap would be tricky,
// since it would involve forging a new key and additional requests.
// Hence bound to K_VALUE here to set clear expectation and prevent unexpected behaviour.
Review comment on lines 744 to 748:

Contributor:

As stated in this comment, having a cap higher than K_VALUE isn't trivial. It requires sending extra requests for forged keys.

The number of peers returned is a protocol limitation (it should be K_VALUE as per the spec).

The max cap shouldn't be increased without either:

  1. A protocol change
  2. Implementing a mechanism for exploring a keyspace area, which includes additional requests for forged keys.

Author (noamsp-starkware, Feb 3, 2026), replying to the suggestion "@noamsp-starkware you may be interested in #5414 instead. Note that if you modify the bucket size, you should form a DHT swarm containing only peers with the same bucket size, and not try to join an existing DHT swarm with a different bucket size (e.g. the Amino DHT)":

I have tried modifying the bucket size; that didn't work.

Motivation for this change: we want to create a network of a few hundred nodes that forms a full clique. I ran 100 nodes with a bucket size of 100, and it seems I always had at least 2 nodes that couldn't find each other.

Contributor:

If you want a full clique (every peer connected to every other peer), you don't need a DHT; you only need a sharding algorithm (depending on what you are actually trying to achieve).

A DHT is for discovering peers, or discovering which peers are responsible for storing information related to a key. If you know all the peers, you don't need to discover which peer to send the request to.

let capped_num_results = std::cmp::min(num_results, K_VALUE);
// Hence bound to num_closest_peers here to set clear expectation and prevent unexpected
// behaviour.
let capped_num_results =
std::cmp::min(num_results, self.queries.config().num_closest_peers);
self.get_closest_peers_inner(key, capped_num_results)
}
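
The bound applied above can be illustrated with a std-only sketch; `cap_num_results` and `DEFAULT_CAP` are hypothetical names, not part of libp2p. `NonZeroUsize` implements `Ord`, so `std::cmp::min` compares the two limits directly:

```rust
use std::num::NonZeroUsize;

// Hypothetical stand-in for the configured num_closest_peers default (K_VALUE).
const DEFAULT_CAP: usize = 20;

// Cap a caller-supplied result count to a configured maximum,
// mirroring the `std::cmp::min` bound in `get_n_closest_peers`.
fn cap_num_results(requested: NonZeroUsize, cap: NonZeroUsize) -> NonZeroUsize {
    std::cmp::min(requested, cap)
}

fn main() {
    let cap = NonZeroUsize::new(DEFAULT_CAP).expect("cap > 0");
    // A request above the cap is silently bounded to the cap...
    assert_eq!(cap_num_results(NonZeroUsize::new(50).unwrap(), cap), cap);
    // ...while a request below it passes through unchanged.
    assert_eq!(
        cap_num_results(NonZeroUsize::new(5).unwrap(), cap),
        NonZeroUsize::new(5).unwrap()
    );
    println!("ok");
}
```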

@@ -784,10 +795,11 @@ where
key: &'a kbucket::Key<K>,
source: &'a PeerId,
) -> impl Iterator<Item = KadPeer> + 'a {
let num_closest_peers = self.queries.config().num_closest_peers;
self.kbuckets
.closest(key)
.filter(move |e| e.node.key.preimage() != source)
.take(K_VALUE.get())
.take(num_closest_peers.get())
.map(KadPeer::from)
}
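
The selection logic above (walk peers in distance order, drop the requester itself, keep at most `num_closest_peers`) can be sketched with plain integers standing in for peer IDs; all names here are illustrative, not libp2p API:

```rust
// From peers already ordered by XOR distance to the target key, exclude
// the requesting peer and take at most `num_closest_peers` entries,
// mirroring the `.filter(...).take(...)` chain in the handler.
fn closest_excluding(ordered_peers: &[u32], source: u32, num_closest_peers: usize) -> Vec<u32> {
    ordered_peers
        .iter()
        .copied()
        .filter(|p| *p != source)
        .take(num_closest_peers)
        .collect()
}

fn main() {
    // Peers 1..=6, pre-sorted by distance to the target.
    let peers = [1, 2, 3, 4, 5, 6];
    // Requester 2 is excluded; only the 3 closest remaining peers survive.
    assert_eq!(closest_excluding(&peers, 2, 3), vec![1, 3, 4]);
    println!("ok");
}
```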

58 changes: 57 additions & 1 deletion protocols/kad/src/behaviour/test.rs
@@ -434,7 +434,7 @@ fn unresponsive_not_returned_indirect() {
}

// Test the result of get_closest_peers with different num_results
// Note that the result is capped after exceeds K_VALUE
// Note that the result is capped to the configured num_closest_peers (defaults to K_VALUE)
#[test]
fn get_closest_with_different_num_results() {
let k_value = K_VALUE.get();
@@ -1680,3 +1680,59 @@ fn get_closest_peers_should_return_up_to_k_peers() {
}))
}
}

// Test that nodes respond with the configured num_closest_peers amount.
#[test]
fn get_closest_peers_should_return_configured_num_closest_peers() {
let k_value = K_VALUE.get();
// Test with values less than, equal to, and greater than K_VALUE
for num_closest_peers in [5, k_value, k_value * 2] {
// Should be enough nodes for every node to have >= num_closest_peers nodes in their routing table.
let num_of_nodes = 3 * num_closest_peers;

let mut cfg = Config::new(PROTOCOL_NAME);
cfg.set_num_closest_peers(NonZeroUsize::new(num_closest_peers).unwrap());

let swarms = build_connected_nodes_with_config(num_of_nodes, k_value, cfg);
let mut swarms = swarms
.into_iter()
.map(|(_addr, swarm)| swarm)
.collect::<Vec<_>>();

// Ask first node to search for a random peer.
let search_target = PeerId::random();
swarms[0].behaviour_mut().get_closest_peers(search_target);

let rt = Runtime::new().unwrap();
rt.block_on(poll_fn(move |ctx| {
for swarm in &mut swarms {
loop {
match swarm.poll_next_unpin(ctx) {
Poll::Ready(Some(SwarmEvent::Behaviour(
Event::OutboundQueryProgressed {
result: QueryResult::GetClosestPeers(Ok(ok)),
..
},
))) => {
assert_eq!(&ok.key[..], search_target.to_bytes().as_slice());
// Verify that we get the configured num_closest_peers amount.
assert_eq!(
ok.peers.len(),
num_closest_peers,
"Expected {} peers but got {}",
num_closest_peers,
ok.peers.len()
);
return Poll::Ready(());
}
// Ignore any other event.
Poll::Ready(Some(_)) => (),
e @ Poll::Ready(_) => panic!("Unexpected return value: {e:?}"),
Poll::Pending => break,
}
}
}
Poll::Pending
}))
}
}
8 changes: 7 additions & 1 deletion protocols/kad/src/query.rs
@@ -265,15 +265,21 @@ pub(crate) struct QueryConfig {
///
/// See [`crate::behaviour::Config::disjoint_query_paths`] for details.
pub(crate) disjoint_query_paths: bool,
/// The number of closest peers to return in response to a request.
///
/// See [`crate::behaviour::Config::set_num_closest_peers`] for details.
pub(crate) num_closest_peers: NonZeroUsize,
}

impl Default for QueryConfig {
fn default() -> Self {
let k_value = NonZeroUsize::new(K_VALUE.get()).expect("K_VALUE > 0");
QueryConfig {
timeout: Duration::from_secs(60),
replication_factor: NonZeroUsize::new(K_VALUE.get()).expect("K_VALUE > 0"),
replication_factor: k_value,
parallelism: ALPHA_VALUE,
disjoint_query_paths: false,
num_closest_peers: k_value,
}
}
}
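
The refactored `Default` impl computes the `NonZeroUsize` once and reuses it for both fields instead of unwrapping twice. A self-contained sketch, with a local `K_VALUE` constant standing in for libp2p's and `QueryConfigSketch` as a hypothetical name:

```rust
use std::num::NonZeroUsize;

// Stand-in for libp2p's K_VALUE constant (20, per the Kademlia paper).
const K_VALUE: usize = 20;

struct QueryConfigSketch {
    replication_factor: NonZeroUsize,
    num_closest_peers: NonZeroUsize,
}

impl Default for QueryConfigSketch {
    fn default() -> Self {
        // Validate the constant once, then share the resulting NonZeroUsize.
        let k_value = NonZeroUsize::new(K_VALUE).expect("K_VALUE > 0");
        QueryConfigSketch {
            replication_factor: k_value,
            num_closest_peers: k_value,
        }
    }
}

fn main() {
    let cfg = QueryConfigSketch::default();
    // Both limits default to K_VALUE, preserving backward compatibility.
    assert_eq!(cfg.replication_factor.get(), 20);
    assert_eq!(cfg.num_closest_peers.get(), 20);
    println!("ok");
}
```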