
feat: Add Downloader::with_pool_options() constructor #213

Merged
rklaehn merged 3 commits into n0-computer:main from pefontana:pool-options
Mar 17, 2026

Conversation

Contributor

@pefontana pefontana commented Mar 11, 2026

Adds a Downloader::with_pool_options() constructor that allows configuring the internal ConnectionPool options.

Currently Downloader::new() hardcodes Default::default() for pool options, which means the idle_timeout is always 5 seconds. The ConnectionPool and Options types are already public, but there's no way to pass them through to the Downloader.

We're from the Psyche/Nousnet team. In our use case, peers exchange gradients via iroh-blobs. With the 5s idle timeout, every connection gets dropped between transfers and a new one is created for the next round.

On a devnet run with 6 peers, over 8 hours we see:

MaxPathIdReached warnings: 3,098
NAT traversal warnings:    3,344
Connection open/close cycles per hour: ~1,300
Handshake aborts: 31
2025-03-09T10:15:02 WARN iroh: MaxPathIdReached for connection ...
2025-03-09T10:15:02 WARN iroh: NAT traversal to ... via ... failed
2025-03-09T10:15:03 new connection to peer ...
2025-03-09T10:15:25 connection closed (idle timeout)
2025-03-09T10:15:25 new connection to peer ...  <- same peer, 22s later

The MaxPathIdReached happens because QUIC path IDs are monotonically increasing and never reused; constant connection churn burns through them.

Being able to set a longer idle timeout (e.g. 60s) would let the pool reuse connections across transfers, eliminating the churn entirely.
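For illustration, here is a minimal self-contained sketch of the intended call shape. The `PoolOptions` fields (`idle_timeout`, `connect_timeout`, `max_connections`) and the 5s default are taken from the description above, but the struct itself and the other default values are stand-ins, not the real iroh-blobs `Options` type:

```rust
use std::time::Duration;

/// Stand-in for the real connection pool options (assumed field types and
/// defaults, except the 5s idle timeout described in the PR).
#[derive(Debug, Clone)]
pub struct PoolOptions {
    pub idle_timeout: Duration,
    pub connect_timeout: Duration,
    pub max_connections: usize,
}

impl Default for PoolOptions {
    fn default() -> Self {
        Self {
            // The hardcoded default this PR makes configurable.
            idle_timeout: Duration::from_secs(5),
            connect_timeout: Duration::from_secs(10),
            max_connections: 32,
        }
    }
}

pub struct Downloader {
    opts: PoolOptions,
}

impl Downloader {
    /// Existing constructor: always uses the defaults.
    pub fn new() -> Self {
        Self::with_pool_options(PoolOptions::default())
    }

    /// The constructor this PR adds: caller-supplied pool options.
    pub fn with_pool_options(opts: PoolOptions) -> Self {
        Self { opts }
    }
}

fn main() {
    // A 60s idle timeout lets the pool keep connections alive across
    // the ~22s transfer rounds instead of reconnecting every round.
    let dl = Downloader::with_pool_options(PoolOptions {
        idle_timeout: Duration::from_secs(60),
        ..Default::default()
    });
    assert_eq!(dl.opts.idle_timeout, Duration::from_secs(60));
    assert_eq!(Downloader::new().opts.idle_timeout, Duration::from_secs(5));
    println!("pool idle_timeout = {:?}", dl.opts.idle_timeout);
}
```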

Here is how we plan to use it on NousNet
https://github.com/PsycheFoundation/nousnet/pull/600/changes

Feedback is really appreciated

Expose a way to create a Downloader with custom connection pool options
(idle_timeout, connect_timeout, max_connections) instead of always using
the hardcoded defaults. This allows callers to configure connection reuse
behavior to reduce connection churn in long-running transfers.
@n0bot n0bot bot added this to iroh Mar 11, 2026
@github-project-automation github-project-automation bot moved this to 🚑 Needs Triage in iroh Mar 11, 2026
Contributor

flub commented Mar 12, 2026

The MaxPathIdReached happens because QUIC path IDs are monotonically increasing and never reused, constant connection churn burns through them.

Ugh. The path IDs start from 0 for each new connection. So connection churn should help with this. I'm really surprised you see a single connection churn through 2**32 paths.

I don't have a lot of context but I suspect this might be the PathError::MaxPathIdReached error, which is the "maximum number of concurrent paths reached". That basically means we are probably setting the limit too low currently.

It would be great if you could file a separate issue for this and provide a bit more logs around it. Because I think that is part of the multipath teething problems.

(I'm leaving considering the actual PR to someone more familiar with blobs, this doesn't mean the PR is wrong)

@pefontana
Contributor Author

Thanks for the feedback, @flub!

Great, I didn't know path IDs start from 0 for each new connection, so we shouldn't be exhausting all path IDs. Our config uses set_max_remote_nat_traversal_addresses(50). We'll lower that to 12 on our side and see if it fixes the MaxPathIdReached warnings.

We will test it and let you know how it goes.

Regarding the changes in the PR:

I could not get the old logs with MaxPathIdReached errors, but I have some recent ones. The problem is that in Nousnet, clients exchange data every ~22s, so every connection gets dropped between transfers and recreated for the next round. With 6 peers that's ~1,300 open/close cycles per hour.

logs.txt

Being able to configure the pool's idle_timeout would let us reuse connections across transfers, eliminating the churn entirely.

@rklaehn rklaehn self-requested a review March 13, 2026 09:40
@rklaehn rklaehn changed the title Add Downloader::with_pool_options() constructor feat: Add Downloader::with_pool_options() constructor Mar 13, 2026
Collaborator

@rklaehn rklaehn left a comment


Looks pretty straightforward.

See the comments, but nothing major.

Also, we might want to just call it new_with_opts(...).

We use this pattern here and all over other crates: a "convenient" fn foo and a "configurable" fn foo_with_opts. We might need more options later, and we don't want to end up with a giant list of new_with_x_and_y fns.
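The convention described above can be sketched in plain Rust (hypothetical `Foo`/`Opts` names, not the actual iroh-blobs types): the convenient constructor delegates to the configurable one, and a single options struct absorbs future knobs without breaking existing call sites.

```rust
/// Hypothetical options struct; new fields can be added later
/// without introducing more constructors.
#[derive(Debug, Clone)]
pub struct Opts {
    pub idle_timeout_secs: u64,
    pub max_connections: usize,
}

impl Default for Opts {
    fn default() -> Self {
        Self { idle_timeout_secs: 5, max_connections: 32 }
    }
}

pub struct Foo {
    opts: Opts,
}

impl Foo {
    /// Convenient constructor: all defaults.
    pub fn new() -> Self {
        Self::new_with_opts(Opts::default())
    }

    /// Configurable constructor: caller supplies the options.
    pub fn new_with_opts(opts: Opts) -> Self {
        Self { opts }
    }
}

fn main() {
    // Struct-update syntax keeps call sites stable if Opts grows a field.
    let foo = Foo::new_with_opts(Opts { idle_timeout_secs: 60, ..Default::default() });
    assert_eq!(foo.opts.idle_timeout_secs, 60);
    assert_eq!(Foo::new().opts.max_connections, 32);
    println!("ok");
}
```

Callers override only what they need via `..Default::default()`, so adding a fourth or fifth option later is a non-breaking change.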

Signed-off-by: pefontana <fontana.pedro93@gmail.com>
@pefontana
Contributor Author

Thanks for the review @rklaehn !
Done: 97ed3a5

@pefontana pefontana requested a review from rklaehn March 13, 2026 18:47
Collaborator

rklaehn commented Mar 16, 2026

@pefontana now we get a dead code warning. I guess the internal thing doesn't need a fn new. Otherwise good to go.

Don't worry about the cargo deny - I will fix this in a separate PR.

Contributor Author

pefontana commented Mar 16, 2026

@pefontana now we get a dead code warning. I guess the internal thing doesn't need a fn new. Otherwise good to go.

Don't worry about the cargo deny - I will fix this in a separate PR.

Thanks @rklaehn !
Done 075d049

@rklaehn rklaehn merged commit 972927d into n0-computer:main Mar 17, 2026
24 of 25 checks passed
@github-project-automation github-project-automation bot moved this from 🚑 Needs Triage to ✅ Done in iroh Mar 17, 2026
Collaborator

rklaehn commented Mar 17, 2026

@pefontana FYI we just released a new iroh and iroh-blobs, and this made it in.
