Description
As far as I can see, the connection pool limit is not enforced precisely. With the attached test script, which runs 100 concurrent async tasks against a pool limit of 10, I see around 19 connections actually established.
The script is a little odd because it is AI generated, but I fixed up the important parts. For example, with a limit of 10 the script actually establishes 19 connections. I first noticed this with Redis, so the script is Redis based.
Statistics:
Total tasks created: 432
Total operations: 432
Total errors: 0
Duration: 18.61 seconds
Operations/sec: 23.21
Connection Pool Information:
Max connections configured: 10
Current pool size: 19
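For reference, here is a reduced sketch of the kind of reproduction, written against `Async::Pool` directly instead of Redis. It is not the attached script; `CountingResource` is a hypothetical helper that exists only to count constructions, and it assumes Ruby 3 with Async 2 so that plain `sleep` is non-blocking. Under the behaviour described above, the created count can end up well above the limit:

```ruby
require "async"
require "async/pool/controller"
require "async/pool/resource"

# Hypothetical resource class that counts how many instances the pool constructs.
class CountingResource < Async::Pool::Resource
  class << self
    attr_accessor :created
  end
  self.created = 0

  def self.call
    self.created += 1
    new
  end
end

Async do
  pool = Async::Pool::Controller.new(CountingResource, limit: 10)

  # 100 concurrent tasks, each briefly holding a resource:
  tasks = 100.times.map do
    Async do
      pool.acquire do |resource|
        sleep(0.01) # simulate a small amount of work while holding the resource
      end
    end
  end

  tasks.each(&:wait)

  puts "limit: 10, resources created: #{CountingResource.created}"
  pool.close
end
```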
I pinned the issue down to the default `concurrency` option of `Async::Pool::Controller`.
It defaults to `limit || 1`, which under high load will almost certainly create more resources than the requested limit.
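To make that concrete: given the `limit || 1` default, constructing a pool with only a limit is (if I read it correctly) the same as also asking for that much construction concurrency:

```ruby
# With the default described above, these two calls should be equivalent:
pool = Async::Pool::Controller.new(Async::Pool::Resource, limit: 10)
pool = Async::Pool::Controller.new(Async::Pool::Resource, limit: 10, concurrency: 10)
# i.e. up to 10 resources may be under construction at the same time,
# in addition to resources that already exist in the pool.
```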
I think the solution to prevent user surprise could be one of the following:
- change the default `concurrency` to one; with that setting the limit is properly enforced, and document that a higher `concurrency` may result in more resources being created than expected (users can already opt into this behaviour explicitly, see the workaround sketch after this list)
- make sure `concurrency` is adjusted based on the existing number of resources so that it never exceeds the specified maximum

Of course the latter solution would be much preferable.
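In the meantime, a workaround from the user side is to pass `concurrency: 1` explicitly, which is essentially what the first option would make the default (and, if async-redis forwards client options to the pool, the same keyword should apply there, though I have not verified that):

```ruby
# Workaround sketch: serialize resource construction so the pool cannot
# start new constructions while others are still in flight.
pool = Async::Pool::Controller.new(Async::Pool::Resource, limit: 10, concurrency: 1)
```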
P.S. I also see an old issue, socketry/async-redis#20, where the OP observed more and more resources being created when there is no limit. I'm not sure why this happens, but I observed something similar. Maybe that should be a separate issue; I'm not sure whether it would be addressed here or in async-redis.