mod_oauth2 redis implementation specifics around connection pooling, concurrency, tweaking #80
Unanswered
jlang1-dude asked this question in Q&A
Replies: 1 comment
Even though the locking implementation is quite different between mod_auth_openidc and liboauth2, it turns out they did indeed suffer from the same issue; it is addressed (again in a quite different way) here: cde2c4eb514415ee6b4ab3765fc1697710b81577
@zandbelt - tagging you because this is an identical question to one I asked over on the mod_auth_openidc project, where you helped me: OpenIDC/mod_auth_openidc#1340
We recently moved the mod_oauth2 backend cache from the file cache to Redis. This has been largely successful, except on one of our busier sites, where we see some bottlenecks.
After going up and down the network layer and Redis, we are pretty certain it's not a network or Redis issue. Redis continues to return all queries (read and write) in <1ms throughout, and the network latency between our Apache server(s) and Redis is also very low, ~1ms. No other metrics on the Redis side indicate any issues: low CPU, free memory, the network isn't saturated, etc. I've got plenty of other Redis DBs doing far more overall IOPS than this without issue. If we re-disable the Redis cache and go back to file caching, the issue goes away as well.
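For anyone reproducing this kind of diagnosis, a quick round-trip probe outside of Apache can help confirm the <1ms figure. Below is a stdlib-only Python sketch that times a Redis inline PING; to keep it self-contained it talks to a local dummy server, but the host/port can be pointed at a real Redis instance (redis-cli --latency does the same job when available). The dummy server and its port are assumptions for illustration, not part of the original setup.

```python
# Sketch: measure TCP round-trip latency of a Redis-style PING.
# The dummy server below stands in for Redis so the example is runnable
# anywhere; replace the host/port with your real Redis endpoint to measure
# the actual network path.
import socket
import threading
import time

def dummy_redis(server_sock):
    """Accept one connection and answer an inline PING like Redis would."""
    conn, _ = server_sock.accept()
    data = conn.recv(64)
    if data.startswith(b"PING"):
        conn.sendall(b"+PONG\r\n")
    conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=dummy_redis, args=(server,), daemon=True).start()

start = time.perf_counter()
with socket.create_connection(("127.0.0.1", port)) as s:
    s.sendall(b"PING\r\n")      # Redis inline command protocol
    reply = s.recv(64)
rtt_ms = (time.perf_counter() - start) * 1000
print(reply, f"{rtt_ms:.2f} ms")
```

Loop the timed section a few hundred times against the real server to get a latency distribution rather than a single sample.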
I'm trying to better understand if there are any tunables for connection pool size, concurrency, performance, etc.
If I'm using the event MPM with a ServerLimit of 8, does each apache/httpd process that spins up create its own individual connection pool for Redis? How many connections are in the pool, and is that tweakable or adjustable per process? I know that with some of our other busier apps that use Redis, we need to increase connection pools quite a bit for more concurrency. In this case, our Apache setup is purely a reverse proxy where we can see the server processing ~100-120 requests/sec. When traffic per host goes to 150+ requests/sec, Apache "locks up" with all threads stuck in "Writing". Currently we use a low ServerLimit with lots of threads (600 or so total), so we end up with 600 threads stuck writing. This recovers on its own and catches up with processing in 30-60 seconds, but we drop some connections in the meantime because all threads are full.
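For reference, the event MPM knobs in question look roughly like this. The numbers match what is described above (8 processes, ~600 worker threads total); the 75-threads-per-child split is a hypothetical reading of those figures, not a recommendation:

```apache
# Hypothetical event MPM settings matching the numbers above:
# 8 processes x 75 threads = 600 worker threads total.
<IfModule mpm_event_module>
    ServerLimit            8
    ThreadLimit           75
    ThreadsPerChild       75
    MaxRequestWorkers    600
</IfModule>
```

Note that none of these directives control the module's Redis connection pool; they only shape how many httpd processes and worker threads exist.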
If there aren't any direct connection-pooling tweakables I can see, I'm wondering whether I could hypothetically lower my thread counts and increase my server counts (assuming I have enough memory to handle all the extra httpd processes), or whether the architecture and implementation here mean I wouldn't in fact get better Redis performance overall. Is there anything else I should investigate or take a look at with regard to this?
Wondering if this is implemented "the same" as it was in mod_auth_openidc, or differently. Similarly, whether a change to the locking mechanism would be needed to hypothetically give me more concurrency with more httpd processes (the Apache ServerCount value), or if that is already handled?