Replies: 1 comment
Care to provide more details, such as your stack traces or profiling data? Without further information, we cannot investigate what your setup looks like.
Hello, I'm using Lettuce as a client for cache storage. I store protobuf objects in Redis, serialized to bytes and compressed (zipped). The average value size is 50 KB. Day to day everything works fine; we use a Netty epoll event loop group for Lettuce.
Lettuce creates 24 lettuce-epollEventLoop threads for itself in a 4-core container.
Sometimes, after high load on some pods, profiling shows that threads in the lettuce-epollEventLoop pool are busy inside RedisCodec.encode(). I also see in async-profiler that io.lettuce.core.protocol.SharedLock#doExclusive is called before RedisCodec.encode(). Am I correct that if the operation in RedisCodec.encode() is time-consuming, then this Lettuce connection makes no progress on other reads from the socket because of the exclusive lock? Maybe it would be a good idea to move our serialization/deserialization logic out of the codec and into the business layer of our application (handled on a separate worker pool). Would that increase Lettuce's throughput?
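To illustrate what I mean by moving the heavy work out of the codec, here is a rough sketch of the idea, assuming a plain ByteArrayCodec and our own worker pool (the class name, zip() helper, and pool are illustrative placeholders, not our actual code):

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.api.StatefulRedisConnection;
import io.lettuce.core.codec.ByteArrayCodec;

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionStage;
import java.util.concurrent.Executor;

// Sketch: keep the codec trivial (byte[] in, byte[] out) so RedisCodec.encode()
// stays cheap on the event loop, and do protobuf serialization + compression
// on our own worker pool instead.
public class CacheWriter {

    private final StatefulRedisConnection<byte[], byte[]> connection;
    private final Executor serializationPool; // our pool, not a Lettuce/Netty thread

    public CacheWriter(RedisClient client, Executor serializationPool) {
        this.connection = client.connect(ByteArrayCodec.INSTANCE);
        this.serializationPool = serializationPool;
    }

    public CompletionStage<String> put(byte[] key, com.google.protobuf.Message message) {
        // Serialize and zip off the event loop; only raw bytes reach the codec.
        return CompletableFuture
                .supplyAsync(() -> zip(message.toByteArray()), serializationPool)
                .thenCompose(bytes -> connection.async().set(key, bytes));
    }

    private byte[] zip(byte[] raw) {
        // Placeholder for the existing compression step.
        return raw;
    }
}
```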
This is how I tell Lettuce to use a Netty-based worker pool:
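Roughly along these lines (a minimal sketch rather than the exact snippet; the thread-pool sizes are illustrative):

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.resource.ClientResources;
import io.lettuce.core.resource.DefaultClientResources;

public class LettuceSetup {

    // Rough sketch: a shared ClientResources so the Netty epoll I/O pool is
    // sized explicitly instead of defaulting to a multiple of the CPU count
    // reported to the JVM.
    public static RedisClient createClient() {
        ClientResources resources = DefaultClientResources.builder()
                .ioThreadPoolSize(4)          // epoll event loop threads (illustrative)
                .computationThreadPoolSize(4) // computation/callback threads (illustrative)
                .build();

        return RedisClient.create(resources, "redis://localhost:6379");
    }
}
```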
I can see that only a few (5-6) threads from lettuce-epollEventLoop are 100% busy in RedisCodec.encode(); the others are not in use.