Replies: 1 comment
-
Hey @otbe, welcome back.
It's hard to say without a sample project reproducing the scenario, but most likely there's a bottleneck somewhere in your flow: the app is processing messages faster than new ones are being delivered, so the number of messages in flight never reaches the 60 / 80 limit. One usual suspect is the SqsAsyncClient's HTTP client — if I recall correctly, the SDK's Netty client defaults to a maximum of 50 concurrent connections, which can cap how fast messages are fetched. Raising it looks like this:
```java
SqsAsyncClient
    .builder()
    .httpClientBuilder(NettyNioAsyncHttpClient.builder()
        .maxConcurrency(1000))
    .build();
```
Let me know if that helps.
-
Hi there,
it's me again, and I have another question. Let's say we have two queues, two message listener containers, and a shared ThreadPoolTaskExecutor. Something similar to this
I would've expected to see at most 120 threads being used to process 60 messages from each queue. But this is not what is happening. Most of the time the number of active tasks in the ThreadPoolTaskExecutor is 60 (sometimes 61 or 62, but not higher). Both queues receive a large number of messages (~2k) at basically the same time, and my assumption is now that we process 60 messages (on 60 threads) across both queues combined.
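The original snippet didn't survive here, so as a stand-in, here is a hypothetical sketch of such a setup, assuming Spring Cloud AWS 3.x — the bean names, queue names, and pool sizes are all illustrative, not taken from the thread:

```java
import io.awspring.cloud.sqs.annotation.SqsListener;
import io.awspring.cloud.sqs.config.SqsMessageListenerContainerFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;
import software.amazon.awssdk.services.sqs.SqsAsyncClient;

@Configuration
class SqsContainerConfig {

    // One executor shared by both containers.
    @Bean
    ThreadPoolTaskExecutor sharedExecutor() {
        ThreadPoolTaskExecutor executor = new ThreadPoolTaskExecutor();
        executor.setCorePoolSize(120);
        executor.setMaxPoolSize(120);
        executor.setThreadNamePrefix("sqs-shared-");
        return executor;
    }

    // The factory creates one container per @SqsListener method;
    // both containers pick up the shared executor via the options.
    @Bean
    SqsMessageListenerContainerFactory<Object> sqsFactory(
            SqsAsyncClient client, ThreadPoolTaskExecutor sharedExecutor) {
        return SqsMessageListenerContainerFactory.builder()
                .sqsAsyncClient(client)
                .configure(options -> options
                        .maxConcurrentMessages(60)               // 60 in flight per container
                        .componentsTaskExecutor(sharedExecutor)) // shared across containers
                .build();
    }

    @SqsListener("queue-one")
    void listenOne(String message) {
        // application logic for queue one
    }

    @SqsListener("queue-two")
    void listenTwo(String message) {
        // application logic for queue two
    }
}
```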
I already played around with a lot of settings like the back pressure mode (setting it to always high throughput) but nothing really helps.
If I set max concurrent messages to 80/40, 80 threads are occupied. If I make it 40/40, 40 threads are occupied. If I add more instances of my application, all of them use the same number of threads (so there really are messages left in the queue to be processed). I feel like I've hit some hidden limitation, as if only one max-concurrent-messages value were applied across all containers.
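For reference, one capping behavior worth ruling out can be shown with a plain JDK ThreadPoolExecutor (the class Spring's ThreadPoolTaskExecutor wraps) — a hypothetical sketch, with `PoolDemo` and `activeWith` being my own names: when the work queue is unbounded, the pool never grows past its core size, so 120 submitted tasks still run on only 60 threads.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolDemo {

    // Submit `tasks` blocking tasks and report how many threads actually run them.
    static int activeWith(int core, int max, int tasks) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                core, max, 60, TimeUnit.SECONDS,
                new LinkedBlockingQueue<>()); // unbounded queue: pool stays at core size
        CountDownLatch started = new CountDownLatch(core);
        CountDownLatch release = new CountDownLatch(1);
        for (int i = 0; i < tasks; i++) {
            pool.execute(() -> {
                started.countDown();
                try {
                    release.await(); // block so tasks pile up
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        started.await(); // wait until `core` tasks are running
        int active = pool.getActiveCount();
        release.countDown();
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return active;
    }

    public static void main(String[] args) throws InterruptedException {
        // 120 tasks (two queues x 60 messages), core 60, max 120:
        System.out.println("active threads: " + activeWith(60, 120, 120)); // prints 60, not 120
    }
}
```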
Where is my conceptual misunderstanding of how things work?