Disclaimer: I believe I'm setting this up per the directions, but I'm posting it as a discussion first to validate that my setup actually follows the directions before calling it a bug. I hope it's not a bug, and I could easily just be doing something wrong.

Now here's my problem, my solution, and why I chose Micronaut Kafka listeners to do it. It works like this:

Consumer: how I attempted to solve this with Micronaut.
The topic is auto-created. It has one partition (I know I can split them off to three to solve this, more on that in a sec). I produce the messages in a straightforward way:
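Roughly, the straightforward producer looks like this sketch (the interface name, topic, and payload are illustrative assumptions, not the exact code):

```java
package com.krickert.search.download.dump;

import io.micronaut.configuration.kafka.annotation.KafkaClient;
import io.micronaut.configuration.kafka.annotation.Topic;

// Hypothetical producer interface; Micronaut generates the implementation at compile time.
@KafkaClient
public interface DownloadRequestProducer {

    // Fire-and-forget send to the auto-created, single-partition topic.
    @Topic("download-requests")
    void sendDownloadRequest(String downloadUrl);
}
```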
This creates the requests without any issues, and I can see the messages just fine in the Confluent Platform front end.
Link to documentation
So here's my configured class for a listener, in package com.krickert.search.download.dump:
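Roughly, a listener configured for three consumer threads looks like this sketch (the group id, topic, and method body are illustrative assumptions, not the original class):

```java
package com.krickert.search.download.dump;

import io.micronaut.configuration.kafka.annotation.KafkaListener;
import io.micronaut.configuration.kafka.annotation.OffsetReset;
import io.micronaut.configuration.kafka.annotation.Topic;

// Hypothetical listener: threads = 3 asks Micronaut to start three Kafka
// consumers in the same consumer group.
@KafkaListener(groupId = "download-dump", threads = 3, offsetReset = OffsetReset.EARLIEST)
public class DownloadRequestListener {

    @Topic("download-requests")
    public void receive(String downloadUrl) {
        // Long-running work per message, e.g. downloading the file.
    }
}
```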
As you can see, it follows all the steps per the directions. Now, this is what happens:
I'm positive there's got to be a quick fix for this. I'll be glad to debug the Micronaut code and dig deeper, but first I want to ensure that "I'm doing it right."

Additional points:
Then my resulting listener configuration:
(repeated 3x, since it is successfully creating 3 listeners as we expect...) Now, I start running the producers and only one of them is doing any work. I'm 100% convinced that I have one setting that's just a little off, or a few. Can anyone let me know what more I can do to test this out?
Well, so much for that adventure. I found two ways around it, and for now I'll run with partitions because it's more straightforward (although I'd rather just use a single partition, since some files might take a LONG time and others a very short time, which means I'm going to have hanging threads that are just listening to nothing).
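The first workaround, sketched roughly below: create the topic with three partitions and produce with a message key, so Kafka's default partitioner hashes the key across the partitions (the interface, topic name, and key choice are illustrative assumptions, not the exact code):

```java
package com.krickert.search.download.dump;

import io.micronaut.configuration.kafka.annotation.KafkaClient;
import io.micronaut.configuration.kafka.annotation.KafkaKey;
import io.micronaut.configuration.kafka.annotation.Topic;

// Hypothetical keyed producer for a topic created with 3 partitions.
// The default partitioner hashes the @KafkaKey value, so different keys
// land on different partitions and each listener thread gets its own partition.
@KafkaClient
public interface KeyedDownloadRequestProducer {

    @Topic("download-requests")
    void sendDownloadRequest(@KafkaKey String fileUrl, String downloadUrl);
}
```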
The key appropriately hashes into the 3 partition buckets, and then I don't have to do a thing with the threads; Kafka automagically assigns a partition to each one.

So I realized: the way it's set up now, I'm not sure how to have multiple consumer threads read from a single partition. That would be really cool to do, because it would prevent a situation like this with two partitions: File 1 is 1MB and the other file is much larger. Using the partition strategy above, each file lands on its own partition and its own consumer. Result: Thread 2 takes FOREVER to complete (with my ISP at least) and Thread 1 is done before I refresh this page.

I did find a strategy to have each thread point to a block of offsets (roughly the sketch at the end of this post), but it feels like I'm doing Kafka gymnastics if I do this.

So, just to see if anyone agrees: Kafka might not be the best messaging queue for this situation, right? Due to its in-order nature (which is awesome), trying to consume messages without caring about ordering seems like it isn't the best fit. Or am I wrong? Does Kafka have this capability, and if so, do Micronaut's helpers provide it? I suppose I can look under the hood at the code that does this; I'm willing to get my hands dirty to add it if someone can talk to me a little more about it. Or am I trying to fit a square peg in a round hole?
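For reference, the offset-block gymnastics mentioned above would look roughly like this with the plain Apache Kafka consumer API rather than Micronaut's listener support (topic name, offsets, and bootstrap server are illustrative assumptions):

```java
package com.krickert.search.download.dump;

import java.time.Duration;
import java.util.List;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

// Hypothetical "offset block" consumer: each thread assigns itself the single
// partition and seeks to the start of its own block of offsets, bypassing
// consumer-group assignment entirely.
public class OffsetBlockConsumer implements Runnable {

    private final long startOffset;
    private final long endOffset; // exclusive

    public OffsetBlockConsumer(long startOffset, long endOffset) {
        this.startOffset = startOffset;
        this.endOffset = endOffset;
    }

    @Override
    public void run() {
        Map<String, Object> config = Map.of(
                ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092",
                ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class,
                ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class,
                ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(config)) {
            TopicPartition partition = new TopicPartition("download-requests", 0);
            consumer.assign(List.of(partition));   // no group rebalancing
            consumer.seek(partition, startOffset); // jump to this thread's block

            long next = startOffset; // next offset this thread still needs
            while (next < endOffset) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    if (record.offset() >= endOffset) {
                        return; // past this thread's block
                    }
                    next = record.offset() + 1;
                    // download the file referenced by record.value() ...
                }
            }
        }
    }
}
```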