Question on Vector PubSub Source throughput capability #13565
Unanswered
atibdialpad asked this question in Q&A
Replies: 3 comments 1 reply
- Hi @atibdialpad, what version of Vector are you running this on? Is it still the nightly I recommended in #12990, or have you moved to the 0.23 release?
- Hi @bruceg
- Right now I have topology A set up. Since Vector pod utilization, as well as CPU utilization in general, is low, I am thinking of moving to topology B. What do you think?
- Hi @bruceg @jszwedko
I have set up a Vector deployment in GKE (aggregator mode) which consumes messages from a Google Pub/Sub subscription.
The pipeline looks like this: PubSub Src -> Log Processing Transform Layer -> Loki
When I look at the utilization metric for the transform stage (the first stage after the source), I see it remains very low (~0.02). On the Google subscription stats I see that the number of unacknowledged messages sits at roughly 200K-300K. I want to understand whether there is a bottleneck somewhere in the pubsub -> vector_source part and how to identify / fix it.
This is different from #12990, though, where the number of unacked messages kept growing. In this case, messages are being processed correctly by Vector; I am trying to explain the constant ~200K-300K unacked messages.
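For context, the utilization number above comes from Vector's internal telemetry; a minimal sketch of how I expose it (component names and the scrape port are placeholders, not my exact config):

```toml
# Publish Vector's own metrics (utilization, component_received_events_total,
# component_sent_events_total, ...) so per-component throughput can be compared.
[sources.internal]
type = "internal_metrics"

[sinks.prom]
type = "prometheus_exporter"
inputs = ["internal"]
address = "0.0.0.0:9598"  # placeholder scrape address
```

Comparing the source's `component_received_events_total` rate against the subscription's publish rate should show whether the source is falling behind or the backlog is just a steady-state buffer.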
Deployment Details:
100 Vector pods, part of a GKE Deployment
Each pod requests 3-3.5 vCPU and 6-7 GB of memory
Node type: c2-standard-8 Google instances
Config:
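(The full config is not reproduced here; purely for illustration, a minimal sketch of the pipeline shape described above, with placeholder project, subscription, and endpoint values:)

```toml
[sources.pubsub_in]
type = "gcp_pubsub"
project = "my-gcp-project"        # placeholder
subscription = "my-subscription"  # placeholder

[transforms.log_processing]
type = "remap"
inputs = ["pubsub_in"]
source = '''
# log-processing VRL goes here
.processed = true
'''

[sinks.loki_out]
type = "loki"
inputs = ["log_processing"]
endpoint = "http://loki:3100"     # placeholder
encoding.codec = "json"
labels.source = "vector"
```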