Replies: 1 comment 1 reply
-
Is there anything in the logs of the client, or of the Pods receiving less traffic, that would indicate they have higher latency than the Pods receiving most of the traffic? Linkerd uses latency as its load-balancing metric: it tracks a moving average of the latency of each of a Service's backends and distributes requests accordingly (the algorithm is EWMA). Are the Pods in significantly different regions, or anything like that?
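To make the EWMA point concrete, here is a minimal sketch of how a latency-aware balancer of this kind behaves. All names here are hypothetical; Linkerd's actual proxy implementation is more sophisticated, but the idea (smooth each backend's latency, prefer the lower estimate between two random candidates) is the same:

```python
import random

class EwmaBackend:
    """Tracks an exponentially weighted moving average of observed latency."""
    def __init__(self, name, alpha=0.3):
        self.name = name
        self.alpha = alpha  # weight given to the newest latency sample
        self.ewma = 0.0     # smoothed latency estimate (ms)

    def observe(self, latency_ms):
        # Standard EWMA update: blend the latest sample with the prior estimate.
        self.ewma = self.alpha * latency_ms + (1 - self.alpha) * self.ewma

def pick_backend(backends):
    # "Power of two choices": sample two backends at random and route the
    # request to whichever currently has the lower latency estimate.
    a, b = random.sample(backends, 2)
    return a if a.ewma <= b.ewma else b

backends = [EwmaBackend("pod-a"), EwmaBackend("pod-b"), EwmaBackend("pod-c")]
# Simulate pod-c being consistently much slower than the others.
for _ in range(50):
    backends[0].observe(10)
    backends[1].observe(12)
    backends[2].observe(200)

chosen = [pick_backend(backends).name for _ in range(1000)]
print(chosen.count("pod-a"), chosen.count("pod-b"), chosen.count("pod-c"))
```

A backend whose latency estimate is persistently high ends up starved of traffic, which is why checking the slow pods' logs (or their network locality) is the first diagnostic step.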
-
I am running Linkerd2 to load-balance a gRPC workload on an on-prem K8s cluster (K8s 1.21, Linkerd 2.11.0, Cilium CNI).
Linkerd distributes traffic evenly across the pods of every service except one.
For this particular service, Linkerd directs almost all traffic to one or two of the three replicas. Furthermore, if I scale up the replicas of this service, Linkerd never sends traffic to the newly added pods. The Linkerd dashboard shows the same picture: most TCP connections go to one or two pods of this service, while the rest have just one connection each. How can I fix the balancing for this particular service? Strangely, I don't see any of these issues with the other services.