ARC always schedules containers on a single node #2848
Replies: 1 comment
-
This seems more like an issue with your Kubernetes cluster(s) than with ARC. ARC creates runners as pods and then relies on the Kubernetes scheduler to place them on nodes that meet the usual criteria, such as node selectors, taint tolerations, and available resources. I would carefully inspect the deployments you mention being correctly distributed across nodes and see whether they have something your ARC manifests do not, such as node selectors or taint tolerations.
Another thing to consider: once your ARC runner container image is cached on a single node (let's say node A), the scheduler tends to prefer placing subsequent runner pods on node A. If you never have enough runner pods active to exhaust node A's capacity, they may all keep landing there. So the next step I would attempt is to exceed node A's available resources and see whether you do, in fact, scale out to node B and so on.
If you have already done that, I'd dump the node description for node A (where the runner pods are placed), compare it against node B (where they are not), and look for any discrepancies.
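If the image-locality preference described above turns out to be the cause, one way to counteract it is a soft pod anti-affinity rule on the runner pod template, so the scheduler prefers spreading runner pods across nodes. The sketch below is illustrative only: whether the runner template exposes the `affinity` field depends on your ARC version, and the resource name, repository, and pod label are placeholders you would replace with values matching your own manifests.

```yaml
# Sketch, not a drop-in manifest: name, repository, and the
# "app: github-runner" label are placeholders for illustration.
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: example-runnerdeploy
spec:
  template:
    metadata:
      labels:
        app: github-runner   # placeholder; match your runner pods' labels
    spec:
      repository: your-org/your-repo
      affinity:
        podAntiAffinity:
          # "preferred" (soft) rather than "required" (hard), so pods
          # still schedule when only one node has free capacity.
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 100
              podAffinityTerm:
                topologyKey: kubernetes.io/hostname
                labelSelector:
                  matchLabels:
                    app: github-runner
```

A soft rule is usually the safer choice here: a hard (`requiredDuringScheduling...`) rule would leave runner pods Pending once every node already hosts one.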
-
Hi,
I am trying to achieve autoscaling using the following configurations, but all runner containers are always created on a single node of our Kubernetes cluster, and the rest of the nodes sit idle with no GitHub Actions runner containers at all.
The Kubernetes cluster itself is healthy and working fine: when I run a simple deployment, all replicas are properly distributed across all nodes with no problem.
I'd appreciate it if someone could help.
The runner and autoscaler configs are as follows:
and