Seeking advice for shortening job queue times #2651
Unanswered
glenthomas asked this question in Questions
Replies: 2 comments
- I'm also extremely interested in this!
- We are using a dedicated node-pool with beefier machines to run some of our builds. Actions-runner-controller always starts a given number of runners; after startup they connect to GitHub and are ready to accept jobs, so there is essentially no wait time. After a job finishes, the runner de-registers itself and a new pod is started to accept new jobs. To keep costs down, we scale our runner set (and with it the node-pool) to zero at night. A configuration sketch of this kind of setup is shown below.
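For readers wanting a concrete starting point, here is a minimal sketch of that kind of setup, assuming the summerwind-style RunnerDeployment and HorizontalRunnerAutoscaler CRDs (the thread does not say which ARC flavour is in use). The organization, runner label, and node-pool names are placeholders, not taken from this discussion. The idea is: pin runner pods to a dedicated node pool with nodeSelector, keep a minimum number of idle runners registered and ready to take jobs, and use a scheduled override to scale to zero overnight.

```yaml
# Runner pool pinned to a dedicated node pool (placeholder names throughout).
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: beefy-runners
spec:
  # replicas is omitted on purpose: the autoscaler below manages the count.
  template:
    spec:
      organization: my-org          # placeholder; could also be `repository: my-org/my-repo`
      labels:
        - beefy                     # custom label workflows can target
      nodeSelector:
        node-pool: beefy-pool       # placeholder label on the dedicated node pool
---
# Keep idle runners ready during the day, scale to zero at night.
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: beefy-runners-autoscaler
spec:
  scaleTargetRef:
    name: beefy-runners
  minReplicas: 3                    # idle runners always registered and waiting for jobs
  maxReplicas: 6
  metrics:
    - type: PercentageRunnersBusy
      scaleUpThreshold: '0.75'
      scaleDownThreshold: '0.25'
      scaleUpFactor: '2'
      scaleDownFactor: '0.5'
  scheduledOverrides:
    - startTime: "2023-01-01T20:00:00+00:00"   # recurs daily; the dates are placeholders
      endTime: "2023-01-02T06:00:00+00:00"
      recurrenceRule:
        frequency: Daily
      minReplicas: 0                # scale the set (and the node pool) down overnight
```

Workflows would then target these runners with something like `runs-on: [self-hosted, beefy]`. With `minReplicas` above zero during working hours, jobs land on an already-registered runner instead of waiting for a pod (and possibly a node) to be provisioned.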
Original question from glenthomas:
Hello,
I am looking at using this solution because I need higher-spec runners than the GitHub Actions defaults, but GitHub's 'larger runners' are very expensive, and I also found that their longer cold-start times made workflow runs take longer than expected.
For a private runner solution, something I am keen to avoid is a long job queue time, as this can wipe out the benefit of using higher-spec machines. When my colleague tested actions-runner-controller on AWS EKS, they saw fairly long pod provisioning times, and the gains from the higher-spec runner hardware were wiped out by the extra queue time.
Does anyone have any general advice for keeping job queue times to a minimum? Is it possible to have idle runners ready to collect jobs?