Replies: 3 comments
-
We set up the tool cache on our runners, then went a step further and mounted a persistent EFS storage volume at the tool cache directory so that it persists across runners. This way, you don't download the same installed release versions multiple times.
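As a rough sketch of what that setup might look like (the filesystem ID and paths below are placeholders I've chosen for illustration, not values from this thread):

```shell
# Hypothetical example: mount a shared EFS volume at the runner tool cache
# so installed tool versions persist across ephemeral runners.
# fs-12345678 and the paths are placeholders; adjust for your environment.
TOOL_CACHE=/opt/hostedtoolcache

sudo mkdir -p "$TOOL_CACHE"
# The "efs" mount type comes from amazon-efs-utils; -o tls encrypts in transit.
sudo mount -t efs -o tls fs-12345678:/ "$TOOL_CACHE"

# Point the runner at the shared cache before starting it.
export AGENT_TOOLSDIRECTORY="$TOOL_CACHE"
```

Concurrent writes from many runners to the same version directory are worth thinking about; populating the cache once up front avoids most of that.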
-
Hey, thanks for the hint @jrkarnes! To be clear, are you using the cache action as described here, or something else? We use ephemeral runners, and I'm wondering if we need to consider stateful runners, or possibly baking the tools into a custom runner image that we maintain ourselves.
-
Jeff,
No. I am talking about setting up the actual runner tool cache as described in this documentation:
***@***.***/admin/github-actions/managing-access-to-actions-from-githubcom/setting-up-the-tool-cache-on-self-hosted-runners-without-internet-access
You can update the GH version pin according to your needs, but the method will remain largely the same. The tool cache path may differ between runner versions, but you should be able to resolve any differences that come up.
Let me know if this solved your problem or if you still need additional help.
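For anyone following along, the documented flow is roughly the following (a sketch under my own assumptions; the `_work/_tool` location is the typical default and, as noted above, may differ by runner version):

```shell
# Hypothetical sketch: populate the tool cache on an internet-connected
# runner, then carry it over to runners without internet access.

# 1. On the internet-connected runner, run a workflow that uses the setup
#    actions you need (actions/setup-node, actions/setup-python, ...) so
#    the requested versions land in the runner's tool cache.

# 2. Archive the populated cache (default location shown; verify yours).
tar -C /home/runner/_work/_tool -czf tool-cache.tar.gz .

# 3. On each offline runner, extract into its tool cache directory.
sudo mkdir -p /home/runner/_work/_tool
tar -C /home/runner/_work/_tool -xzf tool-cache.tar.gz
```

Combined with the EFS approach above, step 3 only has to happen once per shared volume rather than once per runner.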
-
... I don't mean API rate limiting of runner operations, but rather secondary rate limiting of workloads running within your runner environment that leverage the unauthenticated GitHub API.
For operators of large pools of self-hosted runners, I imagine all of your network traffic egressing to GitHub is concentrated through one or a small number of NAT IPs from your datacenter/cloud provider. From GitHub's perspective, workloads originating within those large pools will all appear to come from (usually) a single IP.
Assuming you fall into this category, how are you handling this? We're starting to see secondary rate limiting from GitHub with certain actions like docker/setup-buildx-action, because part of their code makes unauthenticated calls to the GitHub API to download a release.
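One mitigation worth considering (my suggestion, not something confirmed in this thread) is making those API calls authenticated where possible, since unauthenticated REST requests share a per-IP budget while authenticated ones get a much larger per-token budget. You can compare your remaining headroom with the `rate_limit` endpoint:

```shell
# Sketch: inspect REST API rate-limit headroom from inside a runner.
# Unauthenticated requests count against the shared NAT IP's budget:
curl -s https://api.github.com/rate_limit

# Authenticated with the workflow's GITHUB_TOKEN, requests count against a
# per-token budget instead of the per-IP one:
curl -s -H "Authorization: Bearer $GITHUB_TOKEN" https://api.github.com/rate_limit
```

For individual actions, check whether they accept a token input or honor a token in the environment so their release lookups are authenticated; pinning an explicit tool version can also reduce "latest release" API lookups.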