feat(lambda): publish retry message from scale up if runner creation fails #4605

Closed

galargh wants to merge 1 commit

Conversation

@galargh (Contributor) commented Jun 4, 2025

Note

I suggest reviewing the PR with whitespace changes hidden; it makes the diff much smaller and better highlights the actual change being proposed here.

See https://github.com/github-aws-runners/terraform-aws-github-runner/pull/4605/files?diff=split&w=1

Overview

Currently, the scale up lambda publishes retry messages only when runner creation succeeds. I propose also publishing them when runner creation fails, and when it is skipped because the maximum runner cap has been reached.
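
A minimal TypeScript sketch of the proposed control flow is below. All names (`scaleUp`, `createRunners`, `publishRetryMessage`, `currentRunnerCount`, `ActionRequestMessage`) are illustrative stand-ins rather than the module's actual API; the PR diff is the authoritative version.

```typescript
// Illustrative sketch only: the types and helpers here are hypothetical
// stand-ins for the real scale-up module.

interface ActionRequestMessage {
  id: number;
  repositoryOwner: string;
  repositoryName: string;
}

const MAX_RUNNERS = 10; // illustrative cap

async function currentRunnerCount(): Promise<number> {
  return 0; // stub: would list the currently running runner instances
}

async function createRunners(message: ActionRequestMessage): Promise<void> {
  // stub: would launch the EC2 instances backing the runners
}

async function publishRetryMessage(message: ActionRequestMessage): Promise<void> {
  // stub: would send the message to the job retry queue
}

export async function scaleUp(message: ActionRequestMessage): Promise<void> {
  if ((await currentRunnerCount()) >= MAX_RUNNERS) {
    // Proposed: schedule a retry instead of dropping the job at the cap.
    await publishRetryMessage(message);
    return;
  }
  try {
    await createRunners(message);
    await publishRetryMessage(message); // existing behaviour: retry check on success
  } catch (e) {
    // Proposed: also schedule a retry when creation fails
    // (e.g. an AWS service quota error), then surface the failure.
    await publishRetryMessage(message);
    throw e;
  }
}
```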

Motivation

Runner creation, i.e. a call to createRunners, can fail for a number of reasons, for example due to exceeding AWS service quotas. In such a case, we'd like a retry message to be published so that runner creation can be retried later. Otherwise, in ephemeral runner mode, we risk runner starvation.

Similarly, when runners are not created because their limit has been reached, we'd like to retry the creation later, once some of the running runners have had a chance to shut down and make room for new ones.

Testing

I added unit tests to cover the new behaviour and ran them locally. I have not run the updated lambda in production.

@galargh requested a review from a team as a code owner June 4, 2025 09:37
@npalm self-requested a review June 5, 2025 20:16
@galargh (Contributor, Author) commented Jun 5, 2025

I tested it out in my setup, and unfortunately, it's not quite right. I didn't account for the default lambda retries. As things stand, the implementation proposed here can lead to an explosion of messages in the job retry queue.

This is because we now publish a retry message whenever the scale-up lambda throws an error; if that error persists across the default lambda retries, a single scale-up invocation can end up producing 3 messages on the queue, one per attempt. Because of that, I'm going to close this PR for now.

A quick workaround for this could be not to throw an error from the scale-up lambda when a retry message is successfully published. I quickly implemented this approach in b06ca50.
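
A rough sketch of what that workaround might look like, reusing the hypothetical names from the sketch above (the linked commit is the authoritative version):

```typescript
// Sketch of the workaround: if the retry message is published successfully,
// swallow the original error so the default lambda retries don't fire and
// publish yet more messages onto the queue.
export async function scaleUp(message: ActionRequestMessage): Promise<void> {
  try {
    await createRunners(message);
    await publishRetryMessage(message);
  } catch (e) {
    try {
      await publishRetryMessage(message);
      // Retry is now scheduled via the queue; do not rethrow, so this
      // invocation counts as a success and is not retried by the platform.
    } catch {
      // Could not schedule a retry either; rethrow the original error so
      // the default lambda retries still get a chance.
      throw e;
    }
  }
}
```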

I will test it out in one of my orgs. I'll also think some more about whether that's the best approach to the original problem I've been facing.

@galargh closed this Jun 5, 2025