From 19a94ebc2abb34a97741b00e616243f357a1ea86 Mon Sep 17 00:00:00 2001
From: Iain Lane
Date: Sat, 6 Dec 2025 18:37:14 +0000
Subject: [PATCH 1/4] feat(control-plane)!: add support for handling multiple events in a single invocation (#4603)

Currently we restrict the `scale-up` Lambda to handling a single event at a time. In very busy environments this can become a bottleneck: every invocation makes calls to the GitHub and AWS APIs, and those can take long enough that we can't process job queued events as fast as they arrive.

In our environment we also use a pool, and we have typically responded to the resulting alerts (SQS queue length growing) by expanding the size of the pool. That helps because we more often find that no scale up is needed, which lets the Lambdas exit earlier and work through the queue faster, but it makes the environment much less responsive to changes in usage patterns.

At its core, this Lambda's job is to work out how many instances are needed and issue an EC2 `CreateFleet` call to create them. That work can be batched: we can take any number of events, calculate the difference between our current state and the number of jobs we have, cap it at the maximum, and issue a single call. The thing to be careful about is handling partial failures, where EC2 creates some of the instances we wanted but not all of them. Lambda has a configurable function response type which can be set to `ReportBatchItemFailures`; in this mode we return a list of failed messages from our handler and those are retried. We use this to hand back as many events as we failed to process.

Now that we're potentially processing multiple events in a single invocation, one thing we should optimise for is not recreating GitHub API clients. We need one client for the app itself, which we use to look up installation IDs, and then one client for each installation that is relevant to the batch of events we are processing. We create each installation client the first time we see an event for that installation and reuse it for the rest of the batch.

We also remove the same `batch_size = 1` constraint from the `job-retry` Lambda, which is used to retry events that previously failed. Here, instead of reporting failures to be retried, we keep the pre-existing fault-tolerant behaviour: errors are logged but explicitly do not cause message retries, avoiding infinite loops from persistent GitHub API issues or malformed events.

Tests are added for all of this. Tests in a private repo (sorry) look good. This was running ephemeral runners with no pool, SSM high throughput enabled, the job queued check disabled, a batch size of 200, and a wait time of 10 seconds. The workflow runs are each a matrix with 250 jobs.
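Condensed, the new handler contract looks like this. This is a sketch distilled from the `lambda.ts` changes in this diff, with logging elided:

```typescript
import type { Context, SQSBatchResponse, SQSEvent } from 'aws-lambda';
import ScaleError from './scale-runners/ScaleError';
import { type ActionRequestMessage, type ActionRequestMessageSQS, scaleUp } from './scale-runners/scale-up';

export async function scaleUpHandler(event: SQSEvent, _context: Context): Promise<SQSBatchResponse> {
  // Keep each record's messageId: a job we fail to create a runner for is
  // handed back to SQS as a batch item failure and redelivered.
  const messages: ActionRequestMessageSQS[] = event.Records.filter((r) => r.eventSource === 'aws:sqs').map((r) => ({
    ...(JSON.parse(r.body) as ActionRequestMessage),
    messageId: r.messageId,
  }));

  // Process the least-retried messages first, so that under a persistent
  // failure the oldest messages hit the queue's redrive limit and are dropped
  // sooner than if we retried in an arbitrary order.
  messages.sort((l, r) => (l.retryCounter ?? 0) - (r.retryCounter ?? 0));

  try {
    // scaleUp batches the whole set into CreateFleet calls (one per owner,
    // capped at the maximum) and returns the message ids it could not serve.
    const rejectedMessageIds = await scaleUp(messages);
    return { batchItemFailures: rejectedMessageIds.map((id) => ({ itemIdentifier: id })) };
  } catch (e) {
    // A ScaleError carries the failed instance count, so it can pick that many
    // messages to send back for retry; anything else drops the whole batch.
    return { batchItemFailures: e instanceof ScaleError ? e.toBatchItemFailures(messages) : [] };
  }
}
```

For the returned `batchItemFailures` to take effect, the event source mapping has to opt in to `ReportBatchItemFailures`; the Terraform changes in this diff are expected to set that alongside the new batch size and batching window variables.

The client reuse is, in sketch form, a lazy map keyed by installation id. The helper name and factory parameter here are illustrative, not the literal code in `scale-up.ts`:

```typescript
import type { Octokit } from '@octokit/rest';

// Illustrative sketch: one Octokit client per installation, created on first
// use and reused for every later event in the batch that shares the
// installation. The app-level client (used to resolve installation ids) is
// created once, outside this cache.
const installationClients = new Map<number, Octokit>();

async function getInstallationClient(
  installationId: number,
  create: (installationId: number) => Promise<Octokit>,
): Promise<Octokit> {
  const cached = installationClients.get(installationId);
  if (cached !== undefined) {
    return cached;
  }

  const client = await create(installationId);
  installationClients.set(installationId, client);
  return client;
}
```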
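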
![image](https://github.com/user-attachments/assets/0a656e99-8f1e-45e2-924b-0d5c1b6d6afb) --------- Signed-off-by: dependabot[bot] Co-authored-by: Niek Palm Co-authored-by: Niek Palm Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com> Co-authored-by: github-actions[bot] --- README.md | 2 + .../control-plane/src/aws/runners.test.ts | 130 ++- .../control-plane/src/aws/runners.ts | 101 +- .../control-plane/src/lambda.test.ts | 230 +++-- lambdas/functions/control-plane/src/lambda.ts | 54 +- lambdas/functions/control-plane/src/local.ts | 42 +- .../control-plane/src/pool/pool.test.ts | 24 +- .../functions/control-plane/src/pool/pool.ts | 2 +- .../src/scale-runners/ScaleError.test.ts | 76 ++ .../src/scale-runners/ScaleError.ts | 26 +- .../src/scale-runners/job-retry.test.ts | 92 ++ .../src/scale-runners/scale-up.test.ts | 944 +++++++++++++++--- .../src/scale-runners/scale-up.ts | 275 +++-- .../aws-powertools-util/src/logger/index.ts | 10 +- main.tf | 46 +- modules/multi-runner/README.md | 2 + modules/multi-runner/runners.tf | 46 +- modules/multi-runner/variables.tf | 12 + modules/runners/README.md | 2 + modules/runners/job-retry.tf | 50 +- modules/runners/job-retry/README.md | 2 +- modules/runners/job-retry/main.tf | 7 +- modules/runners/job-retry/variables.tf | 16 +- modules/runners/scale-up.tf | 10 +- modules/runners/variables.tf | 20 + modules/webhook-github-app/README.md | 2 +- variables.tf | 16 + 27 files changed, 1727 insertions(+), 512 deletions(-) create mode 100644 lambdas/functions/control-plane/src/scale-runners/ScaleError.test.ts diff --git a/README.md b/README.md index b202597cc2..f242fe734b 100644 --- a/README.md +++ b/README.md @@ -155,6 +155,8 @@ Join our discord community via [this invite link](https://discord.gg/bxgXW8jJGh) | [key\_name](#input\_key\_name) | Key pair name | `string` | `null` | no | | [kms\_key\_arn](#input\_kms\_key\_arn) | Optional CMK Key ARN to be used for Parameter Store. This key must be in the current account. | `string` | `null` | no | | [lambda\_architecture](#input\_lambda\_architecture) | AWS Lambda architecture. Lambda functions using Graviton processors ('arm64') tend to have better price/performance than 'x86\_64' functions. | `string` | `"arm64"` | no | +| [lambda\_event\_source\_mapping\_batch\_size](#input\_lambda\_event\_source\_mapping\_batch\_size) | Maximum number of records to pass to the lambda function in a single batch for the event source mapping. When not set, the AWS default of 10 events will be used. | `number` | `10` | no | +| [lambda\_event\_source\_mapping\_maximum\_batching\_window\_in\_seconds](#input\_lambda\_event\_source\_mapping\_maximum\_batching\_window\_in\_seconds) | Maximum amount of time to gather records before invoking the lambda function, in seconds. AWS requires this to be greater than 0 if batch\_size is greater than 10. Defaults to 0. | `number` | `0` | no | | [lambda\_principals](#input\_lambda\_principals) | (Optional) add extra principals to the role created for execution of the lambda, e.g. for local testing. |
list(object({
type = string
identifiers = list(string)
}))
| `[]` | no | | [lambda\_runtime](#input\_lambda\_runtime) | AWS Lambda runtime. | `string` | `"nodejs22.x"` | no | | [lambda\_s3\_bucket](#input\_lambda\_s3\_bucket) | S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly. | `string` | `null` | no | diff --git a/lambdas/functions/control-plane/src/aws/runners.test.ts b/lambdas/functions/control-plane/src/aws/runners.test.ts index a02f62cd36..c4fd922fd0 100644 --- a/lambdas/functions/control-plane/src/aws/runners.test.ts +++ b/lambdas/functions/control-plane/src/aws/runners.test.ts @@ -1,26 +1,26 @@ +import { tracer } from '@aws-github-runner/aws-powertools-util'; import { CreateFleetCommand, - CreateFleetCommandInput, - CreateFleetInstance, - CreateFleetResult, + type CreateFleetCommandInput, + type CreateFleetInstance, + type CreateFleetResult, CreateTagsCommand, + type DefaultTargetCapacityType, DeleteTagsCommand, - DefaultTargetCapacityType, DescribeInstancesCommand, - DescribeInstancesResult, + type DescribeInstancesResult, EC2Client, SpotAllocationStrategy, TerminateInstancesCommand, } from '@aws-sdk/client-ec2'; -import { GetParameterCommand, GetParameterResult, PutParameterCommand, SSMClient } from '@aws-sdk/client-ssm'; -import { tracer } from '@aws-github-runner/aws-powertools-util'; +import { GetParameterCommand, type GetParameterResult, PutParameterCommand, SSMClient } from '@aws-sdk/client-ssm'; import { mockClient } from 'aws-sdk-client-mock'; import 'aws-sdk-client-mock-jest/vitest'; +import { beforeEach, describe, expect, it, vi } from 'vitest'; import ScaleError from './../scale-runners/ScaleError'; -import { createRunner, listEC2Runners, tag, untag, terminateRunner } from './runners'; -import { RunnerInfo, RunnerInputParameters, RunnerType } from './runners.d'; -import { describe, it, expect, beforeEach, vi } from 'vitest'; +import { createRunner, listEC2Runners, tag, terminateRunner, untag } from './runners'; +import type { RunnerInfo, RunnerInputParameters, RunnerType } from './runners.d'; process.env.AWS_REGION = 'eu-east-1'; const mockEC2Client = mockClient(EC2Client); @@ -110,7 +110,10 @@ describe('list instances', () => { it('check orphan tag.', async () => { const instances: DescribeInstancesResult = mockRunningInstances; - instances.Reservations![0].Instances![0].Tags!.push({ Key: 'ghr:orphan', Value: 'true' }); + instances.Reservations![0].Instances![0].Tags!.push({ + Key: 'ghr:orphan', + Value: 'true', + }); mockEC2Client.on(DescribeInstancesCommand).resolves(instances); const resp = await listEC2Runners(); @@ -132,7 +135,11 @@ describe('list instances', () => { it('filters instances on repo name', async () => { mockEC2Client.on(DescribeInstancesCommand).resolves(mockRunningInstances); - await listEC2Runners({ runnerType: 'Repo', runnerOwner: REPO_NAME, environment: undefined }); + await listEC2Runners({ + runnerType: 'Repo', + runnerOwner: REPO_NAME, + environment: undefined, + }); expect(mockEC2Client).toHaveReceivedCommandWith(DescribeInstancesCommand, { Filters: [ { Name: 'instance-state-name', Values: ['running', 'pending'] }, @@ -145,7 +152,11 @@ describe('list instances', () => { it('filters instances on org name', async () => { mockEC2Client.on(DescribeInstancesCommand).resolves(mockRunningInstances); - await listEC2Runners({ runnerType: 'Org', runnerOwner: ORG_NAME, environment: undefined }); + await listEC2Runners({ + runnerType: 'Org', + runnerOwner: ORG_NAME, + environment: undefined, + }); 
expect(mockEC2Client).toHaveReceivedCommandWith(DescribeInstancesCommand, { Filters: [ { Name: 'instance-state-name', Values: ['running', 'pending'] }, @@ -249,7 +260,9 @@ describe('terminate runner', () => { }; await terminateRunner(runner.instanceId); - expect(mockEC2Client).toHaveReceivedCommandWith(TerminateInstancesCommand, { InstanceIds: [runner.instanceId] }); + expect(mockEC2Client).toHaveReceivedCommandWith(TerminateInstancesCommand, { + InstanceIds: [runner.instanceId], + }); }); }); @@ -324,7 +337,10 @@ describe('create runner', () => { await createRunner(createRunnerConfig({ ...defaultRunnerConfig, type: type })); expect(mockEC2Client).toHaveReceivedCommandWith(CreateFleetCommand, { - ...expectedCreateFleetRequest({ ...defaultExpectedFleetRequestValues, type: type }), + ...expectedCreateFleetRequest({ + ...defaultExpectedFleetRequestValues, + type: type, + }), }); }); @@ -333,24 +349,36 @@ describe('create runner', () => { mockEC2Client.on(CreateFleetCommand).resolves({ Instances: instances }); - await createRunner({ ...createRunnerConfig(defaultRunnerConfig), numberOfRunners: 2 }); + await createRunner({ + ...createRunnerConfig(defaultRunnerConfig), + numberOfRunners: 2, + }); expect(mockEC2Client).toHaveReceivedCommandWith(CreateFleetCommand, { - ...expectedCreateFleetRequest({ ...defaultExpectedFleetRequestValues, totalTargetCapacity: 2 }), + ...expectedCreateFleetRequest({ + ...defaultExpectedFleetRequestValues, + totalTargetCapacity: 2, + }), }); }); it('calls create fleet of 1 instance with the on-demand capacity', async () => { await createRunner(createRunnerConfig({ ...defaultRunnerConfig, capacityType: 'on-demand' })); expect(mockEC2Client).toHaveReceivedCommandWith(CreateFleetCommand, { - ...expectedCreateFleetRequest({ ...defaultExpectedFleetRequestValues, capacityType: 'on-demand' }), + ...expectedCreateFleetRequest({ + ...defaultExpectedFleetRequestValues, + capacityType: 'on-demand', + }), }); }); it('calls run instances with the on-demand capacity', async () => { await createRunner(createRunnerConfig({ ...defaultRunnerConfig, maxSpotPrice: '0.1' })); expect(mockEC2Client).toHaveReceivedCommandWith(CreateFleetCommand, { - ...expectedCreateFleetRequest({ ...defaultExpectedFleetRequestValues, maxSpotPrice: '0.1' }), + ...expectedCreateFleetRequest({ + ...defaultExpectedFleetRequestValues, + maxSpotPrice: '0.1', + }), }); }); @@ -367,8 +395,16 @@ describe('create runner', () => { }, }; mockSSMClient.on(GetParameterCommand).resolves(paramValue); - await createRunner(createRunnerConfig({ ...defaultRunnerConfig, amiIdSsmParameterName: 'my-ami-id-param' })); - const expectedRequest = expectedCreateFleetRequest({ ...defaultExpectedFleetRequestValues, imageId: 'ami-123' }); + await createRunner( + createRunnerConfig({ + ...defaultRunnerConfig, + amiIdSsmParameterName: 'my-ami-id-param', + }), + ); + const expectedRequest = expectedCreateFleetRequest({ + ...defaultExpectedFleetRequestValues, + imageId: 'ami-123', + }); expect(mockEC2Client).toHaveReceivedCommandWith(CreateFleetCommand, expectedRequest); expect(mockSSMClient).toHaveReceivedCommandWith(GetParameterCommand, { Name: 'my-ami-id-param', @@ -380,7 +416,10 @@ describe('create runner', () => { await createRunner(createRunnerConfig({ ...defaultRunnerConfig, tracingEnabled: true })); expect(mockEC2Client).toHaveReceivedCommandWith(CreateFleetCommand, { - ...expectedCreateFleetRequest({ ...defaultExpectedFleetRequestValues, tracingEnabled: true }), + ...expectedCreateFleetRequest({ + 
...defaultExpectedFleetRequestValues, + tracingEnabled: true, + }), }); }); }); @@ -419,9 +458,12 @@ describe('create runner with errors', () => { }); it('test ScaleError with multiple error.', async () => { - createFleetMockWithErrors(['UnfulfillableCapacity', 'SomeError']); + createFleetMockWithErrors(['UnfulfillableCapacity', 'MaxSpotInstanceCountExceeded', 'NotMappedError']); - await expect(createRunner(createRunnerConfig(defaultRunnerConfig))).rejects.toBeInstanceOf(ScaleError); + await expect(createRunner(createRunnerConfig(defaultRunnerConfig))).rejects.toMatchObject({ + name: 'ScaleError', + failedInstanceCount: 2, + }); expect(mockEC2Client).toHaveReceivedCommandWith( CreateFleetCommand, expectedCreateFleetRequest(defaultExpectedFleetRequestValues), @@ -465,7 +507,12 @@ describe('create runner with errors', () => { mockSSMClient.on(GetParameterCommand).rejects(new Error('Some error')); await expect( - createRunner(createRunnerConfig({ ...defaultRunnerConfig, amiIdSsmParameterName: 'my-ami-id-param' })), + createRunner( + createRunnerConfig({ + ...defaultRunnerConfig, + amiIdSsmParameterName: 'my-ami-id-param', + }), + ), ).rejects.toBeInstanceOf(Error); expect(mockEC2Client).not.toHaveReceivedCommand(CreateFleetCommand); expect(mockSSMClient).not.toHaveReceivedCommand(PutParameterCommand); @@ -530,7 +577,7 @@ describe('create runner with errors fail over to OnDemand', () => { }), }); - // second call with with OnDemand failback + // second call with OnDemand fallback expect(mockEC2Client).toHaveReceivedNthCommandWith(2, CreateFleetCommand, { ...expectedCreateFleetRequest({ ...defaultExpectedFleetRequestValues, @@ -540,17 +587,25 @@ }); }); - it('test InsufficientInstanceCapacity no failback.', async () => { + it('test InsufficientInstanceCapacity no fallback.', async () => { await expect( - createRunner(createRunnerConfig({ ...defaultRunnerConfig, onDemandFailoverOnError: [] })), + createRunner( + createRunnerConfig({ + ...defaultRunnerConfig, + onDemandFailoverOnError: [], + }), + ), ).rejects.toBeInstanceOf(Error); }); - it('test InsufficientInstanceCapacity with mutlipte instances and fallback to on demand .', async () => { + it('test InsufficientInstanceCapacity with multiple instances and fallback to on demand.', async () => { const instancesIds = ['i-123', 'i-456']; createFleetMockWithWithOnDemandFallback(['InsufficientInstanceCapacity'], instancesIds); - const instancesResult = await createRunner({ ...createRunnerConfig(defaultRunnerConfig), numberOfRunners: 2 }); + const instancesResult = await createRunner({ + ...createRunnerConfig(defaultRunnerConfig), + numberOfRunners: 2, + }); expect(instancesResult).toEqual(instancesIds); expect(mockEC2Client).toHaveReceivedCommandTimes(CreateFleetCommand, 2); @@ -580,7 +635,10 @@ describe('create runner with errors fail over to OnDemand', () => { createFleetMockWithWithOnDemandFallback(['UnfulfillableCapacity'], instancesIds); await expect( - createRunner({ ...createRunnerConfig(defaultRunnerConfig), numberOfRunners: 2 }), + createRunner({ + ...createRunnerConfig(defaultRunnerConfig), + numberOfRunners: 2, + }), ).rejects.toBeInstanceOf(Error); expect(mockEC2Client).toHaveReceivedCommandTimes(CreateFleetCommand, 1); @@ -626,7 +684,10 @@ function createFleetMockWithWithOnDemandFallback(errors: string[], instances?: s mockEC2Client .on(CreateFleetCommand) +
.resolvesOnce({ + Instances: [instanceesFirstCall], + Errors: errors.map((e) => ({ ErrorCode: e })), + }) .resolvesOnce({ Instances: [instancesSecondCall] }); } @@ -673,7 +734,10 @@ interface ExpectedFleetRequestValues { function expectedCreateFleetRequest(expectedValues: ExpectedFleetRequestValues): CreateFleetCommandInput { const tags = [ { Key: 'ghr:Application', Value: 'github-action-runner' }, - { Key: 'ghr:created_by', Value: expectedValues.totalTargetCapacity > 1 ? 'pool-lambda' : 'scale-up-lambda' }, + { + Key: 'ghr:created_by', + Value: expectedValues.totalTargetCapacity > 1 ? 'pool-lambda' : 'scale-up-lambda', + }, { Key: 'ghr:Type', Value: expectedValues.type }, { Key: 'ghr:Owner', Value: REPO_NAME }, ]; diff --git a/lambdas/functions/control-plane/src/aws/runners.ts b/lambdas/functions/control-plane/src/aws/runners.ts index 6779dd39d2..d95dc99fa4 100644 --- a/lambdas/functions/control-plane/src/aws/runners.ts +++ b/lambdas/functions/control-plane/src/aws/runners.ts @@ -166,53 +166,62 @@ async function processFleetResult( ): Promise { const instances: string[] = fleet.Instances?.flatMap((i) => i.InstanceIds?.flatMap((j) => j) || []) || []; - if (instances.length !== runnerParameters.numberOfRunners) { - logger.warn( - `${ - instances.length === 0 ? 'No' : instances.length + ' off ' + runnerParameters.numberOfRunners - } instances created.`, - { data: fleet }, - ); - const errors = fleet.Errors?.flatMap((e) => e.ErrorCode || '') || []; - - // Educated guess of errors that would make sense to retry based on the list - // https://docs.aws.amazon.com/AWSEC2/latest/APIReference/errors-overview.html - const scaleErrors = [ - 'UnfulfillableCapacity', - 'MaxSpotInstanceCountExceeded', - 'TargetCapacityLimitExceededException', - 'RequestLimitExceeded', - 'ResourceLimitExceeded', - 'MaxSpotInstanceCountExceeded', - 'MaxSpotFleetRequestCountExceeded', - 'InsufficientInstanceCapacity', - ]; - - if ( - errors.some((e) => runnerParameters.onDemandFailoverOnError?.includes(e)) && - runnerParameters.ec2instanceCriteria.targetCapacityType === 'spot' - ) { - logger.warn(`Create fleet failed, initatiing fall back to on demand instances.`); - logger.debug('Create fleet failed.', { data: fleet.Errors }); - const numberOfInstances = runnerParameters.numberOfRunners - instances.length; - const instancesOnDemand = await createRunner({ - ...runnerParameters, - numberOfRunners: numberOfInstances, - onDemandFailoverOnError: ['InsufficientInstanceCapacity'], - ec2instanceCriteria: { ...runnerParameters.ec2instanceCriteria, targetCapacityType: 'on-demand' }, - }); - instances.push(...instancesOnDemand); - return instances; - } else if (errors.some((e) => scaleErrors.includes(e))) { - logger.warn('Create fleet failed, ScaleError will be thrown to trigger retry for ephemeral runners.'); - logger.debug('Create fleet failed.', { data: fleet.Errors }); - throw new ScaleError('Failed to create instance, create fleet failed.'); - } else { - logger.warn('Create fleet failed, error not recognized as scaling error.', { data: fleet.Errors }); - throw Error('Create fleet failed, no instance created.'); - } + if (instances.length === runnerParameters.numberOfRunners) { + return instances; } - return instances; + + logger.warn( + `${ + instances.length === 0 ? 
'No' : instances.length + ' of ' + runnerParameters.numberOfRunners + } instances created.`, + { data: fleet }, + ); + + const errors = fleet.Errors?.flatMap((e) => e.ErrorCode || '') || []; + + if ( + errors.some((e) => runnerParameters.onDemandFailoverOnError?.includes(e)) && + runnerParameters.ec2instanceCriteria.targetCapacityType === 'spot' + ) { + logger.warn(`Create fleet failed, initiating fallback to on-demand instances.`); + logger.debug('Create fleet failed.', { data: fleet.Errors }); + const numberOfInstances = runnerParameters.numberOfRunners - instances.length; + const instancesOnDemand = await createRunner({ + ...runnerParameters, + numberOfRunners: numberOfInstances, + onDemandFailoverOnError: ['InsufficientInstanceCapacity'], + ec2instanceCriteria: { ...runnerParameters.ec2instanceCriteria, targetCapacityType: 'on-demand' }, + }); + instances.push(...instancesOnDemand); + return instances; + } + + // Educated guess of errors that would make sense to retry based on the list + // https://docs.aws.amazon.com/AWSEC2/latest/APIReference/errors-overview.html + const scaleErrors = [ + 'UnfulfillableCapacity', + 'MaxSpotInstanceCountExceeded', + 'TargetCapacityLimitExceededException', + 'RequestLimitExceeded', + 'ResourceLimitExceeded', + 'MaxSpotFleetRequestCountExceeded', + 'InsufficientInstanceCapacity', + ]; + + const failedCount = countScaleErrors(errors, scaleErrors); + if (failedCount > 0) { + logger.warn('Create fleet failed, ScaleError will be thrown to trigger retry for ephemeral runners.'); + logger.debug('Create fleet failed.', { data: fleet.Errors }); + throw new ScaleError(failedCount); + } + + logger.warn('Create fleet failed, error not recognized as scaling error.', { data: fleet.Errors }); + throw Error('Create fleet failed, no instance created.'); +} + +function countScaleErrors(errors: string[], scaleErrors: string[]): number { + return errors.reduce((acc, e) => (scaleErrors.includes(e) ?
acc + 1 : acc), 0); } async function getAmiIdOverride(runnerParameters: Runners.RunnerInputParameters): Promise { diff --git a/lambdas/functions/control-plane/src/lambda.test.ts b/lambdas/functions/control-plane/src/lambda.test.ts index 2c54a4d541..2c9a98e420 100644 --- a/lambdas/functions/control-plane/src/lambda.test.ts +++ b/lambdas/functions/control-plane/src/lambda.test.ts @@ -8,7 +8,7 @@ import { scaleDown } from './scale-runners/scale-down'; import { ActionRequestMessage, scaleUp } from './scale-runners/scale-up'; import { cleanSSMTokens } from './scale-runners/ssm-housekeeper'; import { checkAndRetryJob } from './scale-runners/job-retry'; -import { describe, it, expect, vi, MockedFunction } from 'vitest'; +import { describe, it, expect, vi, MockedFunction, beforeEach } from 'vitest'; const body: ActionRequestMessage = { eventType: 'workflow_job', @@ -28,11 +28,11 @@ const sqsRecord: SQSRecord = { }, awsRegion: '', body: JSON.stringify(body), - eventSource: 'aws:SQS', + eventSource: 'aws:sqs', eventSourceARN: '', md5OfBody: '', messageAttributes: {}, - messageId: '', + messageId: 'abcd1234', receiptHandle: '', }; @@ -70,100 +70,190 @@ vi.mock('@aws-github-runner/aws-powertools-util'); vi.mock('@aws-github-runner/aws-ssm-util'); describe('Test scale up lambda wrapper.', () => { - it('Do not handle multiple record sets.', async () => { - await testInvalidRecords([sqsRecord, sqsRecord]); + it('Do not handle empty record sets.', async () => { + const sqsEventMultipleRecords: SQSEvent = { + Records: [], + }; + + await expect(scaleUpHandler(sqsEventMultipleRecords, context)).resolves.not.toThrow(); }); - it('Do not handle empty record sets.', async () => { - await testInvalidRecords([]); + it('Ignores non-sqs event sources.', async () => { + const record = { + ...sqsRecord, + eventSource: 'aws:non-sqs', + }; + + const sqsEventMultipleRecordsNonSQS: SQSEvent = { + Records: [record], + }; + + await expect(scaleUpHandler(sqsEventMultipleRecordsNonSQS, context)).resolves.not.toThrow(); + expect(scaleUp).toHaveBeenCalledWith([]); }); it('Scale without error should resolve.', async () => { - const mock = vi.fn(scaleUp); - mock.mockImplementation(() => { - return new Promise((resolve) => { - resolve(); - }); - }); + vi.mocked(scaleUp).mockResolvedValue([]); await expect(scaleUpHandler(sqsEvent, context)).resolves.not.toThrow(); }); it('Non scale should resolve.', async () => { const error = new Error('Non scale should resolve.'); - const mock = vi.fn(scaleUp); - mock.mockRejectedValue(error); + vi.mocked(scaleUp).mockRejectedValue(error); await expect(scaleUpHandler(sqsEvent, context)).resolves.not.toThrow(); }); - it('Scale should be rejected', async () => { - const error = new ScaleError('Scale should be rejected'); - const mock = vi.fn() as MockedFunction; - mock.mockImplementation(() => { - return Promise.reject(error); + it('Scale should create a batch failure message', async () => { + const error = new ScaleError(); + vi.mocked(scaleUp).mockRejectedValue(error); + await expect(scaleUpHandler(sqsEvent, context)).resolves.toEqual({ + batchItemFailures: [{ itemIdentifier: sqsRecord.messageId }], }); - vi.mocked(scaleUp).mockImplementation(mock); - await expect(scaleUpHandler(sqsEvent, context)).rejects.toThrow(error); }); -}); -async function testInvalidRecords(sqsRecords: SQSRecord[]) { - const mock = vi.fn(scaleUp); - const logWarnSpy = vi.spyOn(logger, 'warn'); - mock.mockImplementation(() => { - return new Promise((resolve) => { - resolve(); + describe('Batch processing', () => { + 
beforeEach(() => { + vi.clearAllMocks(); }); - }); - const sqsEventMultipleRecords: SQSEvent = { - Records: sqsRecords, - }; - await expect(scaleUpHandler(sqsEventMultipleRecords, context)).resolves.not.toThrow(); + const createMultipleRecords = (count: number, eventSource = 'aws:sqs'): SQSRecord[] => { + return Array.from({ length: count }, (_, i) => ({ + ...sqsRecord, + eventSource, + messageId: `message-${i}`, + body: JSON.stringify({ + ...body, + id: i + 1, + }), + })); + }; - expect(logWarnSpy).toHaveBeenCalledWith( - expect.stringContaining( - 'Event ignored, only one record at the time can be handled, ensure the lambda batch size is set to 1.', - ), - ); -} + it('Should handle multiple SQS records in a single invocation', async () => { + const records = createMultipleRecords(3); + const multiRecordEvent: SQSEvent = { Records: records }; -describe('Test scale down lambda wrapper.', () => { - it('Scaling down no error.', async () => { - const mock = vi.fn(scaleDown); - mock.mockImplementation(() => { - return new Promise((resolve) => { - resolve(); + vi.mocked(scaleUp).mockResolvedValue([]); + + await expect(scaleUpHandler(multiRecordEvent, context)).resolves.not.toThrow(); + expect(scaleUp).toHaveBeenCalledWith( + expect.arrayContaining([ + expect.objectContaining({ messageId: 'message-0' }), + expect.objectContaining({ messageId: 'message-1' }), + expect.objectContaining({ messageId: 'message-2' }), + ]), + ); + }); + + it('Should return batch item failures for rejected messages', async () => { + const records = createMultipleRecords(3); + const multiRecordEvent: SQSEvent = { Records: records }; + + vi.mocked(scaleUp).mockResolvedValue(['message-1', 'message-2']); + + const result = await scaleUpHandler(multiRecordEvent, context); + expect(result).toEqual({ + batchItemFailures: [{ itemIdentifier: 'message-1' }, { itemIdentifier: 'message-2' }], + }); + }); + + it('Should filter out non-SQS event sources', async () => { + const sqsRecords = createMultipleRecords(2, 'aws:sqs'); + const nonSqsRecords = createMultipleRecords(1, 'aws:sns'); + const mixedEvent: SQSEvent = { + Records: [...sqsRecords, ...nonSqsRecords], + }; + + vi.mocked(scaleUp).mockResolvedValue([]); + + await scaleUpHandler(mixedEvent, context); + expect(scaleUp).toHaveBeenCalledWith( + expect.arrayContaining([ + expect.objectContaining({ messageId: 'message-0' }), + expect.objectContaining({ messageId: 'message-1' }), + ]), + ); + expect(scaleUp).not.toHaveBeenCalledWith( + expect.arrayContaining([expect.objectContaining({ messageId: 'message-2' })]), + ); + }); + + it('Should sort messages by retry count', async () => { + const records = [ + { + ...sqsRecord, + messageId: 'high-retry', + body: JSON.stringify({ ...body, retryCounter: 5 }), + }, + { + ...sqsRecord, + messageId: 'low-retry', + body: JSON.stringify({ ...body, retryCounter: 1 }), + }, + { + ...sqsRecord, + messageId: 'no-retry', + body: JSON.stringify({ ...body }), + }, + ]; + const multiRecordEvent: SQSEvent = { Records: records }; + + vi.mocked(scaleUp).mockImplementation((messages) => { + // Verify messages are sorted by retry count (ascending) + expect(messages[0].messageId).toBe('no-retry'); + expect(messages[1].messageId).toBe('low-retry'); + expect(messages[2].messageId).toBe('high-retry'); + return Promise.resolve([]); + }); + + await scaleUpHandler(multiRecordEvent, context); + }); + + it('Should return all failed messages when scaleUp throws non-ScaleError', async () => { + const records = createMultipleRecords(2); + const multiRecordEvent: 
SQSEvent = { Records: records }; + + vi.mocked(scaleUp).mockRejectedValue(new Error('Generic error')); + + const result = await scaleUpHandler(multiRecordEvent, context); + expect(result).toEqual({ batchItemFailures: [] }); + }); + + it('Should throw when scaleUp throws ScaleError', async () => { + const records = createMultipleRecords(2); + const multiRecordEvent: SQSEvent = { Records: records }; + + const error = new ScaleError(2); + vi.mocked(scaleUp).mockRejectedValue(error); + + await expect(scaleUpHandler(multiRecordEvent, context)).resolves.toEqual({ + batchItemFailures: [{ itemIdentifier: 'message-0' }, { itemIdentifier: 'message-1' }], }); }); + }); +}); + +describe('Test scale down lambda wrapper.', () => { + it('Scaling down no error.', async () => { + vi.mocked(scaleDown).mockResolvedValue(); await expect(scaleDownHandler({}, context)).resolves.not.toThrow(); }); it('Scaling down with error.', async () => { const error = new Error('Scaling down with error.'); - const mock = vi.fn(scaleDown); - mock.mockRejectedValue(error); + vi.mocked(scaleDown).mockRejectedValue(error); await expect(scaleDownHandler({}, context)).resolves.not.toThrow(); }); }); describe('Adjust pool.', () => { it('Receive message to adjust pool.', async () => { - const mock = vi.fn(adjust); - mock.mockImplementation(() => { - return new Promise((resolve) => { - resolve(); - }); - }); + vi.mocked(adjust).mockResolvedValue(); await expect(adjustPool({ poolSize: 2 }, context)).resolves.not.toThrow(); }); it('Handle error for adjusting pool.', async () => { const error = new Error('Handle error for adjusting pool.'); - const mock = vi.fn() as MockedFunction; - mock.mockImplementation(() => { - return Promise.reject(error); - }); - vi.mocked(adjust).mockImplementation(mock); + vi.mocked(adjust).mockRejectedValue(error); const logSpy = vi.spyOn(logger, 'error'); await adjustPool({ poolSize: 0 }, context); expect(logSpy).toHaveBeenCalledWith(`Handle error for adjusting pool. 
${error.message}`, { error }); @@ -180,12 +270,7 @@ describe('Test middleware', () => { describe('Test ssm housekeeper lambda wrapper.', () => { it('Invoke without errors.', async () => { - const mock = vi.fn(cleanSSMTokens); - mock.mockImplementation(() => { - return new Promise((resolve) => { - resolve(); - }); - }); + vi.mocked(cleanSSMTokens).mockResolvedValue(); process.env.SSM_CLEANUP_CONFIG = JSON.stringify({ dryRun: false, @@ -197,29 +282,20 @@ describe('Test ssm housekeeper lambda wrapper.', () => { }); it('Errors not throws.', async () => { - const mock = vi.fn(cleanSSMTokens); - mock.mockRejectedValue(new Error()); + vi.mocked(cleanSSMTokens).mockRejectedValue(new Error()); await expect(ssmHousekeeper({}, context)).resolves.not.toThrow(); }); }); describe('Test job retry check wrapper', () => { it('Handle without error should resolve.', async () => { - const mock = vi.fn() as MockedFunction; - mock.mockImplementation(() => { - return Promise.resolve(); - }); - vi.mocked(checkAndRetryJob).mockImplementation(mock); + vi.mocked(checkAndRetryJob).mockResolvedValue(); await expect(jobRetryCheck(sqsEvent, context)).resolves.not.toThrow(); }); it('Handle with error should resolve and log only a warning.', async () => { const error = new Error('Error handling retry check.'); - const mock = vi.fn() as MockedFunction; - mock.mockImplementation(() => { - return Promise.reject(error); - }); - vi.mocked(checkAndRetryJob).mockImplementation(mock); + vi.mocked(checkAndRetryJob).mockRejectedValue(error); const logSpyWarn = vi.spyOn(logger, 'warn'); await expect(jobRetryCheck(sqsEvent, context)).resolves.not.toThrow(); diff --git a/lambdas/functions/control-plane/src/lambda.ts b/lambdas/functions/control-plane/src/lambda.ts index 3e3ab90557..e2a0451c95 100644 --- a/lambdas/functions/control-plane/src/lambda.ts +++ b/lambdas/functions/control-plane/src/lambda.ts @@ -1,34 +1,66 @@ import middy from '@middy/core'; import { logger, setContext } from '@aws-github-runner/aws-powertools-util'; import { captureLambdaHandler, tracer } from '@aws-github-runner/aws-powertools-util'; -import { Context, SQSEvent } from 'aws-lambda'; +import { Context, type SQSBatchItemFailure, type SQSBatchResponse, SQSEvent } from 'aws-lambda'; import { PoolEvent, adjust } from './pool/pool'; import ScaleError from './scale-runners/ScaleError'; import { scaleDown } from './scale-runners/scale-down'; -import { scaleUp } from './scale-runners/scale-up'; +import { type ActionRequestMessage, type ActionRequestMessageSQS, scaleUp } from './scale-runners/scale-up'; import { SSMCleanupOptions, cleanSSMTokens } from './scale-runners/ssm-housekeeper'; import { checkAndRetryJob } from './scale-runners/job-retry'; -export async function scaleUpHandler(event: SQSEvent, context: Context): Promise { +export async function scaleUpHandler(event: SQSEvent, context: Context): Promise { setContext(context, 'lambda.ts'); logger.logEventIfEnabled(event); - if (event.Records.length !== 1) { - logger.warn('Event ignored, only one record at the time can be handled, ensure the lambda batch size is set to 1.'); - return Promise.resolve(); + const sqsMessages: ActionRequestMessageSQS[] = []; + const warnedEventSources = new Set(); + + for (const { body, eventSource, messageId } of event.Records) { + if (eventSource !== 'aws:sqs') { + if (!warnedEventSources.has(eventSource)) { + logger.warn('Ignoring non-sqs event source', { eventSource }); + warnedEventSources.add(eventSource); + } + + continue; + } + + const payload = JSON.parse(body) as 
ActionRequestMessage; + sqsMessages.push({ ...payload, messageId }); } + // Sort messages by their retry count, so that we retry the same messages if + // there's a persistent failure. This should cause messages to be dropped + // quicker than if we retried in an arbitrary order. + sqsMessages.sort((l, r) => { + return (l.retryCounter ?? 0) - (r.retryCounter ?? 0); + }); + + const batchItemFailures: SQSBatchItemFailure[] = []; + try { - await scaleUp(event.Records[0].eventSource, JSON.parse(event.Records[0].body)); - return Promise.resolve(); + const rejectedMessageIds = await scaleUp(sqsMessages); + + for (const messageId of rejectedMessageIds) { + batchItemFailures.push({ + itemIdentifier: messageId, + }); + } + + return { batchItemFailures }; } catch (e) { if (e instanceof ScaleError) { - return Promise.reject(e); + batchItemFailures.push(...e.toBatchItemFailures(sqsMessages)); + logger.warn(`${e.detailedMessage} A retry will be attempted via SQS.`, { error: e }); } else { - logger.warn(`Ignoring error: ${e}`); - return Promise.resolve(); + logger.error(`Error processing batch (size: ${sqsMessages.length}): ${(e as Error).message}, ignoring batch`, { + error: e, + }); } + + return { batchItemFailures }; } } diff --git a/lambdas/functions/control-plane/src/local.ts b/lambdas/functions/control-plane/src/local.ts index 2166da58fd..0b06335c8a 100644 --- a/lambdas/functions/control-plane/src/local.ts +++ b/lambdas/functions/control-plane/src/local.ts @@ -1,21 +1,21 @@ import { logger } from '@aws-github-runner/aws-powertools-util'; -import { ActionRequestMessage, scaleUp } from './scale-runners/scale-up'; +import { scaleUpHandler } from './lambda'; +import { Context, SQSEvent } from 'aws-lambda'; -const sqsEvent = { +const sqsEvent: SQSEvent = { Records: [ { messageId: 'e8d74d08-644e-42ca-bf82-a67daa6c4dad', receiptHandle: - // eslint-disable-next-line max-len 'AQEBCpLYzDEKq4aKSJyFQCkJduSKZef8SJVOperbYyNhXqqnpFG5k74WygVAJ4O0+9nybRyeOFThvITOaS21/jeHiI5fgaM9YKuI0oGYeWCIzPQsluW5CMDmtvqv1aA8sXQ5n2x0L9MJkzgdIHTC3YWBFLQ2AxSveOyIHwW+cHLIFCAcZlOaaf0YtaLfGHGkAC4IfycmaijV8NSlzYgDuxrC9sIsWJ0bSvk5iT4ru/R4+0cjm7qZtGlc04k9xk5Fu6A+wRxMaIyiFRY+Ya19ykcevQldidmEjEWvN6CRToLgclk=', - body: { + body: JSON.stringify({ repositoryName: 'self-hosted', repositoryOwner: 'test-runners', eventType: 'workflow_job', id: 987654, installationId: 123456789, - }, + }), attributes: { ApproximateReceiveCount: '1', SentTimestamp: '1626450047230', @@ -34,12 +34,34 @@ const sqsEvent = { ], }; +const context: Context = { + awsRequestId: '1', + callbackWaitsForEmptyEventLoop: false, + functionName: '', + functionVersion: '', + getRemainingTimeInMillis: () => 0, + invokedFunctionArn: '', + logGroupName: '', + logStreamName: '', + memoryLimitInMB: '', + done: () => { + return; + }, + fail: () => { + return; + }, + succeed: () => { + return; + }, +}; + export function run(): void { - scaleUp(sqsEvent.Records[0].eventSource, sqsEvent.Records[0].body as ActionRequestMessage) - .then() - .catch((e) => { - logger.error(e); - }); + try { + scaleUpHandler(sqsEvent, context); + } catch (e: unknown) { + const message = e instanceof Error ? e.message : `${e}`; + logger.error(message, e instanceof Error ? 
{ error: e } : {}); + } } run(); diff --git a/lambdas/functions/control-plane/src/pool/pool.test.ts b/lambdas/functions/control-plane/src/pool/pool.test.ts index 6dd389873b..c05a8b8cb7 100644 --- a/lambdas/functions/control-plane/src/pool/pool.test.ts +++ b/lambdas/functions/control-plane/src/pool/pool.test.ts @@ -190,11 +190,7 @@ describe('Test simple pool.', () => { it('Top up pool with pool size 2 registered.', async () => { await adjust({ poolSize: 3 }); expect(createRunners).toHaveBeenCalledTimes(1); - expect(createRunners).toHaveBeenCalledWith( - expect.anything(), - expect.objectContaining({ numberOfRunners: 1 }), - expect.anything(), - ); + expect(createRunners).toHaveBeenCalledWith(expect.anything(), expect.anything(), 1, expect.anything()); }); it('Should not top up if pool size is reached.', async () => { @@ -270,11 +266,7 @@ describe('Test simple pool.', () => { it('Top up if the pool size is set to 5', async () => { await adjust({ poolSize: 5 }); // 2 idle, top up with 3 to match a pool of 5 - expect(createRunners).toHaveBeenCalledWith( - expect.anything(), - expect.objectContaining({ numberOfRunners: 3 }), - expect.anything(), - ); + expect(createRunners).toHaveBeenCalledWith(expect.anything(), expect.anything(), 3, expect.anything()); }); }); @@ -289,11 +281,7 @@ describe('Test simple pool.', () => { it('Top up if the pool size is set to 5', async () => { await adjust({ poolSize: 5 }); // 2 idle, top up with 3 to match a pool of 5 - expect(createRunners).toHaveBeenCalledWith( - expect.anything(), - expect.objectContaining({ numberOfRunners: 3 }), - expect.anything(), - ); + expect(createRunners).toHaveBeenCalledWith(expect.anything(), expect.anything(), 3, expect.anything()); }); }); @@ -343,11 +331,7 @@ describe('Test simple pool.', () => { await adjust({ poolSize: 5 }); // 2 idle, 2 prefixed idle top up with 1 to match a pool of 5 - expect(createRunners).toHaveBeenCalledWith( - expect.anything(), - expect.objectContaining({ numberOfRunners: 1 }), - expect.anything(), - ); + expect(createRunners).toHaveBeenCalledWith(expect.anything(), expect.anything(), 1, expect.anything()); }); }); }); diff --git a/lambdas/functions/control-plane/src/pool/pool.ts b/lambdas/functions/control-plane/src/pool/pool.ts index 07477572ce..aa690e97f6 100644 --- a/lambdas/functions/control-plane/src/pool/pool.ts +++ b/lambdas/functions/control-plane/src/pool/pool.ts @@ -92,11 +92,11 @@ export async function adjust(event: PoolEvent): Promise { environment, launchTemplateName, subnets, - numberOfRunners: topUp, amiIdSsmParameterName, tracingEnabled, onDemandFailoverOnError, }, + topUp, githubInstallationClient, ); } else { diff --git a/lambdas/functions/control-plane/src/scale-runners/ScaleError.test.ts b/lambdas/functions/control-plane/src/scale-runners/ScaleError.test.ts new file mode 100644 index 0000000000..0a7478c12f --- /dev/null +++ b/lambdas/functions/control-plane/src/scale-runners/ScaleError.test.ts @@ -0,0 +1,76 @@ +import { describe, expect, it } from 'vitest'; +import type { ActionRequestMessageSQS } from './scale-up'; +import ScaleError from './ScaleError'; + +describe('ScaleError', () => { + describe('detailedMessage', () => { + it('should format message for single instance failure', () => { + const error = new ScaleError(1); + + expect(error.detailedMessage).toBe( + 'Failed to create instance, create fleet failed. 
(Failed to create 1 instance)', + ); + }); + + it('should format message for multiple instance failures', () => { + const error = new ScaleError(3); + + expect(error.detailedMessage).toBe( + 'Failed to create instance, create fleet failed. (Failed to create 3 instances)', + ); + }); + }); + + describe('toBatchItemFailures', () => { + const mockMessages: ActionRequestMessageSQS[] = [ + { messageId: 'msg-1', id: 1, eventType: 'workflow_job' }, + { messageId: 'msg-2', id: 2, eventType: 'workflow_job' }, + { messageId: 'msg-3', id: 3, eventType: 'workflow_job' }, + { messageId: 'msg-4', id: 4, eventType: 'workflow_job' }, + ]; + + it.each([ + { failedCount: 1, expected: [{ itemIdentifier: 'msg-1' }], description: 'default instance count' }, + { + failedCount: 2, + expected: [{ itemIdentifier: 'msg-1' }, { itemIdentifier: 'msg-2' }], + description: 'less than message count', + }, + { + failedCount: 4, + expected: [ + { itemIdentifier: 'msg-1' }, + { itemIdentifier: 'msg-2' }, + { itemIdentifier: 'msg-3' }, + { itemIdentifier: 'msg-4' }, + ], + description: 'equal to message count', + }, + { + failedCount: 10, + expected: [ + { itemIdentifier: 'msg-1' }, + { itemIdentifier: 'msg-2' }, + { itemIdentifier: 'msg-3' }, + { itemIdentifier: 'msg-4' }, + ], + description: 'more than message count', + }, + { failedCount: 0, expected: [], description: 'zero failed instances' }, + { failedCount: -1, expected: [], description: 'negative failed instances' }, + { failedCount: -10, expected: [], description: 'large negative failed instances' }, + ])('should handle $description (failedCount=$failedCount)', ({ failedCount, expected }) => { + const error = new ScaleError(failedCount); + const failures = error.toBatchItemFailures(mockMessages); + + expect(failures).toEqual(expected); + }); + + it('should handle empty message array', () => { + const error = new ScaleError(3); + const failures = error.toBatchItemFailures([]); + + expect(failures).toEqual([]); + }); + }); +}); diff --git a/lambdas/functions/control-plane/src/scale-runners/ScaleError.ts b/lambdas/functions/control-plane/src/scale-runners/ScaleError.ts index d7e71f8c33..9c1f474d17 100644 --- a/lambdas/functions/control-plane/src/scale-runners/ScaleError.ts +++ b/lambdas/functions/control-plane/src/scale-runners/ScaleError.ts @@ -1,8 +1,28 @@ +import type { SQSBatchItemFailure } from 'aws-lambda'; +import type { ActionRequestMessageSQS } from './scale-up'; + class ScaleError extends Error { - constructor(public message: string) { - super(message); + constructor(public readonly failedInstanceCount: number = 1) { + super('Failed to create instance, create fleet failed.'); this.name = 'ScaleError'; - this.stack = new Error().stack; + } + + /** + * Gets a formatted error message including the failed instance count + */ + public get detailedMessage(): string { + return `${this.message} (Failed to create ${this.failedInstanceCount} instance${this.failedInstanceCount !== 1 ? 
's' : ''})`; + } + + /** + * Generate SQS batch item failures for the failed instances + */ + public toBatchItemFailures(messages: ActionRequestMessageSQS[]): SQSBatchItemFailure[] { + // Ensure we don't retry negative counts or more messages than available + const messagesToRetry = Math.max(0, Math.min(this.failedInstanceCount, messages.length)); + return messages.slice(0, messagesToRetry).map(({ messageId }) => ({ + itemIdentifier: messageId, + })); } } diff --git a/lambdas/functions/control-plane/src/scale-runners/job-retry.test.ts b/lambdas/functions/control-plane/src/scale-runners/job-retry.test.ts index c401ab4c2d..f807d06d8a 100644 --- a/lambdas/functions/control-plane/src/scale-runners/job-retry.test.ts +++ b/lambdas/functions/control-plane/src/scale-runners/job-retry.test.ts @@ -2,9 +2,11 @@ import { publishMessage } from '../aws/sqs'; import { publishRetryMessage, checkAndRetryJob } from './job-retry'; import { ActionRequestMessage, ActionRequestMessageRetry } from './scale-up'; import { getOctokit } from '../github/octokit'; +import { jobRetryCheck } from '../lambda'; import { Octokit } from '@octokit/rest'; import { createSingleMetric } from '@aws-github-runner/aws-powertools-util'; import { describe, it, expect, beforeEach, vi } from 'vitest'; +import type { SQSRecord } from 'aws-lambda'; vi.mock('../aws/sqs', async () => ({ publishMessage: vi.fn(), @@ -269,3 +271,93 @@ describe(`Test job retry check`, () => { expect(publishMessage).not.toHaveBeenCalled(); }); }); + +describe('Test job retry handler (batch processing)', () => { + const context = { + requestId: 'request-id', + functionName: 'function-name', + functionVersion: 'function-version', + invokedFunctionArn: 'invoked-function-arn', + memoryLimitInMB: '128', + awsRequestId: 'aws-request-id', + logGroupName: 'log-group-name', + logStreamName: 'log-stream-name', + remainingTimeInMillis: () => 30000, + done: () => {}, + fail: () => {}, + succeed: () => {}, + getRemainingTimeInMillis: () => 30000, + callbackWaitsForEmptyEventLoop: false, + }; + + function createSQSRecord(messageId: string): SQSRecord { + return { + messageId, + receiptHandle: 'receipt-handle', + body: JSON.stringify({ + eventType: 'workflow_job', + id: 123, + installationId: 456, + repositoryName: 'test-repo', + repositoryOwner: 'test-owner', + repoOwnerType: 'Organization', + retryCounter: 0, + }), + attributes: { + ApproximateReceiveCount: '1', + SentTimestamp: '1234567890', + SenderId: 'sender-id', + ApproximateFirstReceiveTimestamp: '1234567891', + }, + messageAttributes: {}, + md5OfBody: 'md5', + eventSource: 'aws:sqs', + eventSourceARN: 'arn:aws:sqs:region:account:queue', + awsRegion: 'us-east-1', + }; + } + + beforeEach(() => { + vi.clearAllMocks(); + process.env.ENABLE_ORGANIZATION_RUNNERS = 'true'; + process.env.JOB_QUEUE_SCALE_UP_URL = 'https://sqs.example.com/queue'; + }); + + it('should handle multiple records in a single batch', async () => { + mockOctokit.actions.getJobForWorkflowRun.mockImplementation(() => ({ + data: { + status: 'queued', + }, + headers: {}, + })); + + const event = { + Records: [createSQSRecord('msg-1'), createSQSRecord('msg-2'), createSQSRecord('msg-3')], + }; + + await expect(jobRetryCheck(event, context)).resolves.not.toThrow(); + expect(publishMessage).toHaveBeenCalledTimes(3); + }); + + it('should continue processing other records when one fails', async () => { + mockCreateOctokitClient + .mockResolvedValueOnce(new Octokit()) // First record succeeds + .mockRejectedValueOnce(new Error('API error')) // Second record 
fails + .mockResolvedValueOnce(new Octokit()); // Third record succeeds + + mockOctokit.actions.getJobForWorkflowRun.mockImplementation(() => ({ + data: { + status: 'queued', + }, + headers: {}, + })); + + const event = { + Records: [createSQSRecord('msg-1'), createSQSRecord('msg-2'), createSQSRecord('msg-3')], + }; + + await expect(jobRetryCheck(event, context)).resolves.not.toThrow(); + // There were two successful calls to publishMessage + expect(publishMessage).toHaveBeenCalledTimes(2); + }); +}); diff --git a/lambdas/functions/control-plane/src/scale-runners/scale-up.test.ts b/lambdas/functions/control-plane/src/scale-runners/scale-up.test.ts index 477ef147fb..b876d31d50 100644 --- a/lambdas/functions/control-plane/src/scale-runners/scale-up.test.ts +++ b/lambdas/functions/control-plane/src/scale-runners/scale-up.test.ts @@ -1,5 +1,4 @@ import { PutParameterCommand, SSMClient } from '@aws-sdk/client-ssm'; -import { Octokit } from '@octokit/rest'; import { mockClient } from 'aws-sdk-client-mock'; import 'aws-sdk-client-mock-jest/vitest'; // Using vi.mocked instead of jest-mock @@ -9,10 +8,10 @@ import { performance } from 'perf_hooks'; import * as ghAuth from '../github/auth'; import { createRunner, listEC2Runners } from './../aws/runners'; import { RunnerInputParameters } from './../aws/runners.d'; -import ScaleError from './ScaleError'; import * as scaleUpModule from './scale-up'; import { getParameter } from '@aws-github-runner/aws-ssm-util'; import { describe, it, expect, beforeEach, vi } from 'vitest'; +import type { Octokit } from '@octokit/rest'; const mockOctokit = { paginate: vi.fn(), @@ -29,6 +28,7 @@ const mockOctokit = { getRepoInstallation: vi.fn(), }, }; + const mockCreateRunner = vi.mocked(createRunner); const mockListRunners = vi.mocked(listEC2Runners); const mockSSMClient = mockClient(SSMClient); @@ -68,26 +68,33 @@ export type RunnerType = 'ephemeral' | 'non-ephemeral'; // for ephemeral and non-ephemeral runners const RUNNER_TYPES: RunnerType[] = ['ephemeral', 'non-ephemeral']; -const mocktokit = Octokit as vi.MockedClass; const mockedAppAuth = vi.mocked(ghAuth.createGithubAppAuth); const mockedInstallationAuth = vi.mocked(ghAuth.createGithubInstallationAuth); const mockCreateClient = vi.mocked(ghAuth.createOctokitClient); -const TEST_DATA: scaleUpModule.ActionRequestMessage = { +const TEST_DATA_SINGLE: scaleUpModule.ActionRequestMessageSQS = { id: 1, eventType: 'workflow_job', repositoryName: 'hello-world', repositoryOwner: 'Codertocat', installationId: 2, repoOwnerType: 'Organization', + messageId: 'foobar', }; +const TEST_DATA: scaleUpModule.ActionRequestMessageSQS[] = [ + { + ...TEST_DATA_SINGLE, + messageId: 'foobar', + }, +]; + const cleanEnv = process.env; const EXPECTED_RUNNER_PARAMS: RunnerInputParameters = { environment: 'unit-test-environment', runnerType: 'Org', - runnerOwner: TEST_DATA.repositoryOwner, + runnerOwner: TEST_DATA_SINGLE.repositoryOwner, numberOfRunners: 1, launchTemplateName: 'lt-1', ec2instanceCriteria: { @@ -134,14 +141,14 @@ beforeEach(() => { instanceId: 'i-1234', launchTime: new Date(), type: 'Org', - owner: TEST_DATA.repositoryOwner, + owner: TEST_DATA_SINGLE.repositoryOwner, }, ]); mockedAppAuth.mockResolvedValue({ type: 'app', token: 'token', - appId: TEST_DATA.installationId, + appId: TEST_DATA_SINGLE.installationId, expiresAt: 'some-date', }); mockedInstallationAuth.mockResolvedValue({ @@ -155,7 +162,7 @@ beforeEach(() => { installationId: 0, }); - mockCreateClient.mockResolvedValue(new mocktokit()); + 
mockCreateClient.mockResolvedValue(mockOctokit as unknown as Octokit); }); describe('scaleUp with GHES', () => { @@ -163,17 +170,12 @@ describe('scaleUp with GHES', () => { process.env.GHES_URL = 'https://github.enterprise.something'; }); - it('ignores non-sqs events', async () => { - expect.assertions(1); - await expect(scaleUpModule.scaleUp('aws:s3', TEST_DATA)).rejects.toEqual(Error('Cannot handle non-SQS events!')); - }); - it('checks queued workflows', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.getJobForWorkflowRun).toBeCalledWith({ - job_id: TEST_DATA.id, - owner: TEST_DATA.repositoryOwner, - repo: TEST_DATA.repositoryName, + job_id: TEST_DATA_SINGLE.id, + owner: TEST_DATA_SINGLE.repositoryOwner, + repo: TEST_DATA_SINGLE.repositoryName, }); }); @@ -181,7 +183,7 @@ describe('scaleUp with GHES', () => { mockOctokit.actions.getJobForWorkflowRun.mockImplementation(() => ({ data: { total_count: 0 }, })); - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(listEC2Runners).not.toBeCalled(); }); @@ -200,18 +202,18 @@ describe('scaleUp with GHES', () => { }); it('gets the current org level runners', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(listEC2Runners).toBeCalledWith({ environment: 'unit-test-environment', runnerType: 'Org', - runnerOwner: TEST_DATA.repositoryOwner, + runnerOwner: TEST_DATA_SINGLE.repositoryOwner, }); }); it('does not create a token when maximum runners has been reached', async () => { process.env.RUNNERS_MAXIMUM_COUNT = '1'; process.env.ENABLE_EPHEMERAL_RUNNERS = 'false'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.createRegistrationTokenForOrg).not.toBeCalled(); expect(mockOctokit.actions.createRegistrationTokenForRepo).not.toBeCalled(); }); @@ -219,35 +221,35 @@ describe('scaleUp with GHES', () => { it('does create a runner if maximum is set to -1', async () => { process.env.RUNNERS_MAXIMUM_COUNT = '-1'; process.env.ENABLE_EPHEMERAL_RUNNERS = 'false'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(listEC2Runners).not.toHaveBeenCalled(); expect(createRunner).toHaveBeenCalled(); }); it('creates a token when maximum runners has not been reached', async () => { process.env.ENABLE_EPHEMERAL_RUNNERS = 'false'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.createRegistrationTokenForOrg).toBeCalledWith({ - org: TEST_DATA.repositoryOwner, + org: TEST_DATA_SINGLE.repositoryOwner, }); expect(mockOctokit.actions.createRegistrationTokenForRepo).not.toBeCalled(); }); it('creates a runner with correct config', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); it('creates a runner with labels in a specific group', async () => { process.env.RUNNER_LABELS = 'label1,label2'; process.env.RUNNER_GROUP_NAME = 'TEST_GROUP'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); it('creates a runner with ami id override from ssm parameter', async () => { process.env.AMI_ID_SSM_PARAMETER_NAME = 'my-ami-id-param'; - await scaleUpModule.scaleUp('aws:sqs', 
TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith({ ...expectedRunnerParams, amiIdSsmParameterName: 'my-ami-id-param' }); }); @@ -256,15 +258,15 @@ describe('scaleUp with GHES', () => { mockSSMgetParameter.mockImplementation(async () => { throw new Error('ParameterNotFound'); }); - await expect(scaleUpModule.scaleUp('aws:sqs', TEST_DATA)).rejects.toBeInstanceOf(Error); + await expect(scaleUpModule.scaleUp(TEST_DATA)).rejects.toBeInstanceOf(Error); expect(mockOctokit.paginate).toHaveBeenCalledTimes(1); }); it('Discards event if it is a User repo and org level runners is enabled', async () => { process.env.ENABLE_ORGANIZATION_RUNNERS = 'true'; - const USER_REPO_TEST_DATA = { ...TEST_DATA }; - USER_REPO_TEST_DATA.repoOwnerType = 'User'; - await scaleUpModule.scaleUp('aws:sqs', USER_REPO_TEST_DATA); + const USER_REPO_TEST_DATA = structuredClone(TEST_DATA); + USER_REPO_TEST_DATA[0].repoOwnerType = 'User'; + await scaleUpModule.scaleUp(USER_REPO_TEST_DATA); expect(createRunner).not.toHaveBeenCalled(); }); @@ -272,7 +274,7 @@ describe('scaleUp with GHES', () => { mockSSMgetParameter.mockImplementation(async () => { throw new Error('ParameterNotFound'); }); - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.paginate).toHaveBeenCalledTimes(1); expect(mockSSMClient).toHaveReceivedCommandTimes(PutParameterCommand, 2); expect(mockSSMClient).toHaveReceivedNthSpecificCommandWith(1, PutParameterCommand, { @@ -283,7 +285,7 @@ describe('scaleUp with GHES', () => { }); it('Does not create SSM parameter for runner group id if it exists', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.paginate).toHaveBeenCalledTimes(0); expect(mockSSMClient).toHaveReceivedCommandTimes(PutParameterCommand, 1); }); @@ -291,9 +293,9 @@ describe('scaleUp with GHES', () => { it('create start runner config for ephemeral runners ', async () => { process.env.RUNNERS_MAXIMUM_COUNT = '2'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.generateRunnerJitconfigForOrg).toBeCalledWith({ - org: TEST_DATA.repositoryOwner, + org: TEST_DATA_SINGLE.repositoryOwner, name: 'unit-test-i-12345', runner_group_id: 1, labels: ['label1', 'label2'], @@ -314,7 +316,7 @@ describe('scaleUp with GHES', () => { it('create start runner config for non-ephemeral runners ', async () => { process.env.ENABLE_EPHEMERAL_RUNNERS = 'false'; process.env.RUNNERS_MAXIMUM_COUNT = '2'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.generateRunnerJitconfigForOrg).not.toBeCalled(); expect(mockOctokit.actions.createRegistrationTokenForOrg).toBeCalled(); expect(mockSSMClient).toHaveReceivedNthSpecificCommandWith(1, PutParameterCommand, { @@ -385,7 +387,7 @@ describe('scaleUp with GHES', () => { 'i-150', 'i-151', ]; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); const endTime = performance.now(); expect(endTime - startTime).toBeGreaterThan(1000); expect(mockSSMClient).toHaveReceivedCommandTimes(PutParameterCommand, 40); @@ -399,87 +401,307 @@ describe('scaleUp with GHES', () => { process.env.RUNNER_NAME_PREFIX = 'unit-test'; expectedRunnerParams = { ...EXPECTED_RUNNER_PARAMS }; expectedRunnerParams.runnerType = 'Repo'; - expectedRunnerParams.runnerOwner = 
`${TEST_DATA.repositoryOwner}/${TEST_DATA.repositoryName}`; - // `--url https://github.enterprise.something/${TEST_DATA.repositoryOwner}/${TEST_DATA.repositoryName}`, + expectedRunnerParams.runnerOwner = `${TEST_DATA_SINGLE.repositoryOwner}/${TEST_DATA_SINGLE.repositoryName}`; + // `--url https://github.enterprise.something/${TEST_DATA_SINGLE.repositoryOwner}/${TEST_DATA_SINGLE.repositoryName}`, // `--token 1234abcd`, // ]; }); it('gets the current repo level runners', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(listEC2Runners).toBeCalledWith({ environment: 'unit-test-environment', runnerType: 'Repo', - runnerOwner: `${TEST_DATA.repositoryOwner}/${TEST_DATA.repositoryName}`, + runnerOwner: `${TEST_DATA_SINGLE.repositoryOwner}/${TEST_DATA_SINGLE.repositoryName}`, }); }); it('does not create a token when maximum runners has been reached', async () => { process.env.RUNNERS_MAXIMUM_COUNT = '1'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.createRegistrationTokenForOrg).not.toBeCalled(); expect(mockOctokit.actions.createRegistrationTokenForRepo).not.toBeCalled(); }); it('creates a token when maximum runners has not been reached', async () => { process.env.ENABLE_EPHEMERAL_RUNNERS = 'false'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.createRegistrationTokenForOrg).not.toBeCalled(); expect(mockOctokit.actions.createRegistrationTokenForRepo).toBeCalledWith({ - owner: TEST_DATA.repositoryOwner, - repo: TEST_DATA.repositoryName, + owner: TEST_DATA_SINGLE.repositoryOwner, + repo: TEST_DATA_SINGLE.repositoryName, }); }); it('uses the default runner max count', async () => { process.env.RUNNERS_MAXIMUM_COUNT = undefined; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.createRegistrationTokenForRepo).toBeCalledWith({ - owner: TEST_DATA.repositoryOwner, - repo: TEST_DATA.repositoryName, + owner: TEST_DATA_SINGLE.repositoryOwner, + repo: TEST_DATA_SINGLE.repositoryName, }); }); it('creates a runner with correct config and labels', async () => { process.env.RUNNER_LABELS = 'label1,label2'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); it('creates a runner and ensure the group argument is ignored', async () => { process.env.RUNNER_LABELS = 'label1,label2'; process.env.RUNNER_GROUP_NAME = 'TEST_GROUP_IGNORED'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); it('Check error is thrown', async () => { const mockCreateRunners = vi.mocked(createRunner); mockCreateRunners.mockRejectedValue(new Error('no retry')); - await expect(scaleUpModule.scaleUp('aws:sqs', TEST_DATA)).rejects.toThrow('no retry'); + await expect(scaleUpModule.scaleUp(TEST_DATA)).rejects.toThrow('no retry'); mockCreateRunners.mockReset(); }); }); -}); -describe('scaleUp with public GH', () => { - it('ignores non-sqs events', async () => { - expect.assertions(1); - await expect(scaleUpModule.scaleUp('aws:s3', TEST_DATA)).rejects.toEqual(Error('Cannot handle non-SQS events!')); + describe('Batch processing', () => { + beforeEach(() => { + process.env.ENABLE_ORGANIZATION_RUNNERS = 'true'; + process.env.ENABLE_EPHEMERAL_RUNNERS = 
'true'; + process.env.RUNNERS_MAXIMUM_COUNT = '10'; + }); + + const createTestMessages = ( + count: number, + overrides: Partial<scaleUpModule.ActionRequestMessageSQS>[] = [], + ): scaleUpModule.ActionRequestMessageSQS[] => { + return Array.from({ length: count }, (_, i) => ({ + ...TEST_DATA_SINGLE, + id: i + 1, + messageId: `message-${i}`, + ...overrides[i], + })); + }; + + it('Should handle multiple messages for the same organization', async () => { + const messages = createTestMessages(3); + await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledTimes(1); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 3, + runnerOwner: TEST_DATA_SINGLE.repositoryOwner, + }), + ); + }); + + it('Should handle multiple messages for different organizations', async () => { + const messages = createTestMessages(3, [ + { repositoryOwner: 'org1' }, + { repositoryOwner: 'org2' }, + { repositoryOwner: 'org1' }, + ]); + + await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledTimes(2); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, + runnerOwner: 'org1', + }), + ); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 1, + runnerOwner: 'org2', + }), + ); + }); + + it('Should handle multiple messages for different repositories when org-level is disabled', async () => { + process.env.ENABLE_ORGANIZATION_RUNNERS = 'false'; + const messages = createTestMessages(3, [ + { repositoryOwner: 'owner1', repositoryName: 'repo1' }, + { repositoryOwner: 'owner1', repositoryName: 'repo2' }, + { repositoryOwner: 'owner1', repositoryName: 'repo1' }, + ]); + + await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledTimes(2); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, + runnerOwner: 'owner1/repo1', + }), + ); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 1, + runnerOwner: 'owner1/repo2', + }), + ); + }); + + it('Should reject messages when maximum runners limit is reached', async () => { + process.env.RUNNERS_MAXIMUM_COUNT = '1'; // Set to 1 so with 1 existing, no new ones can be created + mockListRunners.mockImplementation(async () => [ + { + instanceId: 'i-existing', + launchTime: new Date(), + type: 'Org', + owner: TEST_DATA_SINGLE.repositoryOwner, + }, + ]); + + const messages = createTestMessages(3); + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(createRunner).not.toHaveBeenCalled(); // No runners should be created + expect(rejectedMessages).toHaveLength(3); // All 3 messages should be rejected + }); + + it('Should handle partial EC2 instance creation failures', async () => { + mockCreateRunner.mockImplementation(async () => ['i-12345']); // Only creates 1 instead of requested 3 + + const messages = createTestMessages(3); + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(rejectedMessages).toHaveLength(2); // 3 requested - 1 created = 2 failed + expect(rejectedMessages).toEqual(['message-0', 'message-1']); + }); + + it('Should filter out invalid event types for ephemeral runners', async () => { + const messages = createTestMessages(3, [ + { eventType: 'workflow_job' }, + { eventType: 'check_run' }, + { eventType: 'workflow_job' }, + ]); + + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, // Only
workflow_job events processed + }), + ); + expect(rejectedMessages).toContain('message-1'); // check_run event rejected + }); + + it('Should skip invalid repo owner types but not reject them', async () => { + const messages = createTestMessages(3, [ + { repoOwnerType: 'Organization' }, + { repoOwnerType: 'User' }, // Invalid for org-level runners + { repoOwnerType: 'Organization' }, + ]); + + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, // Only Organization events processed + }), + ); + expect(rejectedMessages).not.toContain('message-1'); // User repo not rejected, just skipped + }); + + it('Should skip messages when jobs are not queued', async () => { + mockOctokit.actions.getJobForWorkflowRun.mockImplementation((params) => { + const isQueued = params.job_id === 1 || params.job_id === 3; // Only jobs 1 and 3 are queued + return { + data: { + status: isQueued ? 'queued' : 'completed', + }, + }; + }); + + const messages = createTestMessages(3); + await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, // Only queued jobs processed + }), + ); + }); + + it('Should create separate GitHub clients for different installations', async () => { + // Override the default mock to return different installation IDs + mockOctokit.apps.getOrgInstallation.mockReset(); + mockOctokit.apps.getOrgInstallation.mockImplementation((params) => ({ + data: { + id: params.org === 'org1' ? 100 : 200, + }, + })); + + const messages = createTestMessages(2, [ + { repositoryOwner: 'org1', installationId: 0 }, + { repositoryOwner: 'org2', installationId: 0 }, + ]); + + await scaleUpModule.scaleUp(messages); + + expect(mockCreateClient).toHaveBeenCalledTimes(3); // 1 app client, 2 repo installation clients + expect(mockedInstallationAuth).toHaveBeenCalledWith(100, 'https://github.enterprise.something/api/v3'); + expect(mockedInstallationAuth).toHaveBeenCalledWith(200, 'https://github.enterprise.something/api/v3'); + }); + + it('Should reuse GitHub clients for same installation', async () => { + const messages = createTestMessages(3, [ + { repositoryOwner: 'same-org' }, + { repositoryOwner: 'same-org' }, + { repositoryOwner: 'same-org' }, + ]); + + await scaleUpModule.scaleUp(messages); + + expect(mockCreateClient).toHaveBeenCalledTimes(2); // 1 app client, 1 installation client + expect(mockedInstallationAuth).toHaveBeenCalledTimes(1); + }); + + it('Should return empty array when no valid messages to process', async () => { + process.env.ENABLE_EPHEMERAL_RUNNERS = 'true'; + const messages = createTestMessages(2, [ + { eventType: 'check_run' }, // Invalid for ephemeral + { eventType: 'check_run' }, // Invalid for ephemeral + ]); + + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(createRunner).not.toHaveBeenCalled(); + expect(rejectedMessages).toEqual(['message-0', 'message-1']); + }); + + it('Should handle unlimited runners configuration', async () => { + process.env.RUNNERS_MAXIMUM_COUNT = '-1'; + const messages = createTestMessages(10); + + await scaleUpModule.scaleUp(messages); + + expect(listEC2Runners).not.toHaveBeenCalled(); // No need to check current runners + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 10, // All messages processed + }), + ); + }); }); +}); +describe('scaleUp with public GH', () => { it('checks queued workflows', async () => { - await 
scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.getJobForWorkflowRun).toBeCalledWith({ - job_id: TEST_DATA.id, - owner: TEST_DATA.repositoryOwner, - repo: TEST_DATA.repositoryName, + job_id: TEST_DATA_SINGLE.id, + owner: TEST_DATA_SINGLE.repositoryOwner, + repo: TEST_DATA_SINGLE.repositoryName, }); }); it('not checking queued workflows', async () => { process.env.ENABLE_JOB_QUEUED_CHECK = 'false'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.getJobForWorkflowRun).not.toBeCalled(); }); @@ -487,7 +709,7 @@ describe('scaleUp with public GH', () => { mockOctokit.actions.getJobForWorkflowRun.mockImplementation(() => ({ data: { status: 'completed' }, })); - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(listEC2Runners).not.toBeCalled(); }); @@ -499,38 +721,38 @@ describe('scaleUp with public GH', () => { }); it('gets the current org level runners', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(listEC2Runners).toBeCalledWith({ environment: 'unit-test-environment', runnerType: 'Org', - runnerOwner: TEST_DATA.repositoryOwner, + runnerOwner: TEST_DATA_SINGLE.repositoryOwner, }); }); it('does not create a token when maximum runners has been reached', async () => { process.env.RUNNERS_MAXIMUM_COUNT = '1'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.createRegistrationTokenForOrg).not.toBeCalled(); expect(mockOctokit.actions.createRegistrationTokenForRepo).not.toBeCalled(); }); it('creates a token when maximum runners has not been reached', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.createRegistrationTokenForOrg).toBeCalledWith({ - org: TEST_DATA.repositoryOwner, + org: TEST_DATA_SINGLE.repositoryOwner, }); expect(mockOctokit.actions.createRegistrationTokenForRepo).not.toBeCalled(); }); it('creates a runner with correct config', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); it('creates a runner with labels in s specific group', async () => { process.env.RUNNER_LABELS = 'label1,label2'; process.env.RUNNER_GROUP_NAME = 'TEST_GROUP'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); }); @@ -543,44 +765,44 @@ describe('scaleUp with public GH', () => { process.env.RUNNER_NAME_PREFIX = 'unit-test'; expectedRunnerParams = { ...EXPECTED_RUNNER_PARAMS }; expectedRunnerParams.runnerType = 'Repo'; - expectedRunnerParams.runnerOwner = `${TEST_DATA.repositoryOwner}/${TEST_DATA.repositoryName}`; + expectedRunnerParams.runnerOwner = `${TEST_DATA_SINGLE.repositoryOwner}/${TEST_DATA_SINGLE.repositoryName}`; }); it('gets the current repo level runners', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(listEC2Runners).toBeCalledWith({ environment: 'unit-test-environment', runnerType: 'Repo', - runnerOwner: `${TEST_DATA.repositoryOwner}/${TEST_DATA.repositoryName}`, + runnerOwner: `${TEST_DATA_SINGLE.repositoryOwner}/${TEST_DATA_SINGLE.repositoryName}`, }); }); it('does not create a token when 
maximum runners has been reached', async () => { process.env.RUNNERS_MAXIMUM_COUNT = '1'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.createRegistrationTokenForOrg).not.toBeCalled(); expect(mockOctokit.actions.createRegistrationTokenForRepo).not.toBeCalled(); }); it('creates a token when maximum runners has not been reached', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.createRegistrationTokenForOrg).not.toBeCalled(); expect(mockOctokit.actions.createRegistrationTokenForRepo).toBeCalledWith({ - owner: TEST_DATA.repositoryOwner, - repo: TEST_DATA.repositoryName, + owner: TEST_DATA_SINGLE.repositoryOwner, + repo: TEST_DATA_SINGLE.repositoryName, }); }); it('creates a runner with correct config and labels', async () => { process.env.RUNNER_LABELS = 'label1,label2'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); it('creates a runner with correct config and labels and on demand failover enabled.', async () => { process.env.RUNNER_LABELS = 'label1,label2'; process.env.ENABLE_ON_DEMAND_FAILOVER_FOR_ERRORS = JSON.stringify(['InsufficientInstanceCapacity']); - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith({ ...expectedRunnerParams, onDemandFailoverOnError: ['InsufficientInstanceCapacity'], @@ -590,26 +812,25 @@ describe('scaleUp with public GH', () => { it('creates a runner and ensure the group argument is ignored', async () => { process.env.RUNNER_LABELS = 'label1,label2'; process.env.RUNNER_GROUP_NAME = 'TEST_GROUP_IGNORED'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); it('ephemeral runners only run with workflow_job event, others should fail.', async () => { process.env.ENABLE_EPHEMERAL_RUNNERS = 'true'; process.env.ENABLE_JOB_QUEUED_CHECK = 'false'; - await expect( - scaleUpModule.scaleUp('aws:sqs', { - ...TEST_DATA, - eventType: 'check_run', - }), - ).rejects.toBeInstanceOf(Error); + + const USER_REPO_TEST_DATA = structuredClone(TEST_DATA); + USER_REPO_TEST_DATA[0].eventType = 'check_run'; + + await expect(scaleUpModule.scaleUp(USER_REPO_TEST_DATA)).resolves.toEqual(['foobar']); }); it('creates a ephemeral runner with JIT config.', async () => { process.env.ENABLE_EPHEMERAL_RUNNERS = 'true'; process.env.ENABLE_JOB_QUEUED_CHECK = 'false'; process.env.SSM_TOKEN_PATH = '/github-action-runners/default/runners/config'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.getJobForWorkflowRun).not.toBeCalled(); expect(createRunner).toBeCalledWith(expectedRunnerParams); @@ -631,7 +852,7 @@ describe('scaleUp with public GH', () => { process.env.ENABLE_JIT_CONFIG = 'false'; process.env.ENABLE_JOB_QUEUED_CHECK = 'false'; process.env.SSM_TOKEN_PATH = '/github-action-runners/default/runners/config'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.getJobForWorkflowRun).not.toBeCalled(); expect(createRunner).toBeCalledWith(expectedRunnerParams); @@ -654,7 +875,7 @@ describe('scaleUp with public GH', () => { process.env.ENABLE_JOB_QUEUED_CHECK = 'false'; process.env.RUNNER_LABELS = 'jit'; 
process.env.SSM_TOKEN_PATH = '/github-action-runners/default/runners/config'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.getJobForWorkflowRun).not.toBeCalled(); expect(createRunner).toBeCalledWith(expectedRunnerParams); @@ -674,21 +895,247 @@ describe('scaleUp with public GH', () => { it('creates a ephemeral runner after checking job is queued.', async () => { process.env.ENABLE_EPHEMERAL_RUNNERS = 'true'; process.env.ENABLE_JOB_QUEUED_CHECK = 'true'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.getJobForWorkflowRun).toBeCalled(); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); it('disable auto update on the runner.', async () => { process.env.DISABLE_RUNNER_AUTOUPDATE = 'true'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); - it('Scaling error should cause reject so retry can be triggered.', async () => { + it('Scaling error should return failed message IDs so retry can be triggered.', async () => { process.env.RUNNERS_MAXIMUM_COUNT = '1'; process.env.ENABLE_EPHEMERAL_RUNNERS = 'true'; - await expect(scaleUpModule.scaleUp('aws:sqs', TEST_DATA)).rejects.toBeInstanceOf(ScaleError); + await expect(scaleUpModule.scaleUp(TEST_DATA)).resolves.toEqual(['foobar']); + }); + }); + + describe('Batch processing', () => { + const createTestMessages = ( + count: number, + overrides: Partial<scaleUpModule.ActionRequestMessageSQS>[] = [], + ): scaleUpModule.ActionRequestMessageSQS[] => { + return Array.from({ length: count }, (_, i) => ({ + ...TEST_DATA_SINGLE, + id: i + 1, + messageId: `message-${i}`, + ...overrides[i], + })); + }; + + beforeEach(() => { + setDefaults(); + process.env.ENABLE_ORGANIZATION_RUNNERS = 'true'; + process.env.ENABLE_EPHEMERAL_RUNNERS = 'true'; + process.env.RUNNERS_MAXIMUM_COUNT = '10'; + }); + + it('Should handle multiple messages for the same organization', async () => { + const messages = createTestMessages(3); + await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledTimes(1); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 3, + runnerOwner: TEST_DATA_SINGLE.repositoryOwner, + }), + ); + }); + + it('Should handle multiple messages for different organizations', async () => { + const messages = createTestMessages(3, [ + { repositoryOwner: 'org1' }, + { repositoryOwner: 'org2' }, + { repositoryOwner: 'org1' }, + ]); + + await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledTimes(2); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, + runnerOwner: 'org1', + }), + ); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 1, + runnerOwner: 'org2', + }), + ); + }); + + it('Should handle multiple messages for different repositories when org-level is disabled', async () => { + process.env.ENABLE_ORGANIZATION_RUNNERS = 'false'; + const messages = createTestMessages(3, [ + { repositoryOwner: 'owner1', repositoryName: 'repo1' }, + { repositoryOwner: 'owner1', repositoryName: 'repo2' }, + { repositoryOwner: 'owner1', repositoryName: 'repo1' }, + ]); + + await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledTimes(2); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, + runnerOwner: 'owner1/repo1', + }),
); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 1, + runnerOwner: 'owner1/repo2', + }), + ); + }); + + it('Should reject messages when maximum runners limit is reached', async () => { + process.env.RUNNERS_MAXIMUM_COUNT = '1'; // Set to 1 so with 1 existing, no new ones can be created + mockListRunners.mockImplementation(async () => [ + { + instanceId: 'i-existing', + launchTime: new Date(), + type: 'Org', + owner: TEST_DATA_SINGLE.repositoryOwner, + }, + ]); + + const messages = createTestMessages(3); + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(createRunner).not.toHaveBeenCalled(); // No runners should be created + expect(rejectedMessages).toHaveLength(3); // All 3 messages should be rejected + }); + + it('Should handle partial EC2 instance creation failures', async () => { + mockCreateRunner.mockImplementation(async () => ['i-12345']); // Only creates 1 instead of requested 3 + + const messages = createTestMessages(3); + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(rejectedMessages).toHaveLength(2); // 3 requested - 1 created = 2 failed + expect(rejectedMessages).toEqual(['message-0', 'message-1']); + }); + + it('Should filter out invalid event types for ephemeral runners', async () => { + const messages = createTestMessages(3, [ + { eventType: 'workflow_job' }, + { eventType: 'check_run' }, + { eventType: 'workflow_job' }, + ]); + + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, // Only workflow_job events processed + }), + ); + expect(rejectedMessages).toContain('message-1'); // check_run event rejected + }); + + it('Should skip invalid repo owner types but not reject them', async () => { + const messages = createTestMessages(3, [ + { repoOwnerType: 'Organization' }, + { repoOwnerType: 'User' }, // Invalid for org-level runners + { repoOwnerType: 'Organization' }, + ]); + + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, // Only Organization events processed + }), + ); + expect(rejectedMessages).not.toContain('message-1'); // User repo not rejected, just skipped + }); + + it('Should skip messages when jobs are not queued', async () => { + mockOctokit.actions.getJobForWorkflowRun.mockImplementation((params) => { + const isQueued = params.job_id === 1 || params.job_id === 3; // Only jobs 1 and 3 are queued + return { + data: { + status: isQueued ? 'queued' : 'completed', + }, + }; + }); + + const messages = createTestMessages(3); + await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, // Only queued jobs processed + }), + ); + }); + + it('Should create separate GitHub clients for different installations', async () => { + // Override the default mock to return different installation IDs + mockOctokit.apps.getOrgInstallation.mockReset(); + mockOctokit.apps.getOrgInstallation.mockImplementation((params) => ({ + data: { + id: params.org === 'org1' ? 
100 : 200, + }, + })); + + const messages = createTestMessages(2, [ + { repositoryOwner: 'org1', installationId: 0 }, + { repositoryOwner: 'org2', installationId: 0 }, + ]); + + await scaleUpModule.scaleUp(messages); + + expect(mockCreateClient).toHaveBeenCalledTimes(3); // 1 app client, 2 repo installation clients + expect(mockedInstallationAuth).toHaveBeenCalledWith(100, ''); + expect(mockedInstallationAuth).toHaveBeenCalledWith(200, ''); + }); + + it('Should reuse GitHub clients for same installation', async () => { + const messages = createTestMessages(3, [ + { repositoryOwner: 'same-org' }, + { repositoryOwner: 'same-org' }, + { repositoryOwner: 'same-org' }, + ]); + + await scaleUpModule.scaleUp(messages); + + expect(mockCreateClient).toHaveBeenCalledTimes(2); // 1 app client, 1 installation client + expect(mockedInstallationAuth).toHaveBeenCalledTimes(1); + }); + + it('Should return empty array when no valid messages to process', async () => { + process.env.ENABLE_EPHEMERAL_RUNNERS = 'true'; + const messages = createTestMessages(2, [ + { eventType: 'check_run' }, // Invalid for ephemeral + { eventType: 'check_run' }, // Invalid for ephemeral + ]); + + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(createRunner).not.toHaveBeenCalled(); + expect(rejectedMessages).toEqual(['message-0', 'message-1']); + }); + + it('Should handle unlimited runners configuration', async () => { + process.env.RUNNERS_MAXIMUM_COUNT = '-1'; + const messages = createTestMessages(10); + + await scaleUpModule.scaleUp(messages); + + expect(listEC2Runners).not.toHaveBeenCalled(); // No need to check current runners + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 10, // All messages processed + }), + ); }); }); }); @@ -698,17 +1145,12 @@ describe('scaleUp with Github Data Residency', () => { process.env.GHES_URL = 'https://companyname.ghe.com'; }); - it('ignores non-sqs events', async () => { - expect.assertions(1); - await expect(scaleUpModule.scaleUp('aws:s3', TEST_DATA)).rejects.toEqual(Error('Cannot handle non-SQS events!')); - }); - it('checks queued workflows', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.getJobForWorkflowRun).toBeCalledWith({ - job_id: TEST_DATA.id, - owner: TEST_DATA.repositoryOwner, - repo: TEST_DATA.repositoryName, + job_id: TEST_DATA_SINGLE.id, + owner: TEST_DATA_SINGLE.repositoryOwner, + repo: TEST_DATA_SINGLE.repositoryName, }); }); @@ -716,7 +1158,7 @@ describe('scaleUp with Github Data Residency', () => { mockOctokit.actions.getJobForWorkflowRun.mockImplementation(() => ({ data: { total_count: 0 }, })); - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(listEC2Runners).not.toBeCalled(); }); @@ -735,18 +1177,18 @@ describe('scaleUp with Github Data Residency', () => { }); it('gets the current org level runners', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(listEC2Runners).toBeCalledWith({ environment: 'unit-test-environment', runnerType: 'Org', - runnerOwner: TEST_DATA.repositoryOwner, + runnerOwner: TEST_DATA_SINGLE.repositoryOwner, }); }); it('does not create a token when maximum runners has been reached', async () => { process.env.RUNNERS_MAXIMUM_COUNT = '1'; process.env.ENABLE_EPHEMERAL_RUNNERS = 'false'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); 
expect(mockOctokit.actions.createRegistrationTokenForOrg).not.toBeCalled(); expect(mockOctokit.actions.createRegistrationTokenForRepo).not.toBeCalled(); }); @@ -754,35 +1196,35 @@ describe('scaleUp with Github Data Residency', () => { it('does create a runner if maximum is set to -1', async () => { process.env.RUNNERS_MAXIMUM_COUNT = '-1'; process.env.ENABLE_EPHEMERAL_RUNNERS = 'false'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(listEC2Runners).not.toHaveBeenCalled(); expect(createRunner).toHaveBeenCalled(); }); it('creates a token when maximum runners has not been reached', async () => { process.env.ENABLE_EPHEMERAL_RUNNERS = 'false'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.createRegistrationTokenForOrg).toBeCalledWith({ - org: TEST_DATA.repositoryOwner, + org: TEST_DATA_SINGLE.repositoryOwner, }); expect(mockOctokit.actions.createRegistrationTokenForRepo).not.toBeCalled(); }); it('creates a runner with correct config', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); it('creates a runner with labels in a specific group', async () => { process.env.RUNNER_LABELS = 'label1,label2'; process.env.RUNNER_GROUP_NAME = 'TEST_GROUP'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); it('creates a runner with ami id override from ssm parameter', async () => { process.env.AMI_ID_SSM_PARAMETER_NAME = 'my-ami-id-param'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith({ ...expectedRunnerParams, amiIdSsmParameterName: 'my-ami-id-param' }); }); @@ -791,15 +1233,15 @@ describe('scaleUp with Github Data Residency', () => { mockSSMgetParameter.mockImplementation(async () => { throw new Error('ParameterNotFound'); }); - await expect(scaleUpModule.scaleUp('aws:sqs', TEST_DATA)).rejects.toBeInstanceOf(Error); + await expect(scaleUpModule.scaleUp(TEST_DATA)).rejects.toBeInstanceOf(Error); expect(mockOctokit.paginate).toHaveBeenCalledTimes(1); }); it('Discards event if it is a User repo and org level runners is enabled', async () => { process.env.ENABLE_ORGANIZATION_RUNNERS = 'true'; - const USER_REPO_TEST_DATA = { ...TEST_DATA }; - USER_REPO_TEST_DATA.repoOwnerType = 'User'; - await scaleUpModule.scaleUp('aws:sqs', USER_REPO_TEST_DATA); + const USER_REPO_TEST_DATA = structuredClone(TEST_DATA); + USER_REPO_TEST_DATA[0].repoOwnerType = 'User'; + await scaleUpModule.scaleUp(USER_REPO_TEST_DATA); expect(createRunner).not.toHaveBeenCalled(); }); @@ -807,7 +1249,7 @@ describe('scaleUp with Github Data Residency', () => { mockSSMgetParameter.mockImplementation(async () => { throw new Error('ParameterNotFound'); }); - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.paginate).toHaveBeenCalledTimes(1); expect(mockSSMClient).toHaveReceivedCommandTimes(PutParameterCommand, 2); expect(mockSSMClient).toHaveReceivedNthSpecificCommandWith(1, PutParameterCommand, { @@ -818,7 +1260,7 @@ describe('scaleUp with Github Data Residency', () => { }); it('Does not create SSM parameter for runner group id if it exists', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); 
expect(mockOctokit.paginate).toHaveBeenCalledTimes(0); expect(mockSSMClient).toHaveReceivedCommandTimes(PutParameterCommand, 1); }); @@ -826,9 +1268,9 @@ describe('scaleUp with Github Data Residency', () => { it('create start runner config for ephemeral runners ', async () => { process.env.RUNNERS_MAXIMUM_COUNT = '2'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.generateRunnerJitconfigForOrg).toBeCalledWith({ - org: TEST_DATA.repositoryOwner, + org: TEST_DATA_SINGLE.repositoryOwner, name: 'unit-test-i-12345', runner_group_id: 1, labels: ['label1', 'label2'], @@ -849,7 +1291,7 @@ describe('scaleUp with Github Data Residency', () => { it('create start runner config for non-ephemeral runners ', async () => { process.env.ENABLE_EPHEMERAL_RUNNERS = 'false'; process.env.RUNNERS_MAXIMUM_COUNT = '2'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.generateRunnerJitconfigForOrg).not.toBeCalled(); expect(mockOctokit.actions.createRegistrationTokenForOrg).toBeCalled(); expect(mockSSMClient).toHaveReceivedNthSpecificCommandWith(1, PutParameterCommand, { @@ -920,7 +1362,7 @@ describe('scaleUp with Github Data Residency', () => { 'i-150', 'i-151', ]; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); const endTime = performance.now(); expect(endTime - startTime).toBeGreaterThan(1000); expect(mockSSMClient).toHaveReceivedCommandTimes(PutParameterCommand, 40); @@ -934,67 +1376,295 @@ describe('scaleUp with Github Data Residency', () => { process.env.RUNNER_NAME_PREFIX = 'unit-test'; expectedRunnerParams = { ...EXPECTED_RUNNER_PARAMS }; expectedRunnerParams.runnerType = 'Repo'; - expectedRunnerParams.runnerOwner = `${TEST_DATA.repositoryOwner}/${TEST_DATA.repositoryName}`; - // `--url https://companyname.ghe.com${TEST_DATA.repositoryOwner}/${TEST_DATA.repositoryName}`, + expectedRunnerParams.runnerOwner = `${TEST_DATA_SINGLE.repositoryOwner}/${TEST_DATA_SINGLE.repositoryName}`; + // `--url https://companyname.ghe.com${TEST_DATA_SINGLE.repositoryOwner}/${TEST_DATA_SINGLE.repositoryName}`, // `--token 1234abcd`, // ]; }); it('gets the current repo level runners', async () => { - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(listEC2Runners).toBeCalledWith({ environment: 'unit-test-environment', runnerType: 'Repo', - runnerOwner: `${TEST_DATA.repositoryOwner}/${TEST_DATA.repositoryName}`, + runnerOwner: `${TEST_DATA_SINGLE.repositoryOwner}/${TEST_DATA_SINGLE.repositoryName}`, }); }); it('does not create a token when maximum runners has been reached', async () => { process.env.RUNNERS_MAXIMUM_COUNT = '1'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.createRegistrationTokenForOrg).not.toBeCalled(); expect(mockOctokit.actions.createRegistrationTokenForRepo).not.toBeCalled(); }); it('creates a token when maximum runners has not been reached', async () => { process.env.ENABLE_EPHEMERAL_RUNNERS = 'false'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.createRegistrationTokenForOrg).not.toBeCalled(); expect(mockOctokit.actions.createRegistrationTokenForRepo).toBeCalledWith({ - owner: TEST_DATA.repositoryOwner, - repo: TEST_DATA.repositoryName, + owner: TEST_DATA_SINGLE.repositoryOwner, + repo: 
TEST_DATA_SINGLE.repositoryName, }); }); it('uses the default runner max count', async () => { process.env.RUNNERS_MAXIMUM_COUNT = undefined; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(mockOctokit.actions.createRegistrationTokenForRepo).toBeCalledWith({ - owner: TEST_DATA.repositoryOwner, - repo: TEST_DATA.repositoryName, + owner: TEST_DATA_SINGLE.repositoryOwner, + repo: TEST_DATA_SINGLE.repositoryName, }); }); it('creates a runner with correct config and labels', async () => { process.env.RUNNER_LABELS = 'label1,label2'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); it('creates a runner and ensure the group argument is ignored', async () => { process.env.RUNNER_LABELS = 'label1,label2'; process.env.RUNNER_GROUP_NAME = 'TEST_GROUP_IGNORED'; - await scaleUpModule.scaleUp('aws:sqs', TEST_DATA); + await scaleUpModule.scaleUp(TEST_DATA); expect(createRunner).toBeCalledWith(expectedRunnerParams); }); it('Check error is thrown', async () => { const mockCreateRunners = vi.mocked(createRunner); mockCreateRunners.mockRejectedValue(new Error('no retry')); - await expect(scaleUpModule.scaleUp('aws:sqs', TEST_DATA)).rejects.toThrow('no retry'); + await expect(scaleUpModule.scaleUp(TEST_DATA)).rejects.toThrow('no retry'); mockCreateRunners.mockReset(); }); }); + + describe('Batch processing', () => { + const createTestMessages = ( + count: number, + overrides: Partial<scaleUpModule.ActionRequestMessageSQS>[] = [], + ): scaleUpModule.ActionRequestMessageSQS[] => { + return Array.from({ length: count }, (_, i) => ({ + ...TEST_DATA_SINGLE, + id: i + 1, + messageId: `message-${i}`, + ...overrides[i], + })); + }; + + beforeEach(() => { + setDefaults(); + process.env.ENABLE_ORGANIZATION_RUNNERS = 'true'; + process.env.ENABLE_EPHEMERAL_RUNNERS = 'true'; + process.env.RUNNERS_MAXIMUM_COUNT = '10'; + }); + + it('Should handle multiple messages for the same organization', async () => { + const messages = createTestMessages(3); + await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledTimes(1); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 3, + runnerOwner: TEST_DATA_SINGLE.repositoryOwner, + }), + ); + }); + + it('Should handle multiple messages for different organizations', async () => { + const messages = createTestMessages(3, [ + { repositoryOwner: 'org1' }, + { repositoryOwner: 'org2' }, + { repositoryOwner: 'org1' }, + ]); + + await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledTimes(2); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, + runnerOwner: 'org1', + }), + ); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 1, + runnerOwner: 'org2', + }), + ); + }); + + it('Should handle multiple messages for different repositories when org-level is disabled', async () => { + process.env.ENABLE_ORGANIZATION_RUNNERS = 'false'; + const messages = createTestMessages(3, [ + { repositoryOwner: 'owner1', repositoryName: 'repo1' }, + { repositoryOwner: 'owner1', repositoryName: 'repo2' }, + { repositoryOwner: 'owner1', repositoryName: 'repo1' }, + ]); + + await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledTimes(2); + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, + runnerOwner: 'owner1/repo1', + }), +
expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 1, + runnerOwner: 'owner1/repo2', + }), + ); + }); + + it('Should reject messages when maximum runners limit is reached', async () => { + process.env.RUNNERS_MAXIMUM_COUNT = '2'; + mockListRunners.mockImplementation(async () => [ + { + instanceId: 'i-existing', + launchTime: new Date(), + type: 'Org', + owner: TEST_DATA_SINGLE.repositoryOwner, + }, + ]); + + const messages = createTestMessages(5); + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 1, // 2 max - 1 existing = 1 new + }), + ); + expect(rejectedMessages).toHaveLength(4); // 5 requested - 1 created = 4 rejected + }); + + it('Should handle partial EC2 instance creation failures', async () => { + mockCreateRunner.mockImplementation(async () => ['i-12345']); // Only creates 1 instead of requested 3 + + const messages = createTestMessages(3); + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(rejectedMessages).toHaveLength(2); // 3 requested - 1 created = 2 failed + expect(rejectedMessages).toEqual(['message-0', 'message-1']); + }); + + it('Should filter out invalid event types for ephemeral runners', async () => { + const messages = createTestMessages(3, [ + { eventType: 'workflow_job' }, + { eventType: 'check_run' }, + { eventType: 'workflow_job' }, + ]); + + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, // Only workflow_job events processed + }), + ); + expect(rejectedMessages).toContain('message-1'); // check_run event rejected + }); + + it('Should skip invalid repo owner types but not reject them', async () => { + const messages = createTestMessages(3, [ + { repoOwnerType: 'Organization' }, + { repoOwnerType: 'User' }, // Invalid for org-level runners + { repoOwnerType: 'Organization' }, + ]); + + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, // Only Organization events processed + }), + ); + expect(rejectedMessages).not.toContain('message-1'); // User repo not rejected, just skipped + }); + + it('Should skip messages when jobs are not queued', async () => { + mockOctokit.actions.getJobForWorkflowRun.mockImplementation((params) => { + const isQueued = params.job_id === 1 || params.job_id === 3; // Only jobs 1 and 3 are queued + return { + data: { + status: isQueued ? 'queued' : 'completed', + }, + }; + }); + + const messages = createTestMessages(3); + await scaleUpModule.scaleUp(messages); + + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 2, // Only queued jobs processed + }), + ); + }); + + it('Should create separate GitHub clients for different installations', async () => { + mockOctokit.apps.getOrgInstallation.mockImplementation((params) => ({ + data: { + id: params.org === 'org1' ? 
100 : 200, + }, + })); + + const messages = createTestMessages(2, [ + { repositoryOwner: 'org1', installationId: 0 }, + { repositoryOwner: 'org2', installationId: 0 }, + ]); + + await scaleUpModule.scaleUp(messages); + + expect(mockCreateClient).toHaveBeenCalledTimes(3); // 1 app client, 2 repo installation clients + expect(mockedInstallationAuth).toHaveBeenCalledWith(100, ''); + expect(mockedInstallationAuth).toHaveBeenCalledWith(200, ''); + }); + + it('Should reuse GitHub clients for same installation', async () => { + const messages = createTestMessages(3, [ + { repositoryOwner: 'same-org' }, + { repositoryOwner: 'same-org' }, + { repositoryOwner: 'same-org' }, + ]); + + await scaleUpModule.scaleUp(messages); + + expect(mockCreateClient).toHaveBeenCalledTimes(2); // 1 app client, 1 installation client + expect(mockedInstallationAuth).toHaveBeenCalledTimes(1); + }); + + it('Should return empty array when no valid messages to process', async () => { + process.env.ENABLE_EPHEMERAL_RUNNERS = 'true'; + const messages = createTestMessages(2, [ + { eventType: 'check_run' }, // Invalid for ephemeral + { eventType: 'check_run' }, // Invalid for ephemeral + ]); + + const rejectedMessages = await scaleUpModule.scaleUp(messages); + + expect(createRunner).not.toHaveBeenCalled(); + expect(rejectedMessages).toEqual(['message-0', 'message-1']); + }); + + it('Should handle unlimited runners configuration', async () => { + process.env.RUNNERS_MAXIMUM_COUNT = '-1'; + const messages = createTestMessages(10); + + await scaleUpModule.scaleUp(messages); + + expect(listEC2Runners).not.toHaveBeenCalled(); // No need to check current runners + expect(createRunner).toHaveBeenCalledWith( + expect.objectContaining({ + numberOfRunners: 10, // All messages processed + }), + ); + }); + }); }); function defaultOctokitMockImpl() { @@ -1034,12 +1704,12 @@ function defaultOctokitMockImpl() { }; const mockInstallationIdReturnValueOrgs = { data: { - id: TEST_DATA.installationId, + id: TEST_DATA_SINGLE.installationId, }, }; const mockInstallationIdReturnValueRepos = { data: { - id: TEST_DATA.installationId, + id: TEST_DATA_SINGLE.installationId, }, }; diff --git a/lambdas/functions/control-plane/src/scale-runners/scale-up.ts b/lambdas/functions/control-plane/src/scale-runners/scale-up.ts index 638edd3232..35df7ea5d7 100644 --- a/lambdas/functions/control-plane/src/scale-runners/scale-up.ts +++ b/lambdas/functions/control-plane/src/scale-runners/scale-up.ts @@ -6,8 +6,6 @@ import yn from 'yn'; import { createGithubAppAuth, createGithubInstallationAuth, createOctokitClient } from '../github/auth'; import { createRunner, listEC2Runners, tag } from './../aws/runners'; import { RunnerInputParameters } from './../aws/runners.d'; -import ScaleError from './ScaleError'; -import { publishRetryMessage } from './job-retry'; import { metricGitHubAppRateLimit } from '../github/rate-limit'; const logger = createChildLogger('scale-up'); @@ -33,6 +31,10 @@ export interface ActionRequestMessage { retryCounter?: number; } +export interface ActionRequestMessageSQS extends ActionRequestMessage { + messageId: string; +} + export interface ActionRequestMessageRetry extends ActionRequestMessage { retryCounter: number; } @@ -114,7 +116,7 @@ function removeTokenFromLogging(config: string[]): string[] { } export async function getInstallationId( - ghesApiUrl: string, + githubAppClient: Octokit, enableOrgLevel: boolean, payload: ActionRequestMessage, ): Promise<number> { @@ -122,16 +124,14 @@ export async function getInstallationId( return
payload.installationId; } - const ghAuth = await createGithubAppAuth(undefined, ghesApiUrl); - const githubClient = await createOctokitClient(ghAuth.token, ghesApiUrl); return enableOrgLevel ? ( - await githubClient.apps.getOrgInstallation({ + await githubAppClient.apps.getOrgInstallation({ org: payload.repositoryOwner, }) ).data.id : ( - await githubClient.apps.getRepoInstallation({ + await githubAppClient.apps.getRepoInstallation({ owner: payload.repositoryOwner, repo: payload.repositoryName, }) @@ -211,23 +211,27 @@ async function getRunnerGroupByName(ghClient: Octokit, githubRunnerConfig: Creat export async function createRunners( githubRunnerConfig: CreateGitHubRunnerConfig, ec2RunnerConfig: CreateEC2RunnerConfig, + numberOfRunners: number, ghClient: Octokit, -): Promise<void> { +): Promise<string[]> { const instances = await createRunner({ runnerType: githubRunnerConfig.runnerType, runnerOwner: githubRunnerConfig.runnerOwner, - numberOfRunners: 1, + numberOfRunners, ...ec2RunnerConfig, }); if (instances.length !== 0) { await createStartRunnerConfig(githubRunnerConfig, instances, ghClient); } + + return instances; } -export async function scaleUp(eventSource: string, payload: ActionRequestMessage): Promise<void> { - logger.info(`Received ${payload.eventType} from ${payload.repositoryOwner}/${payload.repositoryName}`); +export async function scaleUp(payloads: ActionRequestMessageSQS[]): Promise<string[]> { + logger.info('Received scale up requests', { + n_requests: payloads.length, + }); - if (eventSource !== 'aws:sqs') throw Error('Cannot handle non-SQS events!'); const enableOrgLevel = yn(process.env.ENABLE_ORGANIZATION_RUNNERS, { default: true }); const maximumRunners = parseInt(process.env.RUNNERS_MAXIMUM_COUNT || '3'); const runnerLabels = process.env.RUNNER_LABELS || ''; @@ -252,103 +256,202 @@ export async function scaleUp(eventSource: string, payload: ActionRequestMessage ? (JSON.parse(process.env.ENABLE_ON_DEMAND_FAILOVER_FOR_ERRORS) as [string]) : []; - if (ephemeralEnabled && payload.eventType !== 'workflow_job') { - logger.warn(`${payload.eventType} event is not supported in combination with ephemeral runners.`); - throw Error( - `The event type ${payload.eventType} is not supported in combination with ephemeral runners.` + - `Please ensure you have enabled workflow_job events.`, - ); - } + const { ghesApiUrl, ghesBaseUrl } = getGitHubEnterpriseApiUrl(); - if (!isValidRepoOwnerTypeIfOrgLevelEnabled(payload, enableOrgLevel)) { - logger.warn( - `Repository ${payload.repositoryOwner}/${payload.repositoryName} does not belong to a GitHub` + - `organization and organization runners are enabled. This is not supported. Not scaling up for this event.` + - `Not throwing error to prevent re-queueing and just ignoring the event.`, - ); - return; + const ghAuth = await createGithubAppAuth(undefined, ghesApiUrl); + const githubAppClient = await createOctokitClient(ghAuth.token, ghesApiUrl); + + // A map of either owner or owner/repo name to Octokit client, so we use a + // single client per installation (set of messages), depending on how the app + // is installed. This is for a couple of reasons: + // - Sharing clients opens up the possibility of caching API calls. + // - Fetching a client for an installation actually requires a couple of API + // calls itself, which would get expensive if done for every message in a + // batch.
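
As a minimal sketch, assuming the `aws-lambda` SQS types and JSON message bodies (the handler name below is illustrative; the real wiring lives in `lambda.ts`), the message IDs returned by `scaleUp` map onto Lambda's partial-batch-failure response like so:

    import { SQSBatchResponse, SQSEvent } from 'aws-lambda';
    import { scaleUp } from './scale-runners/scale-up';

    // Sketch only: tag each record with its SQS messageId, then report whatever
    // scaleUp hands back as the messages for SQS to retry.
    export async function scaleUpHandlerSketch(event: SQSEvent): Promise<SQSBatchResponse> {
      const payloads = event.Records.map((record) => ({
        ...JSON.parse(record.body),
        messageId: record.messageId,
      }));

      const failedMessageIds = await scaleUp(payloads);
      return { batchItemFailures: failedMessageIds.map((itemIdentifier) => ({ itemIdentifier })) };
    }
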
+ type MessagesWithClient = { + messages: ActionRequestMessageSQS[]; + githubInstallationClient: Octokit; + }; + + const validMessages = new Map<string, MessagesWithClient>(); + const invalidMessages: string[] = []; + for (const payload of payloads) { + const { eventType, messageId, repositoryName, repositoryOwner } = payload; + if (ephemeralEnabled && eventType !== 'workflow_job') { + logger.warn( + 'Event is not supported in combination with ephemeral runners. Please ensure you have enabled workflow_job events.', + { eventType, messageId }, + ); + + invalidMessages.push(messageId); + + continue; + } + + if (!isValidRepoOwnerTypeIfOrgLevelEnabled(payload, enableOrgLevel)) { + logger.warn( + `Repository does not belong to a GitHub organization and organization runners are enabled. This is not supported. Not scaling up for this event. Not throwing an error to prevent re-queueing and just ignoring the event.`, + { + repository: `${repositoryOwner}/${repositoryName}`, + messageId, + }, + ); + + continue; + } + + const key = enableOrgLevel ? payload.repositoryOwner : `${payload.repositoryOwner}/${payload.repositoryName}`; + + let entry = validMessages.get(key); + + // If we've not seen this owner/repo before, we'll need to create a GitHub + // client for it. + if (entry === undefined) { + const installationId = await getInstallationId(githubAppClient, enableOrgLevel, payload); + const ghAuth = await createGithubInstallationAuth(installationId, ghesApiUrl); + const githubInstallationClient = await createOctokitClient(ghAuth.token, ghesApiUrl); + + entry = { + messages: [], + githubInstallationClient, + }; + + validMessages.set(key, entry); + } + + entry.messages.push(payload); } - const ephemeral = ephemeralEnabled && payload.eventType === 'workflow_job'; const runnerType = enableOrgLevel ? 'Org' : 'Repo'; - const runnerOwner = enableOrgLevel ? payload.repositoryOwner : `${payload.repositoryOwner}/${payload.repositoryName}`; addPersistentContextToChildLogger({ runner: { + ephemeral: ephemeralEnabled, type: runnerType, - owner: runnerOwner, namePrefix: runnerNamePrefix, - }, - github: { - event: payload.eventType, - workflow_job_id: payload.id.toString(), + n_events: Array.from(validMessages.values()).reduce((acc, group) => acc + group.messages.length, 0), }, }); - logger.info(`Received event`); + logger.info(`Received events`); - const { ghesApiUrl, ghesBaseUrl } = getGitHubEnterpriseApiUrl(); + for (const [group, { githubInstallationClient, messages }] of validMessages.entries()) { + // Work out how much we want to scale up by.
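+ // Each message that passes the checks below corresponds to one queued job, so it contributes one runner to this group's requested scale-up.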
+ let scaleUp = 0; - const installationId = await getInstallationId(ghesApiUrl, enableOrgLevel, payload); - const ghAuth = await createGithubInstallationAuth(installationId, ghesApiUrl); - const githubInstallationClient = await createOctokitClient(ghAuth.token, ghesApiUrl); + for (const message of messages) { + const messageLogger = logger.createChild({ + persistentKeys: { + eventType: message.eventType, + group, + messageId: message.messageId, + repository: `${message.repositoryOwner}/${message.repositoryName}`, + }, + }); - if (!enableJobQueuedCheck || (await isJobQueued(githubInstallationClient, payload))) { - let scaleUp = true; - if (maximumRunners !== -1) { - const currentRunners = await listEC2Runners({ - environment, - runnerType, - runnerOwner, + if (enableJobQueuedCheck && !(await isJobQueued(githubInstallationClient, message))) { + messageLogger.info('No runner will be created, job is not queued.'); + + continue; + } + + scaleUp++; + } + + if (scaleUp === 0) { + logger.info('No runners will be created for this group, no valid messages found.'); + + continue; + } + + // Don't call the EC2 API if we can create an unlimited number of runners. + const currentRunners = + maximumRunners === -1 ? 0 : (await listEC2Runners({ environment, runnerType, runnerOwner: group })).length; + + logger.info('Current runners', { + currentRunners, + maximumRunners, + }); + + // Calculate how many runners we want to create. + const newRunners = + maximumRunners === -1 + ? // If we don't have an upper limit, scale up by the number of new jobs. + scaleUp + : // Otherwise, we do have a limit, so work out if `scaleUp` would exceed it. + Math.min(scaleUp, maximumRunners - currentRunners); + + const missingInstanceCount = Math.max(0, scaleUp - newRunners); + + if (missingInstanceCount > 0) { + logger.info('Not all runners will be created for this group, maximum number of runners reached.', { + desiredNewRunners: scaleUp, }); - logger.info(`Current runners: ${currentRunners.length} of ${maximumRunners}`); - scaleUp = currentRunners.length < maximumRunners; + + if (ephemeralEnabled) { + // This removes `missingInstanceCount` items from the start of the array + // so that, if we retry more messages later, we pick fresh ones. + invalidMessages.push(...messages.splice(0, missingInstanceCount).map(({ messageId }) => messageId)); + } + + // No runners will be created, so skip calling the EC2 API. 
+ if (missingInstanceCount === scaleUp) { + continue; + } } - if (scaleUp) { - logger.info(`Attempting to launch a new runner`); + logger.info(`Attempting to launch new runners`, { + newRunners, + }); - await createRunners( - { - ephemeral, - enableJitConfig, - ghesBaseUrl, - runnerLabels, - runnerGroup, - runnerNamePrefix, - runnerOwner, - runnerType, - disableAutoUpdate, - ssmTokenPath, - ssmConfigPath, - }, - { - ec2instanceCriteria: { - instanceTypes, - targetCapacityType: instanceTargetCapacityType, - maxSpotPrice: instanceMaxSpotPrice, - instanceAllocationStrategy: instanceAllocationStrategy, - }, - environment, - launchTemplateName, - subnets, - amiIdSsmParameterName, - tracingEnabled, - onDemandFailoverOnError, + const instances = await createRunners( + { + ephemeral: ephemeralEnabled, + enableJitConfig, + ghesBaseUrl, + runnerLabels, + runnerGroup, + runnerNamePrefix, + runnerOwner: group, + runnerType, + disableAutoUpdate, + ssmTokenPath, + ssmConfigPath, + }, + { + ec2instanceCriteria: { + instanceTypes, + targetCapacityType: instanceTargetCapacityType, + maxSpotPrice: instanceMaxSpotPrice, + instanceAllocationStrategy: instanceAllocationStrategy, }, - githubInstallationClient, - ); + environment, + launchTemplateName, + subnets, + amiIdSsmParameterName, + tracingEnabled, + onDemandFailoverOnError, + }, + newRunners, + githubInstallationClient, + ); - await publishRetryMessage(payload); - } else { - logger.info('No runner will be created, maximum number of runners reached.'); - if (ephemeral) { - throw new ScaleError('No runners create: maximum of runners reached.'); - } + // If not all of the runners we wanted were created, reject enough messages + // so that the shortfall is retried. + if (instances.length !== newRunners) { + const failedInstanceCount = newRunners - instances.length; + + logger.warn('Some runners failed to be created, rejecting some messages so the requests are retried', { + wanted: newRunners, + got: instances.length, + failedInstanceCount, + }); + + invalidMessages.push(...messages.slice(0, failedInstanceCount).map(({ messageId }) => messageId)); } - } else { - logger.info('No runner will be created, job is not queued.'); } + + return invalidMessages; } export function getGitHubEnterpriseApiUrl() { diff --git a/lambdas/libs/aws-powertools-util/src/logger/index.ts b/lambdas/libs/aws-powertools-util/src/logger/index.ts index 195b552a74..2bad191a83 100644 --- a/lambdas/libs/aws-powertools-util/src/logger/index.ts +++ b/lambdas/libs/aws-powertools-util/src/logger/index.ts @@ -9,7 +9,7 @@ const defaultValues = { }; function setContext(context: Context, module?: string) { - logger.addPersistentLogAttributes({ + logger.appendPersistentKeys({ 'aws-request-id': context.awsRequestId, 'function-name': context.functionName, module: module, @@ -17,7 +17,7 @@ // Add the context to all child loggers childLoggers.forEach((childLogger) => { - childLogger.addPersistentLogAttributes({ + childLogger.appendPersistentKeys({ 'aws-request-id': context.awsRequestId, 'function-name': context.functionName, }); @@ -25,14 +25,14 @@ } const logger = new Logger({ - persistentLogAttributes: { + persistentKeys: { ...defaultValues, }, }); function createChildLogger(module: string): Logger { const childLogger = logger.createChild({ - persistentLogAttributes: { + persistentKeys: { module: module, }, }); @@ -47,7 +47,7 @@ type LogAttributes = { function
addPersistentContextToChildLogger(attributes: LogAttributes) { childLoggers.forEach((childLogger) => { - childLogger.addPersistentLogAttributes(attributes); + childLogger.appendPersistentKeys(attributes); }); } diff --git a/main.tf b/main.tf index 74c4c54ec6..f0dadd6b66 100644 --- a/main.tf +++ b/main.tf @@ -210,28 +210,30 @@ module "runners" { credit_specification = var.runner_credit_specification cpu_options = var.runner_cpu_options - enable_runner_binaries_syncer = var.enable_runner_binaries_syncer - lambda_s3_bucket = var.lambda_s3_bucket - runners_lambda_s3_key = var.runners_lambda_s3_key - runners_lambda_s3_object_version = var.runners_lambda_s3_object_version - lambda_runtime = var.lambda_runtime - lambda_architecture = var.lambda_architecture - lambda_zip = var.runners_lambda_zip - lambda_scale_up_memory_size = var.runners_scale_up_lambda_memory_size - lambda_scale_down_memory_size = var.runners_scale_down_lambda_memory_size - lambda_timeout_scale_up = var.runners_scale_up_lambda_timeout - lambda_timeout_scale_down = var.runners_scale_down_lambda_timeout - lambda_subnet_ids = var.lambda_subnet_ids - lambda_security_group_ids = var.lambda_security_group_ids - lambda_tags = var.lambda_tags - tracing_config = var.tracing_config - logging_retention_in_days = var.logging_retention_in_days - logging_kms_key_id = var.logging_kms_key_id - enable_cloudwatch_agent = var.enable_cloudwatch_agent - cloudwatch_config = var.cloudwatch_config - runner_log_files = var.runner_log_files - runner_group_name = var.runner_group_name - runner_name_prefix = var.runner_name_prefix + enable_runner_binaries_syncer = var.enable_runner_binaries_syncer + lambda_s3_bucket = var.lambda_s3_bucket + runners_lambda_s3_key = var.runners_lambda_s3_key + runners_lambda_s3_object_version = var.runners_lambda_s3_object_version + lambda_runtime = var.lambda_runtime + lambda_architecture = var.lambda_architecture + lambda_event_source_mapping_batch_size = var.lambda_event_source_mapping_batch_size + lambda_event_source_mapping_maximum_batching_window_in_seconds = var.lambda_event_source_mapping_maximum_batching_window_in_seconds + lambda_zip = var.runners_lambda_zip + lambda_scale_up_memory_size = var.runners_scale_up_lambda_memory_size + lambda_scale_down_memory_size = var.runners_scale_down_lambda_memory_size + lambda_timeout_scale_up = var.runners_scale_up_lambda_timeout + lambda_timeout_scale_down = var.runners_scale_down_lambda_timeout + lambda_subnet_ids = var.lambda_subnet_ids + lambda_security_group_ids = var.lambda_security_group_ids + lambda_tags = var.lambda_tags + tracing_config = var.tracing_config + logging_retention_in_days = var.logging_retention_in_days + logging_kms_key_id = var.logging_kms_key_id + enable_cloudwatch_agent = var.enable_cloudwatch_agent + cloudwatch_config = var.cloudwatch_config + runner_log_files = var.runner_log_files + runner_group_name = var.runner_group_name + runner_name_prefix = var.runner_name_prefix scale_up_reserved_concurrent_executions = var.scale_up_reserved_concurrent_executions diff --git a/modules/multi-runner/README.md b/modules/multi-runner/README.md index 198078feee..0515763f4c 100644 --- a/modules/multi-runner/README.md +++ b/modules/multi-runner/README.md @@ -137,6 +137,8 @@ module "multi-runner" { | [key\_name](#input\_key\_name) | Key pair name | `string` | `null` | no | | [kms\_key\_arn](#input\_kms\_key\_arn) | Optional CMK Key ARN to be used for Parameter Store. 
| `string` | `null` | no | | [lambda\_architecture](#input\_lambda\_architecture) | AWS Lambda architecture. Lambda functions using Graviton processors ('arm64') tend to have better price/performance than 'x86\_64' functions. | `string` | `"arm64"` | no | +| [lambda\_event\_source\_mapping\_batch\_size](#input\_lambda\_event\_source\_mapping\_batch\_size) | Maximum number of records to pass to the lambda function in a single batch for the event source mapping. When not set, the AWS default of 10 events will be used. | `number` | `10` | no | +| [lambda\_event\_source\_mapping\_maximum\_batching\_window\_in\_seconds](#input\_lambda\_event\_source\_mapping\_maximum\_batching\_window\_in\_seconds) | Maximum amount of time to gather records before invoking the lambda function, in seconds. AWS requires this to be greater than 0 if batch\_size is greater than 10. Defaults to 0. | `number` | `0` | no | | [lambda\_principals](#input\_lambda\_principals) | (Optional) add extra principals to the role created for execution of the lambda, e.g. for local testing. |
list(object({
type = string
identifiers = list(string)
}))
| `[]` | no | | [lambda\_runtime](#input\_lambda\_runtime) | AWS Lambda runtime. | `string` | `"nodejs22.x"` | no | | [lambda\_s3\_bucket](#input\_lambda\_s3\_bucket) | S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly. | `string` | `null` | no | diff --git a/modules/multi-runner/runners.tf b/modules/multi-runner/runners.tf index 811ab36260..d58e61f6ac 100644 --- a/modules/multi-runner/runners.tf +++ b/modules/multi-runner/runners.tf @@ -58,28 +58,30 @@ module "runners" { credit_specification = each.value.runner_config.credit_specification cpu_options = each.value.runner_config.cpu_options - enable_runner_binaries_syncer = each.value.runner_config.enable_runner_binaries_syncer - lambda_s3_bucket = var.lambda_s3_bucket - runners_lambda_s3_key = var.runners_lambda_s3_key - runners_lambda_s3_object_version = var.runners_lambda_s3_object_version - lambda_runtime = var.lambda_runtime - lambda_architecture = var.lambda_architecture - lambda_zip = var.runners_lambda_zip - lambda_scale_up_memory_size = var.scale_up_lambda_memory_size - lambda_timeout_scale_up = var.runners_scale_up_lambda_timeout - lambda_scale_down_memory_size = var.scale_down_lambda_memory_size - lambda_timeout_scale_down = var.runners_scale_down_lambda_timeout - lambda_subnet_ids = var.lambda_subnet_ids - lambda_security_group_ids = var.lambda_security_group_ids - lambda_tags = var.lambda_tags - tracing_config = var.tracing_config - logging_retention_in_days = var.logging_retention_in_days - logging_kms_key_id = var.logging_kms_key_id - enable_cloudwatch_agent = each.value.runner_config.enable_cloudwatch_agent - cloudwatch_config = try(coalesce(each.value.runner_config.cloudwatch_config, var.cloudwatch_config), null) - runner_log_files = each.value.runner_config.runner_log_files - runner_group_name = each.value.runner_config.runner_group_name - runner_name_prefix = each.value.runner_config.runner_name_prefix + enable_runner_binaries_syncer = each.value.runner_config.enable_runner_binaries_syncer + lambda_s3_bucket = var.lambda_s3_bucket + runners_lambda_s3_key = var.runners_lambda_s3_key + runners_lambda_s3_object_version = var.runners_lambda_s3_object_version + lambda_runtime = var.lambda_runtime + lambda_architecture = var.lambda_architecture + lambda_zip = var.runners_lambda_zip + lambda_scale_up_memory_size = var.scale_up_lambda_memory_size + lambda_event_source_mapping_batch_size = var.lambda_event_source_mapping_batch_size + lambda_event_source_mapping_maximum_batching_window_in_seconds = var.lambda_event_source_mapping_maximum_batching_window_in_seconds + lambda_timeout_scale_up = var.runners_scale_up_lambda_timeout + lambda_scale_down_memory_size = var.scale_down_lambda_memory_size + lambda_timeout_scale_down = var.runners_scale_down_lambda_timeout + lambda_subnet_ids = var.lambda_subnet_ids + lambda_security_group_ids = var.lambda_security_group_ids + lambda_tags = var.lambda_tags + tracing_config = var.tracing_config + logging_retention_in_days = var.logging_retention_in_days + logging_kms_key_id = var.logging_kms_key_id + enable_cloudwatch_agent = each.value.runner_config.enable_cloudwatch_agent + cloudwatch_config = try(coalesce(each.value.runner_config.cloudwatch_config, var.cloudwatch_config), null) + runner_log_files = each.value.runner_config.runner_log_files + runner_group_name = each.value.runner_config.runner_group_name + runner_name_prefix = each.value.runner_config.runner_name_prefix scale_up_reserved_concurrent_executions = 
each.value.runner_config.scale_up_reserved_concurrent_executions diff --git a/modules/multi-runner/variables.tf b/modules/multi-runner/variables.tf index 119af5c36c..5c839e1104 100644 --- a/modules/multi-runner/variables.tf +++ b/modules/multi-runner/variables.tf @@ -724,3 +724,15 @@ variable "user_agent" { type = string default = "github-aws-runners" } + +variable "lambda_event_source_mapping_batch_size" { + description = "Maximum number of records to pass to the lambda function in a single batch for the event source mapping. When not set, the AWS default of 10 events will be used." + type = number + default = 10 +} + +variable "lambda_event_source_mapping_maximum_batching_window_in_seconds" { + description = "Maximum amount of time to gather records before invoking the lambda function, in seconds. AWS requires this to be greater than 0 if batch_size is greater than 10. Defaults to 0." + type = number + default = 0 +} diff --git a/modules/runners/README.md b/modules/runners/README.md index eddf7edb40..169cee7eac 100644 --- a/modules/runners/README.md +++ b/modules/runners/README.md @@ -177,6 +177,8 @@ yarn run dist | [key\_name](#input\_key\_name) | Key pair name | `string` | `null` | no | | [kms\_key\_arn](#input\_kms\_key\_arn) | Optional CMK Key ARN to be used for Parameter Store. | `string` | `null` | no | | [lambda\_architecture](#input\_lambda\_architecture) | AWS Lambda architecture. Lambda functions using Graviton processors ('arm64') tend to have better price/performance than 'x86\_64' functions. | `string` | `"arm64"` | no | +| [lambda\_event\_source\_mapping\_batch\_size](#input\_lambda\_event\_source\_mapping\_batch\_size) | Maximum number of records to pass to the lambda function in a single batch for the event source mapping. When not set, the AWS default of 10 events will be used. | `number` | `10` | no | +| [lambda\_event\_source\_mapping\_maximum\_batching\_window\_in\_seconds](#input\_lambda\_event\_source\_mapping\_maximum\_batching\_window\_in\_seconds) | Maximum amount of time to gather records before invoking the lambda function, in seconds. AWS requires this to be greater than 0 if batch\_size is greater than 10. Defaults to 0. | `number` | `0` | no | | [lambda\_runtime](#input\_lambda\_runtime) | AWS Lambda runtime. | `string` | `"nodejs22.x"` | no | | [lambda\_s3\_bucket](#input\_lambda\_s3\_bucket) | S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly. | `string` | `null` | no | | [lambda\_scale\_down\_memory\_size](#input\_lambda\_scale\_down\_memory\_size) | Memory size limit in MB for scale down lambda. | `number` | `512` | no | diff --git a/modules/runners/job-retry.tf b/modules/runners/job-retry.tf index e51c3903d4..130992667f 100644 --- a/modules/runners/job-retry.tf +++ b/modules/runners/job-retry.tf @@ -3,30 +3,32 @@ locals { job_retry_enabled = var.job_retry != null && var.job_retry.enable ? 
true : false job_retry = { - prefix = var.prefix - tags = local.tags - aws_partition = var.aws_partition - architecture = var.lambda_architecture - runtime = var.lambda_runtime - security_group_ids = var.lambda_security_group_ids - subnet_ids = var.lambda_subnet_ids - kms_key_arn = var.kms_key_arn - lambda_tags = var.lambda_tags - log_level = var.log_level - logging_kms_key_id = var.logging_kms_key_id - logging_retention_in_days = var.logging_retention_in_days - metrics = var.metrics - role_path = var.role_path - role_permissions_boundary = var.role_permissions_boundary - s3_bucket = var.lambda_s3_bucket - s3_key = var.runners_lambda_s3_key - s3_object_version = var.runners_lambda_s3_object_version - zip = var.lambda_zip - tracing_config = var.tracing_config - github_app_parameters = var.github_app_parameters - enable_organization_runners = var.enable_organization_runners - sqs_build_queue = var.sqs_build_queue - ghes_url = var.ghes_url + prefix = var.prefix + tags = local.tags + aws_partition = var.aws_partition + architecture = var.lambda_architecture + runtime = var.lambda_runtime + security_group_ids = var.lambda_security_group_ids + subnet_ids = var.lambda_subnet_ids + kms_key_arn = var.kms_key_arn + lambda_tags = var.lambda_tags + log_level = var.log_level + logging_kms_key_id = var.logging_kms_key_id + logging_retention_in_days = var.logging_retention_in_days + metrics = var.metrics + role_path = var.role_path + role_permissions_boundary = var.role_permissions_boundary + s3_bucket = var.lambda_s3_bucket + s3_key = var.runners_lambda_s3_key + s3_object_version = var.runners_lambda_s3_object_version + zip = var.lambda_zip + tracing_config = var.tracing_config + github_app_parameters = var.github_app_parameters + enable_organization_runners = var.enable_organization_runners + sqs_build_queue = var.sqs_build_queue + ghes_url = var.ghes_url + lambda_event_source_mapping_batch_size = var.lambda_event_source_mapping_batch_size + lambda_event_source_mapping_maximum_batching_window_in_seconds = var.lambda_event_source_mapping_maximum_batching_window_in_seconds } } diff --git a/modules/runners/job-retry/README.md b/modules/runners/job-retry/README.md index 168f2d324e..f54b943855 100644 --- a/modules/runners/job-retry/README.md +++ b/modules/runners/job-retry/README.md @@ -42,7 +42,7 @@ The module is an inner module and used by the runner module when the opt-in feat | Name | Description | Type | Default | Required | |------|-------------|------|---------|:--------:| -| [config](#input\_config) | Configuration for the spot termination watcher lambda function.

`aws_partition`: Partition for the base arn if not 'aws'
`architecture`: AWS Lambda architecture. Lambda functions using Graviton processors ('arm64') tend to have better price/performance than 'x86\_64' functions.
`environment_variables`: Environment variables for the lambda.
`enable_organization_runners`: Enable organization runners.
`enable_metric`: Enable metric for the lambda. If `spot_warning` is set to true, the lambda will emit a metric when it detects a spot termination warning.
'ghes\_url': Optional GitHub Enterprise Server URL.
'user\_agent': Optional User-Agent header for GitHub API requests.
'github\_app\_parameters': Parameter Store for GitHub App Parameters.
'kms\_key\_arn': Optional CMK Key ARN instead of using the default AWS managed key.
`lambda_principals`: Add extra principals to the role created for execution of the lambda, e.g. for local testing.
`lambda_tags`: Map of tags that will be added to created resources. By default resources will be tagged with name and environment.
`log_level`: Logging level for lambda logging. Valid values are 'silly', 'trace', 'debug', 'info', 'warn', 'error', 'fatal'.
`logging_kms_key_id`: Specifies the kms key id to encrypt the logs with
`logging_retention_in_days`: Specifies the number of days you want to retain log events for the lambda log group. Possible values are: 0, 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653.
`memory_size`: Memory size limit in MB of the lambda.
`metrics`: Configuration to enable metrics creation by the lambda.
`prefix`: The prefix used for naming resources.
`role_path`: The path that will be added to the role, if not set the environment name will be used.
`role_permissions_boundary`: Permissions boundary that will be added to the created role for the lambda.
`runtime`: AWS Lambda runtime.
`s3_bucket`: S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly.
`s3_key`: S3 key for syncer lambda function. Required if using S3 bucket to specify lambdas.
`s3_object_version`: S3 object version for syncer lambda function. Useful if S3 versioning is enabled on source bucket.
`security_group_ids`: List of security group IDs associated with the Lambda function.
'sqs\_build\_queue': SQS queue for build events to re-publish job request.
`subnet_ids`: List of subnets in which the action runners will be launched, the subnets needs to be subnets in the `vpc_id`.
`tag_filters`: Map of tags that will be used to filter the resources to be tracked. Only for which all tags are present and starting with the same value as the value in the map will be tracked.
`tags`: Map of tags that will be added to created resources. By default resources will be tagged with name and environment.
`timeout`: Time out of the lambda in seconds.
`tracing_config`: Configuration for lambda tracing.
`zip`: File location of the lambda zip file. |
object({
aws_partition = optional(string, null)
architecture = optional(string, null)
enable_organization_runners = bool
environment_variables = optional(map(string), {})
ghes_url = optional(string, null)
user_agent = optional(string, null)
github_app_parameters = object({
key_base64 = map(string)
id = map(string)
})
kms_key_arn = optional(string, null)
lambda_tags = optional(map(string), {})
log_level = optional(string, null)
logging_kms_key_id = optional(string, null)
logging_retention_in_days = optional(number, null)
memory_size = optional(number, null)
metrics = optional(object({
enable = optional(bool, false)
namespace = optional(string, null)
metric = optional(object({
enable_github_app_rate_limit = optional(bool, true)
enable_job_retry = optional(bool, true)
}), {})
}), {})
prefix = optional(string, null)
principals = optional(list(object({
type = string
identifiers = list(string)
})), [])
queue_encryption = optional(object({
kms_data_key_reuse_period_seconds = optional(number, null)
kms_master_key_id = optional(string, null)
sqs_managed_sse_enabled = optional(bool, true)
}), {})
role_path = optional(string, null)
role_permissions_boundary = optional(string, null)
runtime = optional(string, null)
security_group_ids = optional(list(string), [])
subnet_ids = optional(list(string), [])
s3_bucket = optional(string, null)
s3_key = optional(string, null)
s3_object_version = optional(string, null)
sqs_build_queue = object({
url = string
arn = string
})
tags = optional(map(string), {})
timeout = optional(number, 30)
tracing_config = optional(object({
mode = optional(string, null)
capture_http_requests = optional(bool, false)
capture_error = optional(bool, false)
}), {})
zip = optional(string, null)
})
| n/a | yes | +| [config](#input\_config) | Configuration for the job retry lambda function. 

`aws_partition`: Partition for the base ARN if not 'aws'.
`architecture`: AWS Lambda architecture. Lambda functions using Graviton processors ('arm64') tend to have better price/performance than 'x86\_64' functions.
`environment_variables`: Environment variables for the lambda.
`enable_organization_runners`: Enable organization runners.
`enable_metric`: Enable metric for the lambda.
'ghes\_url': Optional GitHub Enterprise Server URL.
'user\_agent': Optional User-Agent header for GitHub API requests.
'github\_app\_parameters': Parameter Store for GitHub App Parameters.
'kms\_key\_arn': Optional CMK Key ARN instead of using the default AWS managed key.
`lambda_event_source_mapping_batch_size`: Maximum number of records to pass to the lambda function in a single batch for the event source mapping. When not set, the AWS default will be used.
`lambda_event_source_mapping_maximum_batching_window_in_seconds`: Maximum amount of time to gather records before invoking the lambda function, in seconds. AWS requires this to be greater than 0 if batch\_size is greater than 10.
`lambda_principals`: Add extra principals to the role created for execution of the lambda, e.g. for local testing.
`lambda_tags`: Map of tags that will be added to created resources. By default resources will be tagged with name and environment.
`log_level`: Logging level for lambda logging. Valid values are 'silly', 'trace', 'debug', 'info', 'warn', 'error', 'fatal'.
`logging_kms_key_id`: Specifies the KMS key ID to encrypt the logs with.
`logging_retention_in_days`: Specifies the number of days you want to retain log events for the lambda log group. Possible values are: 0, 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653.
`memory_size`: Memory size limit in MB of the lambda.
`metrics`: Configuration to enable metrics creation by the lambda.
`prefix`: The prefix used for naming resources.
`role_path`: The path that will be added to the role; if not set, the environment name will be used.
`role_permissions_boundary`: Permissions boundary that will be added to the created role for the lambda.
`runtime`: AWS Lambda runtime.
`s3_bucket`: S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly.
`s3_key`: S3 key for syncer lambda function. Required if using S3 bucket to specify lambdas.
`s3_object_version`: S3 object version for syncer lambda function. Useful if S3 versioning is enabled on source bucket.
`security_group_ids`: List of security group IDs associated with the Lambda function.
'sqs\_build\_queue': SQS queue for build events to re-publish job requests.
`subnet_ids`: List of subnets in which the action runners will be launched; the subnets need to be in the `vpc_id`.
`tag_filters`: Map of tags that will be used to filter the resources to be tracked. Only resources for which all tags are present and start with the same value as the value in the map will be tracked.
`tags`: Map of tags that will be added to created resources. By default resources will be tagged with name and environment.
`timeout`: Timeout of the lambda in seconds.
`tracing_config`: Configuration for lambda tracing.
`zip`: File location of the lambda zip file. |
object({
aws_partition = optional(string, null)
architecture = optional(string, null)
enable_organization_runners = bool
environment_variables = optional(map(string), {})
ghes_url = optional(string, null)
user_agent = optional(string, null)
github_app_parameters = object({
key_base64 = map(string)
id = map(string)
})
kms_key_arn = optional(string, null)
lambda_event_source_mapping_batch_size = optional(number, 10)
lambda_event_source_mapping_maximum_batching_window_in_seconds = optional(number, 0)
lambda_tags = optional(map(string), {})
log_level = optional(string, null)
logging_kms_key_id = optional(string, null)
logging_retention_in_days = optional(number, null)
memory_size = optional(number, null)
metrics = optional(object({
enable = optional(bool, false)
namespace = optional(string, null)
metric = optional(object({
enable_github_app_rate_limit = optional(bool, true)
enable_job_retry = optional(bool, true)
}), {})
}), {})
prefix = optional(string, null)
principals = optional(list(object({
type = string
identifiers = list(string)
})), [])
queue_encryption = optional(object({
kms_data_key_reuse_period_seconds = optional(number, null)
kms_master_key_id = optional(string, null)
sqs_managed_sse_enabled = optional(bool, true)
}), {})
role_path = optional(string, null)
role_permissions_boundary = optional(string, null)
runtime = optional(string, null)
security_group_ids = optional(list(string), [])
subnet_ids = optional(list(string), [])
s3_bucket = optional(string, null)
s3_key = optional(string, null)
s3_object_version = optional(string, null)
sqs_build_queue = object({
url = string
arn = string
})
tags = optional(map(string), {})
timeout = optional(number, 30)
tracing_config = optional(object({
mode = optional(string, null)
capture_http_requests = optional(bool, false)
capture_error = optional(bool, false)
}), {})
zip = optional(string, null)
})
| n/a | yes | ## Outputs diff --git a/modules/runners/job-retry/main.tf b/modules/runners/job-retry/main.tf index 807f52a49a..eba478b214 100644 --- a/modules/runners/job-retry/main.tf +++ b/modules/runners/job-retry/main.tf @@ -44,9 +44,10 @@ module "job_retry" { } resource "aws_lambda_event_source_mapping" "job_retry" { - event_source_arn = aws_sqs_queue.job_retry_check_queue.arn - function_name = module.job_retry.lambda.function.arn - batch_size = 1 + event_source_arn = aws_sqs_queue.job_retry_check_queue.arn + function_name = module.job_retry.lambda.function.arn + batch_size = var.config.lambda_event_source_mapping_batch_size + maximum_batching_window_in_seconds = var.config.lambda_event_source_mapping_maximum_batching_window_in_seconds } resource "aws_lambda_permission" "job_retry" { diff --git a/modules/runners/job-retry/variables.tf b/modules/runners/job-retry/variables.tf index 4741dd1b45..7ccfdf63b3 100644 --- a/modules/runners/job-retry/variables.tf +++ b/modules/runners/job-retry/variables.tf @@ -11,6 +11,8 @@ variable "config" { 'user_agent': Optional User-Agent header for GitHub API requests. 'github_app_parameters': Parameter Store for GitHub App Parameters. 'kms_key_arn': Optional CMK Key ARN instead of using the default AWS managed key. + `lambda_event_source_mapping_batch_size`: Maximum number of records to pass to the lambda function in a single batch for the event source mapping. When not set, the AWS default will be used. + `lambda_event_source_mapping_maximum_batching_window_in_seconds`: Maximum amount of time to gather records before invoking the lambda function, in seconds. AWS requires this to be greater than 0 if batch_size is greater than 10. `lambda_principals`: Add extra principals to the role created for execution of the lambda, e.g. for local testing. `lambda_tags`: Map of tags that will be added to created resources. By default resources will be tagged with name and environment. `log_level`: Logging level for lambda logging. Valid values are 'silly', 'trace', 'debug', 'info', 'warn', 'error', 'fatal'. 
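(Illustrative sketch, not part of the patch: one way the new event source mapping settings could be wired up when calling the top-level module. The module source and the numbers are assumptions; the variable names are the ones introduced by this patch. AWS only accepts a batch size above 10 when the batching window is greater than 0, which these values satisfy.)

module "runners" {
  source = "github-aws-runners/github-runner/aws" # assumed registry source

  # ... existing configuration ...

  # Hand up to 50 queued-job events to a single scale-up invocation, waiting
  # at most 10 seconds to gather a batch.
  lambda_event_source_mapping_batch_size                          = 50
  lambda_event_source_mapping_maximum_batching_window_in_seconds  = 10
}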
@@ -45,12 +47,14 @@ variable "config" { key_base64 = map(string) id = map(string) }) - kms_key_arn = optional(string, null) - lambda_tags = optional(map(string), {}) - log_level = optional(string, null) - logging_kms_key_id = optional(string, null) - logging_retention_in_days = optional(number, null) - memory_size = optional(number, null) + kms_key_arn = optional(string, null) + lambda_event_source_mapping_batch_size = optional(number, 10) + lambda_event_source_mapping_maximum_batching_window_in_seconds = optional(number, 0) + lambda_tags = optional(map(string), {}) + log_level = optional(string, null) + logging_kms_key_id = optional(string, null) + logging_retention_in_days = optional(number, null) + memory_size = optional(number, null) metrics = optional(object({ enable = optional(bool, false) namespace = optional(string, null) diff --git a/modules/runners/scale-up.tf b/modules/runners/scale-up.tf index 89d95a50d0..b1ea88652d 100644 --- a/modules/runners/scale-up.tf +++ b/modules/runners/scale-up.tf @@ -87,10 +87,12 @@ resource "aws_cloudwatch_log_group" "scale_up" { } resource "aws_lambda_event_source_mapping" "scale_up" { - event_source_arn = var.sqs_build_queue.arn - function_name = aws_lambda_function.scale_up.arn - batch_size = 1 - tags = var.tags + event_source_arn = var.sqs_build_queue.arn + function_name = aws_lambda_function.scale_up.arn + function_response_types = ["ReportBatchItemFailures"] + batch_size = var.lambda_event_source_mapping_batch_size + maximum_batching_window_in_seconds = var.lambda_event_source_mapping_maximum_batching_window_in_seconds + tags = var.tags } resource "aws_lambda_permission" "scale_runners_lambda" { diff --git a/modules/runners/variables.tf b/modules/runners/variables.tf index 352285e786..a45075bb52 100644 --- a/modules/runners/variables.tf +++ b/modules/runners/variables.tf @@ -770,3 +770,23 @@ variable "user_agent" { type = string default = null } + +variable "lambda_event_source_mapping_batch_size" { + description = "Maximum number of records to pass to the lambda function in a single batch for the event source mapping. When not set, the AWS default of 10 events will be used." + type = number + default = 10 + validation { + condition = var.lambda_event_source_mapping_batch_size >= 1 && var.lambda_event_source_mapping_batch_size <= 1000 + error_message = "The batch size for the lambda event source mapping must be between 1 and 1000." + } +} + +variable "lambda_event_source_mapping_maximum_batching_window_in_seconds" { + description = "Maximum amount of time to gather records before invoking the lambda function, in seconds. AWS requires this to be greater than 0 if batch_size is greater than 10. Defaults to 0." + type = number + default = 0 + validation { + condition = var.lambda_event_source_mapping_maximum_batching_window_in_seconds >= 0 && var.lambda_event_source_mapping_maximum_batching_window_in_seconds <= 300 + error_message = "Maximum batching window must be between 0 and 300 seconds." + } +} diff --git a/modules/webhook-github-app/README.md b/modules/webhook-github-app/README.md index 0c09a761c5..6de85ee30d 100644 --- a/modules/webhook-github-app/README.md +++ b/modules/webhook-github-app/README.md @@ -34,7 +34,7 @@ No modules. | Name | Description | Type | Default | Required | |------|-------------|------|---------|:--------:| -| [github\_app](#input\_github\_app) | GitHub app parameters, see your github app. 
Ensure the key is the base64-encoded `.pem` file (the output of `base64 app.private-key.pem`, not the content of `private-key.pem`). |
object({
key_base64 = string
id = string
webhook_secret = string
})
| n/a | yes | +| [github\_app](#input\_github\_app) | GitHub app parameters, see your GitHub app. Ensure the key is the base64-encoded `.pem` file (the output of `base64 app.private-key.pem`, not the content of `private-key.pem`). |
object({
key_base64 = string
id = string
webhook_secret = string
})
| n/a | yes | | [webhook\_endpoint](#input\_webhook\_endpoint) | The endpoint to use for the webhook, defaults to the endpoint of the runners module. | `string` | n/a | yes | ## Outputs diff --git a/variables.tf b/variables.tf index 0bf3563145..7ff6ecece4 100644 --- a/variables.tf +++ b/variables.tf @@ -1021,3 +1021,19 @@ variable "user_agent" { type = string default = "github-aws-runners" } + +variable "lambda_event_source_mapping_batch_size" { + description = "Maximum number of records to pass to the lambda function in a single batch for the event source mapping. When not set, the AWS default of 10 events will be used." + type = number + default = 10 +} + +variable "lambda_event_source_mapping_maximum_batching_window_in_seconds" { + description = "Maximum amount of time to gather records before invoking the lambda function, in seconds. AWS requires this to be greater than 0 if batch_size is greater than 10. Defaults to 0." + type = number + default = 0 + validation { + condition = var.lambda_event_source_mapping_maximum_batching_window_in_seconds >= 0 && var.lambda_event_source_mapping_maximum_batching_window_in_seconds <= 300 + error_message = "Maximum batching window must be between 0 and 300 seconds." + } +} From d2d09ffb41a9f88553f5e532239294d11db87096 Mon Sep 17 00:00:00 2001 From: Niek Palm Date: Mon, 8 Dec 2025 09:57:45 +0100 Subject: [PATCH 2/4] feat!: Upgrade lambda runtime to Node24.x (#4911) Upgrade to Lambda runtime Node24.x - Upgrade the minimum required AWS Terraform provider version - Upgrade all lambda runtimes by default to Node 24.x - Breaking change! ## Dependency and environment upgrades: * Updated all references to Node.js from version 22 to version 24 in GitHub Actions workflows (`.github/workflows/lambda.yml`, `.github/workflows/release.yml`) and Dockerfiles (`.ci/Dockerfile`, `.devcontainer/Dockerfile`). [[1]](diffhunk://#diff-b0732b88b9e5a3df48561602571a10179d2b28cbb21ba8032025932bc9606426L23-R23) [[2]](diffhunk://#diff-87db21a973eed4fef5f32b267aa60fcee5cbdf03c67fafdc2a9b553bb0b15f34L33-R33) [[3]](diffhunk://#diff-fd0c8401badda82156f9e7bd621fa3a0e586d8128e4a80af17c7cbff70bee11eL2-R2) [[4]](diffhunk://#diff-13bd9d7a30bf46656bc81f1ad5b908a627f9247be3f7d76df862b0578b534fc6L1-R1) * Upgraded the base Docker images for both the CI and devcontainer environments to use newer Node.js image digests. [[1]](diffhunk://#diff-fd0c8401badda82156f9e7bd621fa3a0e586d8128e4a80af17c7cbff70bee11eL2-R2) [[2]](diffhunk://#diff-13bd9d7a30bf46656bc81f1ad5b908a627f9247be3f7d76df862b0578b534fc6L1-R1) Terraform provider updates: * Increased the minimum required version for the AWS Terraform provider to `>= 6.21` in all example `versions.tf` files. [[1]](diffhunk://#diff-61160e0ae9e70de675b98889710708e1a9edcccd5194a4a580aa234caa5103aeL5-R5) [[2]](diffhunk://#diff-debb96ea7aef889f9deb4de51c61ca44a7e23832098e1c9d8b09fa54b1a96582L5-R5) * Updated the `.terraform.lock.hcl` files in all examples to lock the AWS provider at version `6.22.1`, the local provider at `2.6.1`, and the null provider at `3.2.4` where applicable, along with updated hash values and constraints. 
[[1]](diffhunk://#diff-101dfb4a445c2ab4a46c53cbc92db3a43f3423ba1e8ee7b8a11b393ebe835539L5-R43) [[2]](diffhunk://#diff-2a8b3082767f86cfdb88b00e963894a8cdd2ebcf705c8d757d46b55a98452a6cL5-R43) --------- Co-authored-by: github-aws-runners-pr|bot Co-authored-by: Guilherme Caulada --- .ci/Dockerfile | 2 +- .devcontainer/Dockerfile | 2 +- .github/workflows/lambda.yml | 2 +- .github/workflows/release.yml | 2 +- .tflint.hcl | 2 +- README.md | 6 +- examples/base/README.md | 2 +- examples/base/versions.tf | 2 +- examples/default/.terraform.lock.hcl | 60 +++++----- examples/default/README.md | 2 +- examples/default/versions.tf | 2 +- examples/ephemeral/.terraform.lock.hcl | 60 +++++----- examples/ephemeral/README.md | 2 +- examples/ephemeral/versions.tf | 2 +- .../.terraform.lock.hcl | 60 +++++----- .../external-managed-ssm-secrets/README.md | 2 +- .../external-managed-ssm-secrets/versions.tf | 2 +- examples/lambdas-download/.terraform.lock.hcl | 60 +++++----- examples/multi-runner/.terraform.lock.hcl | 112 +++++++++--------- examples/multi-runner/README.md | 6 +- examples/multi-runner/versions.tf | 2 +- .../permissions-boundary/.terraform.lock.hcl | 112 +++++++++--------- examples/permissions-boundary/README.md | 6 +- examples/permissions-boundary/setup/README.md | 4 +- .../permissions-boundary/setup/versions.tf | 10 +- examples/permissions-boundary/versions.tf | 2 +- examples/prebuilt/.terraform.lock.hcl | 112 +++++++++--------- examples/prebuilt/README.md | 6 +- examples/prebuilt/versions.tf | 2 +- .../termination-watcher/.terraform.lock.hcl | 34 +++--- lambdas/.nvmrc | 2 +- modules/ami-housekeeper/README.md | 6 +- modules/ami-housekeeper/variables.tf | 2 +- modules/ami-housekeeper/versions.tf | 2 +- modules/download-lambda/README.md | 2 +- modules/download-lambda/versions.tf | 2 +- modules/lambda/README.md | 6 +- modules/lambda/variables.tf | 2 +- modules/lambda/versions.tf | 2 +- modules/multi-runner/README.md | 6 +- modules/multi-runner/variables.tf | 2 +- modules/multi-runner/versions.tf | 2 +- modules/runner-binaries-syncer/README.md | 6 +- modules/runner-binaries-syncer/variables.tf | 2 +- modules/runner-binaries-syncer/versions.tf | 2 +- modules/runners/README.md | 6 +- modules/runners/job-retry/README.md | 4 +- modules/runners/job-retry/versions.tf | 2 +- modules/runners/pool/README.md | 4 +- modules/runners/pool/versions.tf | 2 +- modules/runners/variables.tf | 2 +- modules/runners/versions.tf | 2 +- modules/setup-iam-permissions/README.md | 4 +- modules/setup-iam-permissions/versions.tf | 2 +- modules/ssm/README.md | 4 +- modules/ssm/versions.tf | 2 +- modules/termination-watcher/README.md | 2 +- .../notification/README.md | 4 +- .../notification/versions.tf | 2 +- .../termination-watcher/termination/README.md | 4 +- .../termination/versions.tf | 2 +- modules/termination-watcher/versions.tf | 2 +- modules/webhook/README.md | 6 +- modules/webhook/direct/README.md | 6 +- modules/webhook/direct/variables.tf | 2 +- modules/webhook/direct/versions.tf | 2 +- modules/webhook/eventbridge/README.md | 6 +- modules/webhook/eventbridge/variables.tf | 2 +- modules/webhook/eventbridge/versions.tf | 2 +- modules/webhook/variables.tf | 2 +- modules/webhook/versions.tf | 2 +- variables.tf | 2 +- versions.tf | 2 +- 73 files changed, 410 insertions(+), 400 deletions(-) diff --git a/.ci/Dockerfile b/.ci/Dockerfile index 2aa2dd93d2..3566cc1251 100644 --- a/.ci/Dockerfile +++ b/.ci/Dockerfile @@ -1,5 +1,5 @@ #syntax=docker/dockerfile:1.2 -FROM 
node@sha256:0c0734eb7051babbb3e95cd74e684f940552b31472152edf0bb23e54ab44a0d7 as build +FROM node@sha256:1501d5fd51032aa10701a7dcc9e6c72ab1e611a033ffcf08b6d5882e9165f63e as build WORKDIR /lambdas RUN apt-get update \ && apt-get install -y zip \ diff --git a/.devcontainer/Dockerfile b/.devcontainer/Dockerfile index 2e7b5badb0..d20b2ced50 100644 --- a/.devcontainer/Dockerfile +++ b/.devcontainer/Dockerfile @@ -1 +1 @@ -FROM mcr.microsoft.com/vscode/devcontainers/typescript-node@sha256:acdce1045a2ddce4c66846d5cd09adf746d157fce9233124e4925b647f192b2e +FROM mcr.microsoft.com/vscode/devcontainers/typescript-node@sha256:d09eac5cd85fb4bd70770fa3f88ee9dfdd0b09f8b85455a0e039048677276749 diff --git a/.github/workflows/lambda.yml b/.github/workflows/lambda.yml index c0ff7774e8..50cbe8a6e8 100644 --- a/.github/workflows/lambda.yml +++ b/.github/workflows/lambda.yml @@ -20,7 +20,7 @@ jobs: name: Build and test lambda functions runs-on: ubuntu-latest container: - image: node:22@sha256:2bb201f33898d2c0ce638505b426f4dd038cc00e5b2b4cbba17b069f0fff1496 + image: node:24@sha256:aa648b387728c25f81ff811799bbf8de39df66d7e2d9b3ab55cc6300cb9175d9 defaults: run: working-directory: ./lambdas diff --git a/.github/workflows/release.yml b/.github/workflows/release.yml index 5c87727470..a78de21732 100644 --- a/.github/workflows/release.yml +++ b/.github/workflows/release.yml @@ -30,7 +30,7 @@ jobs: - uses: actions/setup-node@2028fbc5c25fe9cf00d9f06a71cc4710d4507903 # v6.0.0 with: - node-version: 22 + node-version: 24 package-manager-cache: false - uses: actions/checkout@1af3b93b6815bc44a9784bd300feb67ff0d1eeb3 # v6.0.0 with: diff --git a/.tflint.hcl b/.tflint.hcl index 227338085f..1fa77a630a 100644 --- a/.tflint.hcl +++ b/.tflint.hcl @@ -5,7 +5,7 @@ config { plugin "aws" { enabled = true - version = "0.36.0" + version = "0.44.0" source = "github.com/terraform-linters/tflint-ruleset-aws" } diff --git a/README.md b/README.md index f242fe734b..75e4727fc1 100644 --- a/README.md +++ b/README.md @@ -70,14 +70,14 @@ Join our discord community via [this invite link](https://discord.gg/bxgXW8jJGh) | Name | Version | |------|---------| | [terraform](#requirement\_terraform) | >= 1.3.0 | -| [aws](#requirement\_aws) | >= 5.77 | +| [aws](#requirement\_aws) | >= 6.21 | | [random](#requirement\_random) | ~> 3.0 | ## Providers | Name | Version | |------|---------| -| [aws](#provider\_aws) | >= 5.77 | +| [aws](#provider\_aws) | >= 6.21 | | [random](#provider\_random) | ~> 3.0 | ## Modules @@ -158,7 +158,7 @@ Join our discord community via [this invite link](https://discord.gg/bxgXW8jJGh) | [lambda\_event\_source\_mapping\_batch\_size](#input\_lambda\_event\_source\_mapping\_batch\_size) | Maximum number of records to pass to the lambda function in a single batch for the event source mapping. When not set, the AWS default of 10 events will be used. | `number` | `10` | no | | [lambda\_event\_source\_mapping\_maximum\_batching\_window\_in\_seconds](#input\_lambda\_event\_source\_mapping\_maximum\_batching\_window\_in\_seconds) | Maximum amount of time to gather records before invoking the lambda function, in seconds. AWS requires this to be greater than 0 if batch\_size is greater than 10. Defaults to 0. | `number` | `0` | no | | [lambda\_principals](#input\_lambda\_principals) | (Optional) add extra principals to the role created for execution of the lambda, e.g. for local testing. |
list(object({
type = string
identifiers = list(string)
}))
| `[]` | no | -| [lambda\_runtime](#input\_lambda\_runtime) | AWS Lambda runtime. | `string` | `"nodejs22.x"` | no | +| [lambda\_runtime](#input\_lambda\_runtime) | AWS Lambda runtime. | `string` | `"nodejs24.x"` | no | | [lambda\_s3\_bucket](#input\_lambda\_s3\_bucket) | S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly. | `string` | `null` | no | | [lambda\_security\_group\_ids](#input\_lambda\_security\_group\_ids) | List of security group IDs associated with the Lambda function. | `list(string)` | `[]` | no | | [lambda\_subnet\_ids](#input\_lambda\_subnet\_ids) | List of subnets in which the action runners will be launched, the subnets needs to be subnets in the `vpc_id`. | `list(string)` | `[]` | no | diff --git a/examples/base/README.md b/examples/base/README.md index 761f9ed457..95b6fcee52 100644 --- a/examples/base/README.md +++ b/examples/base/README.md @@ -4,7 +4,7 @@ | Name | Version | |------|---------| | [terraform](#requirement\_terraform) | >= 1 | -| [aws](#requirement\_aws) | >= 5.27 | +| [aws](#requirement\_aws) | >= 6.21 | ## Providers diff --git a/examples/base/versions.tf b/examples/base/versions.tf index d6eaf0ca72..b8ede4f3b6 100644 --- a/examples/base/versions.tf +++ b/examples/base/versions.tf @@ -2,7 +2,7 @@ terraform { required_providers { aws = { source = "hashicorp/aws" - version = ">= 5.27" # ensure backwards compatibility with v5.x + version = ">= 6.21" # ensure backwards compatibility with v6.x } } required_version = ">= 1" diff --git a/examples/default/.terraform.lock.hcl b/examples/default/.terraform.lock.hcl index a3e7346b15..0f6cc37765 100644 --- a/examples/default/.terraform.lock.hcl +++ b/examples/default/.terraform.lock.hcl @@ -2,45 +2,45 @@ # Manual edits may be lost in future updates. 
provider "registry.terraform.io/hashicorp/aws" { - version = "6.0.0" - constraints = ">= 5.0.0, >= 5.27.0, >= 5.77.0, >= 6.0.0" + version = "6.22.1" + constraints = ">= 5.0.0, >= 6.0.0, >= 6.21.0" hashes = [ - "h1:dbRRZ1NzH1QV/+83xT/X3MLYaZobMXt8DNwbqnJojpo=", - "zh:16b1bb786719b7ebcddba3ab751b976ebf4006f7144afeebcb83f0c5f41f8eb9", - "zh:1fbc08b817b9eaf45a2b72ccba59f4ea19e7fcf017be29f5a9552b623eccc5bc", - "zh:304f58f3333dbe846cfbdfc2227e6ed77041ceea33b6183972f3f8ab51bd065f", - "zh:4cd447b5c24f14553bd6e1a0e4fea3c7d7b218cbb2316a3d93f1c5cb562c181b", - "zh:589472b56be8277558616075fc5480fcd812ba6dc70e8979375fc6d8750f83ef", - "zh:5d78484ba43c26f1ef6067c4150550b06fd39c5d4bfb790f92c4a6f7d9d0201b", - "zh:5f470ce664bffb22ace736643d2abe7ad45858022b652143bcd02d71d38d4e42", - "zh:7a9cbb947aaab8c885096bce5da22838ca482196cf7d04ffb8bdf7fd28003e47", - "zh:854df3e4c50675e727705a0eaa4f8d42ccd7df6a5efa2456f0205a9901ace019", - "zh:87162c0f47b1260f5969679dccb246cb528f27f01229d02fd30a8e2f9869ba2c", - "zh:9a145404d506b52078cd7060e6cbb83f8fc7953f3f63a5e7137d41f69d6317a3", + "h1:PTgxp+nMDBd6EFHAIH6ceFfvwa2blqkCwXglZn6Dqa8=", + "zh:3995ca97e6c2c1ed9e231c453287585d3dc1ca2a304683ac0b269b3448fda7c0", + "zh:4f69f70d2edeb0dde9c693b7cd7e8e21c781b2fac7062bed5300092dbadb71e1", + "zh:5c76042fdf3df56a1f581bc477e5d6fc3e099d4d6544fe725b3747e9990726bd", + "zh:6ff8221340955f4b3ba9230918bb026c4414a5aebe9d0967845c43e8e8908aec", + "zh:73cdd8638cb52bbe25887cd5b7946cc3fcb891867de11bcb0fde9b35c4f70a41", + "zh:7af5aec2fd01fa5e5f600f1db1bcf200aaadc05a2c8ffcbb4b6b61cd2bd3e33b", + "zh:7e055cfa7f40b667f5f7af564db9544f46aa189cdbe5530ad812e027647132f5", "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", - "zh:a4eab2649f5afe06cc406ce2aaf9fd44dcf311123f48d344c255e93454c08921", - "zh:bea09141c6186a3e133413ae3a2e3d1aaf4f43466a6a468827287527edf21710", - "zh:d7ea2a35ff55ddfe639ab3b04331556b772a8698eca01f5d74151615d9f336db", + "zh:aba898190c668ade4471da65c96db414679367174ac5b73e8ce7551056c77e3e", + "zh:aedaa8d7d71e6d58cdc09a7e3bcb8031b3ea496a7ac142376eb679d1756057f3", + "zh:cb9739952d467b3f6d72d57722943956e80ab235b58a0e34758538381dcc386c", + "zh:e12a2681028a70cb08eaf4c3364ddab386416502f966067bf99e79ba6be0d7b6", + "zh:e32a922a7d6fd5df69b3cc92932fc2689dc195b0f8b493dcd686abdd892b06cd", + "zh:f2dea7dead6f34b51e8b6aae177a8b333834a41d25529baa634a087d99ea32f6", + "zh:f6eee6df0366e8452d912cfd498792579aede88de3b67c15d36b8949e37479b1", ] } provider "registry.terraform.io/hashicorp/local" { - version = "2.5.3" + version = "2.6.1" constraints = "~> 2.0" hashes = [ - "h1:MCzg+hs1/ZQ32u56VzJMWP9ONRQPAAqAjuHuzbyshvI=", - "zh:284d4b5b572eacd456e605e94372f740f6de27b71b4e1fd49b63745d8ecd4927", - "zh:40d9dfc9c549e406b5aab73c023aa485633c1b6b730c933d7bcc2fa67fd1ae6e", - "zh:6243509bb208656eb9dc17d3c525c89acdd27f08def427a0dce22d5db90a4c8b", + "h1:DbiR/D2CPigzCGweYIyJH0N0x04oyI5xiZ9wSW/s3kQ=", + "zh:10050d08f416de42a857e4b6f76809aae63ea4ec6f5c852a126a915dede814b4", + "zh:2df2a3ebe9830d4759c59b51702e209fe053f47453cb4688f43c063bac8746b7", + "zh:2e759568bcc38c86ca0e43701d34cf29945736fdc8e429c5b287ddc2703c7b18", + "zh:6a62a34e48500ab4aea778e355e162ebde03260b7a9eb9edc7e534c84fbca4c6", + "zh:74373728ba32a1d5450a3a88ac45624579e32755b086cd4e51e88d9aca240ef6", "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", - "zh:885d85869f927853b6fe330e235cd03c337ac3b933b0d9ae827ec32fa1fdcdbf", - "zh:bab66af51039bdfcccf85b25fe562cbba2f54f6b3812202f4873ade834ec201d", - "zh:c505ff1bf9442a889ac7dca3ac05a8ee6f852e0118dd9a61796a2f6ff4837f09", - 
"zh:d36c0b5770841ddb6eaf0499ba3de48e5d4fc99f4829b6ab66b0fab59b1aaf4f", - "zh:ddb6a407c7f3ec63efb4dad5f948b54f7f4434ee1a2607a49680d494b1776fe1", - "zh:e0dafdd4500bec23d3ff221e3a9b60621c5273e5df867bc59ef6b7e41f5c91f6", - "zh:ece8742fd2882a8fc9d6efd20e2590010d43db386b920b2a9c220cfecc18de47", - "zh:f4c6b3eb8f39105004cf720e202f04f57e3578441cfb76ca27611139bc116a82", + "zh:8dddae588971a996f622e7589cd8b9da7834c744ac12bfb59c97fa77ded95255", + "zh:946f82f66353bb97aefa8d95c4ca86db227f9b7c50b82415289ac47e4e74d08d", + "zh:e9a5c09e6f35e510acf15b666fd0b34a30164cecdcd81ce7cda0f4b2dade8d91", + "zh:eafe5b873ef42b32feb2f969c38ff8652507e695620cbaf03b9db714bee52249", + "zh:ec146289fa27650c9d433bb5c7847379180c0b7a323b1b94e6e7ad5d2a7dbe71", + "zh:fc882c35ce05631d76c0973b35adde26980778fc81d9da81a2fade2b9d73423b", ] } diff --git a/examples/default/README.md b/examples/default/README.md index 618db0b633..28d1baa141 100644 --- a/examples/default/README.md +++ b/examples/default/README.md @@ -34,7 +34,7 @@ terraform output -raw webhook_secret | Name | Version | |------|---------| | [terraform](#requirement\_terraform) | >= 1.3.0 | -| [aws](#requirement\_aws) | >= 6.0 | +| [aws](#requirement\_aws) | >= 6.21 | | [local](#requirement\_local) | ~> 2.0 | | [random](#requirement\_random) | ~> 3.0 | diff --git a/examples/default/versions.tf b/examples/default/versions.tf index 734eaa0b3e..af642af83b 100644 --- a/examples/default/versions.tf +++ b/examples/default/versions.tf @@ -2,7 +2,7 @@ terraform { required_providers { aws = { source = "hashicorp/aws" - version = ">= 6.0" + version = ">= 6.21" } local = { source = "hashicorp/local" diff --git a/examples/ephemeral/.terraform.lock.hcl b/examples/ephemeral/.terraform.lock.hcl index a3e7346b15..0f6cc37765 100644 --- a/examples/ephemeral/.terraform.lock.hcl +++ b/examples/ephemeral/.terraform.lock.hcl @@ -2,45 +2,45 @@ # Manual edits may be lost in future updates. 
provider "registry.terraform.io/hashicorp/aws" { - version = "6.0.0" - constraints = ">= 5.0.0, >= 5.27.0, >= 5.77.0, >= 6.0.0" + version = "6.22.1" + constraints = ">= 5.0.0, >= 6.0.0, >= 6.21.0" hashes = [ - "h1:dbRRZ1NzH1QV/+83xT/X3MLYaZobMXt8DNwbqnJojpo=", - "zh:16b1bb786719b7ebcddba3ab751b976ebf4006f7144afeebcb83f0c5f41f8eb9", - "zh:1fbc08b817b9eaf45a2b72ccba59f4ea19e7fcf017be29f5a9552b623eccc5bc", - "zh:304f58f3333dbe846cfbdfc2227e6ed77041ceea33b6183972f3f8ab51bd065f", - "zh:4cd447b5c24f14553bd6e1a0e4fea3c7d7b218cbb2316a3d93f1c5cb562c181b", - "zh:589472b56be8277558616075fc5480fcd812ba6dc70e8979375fc6d8750f83ef", - "zh:5d78484ba43c26f1ef6067c4150550b06fd39c5d4bfb790f92c4a6f7d9d0201b", - "zh:5f470ce664bffb22ace736643d2abe7ad45858022b652143bcd02d71d38d4e42", - "zh:7a9cbb947aaab8c885096bce5da22838ca482196cf7d04ffb8bdf7fd28003e47", - "zh:854df3e4c50675e727705a0eaa4f8d42ccd7df6a5efa2456f0205a9901ace019", - "zh:87162c0f47b1260f5969679dccb246cb528f27f01229d02fd30a8e2f9869ba2c", - "zh:9a145404d506b52078cd7060e6cbb83f8fc7953f3f63a5e7137d41f69d6317a3", + "h1:PTgxp+nMDBd6EFHAIH6ceFfvwa2blqkCwXglZn6Dqa8=", + "zh:3995ca97e6c2c1ed9e231c453287585d3dc1ca2a304683ac0b269b3448fda7c0", + "zh:4f69f70d2edeb0dde9c693b7cd7e8e21c781b2fac7062bed5300092dbadb71e1", + "zh:5c76042fdf3df56a1f581bc477e5d6fc3e099d4d6544fe725b3747e9990726bd", + "zh:6ff8221340955f4b3ba9230918bb026c4414a5aebe9d0967845c43e8e8908aec", + "zh:73cdd8638cb52bbe25887cd5b7946cc3fcb891867de11bcb0fde9b35c4f70a41", + "zh:7af5aec2fd01fa5e5f600f1db1bcf200aaadc05a2c8ffcbb4b6b61cd2bd3e33b", + "zh:7e055cfa7f40b667f5f7af564db9544f46aa189cdbe5530ad812e027647132f5", "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", - "zh:a4eab2649f5afe06cc406ce2aaf9fd44dcf311123f48d344c255e93454c08921", - "zh:bea09141c6186a3e133413ae3a2e3d1aaf4f43466a6a468827287527edf21710", - "zh:d7ea2a35ff55ddfe639ab3b04331556b772a8698eca01f5d74151615d9f336db", + "zh:aba898190c668ade4471da65c96db414679367174ac5b73e8ce7551056c77e3e", + "zh:aedaa8d7d71e6d58cdc09a7e3bcb8031b3ea496a7ac142376eb679d1756057f3", + "zh:cb9739952d467b3f6d72d57722943956e80ab235b58a0e34758538381dcc386c", + "zh:e12a2681028a70cb08eaf4c3364ddab386416502f966067bf99e79ba6be0d7b6", + "zh:e32a922a7d6fd5df69b3cc92932fc2689dc195b0f8b493dcd686abdd892b06cd", + "zh:f2dea7dead6f34b51e8b6aae177a8b333834a41d25529baa634a087d99ea32f6", + "zh:f6eee6df0366e8452d912cfd498792579aede88de3b67c15d36b8949e37479b1", ] } provider "registry.terraform.io/hashicorp/local" { - version = "2.5.3" + version = "2.6.1" constraints = "~> 2.0" hashes = [ - "h1:MCzg+hs1/ZQ32u56VzJMWP9ONRQPAAqAjuHuzbyshvI=", - "zh:284d4b5b572eacd456e605e94372f740f6de27b71b4e1fd49b63745d8ecd4927", - "zh:40d9dfc9c549e406b5aab73c023aa485633c1b6b730c933d7bcc2fa67fd1ae6e", - "zh:6243509bb208656eb9dc17d3c525c89acdd27f08def427a0dce22d5db90a4c8b", + "h1:DbiR/D2CPigzCGweYIyJH0N0x04oyI5xiZ9wSW/s3kQ=", + "zh:10050d08f416de42a857e4b6f76809aae63ea4ec6f5c852a126a915dede814b4", + "zh:2df2a3ebe9830d4759c59b51702e209fe053f47453cb4688f43c063bac8746b7", + "zh:2e759568bcc38c86ca0e43701d34cf29945736fdc8e429c5b287ddc2703c7b18", + "zh:6a62a34e48500ab4aea778e355e162ebde03260b7a9eb9edc7e534c84fbca4c6", + "zh:74373728ba32a1d5450a3a88ac45624579e32755b086cd4e51e88d9aca240ef6", "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", - "zh:885d85869f927853b6fe330e235cd03c337ac3b933b0d9ae827ec32fa1fdcdbf", - "zh:bab66af51039bdfcccf85b25fe562cbba2f54f6b3812202f4873ade834ec201d", - "zh:c505ff1bf9442a889ac7dca3ac05a8ee6f852e0118dd9a61796a2f6ff4837f09", - 
"zh:d36c0b5770841ddb6eaf0499ba3de48e5d4fc99f4829b6ab66b0fab59b1aaf4f", - "zh:ddb6a407c7f3ec63efb4dad5f948b54f7f4434ee1a2607a49680d494b1776fe1", - "zh:e0dafdd4500bec23d3ff221e3a9b60621c5273e5df867bc59ef6b7e41f5c91f6", - "zh:ece8742fd2882a8fc9d6efd20e2590010d43db386b920b2a9c220cfecc18de47", - "zh:f4c6b3eb8f39105004cf720e202f04f57e3578441cfb76ca27611139bc116a82", + "zh:8dddae588971a996f622e7589cd8b9da7834c744ac12bfb59c97fa77ded95255", + "zh:946f82f66353bb97aefa8d95c4ca86db227f9b7c50b82415289ac47e4e74d08d", + "zh:e9a5c09e6f35e510acf15b666fd0b34a30164cecdcd81ce7cda0f4b2dade8d91", + "zh:eafe5b873ef42b32feb2f969c38ff8652507e695620cbaf03b9db714bee52249", + "zh:ec146289fa27650c9d433bb5c7847379180c0b7a323b1b94e6e7ad5d2a7dbe71", + "zh:fc882c35ce05631d76c0973b35adde26980778fc81d9da81a2fade2b9d73423b", ] } diff --git a/examples/ephemeral/README.md b/examples/ephemeral/README.md index 705f2d13b4..04f2177d7e 100644 --- a/examples/ephemeral/README.md +++ b/examples/ephemeral/README.md @@ -33,7 +33,7 @@ terraform output webhook_secret | Name | Version | |------|---------| | [terraform](#requirement\_terraform) | >= 1.3.0 | -| [aws](#requirement\_aws) | >= 6.0 | +| [aws](#requirement\_aws) | >= 6.21 | | [local](#requirement\_local) | ~> 2.0 | | [random](#requirement\_random) | ~> 3.0 | diff --git a/examples/ephemeral/versions.tf b/examples/ephemeral/versions.tf index 734eaa0b3e..af642af83b 100644 --- a/examples/ephemeral/versions.tf +++ b/examples/ephemeral/versions.tf @@ -2,7 +2,7 @@ terraform { required_providers { aws = { source = "hashicorp/aws" - version = ">= 6.0" + version = ">= 6.21" } local = { source = "hashicorp/local" diff --git a/examples/external-managed-ssm-secrets/.terraform.lock.hcl b/examples/external-managed-ssm-secrets/.terraform.lock.hcl index a3e7346b15..0f6cc37765 100644 --- a/examples/external-managed-ssm-secrets/.terraform.lock.hcl +++ b/examples/external-managed-ssm-secrets/.terraform.lock.hcl @@ -2,45 +2,45 @@ # Manual edits may be lost in future updates. 
provider "registry.terraform.io/hashicorp/aws" { - version = "6.0.0" - constraints = ">= 5.0.0, >= 5.27.0, >= 5.77.0, >= 6.0.0" + version = "6.22.1" + constraints = ">= 5.0.0, >= 6.0.0, >= 6.21.0" hashes = [ - "h1:dbRRZ1NzH1QV/+83xT/X3MLYaZobMXt8DNwbqnJojpo=", - "zh:16b1bb786719b7ebcddba3ab751b976ebf4006f7144afeebcb83f0c5f41f8eb9", - "zh:1fbc08b817b9eaf45a2b72ccba59f4ea19e7fcf017be29f5a9552b623eccc5bc", - "zh:304f58f3333dbe846cfbdfc2227e6ed77041ceea33b6183972f3f8ab51bd065f", - "zh:4cd447b5c24f14553bd6e1a0e4fea3c7d7b218cbb2316a3d93f1c5cb562c181b", - "zh:589472b56be8277558616075fc5480fcd812ba6dc70e8979375fc6d8750f83ef", - "zh:5d78484ba43c26f1ef6067c4150550b06fd39c5d4bfb790f92c4a6f7d9d0201b", - "zh:5f470ce664bffb22ace736643d2abe7ad45858022b652143bcd02d71d38d4e42", - "zh:7a9cbb947aaab8c885096bce5da22838ca482196cf7d04ffb8bdf7fd28003e47", - "zh:854df3e4c50675e727705a0eaa4f8d42ccd7df6a5efa2456f0205a9901ace019", - "zh:87162c0f47b1260f5969679dccb246cb528f27f01229d02fd30a8e2f9869ba2c", - "zh:9a145404d506b52078cd7060e6cbb83f8fc7953f3f63a5e7137d41f69d6317a3", + "h1:PTgxp+nMDBd6EFHAIH6ceFfvwa2blqkCwXglZn6Dqa8=", + "zh:3995ca97e6c2c1ed9e231c453287585d3dc1ca2a304683ac0b269b3448fda7c0", + "zh:4f69f70d2edeb0dde9c693b7cd7e8e21c781b2fac7062bed5300092dbadb71e1", + "zh:5c76042fdf3df56a1f581bc477e5d6fc3e099d4d6544fe725b3747e9990726bd", + "zh:6ff8221340955f4b3ba9230918bb026c4414a5aebe9d0967845c43e8e8908aec", + "zh:73cdd8638cb52bbe25887cd5b7946cc3fcb891867de11bcb0fde9b35c4f70a41", + "zh:7af5aec2fd01fa5e5f600f1db1bcf200aaadc05a2c8ffcbb4b6b61cd2bd3e33b", + "zh:7e055cfa7f40b667f5f7af564db9544f46aa189cdbe5530ad812e027647132f5", "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", - "zh:a4eab2649f5afe06cc406ce2aaf9fd44dcf311123f48d344c255e93454c08921", - "zh:bea09141c6186a3e133413ae3a2e3d1aaf4f43466a6a468827287527edf21710", - "zh:d7ea2a35ff55ddfe639ab3b04331556b772a8698eca01f5d74151615d9f336db", + "zh:aba898190c668ade4471da65c96db414679367174ac5b73e8ce7551056c77e3e", + "zh:aedaa8d7d71e6d58cdc09a7e3bcb8031b3ea496a7ac142376eb679d1756057f3", + "zh:cb9739952d467b3f6d72d57722943956e80ab235b58a0e34758538381dcc386c", + "zh:e12a2681028a70cb08eaf4c3364ddab386416502f966067bf99e79ba6be0d7b6", + "zh:e32a922a7d6fd5df69b3cc92932fc2689dc195b0f8b493dcd686abdd892b06cd", + "zh:f2dea7dead6f34b51e8b6aae177a8b333834a41d25529baa634a087d99ea32f6", + "zh:f6eee6df0366e8452d912cfd498792579aede88de3b67c15d36b8949e37479b1", ] } provider "registry.terraform.io/hashicorp/local" { - version = "2.5.3" + version = "2.6.1" constraints = "~> 2.0" hashes = [ - "h1:MCzg+hs1/ZQ32u56VzJMWP9ONRQPAAqAjuHuzbyshvI=", - "zh:284d4b5b572eacd456e605e94372f740f6de27b71b4e1fd49b63745d8ecd4927", - "zh:40d9dfc9c549e406b5aab73c023aa485633c1b6b730c933d7bcc2fa67fd1ae6e", - "zh:6243509bb208656eb9dc17d3c525c89acdd27f08def427a0dce22d5db90a4c8b", + "h1:DbiR/D2CPigzCGweYIyJH0N0x04oyI5xiZ9wSW/s3kQ=", + "zh:10050d08f416de42a857e4b6f76809aae63ea4ec6f5c852a126a915dede814b4", + "zh:2df2a3ebe9830d4759c59b51702e209fe053f47453cb4688f43c063bac8746b7", + "zh:2e759568bcc38c86ca0e43701d34cf29945736fdc8e429c5b287ddc2703c7b18", + "zh:6a62a34e48500ab4aea778e355e162ebde03260b7a9eb9edc7e534c84fbca4c6", + "zh:74373728ba32a1d5450a3a88ac45624579e32755b086cd4e51e88d9aca240ef6", "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", - "zh:885d85869f927853b6fe330e235cd03c337ac3b933b0d9ae827ec32fa1fdcdbf", - "zh:bab66af51039bdfcccf85b25fe562cbba2f54f6b3812202f4873ade834ec201d", - "zh:c505ff1bf9442a889ac7dca3ac05a8ee6f852e0118dd9a61796a2f6ff4837f09", - 
"zh:d36c0b5770841ddb6eaf0499ba3de48e5d4fc99f4829b6ab66b0fab59b1aaf4f", - "zh:ddb6a407c7f3ec63efb4dad5f948b54f7f4434ee1a2607a49680d494b1776fe1", - "zh:e0dafdd4500bec23d3ff221e3a9b60621c5273e5df867bc59ef6b7e41f5c91f6", - "zh:ece8742fd2882a8fc9d6efd20e2590010d43db386b920b2a9c220cfecc18de47", - "zh:f4c6b3eb8f39105004cf720e202f04f57e3578441cfb76ca27611139bc116a82", + "zh:8dddae588971a996f622e7589cd8b9da7834c744ac12bfb59c97fa77ded95255", + "zh:946f82f66353bb97aefa8d95c4ca86db227f9b7c50b82415289ac47e4e74d08d", + "zh:e9a5c09e6f35e510acf15b666fd0b34a30164cecdcd81ce7cda0f4b2dade8d91", + "zh:eafe5b873ef42b32feb2f969c38ff8652507e695620cbaf03b9db714bee52249", + "zh:ec146289fa27650c9d433bb5c7847379180c0b7a323b1b94e6e7ad5d2a7dbe71", + "zh:fc882c35ce05631d76c0973b35adde26980778fc81d9da81a2fade2b9d73423b", ] } diff --git a/examples/external-managed-ssm-secrets/README.md b/examples/external-managed-ssm-secrets/README.md index fc208e02b0..5a9a725dd3 100644 --- a/examples/external-managed-ssm-secrets/README.md +++ b/examples/external-managed-ssm-secrets/README.md @@ -80,7 +80,7 @@ terraform output -raw webhook_secret | Name | Version | |------|---------| | [terraform](#requirement\_terraform) | >= 1.3.0 | -| [aws](#requirement\_aws) | >= 6.0 | +| [aws](#requirement\_aws) | >= 6.21 | | [local](#requirement\_local) | ~> 2.0 | | [random](#requirement\_random) | ~> 3.0 | diff --git a/examples/external-managed-ssm-secrets/versions.tf b/examples/external-managed-ssm-secrets/versions.tf index 734eaa0b3e..af642af83b 100644 --- a/examples/external-managed-ssm-secrets/versions.tf +++ b/examples/external-managed-ssm-secrets/versions.tf @@ -2,7 +2,7 @@ terraform { required_providers { aws = { source = "hashicorp/aws" - version = ">= 6.0" + version = ">= 6.21" } local = { source = "hashicorp/local" diff --git a/examples/lambdas-download/.terraform.lock.hcl b/examples/lambdas-download/.terraform.lock.hcl index f09822f0e2..bbebce4350 100644 --- a/examples/lambdas-download/.terraform.lock.hcl +++ b/examples/lambdas-download/.terraform.lock.hcl @@ -2,44 +2,44 @@ # Manual edits may be lost in future updates. 
provider "registry.terraform.io/hashicorp/aws" { - version = "5.82.1" - constraints = "~> 5.27" + version = "6.22.1" + constraints = ">= 6.21.0" hashes = [ - "h1:QTOtDMehUfiD3wDbbDuXYuTqGgLDkKK9Agkd5NCUEic=", - "zh:0fde8533282973f1f5d33b2c4f82d962a2c78860d39b42ac20a9ce399f06f62c", - "zh:1fd1a252bffe91668f35be8eac4e0a980f022120254eae1674c3c05049aff88a", - "zh:31bbd380cd7d74bf9a8c961fc64da4222bed40ffbdb27b011e637fa8b2d33641", - "zh:333ee400cf6f62fa199dc1270bf8efac6ffe56659f86918070b8351b8636e03b", - "zh:42ea9fee0a152d344d548eab43583299a13bcd73fae9e53e7e1a708720ac1315", - "zh:4b78f25a8cda3316eb56aa01909a403ec2f325a2eb0512c9a73966068c26cf29", - "zh:5e9cf9a275eda8f7940a41e32abe0b92ba76b5744def4af5124b343b5f33eb94", - "zh:6a46c8630c16b9e1338c2daed6006118db951420108b58b8b886403c69317439", - "zh:6efe11cf1a01f98a8d8043cdcd8c0ee5fe93a0e582c2b69ebb73ea073f5068c3", - "zh:88ab5c768c7d8133dab94eff48071e764424ad2b7cfeee5abe6d5bb16e4b85c6", + "h1:PTgxp+nMDBd6EFHAIH6ceFfvwa2blqkCwXglZn6Dqa8=", + "zh:3995ca97e6c2c1ed9e231c453287585d3dc1ca2a304683ac0b269b3448fda7c0", + "zh:4f69f70d2edeb0dde9c693b7cd7e8e21c781b2fac7062bed5300092dbadb71e1", + "zh:5c76042fdf3df56a1f581bc477e5d6fc3e099d4d6544fe725b3747e9990726bd", + "zh:6ff8221340955f4b3ba9230918bb026c4414a5aebe9d0967845c43e8e8908aec", + "zh:73cdd8638cb52bbe25887cd5b7946cc3fcb891867de11bcb0fde9b35c4f70a41", + "zh:7af5aec2fd01fa5e5f600f1db1bcf200aaadc05a2c8ffcbb4b6b61cd2bd3e33b", + "zh:7e055cfa7f40b667f5f7af564db9544f46aa189cdbe5530ad812e027647132f5", "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", - "zh:a614beb312574342b27dbc34d65b450997f63fa3e948d0d30f441e4f69337380", - "zh:c1f486e27130610a9b64cacb0bd928009c433d62b3be515488185e6467b4aa1f", - "zh:dccd166e89e1a02e7ce658df3c42d040edec4b09c6f7906aa5743938518148b1", - "zh:e75a3ae0fb42b7ea5a0bb5dffd8f8468004c9700fcc934eb04c264fda2ba9984", + "zh:aba898190c668ade4471da65c96db414679367174ac5b73e8ce7551056c77e3e", + "zh:aedaa8d7d71e6d58cdc09a7e3bcb8031b3ea496a7ac142376eb679d1756057f3", + "zh:cb9739952d467b3f6d72d57722943956e80ab235b58a0e34758538381dcc386c", + "zh:e12a2681028a70cb08eaf4c3364ddab386416502f966067bf99e79ba6be0d7b6", + "zh:e32a922a7d6fd5df69b3cc92932fc2689dc195b0f8b493dcd686abdd892b06cd", + "zh:f2dea7dead6f34b51e8b6aae177a8b333834a41d25529baa634a087d99ea32f6", + "zh:f6eee6df0366e8452d912cfd498792579aede88de3b67c15d36b8949e37479b1", ] } provider "registry.terraform.io/hashicorp/null" { - version = "3.2.3" + version = "3.2.4" constraints = "~> 3.0" hashes = [ - "h1:I0Um8UkrMUb81Fxq/dxbr3HLP2cecTH2WMJiwKSrwQY=", - "zh:22d062e5278d872fe7aed834f5577ba0a5afe34a3bdac2b81f828d8d3e6706d2", - "zh:23dead00493ad863729495dc212fd6c29b8293e707b055ce5ba21ee453ce552d", - "zh:28299accf21763ca1ca144d8f660688d7c2ad0b105b7202554ca60b02a3856d3", - "zh:55c9e8a9ac25a7652df8c51a8a9a422bd67d784061b1de2dc9fe6c3cb4e77f2f", - "zh:756586535d11698a216291c06b9ed8a5cc6a4ec43eee1ee09ecd5c6a9e297ac1", + "h1:L5V05xwp/Gto1leRryuesxjMfgZwjb7oool4WS1UEFQ=", + "zh:59f6b52ab4ff35739647f9509ee6d93d7c032985d9f8c6237d1f8a59471bbbe2", "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", - "zh:9d5eea62fdb587eeb96a8c4d782459f4e6b73baeece4d04b4a40e44faaee9301", - "zh:a6355f596a3fb8fc85c2fb054ab14e722991533f87f928e7169a486462c74670", - "zh:b5a65a789cff4ada58a5baffc76cb9767dc26ec6b45c00d2ec8b1b027f6db4ed", - "zh:db5ab669cf11d0e9f81dc380a6fdfcac437aea3d69109c7aef1a5426639d2d65", - "zh:de655d251c470197bcbb5ac45d289595295acb8f829f6c781d4a75c8c8b7c7dd", - 
"zh:f5c68199f2e6076bce92a12230434782bf768103a427e9bb9abee99b116af7b5", + "zh:795c897119ff082133150121d39ff26cb5f89a730a2c8c26f3a9c1abf81a9c43", + "zh:7b9c7b16f118fbc2b05a983817b8ce2f86df125857966ad356353baf4bff5c0a", + "zh:85e33ab43e0e1726e5f97a874b8e24820b6565ff8076523cc2922ba671492991", + "zh:9d32ac3619cfc93eb3c4f423492a8e0f79db05fec58e449dee9b2d5873d5f69f", + "zh:9e15c3c9dd8e0d1e3731841d44c34571b6c97f5b95e8296a45318b94e5287a6e", + "zh:b4c2ab35d1b7696c30b64bf2c0f3a62329107bd1a9121ce70683dec58af19615", + "zh:c43723e8cc65bcdf5e0c92581dcbbdcbdcf18b8d2037406a5f2033b1e22de442", + "zh:ceb5495d9c31bfb299d246ab333f08c7fb0d67a4f82681fbf47f2a21c3e11ab5", + "zh:e171026b3659305c558d9804062762d168f50ba02b88b231d20ec99578a6233f", + "zh:ed0fe2acdb61330b01841fa790be00ec6beaac91d41f311fb8254f74eb6a711f", ] } diff --git a/examples/multi-runner/.terraform.lock.hcl b/examples/multi-runner/.terraform.lock.hcl index 045fb7350a..0f6cc37765 100644 --- a/examples/multi-runner/.terraform.lock.hcl +++ b/examples/multi-runner/.terraform.lock.hcl @@ -2,84 +2,84 @@ # Manual edits may be lost in future updates. provider "registry.terraform.io/hashicorp/aws" { - version = "5.82.1" - constraints = ">= 5.0.0, ~> 5.0, ~> 5.27" + version = "6.22.1" + constraints = ">= 5.0.0, >= 6.0.0, >= 6.21.0" hashes = [ - "h1:QTOtDMehUfiD3wDbbDuXYuTqGgLDkKK9Agkd5NCUEic=", - "zh:0fde8533282973f1f5d33b2c4f82d962a2c78860d39b42ac20a9ce399f06f62c", - "zh:1fd1a252bffe91668f35be8eac4e0a980f022120254eae1674c3c05049aff88a", - "zh:31bbd380cd7d74bf9a8c961fc64da4222bed40ffbdb27b011e637fa8b2d33641", - "zh:333ee400cf6f62fa199dc1270bf8efac6ffe56659f86918070b8351b8636e03b", - "zh:42ea9fee0a152d344d548eab43583299a13bcd73fae9e53e7e1a708720ac1315", - "zh:4b78f25a8cda3316eb56aa01909a403ec2f325a2eb0512c9a73966068c26cf29", - "zh:5e9cf9a275eda8f7940a41e32abe0b92ba76b5744def4af5124b343b5f33eb94", - "zh:6a46c8630c16b9e1338c2daed6006118db951420108b58b8b886403c69317439", - "zh:6efe11cf1a01f98a8d8043cdcd8c0ee5fe93a0e582c2b69ebb73ea073f5068c3", - "zh:88ab5c768c7d8133dab94eff48071e764424ad2b7cfeee5abe6d5bb16e4b85c6", + "h1:PTgxp+nMDBd6EFHAIH6ceFfvwa2blqkCwXglZn6Dqa8=", + "zh:3995ca97e6c2c1ed9e231c453287585d3dc1ca2a304683ac0b269b3448fda7c0", + "zh:4f69f70d2edeb0dde9c693b7cd7e8e21c781b2fac7062bed5300092dbadb71e1", + "zh:5c76042fdf3df56a1f581bc477e5d6fc3e099d4d6544fe725b3747e9990726bd", + "zh:6ff8221340955f4b3ba9230918bb026c4414a5aebe9d0967845c43e8e8908aec", + "zh:73cdd8638cb52bbe25887cd5b7946cc3fcb891867de11bcb0fde9b35c4f70a41", + "zh:7af5aec2fd01fa5e5f600f1db1bcf200aaadc05a2c8ffcbb4b6b61cd2bd3e33b", + "zh:7e055cfa7f40b667f5f7af564db9544f46aa189cdbe5530ad812e027647132f5", "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", - "zh:a614beb312574342b27dbc34d65b450997f63fa3e948d0d30f441e4f69337380", - "zh:c1f486e27130610a9b64cacb0bd928009c433d62b3be515488185e6467b4aa1f", - "zh:dccd166e89e1a02e7ce658df3c42d040edec4b09c6f7906aa5743938518148b1", - "zh:e75a3ae0fb42b7ea5a0bb5dffd8f8468004c9700fcc934eb04c264fda2ba9984", + "zh:aba898190c668ade4471da65c96db414679367174ac5b73e8ce7551056c77e3e", + "zh:aedaa8d7d71e6d58cdc09a7e3bcb8031b3ea496a7ac142376eb679d1756057f3", + "zh:cb9739952d467b3f6d72d57722943956e80ab235b58a0e34758538381dcc386c", + "zh:e12a2681028a70cb08eaf4c3364ddab386416502f966067bf99e79ba6be0d7b6", + "zh:e32a922a7d6fd5df69b3cc92932fc2689dc195b0f8b493dcd686abdd892b06cd", + "zh:f2dea7dead6f34b51e8b6aae177a8b333834a41d25529baa634a087d99ea32f6", + "zh:f6eee6df0366e8452d912cfd498792579aede88de3b67c15d36b8949e37479b1", ] } provider 
"registry.terraform.io/hashicorp/local" { - version = "2.5.2" + version = "2.6.1" constraints = "~> 2.0" hashes = [ - "h1:IyFbOIO6mhikFNL/2h1iZJ6kyN3U00jgkpCLUCThAfE=", - "zh:136299545178ce281c56f36965bf91c35407c11897f7082b3b983d86cb79b511", - "zh:3b4486858aa9cb8163378722b642c57c529b6c64bfbfc9461d940a84cd66ebea", - "zh:4855ee628ead847741aa4f4fc9bed50cfdbf197f2912775dd9fe7bc43fa077c0", - "zh:4b8cd2583d1edcac4011caafe8afb7a95e8110a607a1d5fb87d921178074a69b", - "zh:52084ddaff8c8cd3f9e7bcb7ce4dc1eab00602912c96da43c29b4762dc376038", - "zh:71562d330d3f92d79b2952ffdda0dad167e952e46200c767dd30c6af8d7c0ed3", + "h1:DbiR/D2CPigzCGweYIyJH0N0x04oyI5xiZ9wSW/s3kQ=", + "zh:10050d08f416de42a857e4b6f76809aae63ea4ec6f5c852a126a915dede814b4", + "zh:2df2a3ebe9830d4759c59b51702e209fe053f47453cb4688f43c063bac8746b7", + "zh:2e759568bcc38c86ca0e43701d34cf29945736fdc8e429c5b287ddc2703c7b18", + "zh:6a62a34e48500ab4aea778e355e162ebde03260b7a9eb9edc7e534c84fbca4c6", + "zh:74373728ba32a1d5450a3a88ac45624579e32755b086cd4e51e88d9aca240ef6", "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", - "zh:805f81ade06ff68fa8b908d31892eaed5c180ae031c77ad35f82cb7a74b97cf4", - "zh:8b6b3ebeaaa8e38dd04e56996abe80db9be6f4c1df75ac3cccc77642899bd464", - "zh:ad07750576b99248037b897de71113cc19b1a8d0bc235eb99173cc83d0de3b1b", - "zh:b9f1c3bfadb74068f5c205292badb0661e17ac05eb23bfe8bd809691e4583d0e", - "zh:cc4cbcd67414fefb111c1bf7ab0bc4beb8c0b553d01719ad17de9a047adff4d1", + "zh:8dddae588971a996f622e7589cd8b9da7834c744ac12bfb59c97fa77ded95255", + "zh:946f82f66353bb97aefa8d95c4ca86db227f9b7c50b82415289ac47e4e74d08d", + "zh:e9a5c09e6f35e510acf15b666fd0b34a30164cecdcd81ce7cda0f4b2dade8d91", + "zh:eafe5b873ef42b32feb2f969c38ff8652507e695620cbaf03b9db714bee52249", + "zh:ec146289fa27650c9d433bb5c7847379180c0b7a323b1b94e6e7ad5d2a7dbe71", + "zh:fc882c35ce05631d76c0973b35adde26980778fc81d9da81a2fade2b9d73423b", ] } provider "registry.terraform.io/hashicorp/null" { - version = "3.2.3" + version = "3.2.4" constraints = "~> 3.0, ~> 3.2" hashes = [ - "h1:I0Um8UkrMUb81Fxq/dxbr3HLP2cecTH2WMJiwKSrwQY=", - "zh:22d062e5278d872fe7aed834f5577ba0a5afe34a3bdac2b81f828d8d3e6706d2", - "zh:23dead00493ad863729495dc212fd6c29b8293e707b055ce5ba21ee453ce552d", - "zh:28299accf21763ca1ca144d8f660688d7c2ad0b105b7202554ca60b02a3856d3", - "zh:55c9e8a9ac25a7652df8c51a8a9a422bd67d784061b1de2dc9fe6c3cb4e77f2f", - "zh:756586535d11698a216291c06b9ed8a5cc6a4ec43eee1ee09ecd5c6a9e297ac1", + "h1:L5V05xwp/Gto1leRryuesxjMfgZwjb7oool4WS1UEFQ=", + "zh:59f6b52ab4ff35739647f9509ee6d93d7c032985d9f8c6237d1f8a59471bbbe2", "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", - "zh:9d5eea62fdb587eeb96a8c4d782459f4e6b73baeece4d04b4a40e44faaee9301", - "zh:a6355f596a3fb8fc85c2fb054ab14e722991533f87f928e7169a486462c74670", - "zh:b5a65a789cff4ada58a5baffc76cb9767dc26ec6b45c00d2ec8b1b027f6db4ed", - "zh:db5ab669cf11d0e9f81dc380a6fdfcac437aea3d69109c7aef1a5426639d2d65", - "zh:de655d251c470197bcbb5ac45d289595295acb8f829f6c781d4a75c8c8b7c7dd", - "zh:f5c68199f2e6076bce92a12230434782bf768103a427e9bb9abee99b116af7b5", + "zh:795c897119ff082133150121d39ff26cb5f89a730a2c8c26f3a9c1abf81a9c43", + "zh:7b9c7b16f118fbc2b05a983817b8ce2f86df125857966ad356353baf4bff5c0a", + "zh:85e33ab43e0e1726e5f97a874b8e24820b6565ff8076523cc2922ba671492991", + "zh:9d32ac3619cfc93eb3c4f423492a8e0f79db05fec58e449dee9b2d5873d5f69f", + "zh:9e15c3c9dd8e0d1e3731841d44c34571b6c97f5b95e8296a45318b94e5287a6e", + "zh:b4c2ab35d1b7696c30b64bf2c0f3a62329107bd1a9121ce70683dec58af19615", + 
"zh:c43723e8cc65bcdf5e0c92581dcbbdcbdcf18b8d2037406a5f2033b1e22de442", + "zh:ceb5495d9c31bfb299d246ab333f08c7fb0d67a4f82681fbf47f2a21c3e11ab5", + "zh:e171026b3659305c558d9804062762d168f50ba02b88b231d20ec99578a6233f", + "zh:ed0fe2acdb61330b01841fa790be00ec6beaac91d41f311fb8254f74eb6a711f", ] } provider "registry.terraform.io/hashicorp/random" { - version = "3.6.3" + version = "3.7.2" constraints = "~> 3.0" hashes = [ - "h1:zG9uFP8l9u+yGZZvi5Te7PV62j50azpgwPunq2vTm1E=", - "zh:04ceb65210251339f07cd4611885d242cd4d0c7306e86dda9785396807c00451", - "zh:448f56199f3e99ff75d5c0afacae867ee795e4dfda6cb5f8e3b2a72ec3583dd8", - "zh:4b4c11ccfba7319e901df2dac836b1ae8f12185e37249e8d870ee10bb87a13fe", - "zh:4fa45c44c0de582c2edb8a2e054f55124520c16a39b2dfc0355929063b6395b1", - "zh:588508280501a06259e023b0695f6a18149a3816d259655c424d068982cbdd36", - "zh:737c4d99a87d2a4d1ac0a54a73d2cb62974ccb2edbd234f333abd079a32ebc9e", + "h1:KG4NuIBl1mRWU0KD/BGfCi1YN/j3F7H4YgeeM7iSdNs=", + "zh:14829603a32e4bc4d05062f059e545a91e27ff033756b48afbae6b3c835f508f", + "zh:1527fb07d9fea400d70e9e6eb4a2b918d5060d604749b6f1c361518e7da546dc", + "zh:1e86bcd7ebec85ba336b423ba1db046aeaa3c0e5f921039b3f1a6fc2f978feab", + "zh:24536dec8bde66753f4b4030b8f3ef43c196d69cccbea1c382d01b222478c7a3", + "zh:29f1786486759fad9b0ce4fdfbbfece9343ad47cd50119045075e05afe49d212", + "zh:4d701e978c2dd8604ba1ce962b047607701e65c078cb22e97171513e9e57491f", "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", - "zh:a357ab512e5ebc6d1fda1382503109766e21bbfdfaa9ccda43d313c122069b30", - "zh:c51bfb15e7d52cc1a2eaec2a903ac2aff15d162c172b1b4c17675190e8147615", - "zh:e0951ee6fa9df90433728b96381fb867e3db98f66f735e0c3e24f8f16903f0ad", - "zh:e3cdcb4e73740621dabd82ee6a37d6cfce7fee2a03d8074df65086760f5cf556", - "zh:eff58323099f1bd9a0bec7cb04f717e7f1b2774c7d612bf7581797e1622613a0", + "zh:7b8434212eef0f8c83f5a90c6d76feaf850f6502b61b53c329e85b3b281cba34", + "zh:ac8a23c212258b7976e1621275e3af7099e7e4a3d4478cf8d5d2a27f3bc3e967", + "zh:b516ca74431f3df4c6cf90ddcdb4042c626e026317a33c53f0b445a3d93b720d", + "zh:dc76e4326aec2490c1600d6871a95e78f9050f9ce427c71707ea412a2f2f1a62", + "zh:eac7b63e86c749c7d48f527671c7aee5b4e26c10be6ad7232d6860167f99dbb0", ] } diff --git a/examples/multi-runner/README.md b/examples/multi-runner/README.md index c277f51416..7b0798fa21 100644 --- a/examples/multi-runner/README.md +++ b/examples/multi-runner/README.md @@ -53,7 +53,7 @@ terraform output -raw webhook_secret | Name | Version | |------|---------| | [terraform](#requirement\_terraform) | >= 1.3.0 | -| [aws](#requirement\_aws) | ~> 5.27 | +| [aws](#requirement\_aws) | >= 6.21 | | [local](#requirement\_local) | ~> 2.0 | | [random](#requirement\_random) | ~> 3.0 | @@ -61,8 +61,8 @@ terraform output -raw webhook_secret | Name | Version | |------|---------| -| [aws](#provider\_aws) | 5.82.1 | -| [random](#provider\_random) | 3.6.3 | +| [aws](#provider\_aws) | 6.22.1 | +| [random](#provider\_random) | 3.7.2 | ## Modules diff --git a/examples/multi-runner/versions.tf b/examples/multi-runner/versions.tf index 6bd9371ab0..af642af83b 100644 --- a/examples/multi-runner/versions.tf +++ b/examples/multi-runner/versions.tf @@ -2,7 +2,7 @@ terraform { required_providers { aws = { source = "hashicorp/aws" - version = "~> 5.27" # ensure backwards compatibility with v5.x + version = ">= 6.21" } local = { source = "hashicorp/local" diff --git a/examples/permissions-boundary/.terraform.lock.hcl b/examples/permissions-boundary/.terraform.lock.hcl index 6a9f669990..be40c689d7 100644 --- 
a/examples/permissions-boundary/.terraform.lock.hcl +++ b/examples/permissions-boundary/.terraform.lock.hcl @@ -2,84 +2,84 @@ # Manual edits may be lost in future updates. provider "registry.terraform.io/hashicorp/aws" { - version = "5.82.1" - constraints = ">= 5.0.0, ~> 5.0, ~> 5.27, ~> 5.77" + version = "6.22.1" + constraints = ">= 5.0.0, >= 6.21.0" hashes = [ - "h1:QTOtDMehUfiD3wDbbDuXYuTqGgLDkKK9Agkd5NCUEic=", - "zh:0fde8533282973f1f5d33b2c4f82d962a2c78860d39b42ac20a9ce399f06f62c", - "zh:1fd1a252bffe91668f35be8eac4e0a980f022120254eae1674c3c05049aff88a", - "zh:31bbd380cd7d74bf9a8c961fc64da4222bed40ffbdb27b011e637fa8b2d33641", - "zh:333ee400cf6f62fa199dc1270bf8efac6ffe56659f86918070b8351b8636e03b", - "zh:42ea9fee0a152d344d548eab43583299a13bcd73fae9e53e7e1a708720ac1315", - "zh:4b78f25a8cda3316eb56aa01909a403ec2f325a2eb0512c9a73966068c26cf29", - "zh:5e9cf9a275eda8f7940a41e32abe0b92ba76b5744def4af5124b343b5f33eb94", - "zh:6a46c8630c16b9e1338c2daed6006118db951420108b58b8b886403c69317439", - "zh:6efe11cf1a01f98a8d8043cdcd8c0ee5fe93a0e582c2b69ebb73ea073f5068c3", - "zh:88ab5c768c7d8133dab94eff48071e764424ad2b7cfeee5abe6d5bb16e4b85c6", + "h1:PTgxp+nMDBd6EFHAIH6ceFfvwa2blqkCwXglZn6Dqa8=", + "zh:3995ca97e6c2c1ed9e231c453287585d3dc1ca2a304683ac0b269b3448fda7c0", + "zh:4f69f70d2edeb0dde9c693b7cd7e8e21c781b2fac7062bed5300092dbadb71e1", + "zh:5c76042fdf3df56a1f581bc477e5d6fc3e099d4d6544fe725b3747e9990726bd", + "zh:6ff8221340955f4b3ba9230918bb026c4414a5aebe9d0967845c43e8e8908aec", + "zh:73cdd8638cb52bbe25887cd5b7946cc3fcb891867de11bcb0fde9b35c4f70a41", + "zh:7af5aec2fd01fa5e5f600f1db1bcf200aaadc05a2c8ffcbb4b6b61cd2bd3e33b", + "zh:7e055cfa7f40b667f5f7af564db9544f46aa189cdbe5530ad812e027647132f5", "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", - "zh:a614beb312574342b27dbc34d65b450997f63fa3e948d0d30f441e4f69337380", - "zh:c1f486e27130610a9b64cacb0bd928009c433d62b3be515488185e6467b4aa1f", - "zh:dccd166e89e1a02e7ce658df3c42d040edec4b09c6f7906aa5743938518148b1", - "zh:e75a3ae0fb42b7ea5a0bb5dffd8f8468004c9700fcc934eb04c264fda2ba9984", + "zh:aba898190c668ade4471da65c96db414679367174ac5b73e8ce7551056c77e3e", + "zh:aedaa8d7d71e6d58cdc09a7e3bcb8031b3ea496a7ac142376eb679d1756057f3", + "zh:cb9739952d467b3f6d72d57722943956e80ab235b58a0e34758538381dcc386c", + "zh:e12a2681028a70cb08eaf4c3364ddab386416502f966067bf99e79ba6be0d7b6", + "zh:e32a922a7d6fd5df69b3cc92932fc2689dc195b0f8b493dcd686abdd892b06cd", + "zh:f2dea7dead6f34b51e8b6aae177a8b333834a41d25529baa634a087d99ea32f6", + "zh:f6eee6df0366e8452d912cfd498792579aede88de3b67c15d36b8949e37479b1", ] } provider "registry.terraform.io/hashicorp/local" { - version = "2.5.2" + version = "2.6.1" constraints = "~> 2.0" hashes = [ - "h1:IyFbOIO6mhikFNL/2h1iZJ6kyN3U00jgkpCLUCThAfE=", - "zh:136299545178ce281c56f36965bf91c35407c11897f7082b3b983d86cb79b511", - "zh:3b4486858aa9cb8163378722b642c57c529b6c64bfbfc9461d940a84cd66ebea", - "zh:4855ee628ead847741aa4f4fc9bed50cfdbf197f2912775dd9fe7bc43fa077c0", - "zh:4b8cd2583d1edcac4011caafe8afb7a95e8110a607a1d5fb87d921178074a69b", - "zh:52084ddaff8c8cd3f9e7bcb7ce4dc1eab00602912c96da43c29b4762dc376038", - "zh:71562d330d3f92d79b2952ffdda0dad167e952e46200c767dd30c6af8d7c0ed3", + "h1:DbiR/D2CPigzCGweYIyJH0N0x04oyI5xiZ9wSW/s3kQ=", + "zh:10050d08f416de42a857e4b6f76809aae63ea4ec6f5c852a126a915dede814b4", + "zh:2df2a3ebe9830d4759c59b51702e209fe053f47453cb4688f43c063bac8746b7", + "zh:2e759568bcc38c86ca0e43701d34cf29945736fdc8e429c5b287ddc2703c7b18", + "zh:6a62a34e48500ab4aea778e355e162ebde03260b7a9eb9edc7e534c84fbca4c6", + 
"zh:74373728ba32a1d5450a3a88ac45624579e32755b086cd4e51e88d9aca240ef6", "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", - "zh:805f81ade06ff68fa8b908d31892eaed5c180ae031c77ad35f82cb7a74b97cf4", - "zh:8b6b3ebeaaa8e38dd04e56996abe80db9be6f4c1df75ac3cccc77642899bd464", - "zh:ad07750576b99248037b897de71113cc19b1a8d0bc235eb99173cc83d0de3b1b", - "zh:b9f1c3bfadb74068f5c205292badb0661e17ac05eb23bfe8bd809691e4583d0e", - "zh:cc4cbcd67414fefb111c1bf7ab0bc4beb8c0b553d01719ad17de9a047adff4d1", + "zh:8dddae588971a996f622e7589cd8b9da7834c744ac12bfb59c97fa77ded95255", + "zh:946f82f66353bb97aefa8d95c4ca86db227f9b7c50b82415289ac47e4e74d08d", + "zh:e9a5c09e6f35e510acf15b666fd0b34a30164cecdcd81ce7cda0f4b2dade8d91", + "zh:eafe5b873ef42b32feb2f969c38ff8652507e695620cbaf03b9db714bee52249", + "zh:ec146289fa27650c9d433bb5c7847379180c0b7a323b1b94e6e7ad5d2a7dbe71", + "zh:fc882c35ce05631d76c0973b35adde26980778fc81d9da81a2fade2b9d73423b", ] } provider "registry.terraform.io/hashicorp/null" { - version = "3.2.3" + version = "3.2.4" constraints = "~> 3.0, ~> 3.2" hashes = [ - "h1:I0Um8UkrMUb81Fxq/dxbr3HLP2cecTH2WMJiwKSrwQY=", - "zh:22d062e5278d872fe7aed834f5577ba0a5afe34a3bdac2b81f828d8d3e6706d2", - "zh:23dead00493ad863729495dc212fd6c29b8293e707b055ce5ba21ee453ce552d", - "zh:28299accf21763ca1ca144d8f660688d7c2ad0b105b7202554ca60b02a3856d3", - "zh:55c9e8a9ac25a7652df8c51a8a9a422bd67d784061b1de2dc9fe6c3cb4e77f2f", - "zh:756586535d11698a216291c06b9ed8a5cc6a4ec43eee1ee09ecd5c6a9e297ac1", + "h1:L5V05xwp/Gto1leRryuesxjMfgZwjb7oool4WS1UEFQ=", + "zh:59f6b52ab4ff35739647f9509ee6d93d7c032985d9f8c6237d1f8a59471bbbe2", "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", - "zh:9d5eea62fdb587eeb96a8c4d782459f4e6b73baeece4d04b4a40e44faaee9301", - "zh:a6355f596a3fb8fc85c2fb054ab14e722991533f87f928e7169a486462c74670", - "zh:b5a65a789cff4ada58a5baffc76cb9767dc26ec6b45c00d2ec8b1b027f6db4ed", - "zh:db5ab669cf11d0e9f81dc380a6fdfcac437aea3d69109c7aef1a5426639d2d65", - "zh:de655d251c470197bcbb5ac45d289595295acb8f829f6c781d4a75c8c8b7c7dd", - "zh:f5c68199f2e6076bce92a12230434782bf768103a427e9bb9abee99b116af7b5", + "zh:795c897119ff082133150121d39ff26cb5f89a730a2c8c26f3a9c1abf81a9c43", + "zh:7b9c7b16f118fbc2b05a983817b8ce2f86df125857966ad356353baf4bff5c0a", + "zh:85e33ab43e0e1726e5f97a874b8e24820b6565ff8076523cc2922ba671492991", + "zh:9d32ac3619cfc93eb3c4f423492a8e0f79db05fec58e449dee9b2d5873d5f69f", + "zh:9e15c3c9dd8e0d1e3731841d44c34571b6c97f5b95e8296a45318b94e5287a6e", + "zh:b4c2ab35d1b7696c30b64bf2c0f3a62329107bd1a9121ce70683dec58af19615", + "zh:c43723e8cc65bcdf5e0c92581dcbbdcbdcf18b8d2037406a5f2033b1e22de442", + "zh:ceb5495d9c31bfb299d246ab333f08c7fb0d67a4f82681fbf47f2a21c3e11ab5", + "zh:e171026b3659305c558d9804062762d168f50ba02b88b231d20ec99578a6233f", + "zh:ed0fe2acdb61330b01841fa790be00ec6beaac91d41f311fb8254f74eb6a711f", ] } provider "registry.terraform.io/hashicorp/random" { - version = "3.6.3" + version = "3.7.2" constraints = "~> 3.0" hashes = [ - "h1:zG9uFP8l9u+yGZZvi5Te7PV62j50azpgwPunq2vTm1E=", - "zh:04ceb65210251339f07cd4611885d242cd4d0c7306e86dda9785396807c00451", - "zh:448f56199f3e99ff75d5c0afacae867ee795e4dfda6cb5f8e3b2a72ec3583dd8", - "zh:4b4c11ccfba7319e901df2dac836b1ae8f12185e37249e8d870ee10bb87a13fe", - "zh:4fa45c44c0de582c2edb8a2e054f55124520c16a39b2dfc0355929063b6395b1", - "zh:588508280501a06259e023b0695f6a18149a3816d259655c424d068982cbdd36", - "zh:737c4d99a87d2a4d1ac0a54a73d2cb62974ccb2edbd234f333abd079a32ebc9e", + "h1:KG4NuIBl1mRWU0KD/BGfCi1YN/j3F7H4YgeeM7iSdNs=", + 
"zh:14829603a32e4bc4d05062f059e545a91e27ff033756b48afbae6b3c835f508f", + "zh:1527fb07d9fea400d70e9e6eb4a2b918d5060d604749b6f1c361518e7da546dc", + "zh:1e86bcd7ebec85ba336b423ba1db046aeaa3c0e5f921039b3f1a6fc2f978feab", + "zh:24536dec8bde66753f4b4030b8f3ef43c196d69cccbea1c382d01b222478c7a3", + "zh:29f1786486759fad9b0ce4fdfbbfece9343ad47cd50119045075e05afe49d212", + "zh:4d701e978c2dd8604ba1ce962b047607701e65c078cb22e97171513e9e57491f", "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", - "zh:a357ab512e5ebc6d1fda1382503109766e21bbfdfaa9ccda43d313c122069b30", - "zh:c51bfb15e7d52cc1a2eaec2a903ac2aff15d162c172b1b4c17675190e8147615", - "zh:e0951ee6fa9df90433728b96381fb867e3db98f66f735e0c3e24f8f16903f0ad", - "zh:e3cdcb4e73740621dabd82ee6a37d6cfce7fee2a03d8074df65086760f5cf556", - "zh:eff58323099f1bd9a0bec7cb04f717e7f1b2774c7d612bf7581797e1622613a0", + "zh:7b8434212eef0f8c83f5a90c6d76feaf850f6502b61b53c329e85b3b281cba34", + "zh:ac8a23c212258b7976e1621275e3af7099e7e4a3d4478cf8d5d2a27f3bc3e967", + "zh:b516ca74431f3df4c6cf90ddcdb4042c626e026317a33c53f0b445a3d93b720d", + "zh:dc76e4326aec2490c1600d6871a95e78f9050f9ce427c71707ea412a2f2f1a62", + "zh:eac7b63e86c749c7d48f527671c7aee5b4e26c10be6ad7232d6860167f99dbb0", ] } diff --git a/examples/permissions-boundary/README.md b/examples/permissions-boundary/README.md index 117a9e5877..a5b1857d62 100644 --- a/examples/permissions-boundary/README.md +++ b/examples/permissions-boundary/README.md @@ -35,7 +35,7 @@ terraform apply | Name | Version | |------|---------| | [terraform](#requirement\_terraform) | >= 1.3.0 | -| [aws](#requirement\_aws) | ~> 5.27 | +| [aws](#requirement\_aws) | >= 6.21 | | [local](#requirement\_local) | ~> 2.0 | | [random](#requirement\_random) | ~> 3.0 | @@ -43,8 +43,8 @@ terraform apply | Name | Version | |------|---------| -| [aws](#provider\_aws) | 5.82.1 | -| [random](#provider\_random) | 3.6.3 | +| [aws](#provider\_aws) | 6.22.1 | +| [random](#provider\_random) | 3.7.2 | | [terraform](#provider\_terraform) | n/a | ## Modules diff --git a/examples/permissions-boundary/setup/README.md b/examples/permissions-boundary/setup/README.md index 0b54340386..defdfa8873 100644 --- a/examples/permissions-boundary/setup/README.md +++ b/examples/permissions-boundary/setup/README.md @@ -4,7 +4,9 @@ | Name | Version | |------|---------| | [terraform](#requirement\_terraform) | >= 1.3.0 | -| [aws](#requirement\_aws) | ~> 5.27 | +| [aws](#requirement\_aws) | >= 6.21 | +| [local](#requirement\_local) | ~> 2.0 | +| [random](#requirement\_random) | ~> 3.0 | ## Providers diff --git a/examples/permissions-boundary/setup/versions.tf b/examples/permissions-boundary/setup/versions.tf index ac6bb23d38..af642af83b 100644 --- a/examples/permissions-boundary/setup/versions.tf +++ b/examples/permissions-boundary/setup/versions.tf @@ -2,7 +2,15 @@ terraform { required_providers { aws = { source = "hashicorp/aws" - version = "~> 5.27" + version = ">= 6.21" + } + local = { + source = "hashicorp/local" + version = "~> 2.0" + } + random = { + source = "hashicorp/random" + version = "~> 3.0" } } required_version = ">= 1.3.0" diff --git a/examples/permissions-boundary/versions.tf b/examples/permissions-boundary/versions.tf index 6bd9371ab0..af642af83b 100644 --- a/examples/permissions-boundary/versions.tf +++ b/examples/permissions-boundary/versions.tf @@ -2,7 +2,7 @@ terraform { required_providers { aws = { source = "hashicorp/aws" - version = "~> 5.27" # ensure backwards compatibility with v5.x + version = ">= 6.21" } local = { source = 
"hashicorp/local" diff --git a/examples/prebuilt/.terraform.lock.hcl b/examples/prebuilt/.terraform.lock.hcl index 6a9f669990..be40c689d7 100644 --- a/examples/prebuilt/.terraform.lock.hcl +++ b/examples/prebuilt/.terraform.lock.hcl @@ -2,84 +2,84 @@ # Manual edits may be lost in future updates. provider "registry.terraform.io/hashicorp/aws" { - version = "5.82.1" - constraints = ">= 5.0.0, ~> 5.0, ~> 5.27, ~> 5.77" + version = "6.22.1" + constraints = ">= 5.0.0, >= 6.21.0" hashes = [ - "h1:QTOtDMehUfiD3wDbbDuXYuTqGgLDkKK9Agkd5NCUEic=", - "zh:0fde8533282973f1f5d33b2c4f82d962a2c78860d39b42ac20a9ce399f06f62c", - "zh:1fd1a252bffe91668f35be8eac4e0a980f022120254eae1674c3c05049aff88a", - "zh:31bbd380cd7d74bf9a8c961fc64da4222bed40ffbdb27b011e637fa8b2d33641", - "zh:333ee400cf6f62fa199dc1270bf8efac6ffe56659f86918070b8351b8636e03b", - "zh:42ea9fee0a152d344d548eab43583299a13bcd73fae9e53e7e1a708720ac1315", - "zh:4b78f25a8cda3316eb56aa01909a403ec2f325a2eb0512c9a73966068c26cf29", - "zh:5e9cf9a275eda8f7940a41e32abe0b92ba76b5744def4af5124b343b5f33eb94", - "zh:6a46c8630c16b9e1338c2daed6006118db951420108b58b8b886403c69317439", - "zh:6efe11cf1a01f98a8d8043cdcd8c0ee5fe93a0e582c2b69ebb73ea073f5068c3", - "zh:88ab5c768c7d8133dab94eff48071e764424ad2b7cfeee5abe6d5bb16e4b85c6", + "h1:PTgxp+nMDBd6EFHAIH6ceFfvwa2blqkCwXglZn6Dqa8=", + "zh:3995ca97e6c2c1ed9e231c453287585d3dc1ca2a304683ac0b269b3448fda7c0", + "zh:4f69f70d2edeb0dde9c693b7cd7e8e21c781b2fac7062bed5300092dbadb71e1", + "zh:5c76042fdf3df56a1f581bc477e5d6fc3e099d4d6544fe725b3747e9990726bd", + "zh:6ff8221340955f4b3ba9230918bb026c4414a5aebe9d0967845c43e8e8908aec", + "zh:73cdd8638cb52bbe25887cd5b7946cc3fcb891867de11bcb0fde9b35c4f70a41", + "zh:7af5aec2fd01fa5e5f600f1db1bcf200aaadc05a2c8ffcbb4b6b61cd2bd3e33b", + "zh:7e055cfa7f40b667f5f7af564db9544f46aa189cdbe5530ad812e027647132f5", "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", - "zh:a614beb312574342b27dbc34d65b450997f63fa3e948d0d30f441e4f69337380", - "zh:c1f486e27130610a9b64cacb0bd928009c433d62b3be515488185e6467b4aa1f", - "zh:dccd166e89e1a02e7ce658df3c42d040edec4b09c6f7906aa5743938518148b1", - "zh:e75a3ae0fb42b7ea5a0bb5dffd8f8468004c9700fcc934eb04c264fda2ba9984", + "zh:aba898190c668ade4471da65c96db414679367174ac5b73e8ce7551056c77e3e", + "zh:aedaa8d7d71e6d58cdc09a7e3bcb8031b3ea496a7ac142376eb679d1756057f3", + "zh:cb9739952d467b3f6d72d57722943956e80ab235b58a0e34758538381dcc386c", + "zh:e12a2681028a70cb08eaf4c3364ddab386416502f966067bf99e79ba6be0d7b6", + "zh:e32a922a7d6fd5df69b3cc92932fc2689dc195b0f8b493dcd686abdd892b06cd", + "zh:f2dea7dead6f34b51e8b6aae177a8b333834a41d25529baa634a087d99ea32f6", + "zh:f6eee6df0366e8452d912cfd498792579aede88de3b67c15d36b8949e37479b1", ] } provider "registry.terraform.io/hashicorp/local" { - version = "2.5.2" + version = "2.6.1" constraints = "~> 2.0" hashes = [ - "h1:IyFbOIO6mhikFNL/2h1iZJ6kyN3U00jgkpCLUCThAfE=", - "zh:136299545178ce281c56f36965bf91c35407c11897f7082b3b983d86cb79b511", - "zh:3b4486858aa9cb8163378722b642c57c529b6c64bfbfc9461d940a84cd66ebea", - "zh:4855ee628ead847741aa4f4fc9bed50cfdbf197f2912775dd9fe7bc43fa077c0", - "zh:4b8cd2583d1edcac4011caafe8afb7a95e8110a607a1d5fb87d921178074a69b", - "zh:52084ddaff8c8cd3f9e7bcb7ce4dc1eab00602912c96da43c29b4762dc376038", - "zh:71562d330d3f92d79b2952ffdda0dad167e952e46200c767dd30c6af8d7c0ed3", + "h1:DbiR/D2CPigzCGweYIyJH0N0x04oyI5xiZ9wSW/s3kQ=", + "zh:10050d08f416de42a857e4b6f76809aae63ea4ec6f5c852a126a915dede814b4", + "zh:2df2a3ebe9830d4759c59b51702e209fe053f47453cb4688f43c063bac8746b7", + 
"zh:2e759568bcc38c86ca0e43701d34cf29945736fdc8e429c5b287ddc2703c7b18", + "zh:6a62a34e48500ab4aea778e355e162ebde03260b7a9eb9edc7e534c84fbca4c6", + "zh:74373728ba32a1d5450a3a88ac45624579e32755b086cd4e51e88d9aca240ef6", "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", - "zh:805f81ade06ff68fa8b908d31892eaed5c180ae031c77ad35f82cb7a74b97cf4", - "zh:8b6b3ebeaaa8e38dd04e56996abe80db9be6f4c1df75ac3cccc77642899bd464", - "zh:ad07750576b99248037b897de71113cc19b1a8d0bc235eb99173cc83d0de3b1b", - "zh:b9f1c3bfadb74068f5c205292badb0661e17ac05eb23bfe8bd809691e4583d0e", - "zh:cc4cbcd67414fefb111c1bf7ab0bc4beb8c0b553d01719ad17de9a047adff4d1", + "zh:8dddae588971a996f622e7589cd8b9da7834c744ac12bfb59c97fa77ded95255", + "zh:946f82f66353bb97aefa8d95c4ca86db227f9b7c50b82415289ac47e4e74d08d", + "zh:e9a5c09e6f35e510acf15b666fd0b34a30164cecdcd81ce7cda0f4b2dade8d91", + "zh:eafe5b873ef42b32feb2f969c38ff8652507e695620cbaf03b9db714bee52249", + "zh:ec146289fa27650c9d433bb5c7847379180c0b7a323b1b94e6e7ad5d2a7dbe71", + "zh:fc882c35ce05631d76c0973b35adde26980778fc81d9da81a2fade2b9d73423b", ] } provider "registry.terraform.io/hashicorp/null" { - version = "3.2.3" + version = "3.2.4" constraints = "~> 3.0, ~> 3.2" hashes = [ - "h1:I0Um8UkrMUb81Fxq/dxbr3HLP2cecTH2WMJiwKSrwQY=", - "zh:22d062e5278d872fe7aed834f5577ba0a5afe34a3bdac2b81f828d8d3e6706d2", - "zh:23dead00493ad863729495dc212fd6c29b8293e707b055ce5ba21ee453ce552d", - "zh:28299accf21763ca1ca144d8f660688d7c2ad0b105b7202554ca60b02a3856d3", - "zh:55c9e8a9ac25a7652df8c51a8a9a422bd67d784061b1de2dc9fe6c3cb4e77f2f", - "zh:756586535d11698a216291c06b9ed8a5cc6a4ec43eee1ee09ecd5c6a9e297ac1", + "h1:L5V05xwp/Gto1leRryuesxjMfgZwjb7oool4WS1UEFQ=", + "zh:59f6b52ab4ff35739647f9509ee6d93d7c032985d9f8c6237d1f8a59471bbbe2", "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", - "zh:9d5eea62fdb587eeb96a8c4d782459f4e6b73baeece4d04b4a40e44faaee9301", - "zh:a6355f596a3fb8fc85c2fb054ab14e722991533f87f928e7169a486462c74670", - "zh:b5a65a789cff4ada58a5baffc76cb9767dc26ec6b45c00d2ec8b1b027f6db4ed", - "zh:db5ab669cf11d0e9f81dc380a6fdfcac437aea3d69109c7aef1a5426639d2d65", - "zh:de655d251c470197bcbb5ac45d289595295acb8f829f6c781d4a75c8c8b7c7dd", - "zh:f5c68199f2e6076bce92a12230434782bf768103a427e9bb9abee99b116af7b5", + "zh:795c897119ff082133150121d39ff26cb5f89a730a2c8c26f3a9c1abf81a9c43", + "zh:7b9c7b16f118fbc2b05a983817b8ce2f86df125857966ad356353baf4bff5c0a", + "zh:85e33ab43e0e1726e5f97a874b8e24820b6565ff8076523cc2922ba671492991", + "zh:9d32ac3619cfc93eb3c4f423492a8e0f79db05fec58e449dee9b2d5873d5f69f", + "zh:9e15c3c9dd8e0d1e3731841d44c34571b6c97f5b95e8296a45318b94e5287a6e", + "zh:b4c2ab35d1b7696c30b64bf2c0f3a62329107bd1a9121ce70683dec58af19615", + "zh:c43723e8cc65bcdf5e0c92581dcbbdcbdcf18b8d2037406a5f2033b1e22de442", + "zh:ceb5495d9c31bfb299d246ab333f08c7fb0d67a4f82681fbf47f2a21c3e11ab5", + "zh:e171026b3659305c558d9804062762d168f50ba02b88b231d20ec99578a6233f", + "zh:ed0fe2acdb61330b01841fa790be00ec6beaac91d41f311fb8254f74eb6a711f", ] } provider "registry.terraform.io/hashicorp/random" { - version = "3.6.3" + version = "3.7.2" constraints = "~> 3.0" hashes = [ - "h1:zG9uFP8l9u+yGZZvi5Te7PV62j50azpgwPunq2vTm1E=", - "zh:04ceb65210251339f07cd4611885d242cd4d0c7306e86dda9785396807c00451", - "zh:448f56199f3e99ff75d5c0afacae867ee795e4dfda6cb5f8e3b2a72ec3583dd8", - "zh:4b4c11ccfba7319e901df2dac836b1ae8f12185e37249e8d870ee10bb87a13fe", - "zh:4fa45c44c0de582c2edb8a2e054f55124520c16a39b2dfc0355929063b6395b1", - 
"zh:588508280501a06259e023b0695f6a18149a3816d259655c424d068982cbdd36", - "zh:737c4d99a87d2a4d1ac0a54a73d2cb62974ccb2edbd234f333abd079a32ebc9e", + "h1:KG4NuIBl1mRWU0KD/BGfCi1YN/j3F7H4YgeeM7iSdNs=", + "zh:14829603a32e4bc4d05062f059e545a91e27ff033756b48afbae6b3c835f508f", + "zh:1527fb07d9fea400d70e9e6eb4a2b918d5060d604749b6f1c361518e7da546dc", + "zh:1e86bcd7ebec85ba336b423ba1db046aeaa3c0e5f921039b3f1a6fc2f978feab", + "zh:24536dec8bde66753f4b4030b8f3ef43c196d69cccbea1c382d01b222478c7a3", + "zh:29f1786486759fad9b0ce4fdfbbfece9343ad47cd50119045075e05afe49d212", + "zh:4d701e978c2dd8604ba1ce962b047607701e65c078cb22e97171513e9e57491f", "zh:78d5eefdd9e494defcb3c68d282b8f96630502cac21d1ea161f53cfe9bb483b3", - "zh:a357ab512e5ebc6d1fda1382503109766e21bbfdfaa9ccda43d313c122069b30", - "zh:c51bfb15e7d52cc1a2eaec2a903ac2aff15d162c172b1b4c17675190e8147615", - "zh:e0951ee6fa9df90433728b96381fb867e3db98f66f735e0c3e24f8f16903f0ad", - "zh:e3cdcb4e73740621dabd82ee6a37d6cfce7fee2a03d8074df65086760f5cf556", - "zh:eff58323099f1bd9a0bec7cb04f717e7f1b2774c7d612bf7581797e1622613a0", + "zh:7b8434212eef0f8c83f5a90c6d76feaf850f6502b61b53c329e85b3b281cba34", + "zh:ac8a23c212258b7976e1621275e3af7099e7e4a3d4478cf8d5d2a27f3bc3e967", + "zh:b516ca74431f3df4c6cf90ddcdb4042c626e026317a33c53f0b445a3d93b720d", + "zh:dc76e4326aec2490c1600d6871a95e78f9050f9ce427c71707ea412a2f2f1a62", + "zh:eac7b63e86c749c7d48f527671c7aee5b4e26c10be6ad7232d6860167f99dbb0", ] } diff --git a/examples/prebuilt/README.md b/examples/prebuilt/README.md index 2969ef8698..882388783b 100644 --- a/examples/prebuilt/README.md +++ b/examples/prebuilt/README.md @@ -73,7 +73,7 @@ terraform output webhook_secret | Name | Version | |------|---------| | [terraform](#requirement\_terraform) | >= 1.3.0 | -| [aws](#requirement\_aws) | >= 5.27 | +| [aws](#requirement\_aws) | >= 6.21 | | [local](#requirement\_local) | ~> 2.0 | | [random](#requirement\_random) | ~> 3.0 | @@ -81,8 +81,8 @@ terraform output webhook_secret | Name | Version | |------|---------| -| [aws](#provider\_aws) | 5.82.1 | -| [random](#provider\_random) | 3.6.3 | +| [aws](#provider\_aws) | 6.22.1 | +| [random](#provider\_random) | 3.7.2 | ## Modules diff --git a/examples/prebuilt/versions.tf b/examples/prebuilt/versions.tf index 650e894012..af642af83b 100644 --- a/examples/prebuilt/versions.tf +++ b/examples/prebuilt/versions.tf @@ -2,7 +2,7 @@ terraform { required_providers { aws = { source = "hashicorp/aws" - version = ">= 5.27" + version = ">= 6.21" } local = { source = "hashicorp/local" diff --git a/examples/termination-watcher/.terraform.lock.hcl b/examples/termination-watcher/.terraform.lock.hcl index 9526664229..4f33187500 100644 --- a/examples/termination-watcher/.terraform.lock.hcl +++ b/examples/termination-watcher/.terraform.lock.hcl @@ -2,24 +2,24 @@ # Manual edits may be lost in future updates. 
provider "registry.terraform.io/hashicorp/aws" { - version = "5.82.1" - constraints = "~> 5.27" + version = "6.22.1" + constraints = ">= 6.21.0" hashes = [ - "h1:QTOtDMehUfiD3wDbbDuXYuTqGgLDkKK9Agkd5NCUEic=", - "zh:0fde8533282973f1f5d33b2c4f82d962a2c78860d39b42ac20a9ce399f06f62c", - "zh:1fd1a252bffe91668f35be8eac4e0a980f022120254eae1674c3c05049aff88a", - "zh:31bbd380cd7d74bf9a8c961fc64da4222bed40ffbdb27b011e637fa8b2d33641", - "zh:333ee400cf6f62fa199dc1270bf8efac6ffe56659f86918070b8351b8636e03b", - "zh:42ea9fee0a152d344d548eab43583299a13bcd73fae9e53e7e1a708720ac1315", - "zh:4b78f25a8cda3316eb56aa01909a403ec2f325a2eb0512c9a73966068c26cf29", - "zh:5e9cf9a275eda8f7940a41e32abe0b92ba76b5744def4af5124b343b5f33eb94", - "zh:6a46c8630c16b9e1338c2daed6006118db951420108b58b8b886403c69317439", - "zh:6efe11cf1a01f98a8d8043cdcd8c0ee5fe93a0e582c2b69ebb73ea073f5068c3", - "zh:88ab5c768c7d8133dab94eff48071e764424ad2b7cfeee5abe6d5bb16e4b85c6", + "h1:PTgxp+nMDBd6EFHAIH6ceFfvwa2blqkCwXglZn6Dqa8=", + "zh:3995ca97e6c2c1ed9e231c453287585d3dc1ca2a304683ac0b269b3448fda7c0", + "zh:4f69f70d2edeb0dde9c693b7cd7e8e21c781b2fac7062bed5300092dbadb71e1", + "zh:5c76042fdf3df56a1f581bc477e5d6fc3e099d4d6544fe725b3747e9990726bd", + "zh:6ff8221340955f4b3ba9230918bb026c4414a5aebe9d0967845c43e8e8908aec", + "zh:73cdd8638cb52bbe25887cd5b7946cc3fcb891867de11bcb0fde9b35c4f70a41", + "zh:7af5aec2fd01fa5e5f600f1db1bcf200aaadc05a2c8ffcbb4b6b61cd2bd3e33b", + "zh:7e055cfa7f40b667f5f7af564db9544f46aa189cdbe5530ad812e027647132f5", "zh:9b12af85486a96aedd8d7984b0ff811a4b42e3d88dad1a3fb4c0b580d04fa425", - "zh:a614beb312574342b27dbc34d65b450997f63fa3e948d0d30f441e4f69337380", - "zh:c1f486e27130610a9b64cacb0bd928009c433d62b3be515488185e6467b4aa1f", - "zh:dccd166e89e1a02e7ce658df3c42d040edec4b09c6f7906aa5743938518148b1", - "zh:e75a3ae0fb42b7ea5a0bb5dffd8f8468004c9700fcc934eb04c264fda2ba9984", + "zh:aba898190c668ade4471da65c96db414679367174ac5b73e8ce7551056c77e3e", + "zh:aedaa8d7d71e6d58cdc09a7e3bcb8031b3ea496a7ac142376eb679d1756057f3", + "zh:cb9739952d467b3f6d72d57722943956e80ab235b58a0e34758538381dcc386c", + "zh:e12a2681028a70cb08eaf4c3364ddab386416502f966067bf99e79ba6be0d7b6", + "zh:e32a922a7d6fd5df69b3cc92932fc2689dc195b0f8b493dcd686abdd892b06cd", + "zh:f2dea7dead6f34b51e8b6aae177a8b333834a41d25529baa634a087d99ea32f6", + "zh:f6eee6df0366e8452d912cfd498792579aede88de3b67c15d36b8949e37479b1", ] } diff --git a/lambdas/.nvmrc b/lambdas/.nvmrc index 53d1c14db3..54c65116f1 100644 --- a/lambdas/.nvmrc +++ b/lambdas/.nvmrc @@ -1 +1 @@ -v22 +v24 diff --git a/modules/ami-housekeeper/README.md b/modules/ami-housekeeper/README.md index 127d1f96b1..8898e0c85e 100644 --- a/modules/ami-housekeeper/README.md +++ b/modules/ami-housekeeper/README.md @@ -67,13 +67,13 @@ yarn run dist | Name | Version | |------|---------| | [terraform](#requirement\_terraform) | >= 1.3.0 | -| [aws](#requirement\_aws) | >= 5.27 | +| [aws](#requirement\_aws) | >= 6.21 | ## Providers | Name | Version | |------|---------| -| [aws](#provider\_aws) | >= 5.27 | +| [aws](#provider\_aws) | >= 6.21 | ## Modules @@ -105,7 +105,7 @@ No modules. | [lambda\_architecture](#input\_lambda\_architecture) | AWS Lambda architecture. Lambda functions using Graviton processors ('arm64') tend to have better price/performance than 'x86\_64' functions. | `string` | `"arm64"` | no | | [lambda\_memory\_size](#input\_lambda\_memory\_size) | Memory size limit in MB of the lambda. 
| `number` | `256` | no | | [lambda\_principals](#input\_lambda\_principals) | (Optional) add extra principals to the role created for execution of the lambda, e.g. for local testing. |
list(object({
type = string
identifiers = list(string)
}))
| `[]` | no | -| [lambda\_runtime](#input\_lambda\_runtime) | AWS Lambda runtime. | `string` | `"nodejs22.x"` | no | +| [lambda\_runtime](#input\_lambda\_runtime) | AWS Lambda runtime. | `string` | `"nodejs24.x"` | no | | [lambda\_s3\_bucket](#input\_lambda\_s3\_bucket) | S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly. | `string` | `null` | no | | [lambda\_s3\_key](#input\_lambda\_s3\_key) | S3 key for syncer lambda function. Required if using S3 bucket to specify lambdas. | `string` | `null` | no | | [lambda\_s3\_object\_version](#input\_lambda\_s3\_object\_version) | S3 object version for syncer lambda function. Useful if S3 versioning is enabled on source bucket. | `string` | `null` | no | diff --git a/modules/ami-housekeeper/variables.tf b/modules/ami-housekeeper/variables.tf index 6d9def766b..54bec6dc32 100644 --- a/modules/ami-housekeeper/variables.tf +++ b/modules/ami-housekeeper/variables.tf @@ -117,7 +117,7 @@ variable "lambda_s3_object_version" { variable "lambda_runtime" { description = "AWS Lambda runtime." type = string - default = "nodejs22.x" + default = "nodejs24.x" } variable "lambda_architecture" { diff --git a/modules/ami-housekeeper/versions.tf b/modules/ami-housekeeper/versions.tf index 33f3480ddf..42a40b33fd 100644 --- a/modules/ami-housekeeper/versions.tf +++ b/modules/ami-housekeeper/versions.tf @@ -4,7 +4,7 @@ terraform { required_providers { aws = { source = "hashicorp/aws" - version = ">= 5.27" + version = ">= 6.21" } } } diff --git a/modules/download-lambda/README.md b/modules/download-lambda/README.md index 1d51508ab3..9971618089 100644 --- a/modules/download-lambda/README.md +++ b/modules/download-lambda/README.md @@ -30,7 +30,7 @@ module "lambdas" { | Name | Version | |------|---------| | [terraform](#requirement\_terraform) | >= 1.3.0 | -| [aws](#requirement\_aws) | >= 5.27 | +| [aws](#requirement\_aws) | >= 6.21 | | [null](#requirement\_null) | ~> 3 | ## Providers diff --git a/modules/download-lambda/versions.tf b/modules/download-lambda/versions.tf index bb56e1e9ba..6bc038a353 100644 --- a/modules/download-lambda/versions.tf +++ b/modules/download-lambda/versions.tf @@ -4,7 +4,7 @@ terraform { required_providers { aws = { source = "hashicorp/aws" - version = ">= 5.27" + version = ">= 6.21" } null = { source = "hashicorp/null" diff --git a/modules/lambda/README.md b/modules/lambda/README.md index 0420776ad4..26ff5e5c24 100644 --- a/modules/lambda/README.md +++ b/modules/lambda/README.md @@ -10,13 +10,13 @@ Generic module to create lambda functions | Name | Version | |------|---------| | [terraform](#requirement\_terraform) | >= 1.3.0 | -| [aws](#requirement\_aws) | >= 5.27 | +| [aws](#requirement\_aws) | >= 6.21 | ## Providers | Name | Version | |------|---------| -| [aws](#provider\_aws) | >= 5.27 | +| [aws](#provider\_aws) | >= 6.21 | ## Modules @@ -39,7 +39,7 @@ No modules. | Name | Description | Type | Default | Required | |------|-------------|------|---------|:--------:| -| [lambda](#input\_lambda) | Configuration for the lambda function.

`aws_partition`: Partition for the base arn if not 'aws'
`architecture`: AWS Lambda architecture. Lambda functions using Graviton processors ('arm64') tend to have better price/performance than 'x86\_64' functions.
`environment_variables`: Environment variables for the lambda.
`handler`: The entrypoint for the lambda.
`principals`: Add extra principals to the role created for execution of the lambda, e.g. for local testing.
`lambda_tags`: Map of tags that will be added to created resources. By default resources will be tagged with name and environment.
`log_level`: Logging level for lambda logging. Valid values are 'silly', 'trace', 'debug', 'info', 'warn', 'error', 'fatal'.
`logging_kms_key_id`: Specifies the kms key id to encrypt the logs with
`logging_retention_in_days`: Specifies the number of days you want to retain log events for the lambda log group. Possible values are: 0, 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653.
`memory_size`: Memory size limit in MB of the lambda.
`metrics_namespace`: Namespace for the metrics emitted by the lambda.
`name`: The name of the lambda function.
`prefix`: The prefix used for naming resources.
`role_path`: The path that will be added to the role, if not set the environment name will be used.
`role_permissions_boundary`: Permissions boundary that will be added to the created role for the lambda.
`runtime`: AWS Lambda runtime.
`s3_bucket`: S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly.
`s3_key`: S3 key for syncer lambda function. Required if using S3 bucket to specify lambdas.
`s3_object_version`: S3 object version for syncer lambda function. Useful if S3 versioning is enabled on source bucket.
`security_group_ids`: List of security group IDs associated with the Lambda function.
`subnet_ids`: List of subnets in which the action runners will be launched, the subnets needs to be subnets in the `vpc_id`.
`tags`: Map of tags that will be added to created resources. By default resources will be tagged with name and environment.
`timeout`: Time out of the lambda in seconds.
`tracing_config`: Configuration for lambda tracing.
`zip`: File location of the lambda zip file. |
object({
aws_partition = optional(string, "aws")
architecture = optional(string, "arm64")
environment_variables = optional(map(string), {})
handler = string
lambda_tags = optional(map(string), {})
log_level = optional(string, "info")
logging_kms_key_id = optional(string, null)
logging_retention_in_days = optional(number, 180)
memory_size = optional(number, 256)
metrics_namespace = optional(string, "GitHub Runners")
name = string
prefix = optional(string, null)
principals = optional(list(object({
type = string
identifiers = list(string)
})), [])
role_path = optional(string, null)
role_permissions_boundary = optional(string, null)
runtime = optional(string, "nodejs22.x")
s3_bucket = optional(string, null)
s3_key = optional(string, null)
s3_object_version = optional(string, null)
security_group_ids = optional(list(string), [])
subnet_ids = optional(list(string), [])
tags = optional(map(string), {})
timeout = optional(number, 60)
tracing_config = optional(object({
mode = optional(string, null)
capture_http_requests = optional(bool, false)
capture_error = optional(bool, false)
}), {})
zip = optional(string, null)
})
| n/a | yes | +| [lambda](#input\_lambda) | Configuration for the lambda function.

`aws_partition`: Partition for the base ARN if not 'aws'.
`architecture`: AWS Lambda architecture. Lambda functions using Graviton processors ('arm64') tend to have better price/performance than 'x86\_64' functions.
`environment_variables`: Environment variables for the lambda.
`handler`: The entrypoint for the lambda.
`principals`: Add extra principals to the role created for execution of the lambda, e.g. for local testing.
`lambda_tags`: Map of tags that will be added to created resources. By default resources will be tagged with name and environment.
`log_level`: Logging level for lambda logging. Valid values are 'silly', 'trace', 'debug', 'info', 'warn', 'error', 'fatal'.
`logging_kms_key_id`: Specifies the KMS key ID to encrypt the logs with.
`logging_retention_in_days`: Specifies the number of days you want to retain log events for the lambda log group. Possible values are: 0, 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653.
`memory_size`: Memory size limit in MB of the lambda.
`metrics_namespace`: Namespace for the metrics emitted by the lambda.
`name`: The name of the lambda function.
`prefix`: The prefix used for naming resources.
`role_path`: The path that will be added to the role. If not set, the environment name will be used.
`role_permissions_boundary`: Permissions boundary that will be added to the created role for the lambda.
`runtime`: AWS Lambda runtime.
`s3_bucket`: S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly.
`s3_key`: S3 key for syncer lambda function. Required if using S3 bucket to specify lambdas.
`s3_object_version`: S3 object version for syncer lambda function. Useful if S3 versioning is enabled on source bucket.
`security_group_ids`: List of security group IDs associated with the Lambda function.
`subnet_ids`: List of subnets in which the action runners will be launched; the subnets need to be subnets in the `vpc_id`.
`tags`: Map of tags that will be added to created resources. By default resources will be tagged with name and environment.
`timeout`: Timeout of the lambda in seconds.
`tracing_config`: Configuration for lambda tracing.
`zip`: File location of the lambda zip file. |
object({
aws_partition = optional(string, "aws")
architecture = optional(string, "arm64")
environment_variables = optional(map(string), {})
handler = string
lambda_tags = optional(map(string), {})
log_level = optional(string, "info")
logging_kms_key_id = optional(string, null)
logging_retention_in_days = optional(number, 180)
memory_size = optional(number, 256)
metrics_namespace = optional(string, "GitHub Runners")
name = string
prefix = optional(string, null)
principals = optional(list(object({
type = string
identifiers = list(string)
})), [])
role_path = optional(string, null)
role_permissions_boundary = optional(string, null)
runtime = optional(string, "nodejs24.x")
s3_bucket = optional(string, null)
s3_key = optional(string, null)
s3_object_version = optional(string, null)
security_group_ids = optional(list(string), [])
subnet_ids = optional(list(string), [])
tags = optional(map(string), {})
timeout = optional(number, 60)
tracing_config = optional(object({
mode = optional(string, null)
capture_http_requests = optional(bool, false)
capture_error = optional(bool, false)
}), {})
zip = optional(string, null)
})
| n/a | yes | ## Outputs diff --git a/modules/lambda/variables.tf b/modules/lambda/variables.tf index bafd2372e8..7cbecba071 100644 --- a/modules/lambda/variables.tf +++ b/modules/lambda/variables.tf @@ -47,7 +47,7 @@ variable "lambda" { })), []) role_path = optional(string, null) role_permissions_boundary = optional(string, null) - runtime = optional(string, "nodejs22.x") + runtime = optional(string, "nodejs24.x") s3_bucket = optional(string, null) s3_key = optional(string, null) s3_object_version = optional(string, null) diff --git a/modules/lambda/versions.tf b/modules/lambda/versions.tf index 33f3480ddf..42a40b33fd 100644 --- a/modules/lambda/versions.tf +++ b/modules/lambda/versions.tf @@ -4,7 +4,7 @@ terraform { required_providers { aws = { source = "hashicorp/aws" - version = ">= 5.27" + version = ">= 6.21" } } } diff --git a/modules/multi-runner/README.md b/modules/multi-runner/README.md index 0515763f4c..32dab7e7c6 100644 --- a/modules/multi-runner/README.md +++ b/modules/multi-runner/README.md @@ -79,14 +79,14 @@ module "multi-runner" { | Name | Version | |------|---------| | [terraform](#requirement\_terraform) | >= 1.3 | -| [aws](#requirement\_aws) | >= 5.77 | +| [aws](#requirement\_aws) | >= 6.21 | | [random](#requirement\_random) | ~> 3.0 | ## Providers | Name | Version | |------|---------| -| [aws](#provider\_aws) | >= 5.77 | +| [aws](#provider\_aws) | >= 6.21 | | [random](#provider\_random) | ~> 3.0 | ## Modules @@ -140,7 +140,7 @@ module "multi-runner" { | [lambda\_event\_source\_mapping\_batch\_size](#input\_lambda\_event\_source\_mapping\_batch\_size) | Maximum number of records to pass to the lambda function in a single batch for the event source mapping. When not set, the AWS default of 10 events will be used. | `number` | `10` | no | | [lambda\_event\_source\_mapping\_maximum\_batching\_window\_in\_seconds](#input\_lambda\_event\_source\_mapping\_maximum\_batching\_window\_in\_seconds) | Maximum amount of time to gather records before invoking the lambda function, in seconds. AWS requires this to be greater than 0 if batch\_size is greater than 10. Defaults to 0. | `number` | `0` | no | | [lambda\_principals](#input\_lambda\_principals) | (Optional) add extra principals to the role created for execution of the lambda, e.g. for local testing. |
list(object({
type = string
identifiers = list(string)
}))
| `[]` | no | -| [lambda\_runtime](#input\_lambda\_runtime) | AWS Lambda runtime. | `string` | `"nodejs22.x"` | no | +| [lambda\_runtime](#input\_lambda\_runtime) | AWS Lambda runtime. | `string` | `"nodejs24.x"` | no | | [lambda\_s3\_bucket](#input\_lambda\_s3\_bucket) | S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly. | `string` | `null` | no | | [lambda\_security\_group\_ids](#input\_lambda\_security\_group\_ids) | List of security group IDs associated with the Lambda function. | `list(string)` | `[]` | no | | [lambda\_subnet\_ids](#input\_lambda\_subnet\_ids) | List of subnets in which the action runners will be launched, the subnets needs to be subnets in the `vpc_id`. | `list(string)` | `[]` | no | diff --git a/modules/multi-runner/variables.tf b/modules/multi-runner/variables.tf index 5c839e1104..6ceab81ed6 100644 --- a/modules/multi-runner/variables.tf +++ b/modules/multi-runner/variables.tf @@ -364,7 +364,7 @@ variable "log_level" { variable "lambda_runtime" { description = "AWS Lambda runtime." type = string - default = "nodejs22.x" + default = "nodejs24.x" } variable "lambda_architecture" { diff --git a/modules/multi-runner/versions.tf b/modules/multi-runner/versions.tf index fa763f4e76..cf8961bb61 100644 --- a/modules/multi-runner/versions.tf +++ b/modules/multi-runner/versions.tf @@ -4,7 +4,7 @@ terraform { required_providers { aws = { source = "hashicorp/aws" - version = ">= 5.77" + version = ">= 6.21" } random = { source = "hashicorp/random" diff --git a/modules/runner-binaries-syncer/README.md b/modules/runner-binaries-syncer/README.md index 740be89925..2999be138f 100644 --- a/modules/runner-binaries-syncer/README.md +++ b/modules/runner-binaries-syncer/README.md @@ -37,13 +37,13 @@ yarn run dist | Name | Version | |------|---------| | [terraform](#requirement\_terraform) | >= 1.3.0 | -| [aws](#requirement\_aws) | >= 5.27 | +| [aws](#requirement\_aws) | >= 6.21 | ## Providers | Name | Version | |------|---------| -| [aws](#provider\_aws) | >= 5.27 | +| [aws](#provider\_aws) | >= 6.21 | ## Modules @@ -89,7 +89,7 @@ No modules. | [lambda\_architecture](#input\_lambda\_architecture) | AWS Lambda architecture. Lambda functions using Graviton processors ('arm64') tend to have better price/performance than 'x86\_64' functions. | `string` | `"arm64"` | no | | [lambda\_memory\_size](#input\_lambda\_memory\_size) | Memory size of the lambda. | `number` | `256` | no | | [lambda\_principals](#input\_lambda\_principals) | (Optional) add extra principals to the role created for execution of the lambda, e.g. for local testing. |
list(object({
type = string
identifiers = list(string)
}))
| `[]` | no | -| [lambda\_runtime](#input\_lambda\_runtime) | AWS Lambda runtime. | `string` | `"nodejs22.x"` | no | +| [lambda\_runtime](#input\_lambda\_runtime) | AWS Lambda runtime. | `string` | `"nodejs24.x"` | no | | [lambda\_s3\_bucket](#input\_lambda\_s3\_bucket) | S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly. | `string` | `null` | no | | [lambda\_schedule\_expression](#input\_lambda\_schedule\_expression) | Scheduler expression for action runner binary syncer. | `string` | `"cron(27 * * * ? *)"` | no | | [lambda\_security\_group\_ids](#input\_lambda\_security\_group\_ids) | List of security group IDs associated with the Lambda function. | `list(string)` | `[]` | no | diff --git a/modules/runner-binaries-syncer/variables.tf b/modules/runner-binaries-syncer/variables.tf index 4a38fb24b0..dd16a7c3ee 100644 --- a/modules/runner-binaries-syncer/variables.tf +++ b/modules/runner-binaries-syncer/variables.tf @@ -220,7 +220,7 @@ variable "lambda_principals" { variable "lambda_runtime" { description = "AWS Lambda runtime." type = string - default = "nodejs22.x" + default = "nodejs24.x" } variable "lambda_architecture" { diff --git a/modules/runner-binaries-syncer/versions.tf b/modules/runner-binaries-syncer/versions.tf index 33f3480ddf..42a40b33fd 100644 --- a/modules/runner-binaries-syncer/versions.tf +++ b/modules/runner-binaries-syncer/versions.tf @@ -4,7 +4,7 @@ terraform { required_providers { aws = { source = "hashicorp/aws" - version = ">= 5.27" + version = ">= 6.21" } } } diff --git a/modules/runners/README.md b/modules/runners/README.md index 169cee7eac..4ad4825113 100644 --- a/modules/runners/README.md +++ b/modules/runners/README.md @@ -53,13 +53,13 @@ yarn run dist | Name | Version | |------|---------| | [terraform](#requirement\_terraform) | >= 1.3.0 | -| [aws](#requirement\_aws) | >= 5.27 | +| [aws](#requirement\_aws) | >= 6.21 | ## Providers | Name | Version | |------|---------| -| [aws](#provider\_aws) | >= 5.27 | +| [aws](#provider\_aws) | >= 6.21 | ## Modules @@ -179,7 +179,7 @@ yarn run dist | [lambda\_architecture](#input\_lambda\_architecture) | AWS Lambda architecture. Lambda functions using Graviton processors ('arm64') tend to have better price/performance than 'x86\_64' functions. | `string` | `"arm64"` | no | | [lambda\_event\_source\_mapping\_batch\_size](#input\_lambda\_event\_source\_mapping\_batch\_size) | Maximum number of records to pass to the lambda function in a single batch for the event source mapping. When not set, the AWS default of 10 events will be used. | `number` | `10` | no | | [lambda\_event\_source\_mapping\_maximum\_batching\_window\_in\_seconds](#input\_lambda\_event\_source\_mapping\_maximum\_batching\_window\_in\_seconds) | Maximum amount of time to gather records before invoking the lambda function, in seconds. AWS requires this to be greater than 0 if batch\_size is greater than 10. Defaults to 0. | `number` | `0` | no | -| [lambda\_runtime](#input\_lambda\_runtime) | AWS Lambda runtime. | `string` | `"nodejs22.x"` | no | +| [lambda\_runtime](#input\_lambda\_runtime) | AWS Lambda runtime. | `string` | `"nodejs24.x"` | no | | [lambda\_s3\_bucket](#input\_lambda\_s3\_bucket) | S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly. | `string` | `null` | no | | [lambda\_scale\_down\_memory\_size](#input\_lambda\_scale\_down\_memory\_size) | Memory size limit in MB for scale down lambda. 
| `number` | `512` | no | | [lambda\_scale\_up\_memory\_size](#input\_lambda\_scale\_up\_memory\_size) | Memory size limit in MB for scale-up lambda. | `number` | `512` | no | diff --git a/modules/runners/job-retry/README.md b/modules/runners/job-retry/README.md index f54b943855..7ecd69deeb 100644 --- a/modules/runners/job-retry/README.md +++ b/modules/runners/job-retry/README.md @@ -13,13 +13,13 @@ The module is an inner module and used by the runner module when the opt-in feat | Name | Version | |------|---------| | [terraform](#requirement\_terraform) | >= 1.3.0 | -| [aws](#requirement\_aws) | >= 5.27 | +| [aws](#requirement\_aws) | >= 6.21 | ## Providers | Name | Version | |------|---------| -| [aws](#provider\_aws) | >= 5.27 | +| [aws](#provider\_aws) | >= 6.21 | ## Modules diff --git a/modules/runners/job-retry/versions.tf b/modules/runners/job-retry/versions.tf index 33f3480ddf..42a40b33fd 100644 --- a/modules/runners/job-retry/versions.tf +++ b/modules/runners/job-retry/versions.tf @@ -4,7 +4,7 @@ terraform { required_providers { aws = { source = "hashicorp/aws" - version = ">= 5.27" + version = ">= 6.21" } } } diff --git a/modules/runners/pool/README.md b/modules/runners/pool/README.md index 052a8be60c..4bd268c37c 100644 --- a/modules/runners/pool/README.md +++ b/modules/runners/pool/README.md @@ -11,13 +11,13 @@ The pool is an opt-in feature. To be able to use the count on a module level to | Name | Version | |------|---------| | [terraform](#requirement\_terraform) | >= 0.14.1 | -| [aws](#requirement\_aws) | >= 5.27 | +| [aws](#requirement\_aws) | >= 6.21 | ## Providers | Name | Version | |------|---------| -| [aws](#provider\_aws) | >= 5.27 | +| [aws](#provider\_aws) | >= 6.21 | ## Modules diff --git a/modules/runners/pool/versions.tf b/modules/runners/pool/versions.tf index 5e8b391414..bceee0424e 100644 --- a/modules/runners/pool/versions.tf +++ b/modules/runners/pool/versions.tf @@ -4,7 +4,7 @@ terraform { required_providers { aws = { source = "hashicorp/aws" - version = ">= 5.27" + version = ">= 6.21" } } } diff --git a/modules/runners/variables.tf b/modules/runners/variables.tf index a45075bb52..846ddeafc6 100644 --- a/modules/runners/variables.tf +++ b/modules/runners/variables.tf @@ -597,7 +597,7 @@ variable "disable_runner_autoupdate" { variable "lambda_runtime" { description = "AWS Lambda runtime." type = string - default = "nodejs22.x" + default = "nodejs24.x" } variable "lambda_architecture" { diff --git a/modules/runners/versions.tf b/modules/runners/versions.tf index 33f3480ddf..42a40b33fd 100644 --- a/modules/runners/versions.tf +++ b/modules/runners/versions.tf @@ -4,7 +4,7 @@ terraform { required_providers { aws = { source = "hashicorp/aws" - version = ">= 5.27" + version = ">= 6.21" } } } diff --git a/modules/setup-iam-permissions/README.md b/modules/setup-iam-permissions/README.md index 29096519ae..f2401278c0 100644 --- a/modules/setup-iam-permissions/README.md +++ b/modules/setup-iam-permissions/README.md @@ -42,13 +42,13 @@ Next execute the created Terraform code via `terraform init && terraform apply`. 
| Name | Version | |------|---------| | [terraform](#requirement\_terraform) | >= 1.3.0 | -| [aws](#requirement\_aws) | >= 5.27 | +| [aws](#requirement\_aws) | >= 6.21 | ## Providers | Name | Version | |------|---------| -| [aws](#provider\_aws) | >= 5.27 | +| [aws](#provider\_aws) | >= 6.21 | ## Modules diff --git a/modules/setup-iam-permissions/versions.tf b/modules/setup-iam-permissions/versions.tf index 33f3480ddf..42a40b33fd 100644 --- a/modules/setup-iam-permissions/versions.tf +++ b/modules/setup-iam-permissions/versions.tf @@ -4,7 +4,7 @@ terraform { required_providers { aws = { source = "hashicorp/aws" - version = ">= 5.27" + version = ">= 6.21" } } } diff --git a/modules/ssm/README.md b/modules/ssm/README.md index cb23d3aa87..19b872e919 100644 --- a/modules/ssm/README.md +++ b/modules/ssm/README.md @@ -10,13 +10,13 @@ This module is used for storing configuration of runners, registration tokens an | Name | Version | |------|---------| | [terraform](#requirement\_terraform) | >= 1.3.0 | -| [aws](#requirement\_aws) | >= 5.27 | +| [aws](#requirement\_aws) | >= 6.21 | ## Providers | Name | Version | |------|---------| -| [aws](#provider\_aws) | >= 5.27 | +| [aws](#provider\_aws) | >= 6.21 | ## Modules diff --git a/modules/ssm/versions.tf b/modules/ssm/versions.tf index 33f3480ddf..42a40b33fd 100644 --- a/modules/ssm/versions.tf +++ b/modules/ssm/versions.tf @@ -4,7 +4,7 @@ terraform { required_providers { aws = { source = "hashicorp/aws" - version = ">= 5.27" + version = ">= 6.21" } } } diff --git a/modules/termination-watcher/README.md b/modules/termination-watcher/README.md index c79939065f..788f4c5c13 100644 --- a/modules/termination-watcher/README.md +++ b/modules/termination-watcher/README.md @@ -61,7 +61,7 @@ yarn run dist | Name | Version | |------|---------| | [terraform](#requirement\_terraform) | >= 1.3.0 | -| [aws](#requirement\_aws) | >= 5.27 | +| [aws](#requirement\_aws) | >= 6.21 | ## Providers diff --git a/modules/termination-watcher/notification/README.md b/modules/termination-watcher/notification/README.md index a6de04ac27..1a26de3bce 100644 --- a/modules/termination-watcher/notification/README.md +++ b/modules/termination-watcher/notification/README.md @@ -4,13 +4,13 @@ | Name | Version | |------|---------| | [terraform](#requirement\_terraform) | >= 1.3.0 | -| [aws](#requirement\_aws) | >= 5.27 | +| [aws](#requirement\_aws) | >= 6.21 | ## Providers | Name | Version | |------|---------| -| [aws](#provider\_aws) | >= 5.27 | +| [aws](#provider\_aws) | >= 6.21 | ## Modules diff --git a/modules/termination-watcher/notification/versions.tf b/modules/termination-watcher/notification/versions.tf index 33f3480ddf..42a40b33fd 100644 --- a/modules/termination-watcher/notification/versions.tf +++ b/modules/termination-watcher/notification/versions.tf @@ -4,7 +4,7 @@ terraform { required_providers { aws = { source = "hashicorp/aws" - version = ">= 5.27" + version = ">= 6.21" } } } diff --git a/modules/termination-watcher/termination/README.md b/modules/termination-watcher/termination/README.md index 28b321aaa3..22912b9646 100644 --- a/modules/termination-watcher/termination/README.md +++ b/modules/termination-watcher/termination/README.md @@ -4,13 +4,13 @@ | Name | Version | |------|---------| | [terraform](#requirement\_terraform) | >= 1.3.0 | -| [aws](#requirement\_aws) | >= 5.27 | +| [aws](#requirement\_aws) | >= 6.21 | ## Providers | Name | Version | |------|---------| -| [aws](#provider\_aws) | >= 5.27 | +| [aws](#provider\_aws) | >= 6.21 | ## Modules diff --git 
a/modules/termination-watcher/termination/versions.tf b/modules/termination-watcher/termination/versions.tf index 33f3480ddf..42a40b33fd 100644 --- a/modules/termination-watcher/termination/versions.tf +++ b/modules/termination-watcher/termination/versions.tf @@ -4,7 +4,7 @@ terraform { required_providers { aws = { source = "hashicorp/aws" - version = ">= 5.27" + version = ">= 6.21" } } } diff --git a/modules/termination-watcher/versions.tf b/modules/termination-watcher/versions.tf index 33f3480ddf..42a40b33fd 100644 --- a/modules/termination-watcher/versions.tf +++ b/modules/termination-watcher/versions.tf @@ -4,7 +4,7 @@ terraform { required_providers { aws = { source = "hashicorp/aws" - version = ">= 5.27" + version = ">= 6.21" } } } diff --git a/modules/webhook/README.md b/modules/webhook/README.md index 0dbd1429cf..10b0179672 100644 --- a/modules/webhook/README.md +++ b/modules/webhook/README.md @@ -36,14 +36,14 @@ yarn run dist | Name | Version | |------|---------| | [terraform](#requirement\_terraform) | >= 1.3.0 | -| [aws](#requirement\_aws) | >= 5.27 | +| [aws](#requirement\_aws) | >= 6.21 | | [null](#requirement\_null) | ~> 3 | ## Providers | Name | Version | |------|---------| -| [aws](#provider\_aws) | >= 5.27 | +| [aws](#provider\_aws) | >= 6.21 | ## Modules @@ -72,7 +72,7 @@ yarn run dist | [kms\_key\_arn](#input\_kms\_key\_arn) | Optional CMK Key ARN to be used for Parameter Store. | `string` | `null` | no | | [lambda\_architecture](#input\_lambda\_architecture) | AWS Lambda architecture. Lambda functions using Graviton processors ('arm64') tend to have better price/performance than 'x86\_64' functions. | `string` | `"arm64"` | no | | [lambda\_memory\_size](#input\_lambda\_memory\_size) | Memory size limit in MB for lambda. | `number` | `256` | no | -| [lambda\_runtime](#input\_lambda\_runtime) | AWS Lambda runtime. | `string` | `"nodejs22.x"` | no | +| [lambda\_runtime](#input\_lambda\_runtime) | AWS Lambda runtime. | `string` | `"nodejs24.x"` | no | | [lambda\_s3\_bucket](#input\_lambda\_s3\_bucket) | S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly. | `string` | `null` | no | | [lambda\_security\_group\_ids](#input\_lambda\_security\_group\_ids) | List of security group IDs associated with the Lambda function. | `list(string)` | `[]` | no | | [lambda\_subnet\_ids](#input\_lambda\_subnet\_ids) | List of subnets in which the action runners will be launched, the subnets needs to be subnets in the `vpc_id`. | `list(string)` | `[]` | no | diff --git a/modules/webhook/direct/README.md b/modules/webhook/direct/README.md index ee3db410b9..aa69347ae4 100644 --- a/modules/webhook/direct/README.md +++ b/modules/webhook/direct/README.md @@ -4,14 +4,14 @@ | Name | Version | |------|---------| | [terraform](#requirement\_terraform) | >= 1.3.0 | -| [aws](#requirement\_aws) | >= 5.27 | +| [aws](#requirement\_aws) | >= 6.21 | | [null](#requirement\_null) | ~> 3.2 | ## Providers | Name | Version | |------|---------| -| [aws](#provider\_aws) | >= 5.27 | +| [aws](#provider\_aws) | >= 6.21 | | [null](#provider\_null) | ~> 3.2 | ## Modules @@ -40,7 +40,7 @@ No modules. | Name | Description | Type | Default | Required | |------|-------------|------|---------|:--------:| -| [config](#input\_config) | Configuration object for all variables. |
object({
prefix = string
archive = optional(object({
enable = optional(bool, true)
retention_days = optional(number, 7)
}), {})
tags = optional(map(string), {})

lambda_subnet_ids = optional(list(string), [])
lambda_security_group_ids = optional(list(string), [])
sqs_job_queues_arns = list(string)
lambda_zip = optional(string, null)
lambda_memory_size = optional(number, 256)
lambda_timeout = optional(number, 10)
role_permissions_boundary = optional(string, null)
role_path = optional(string, null)
logging_retention_in_days = optional(number, 180)
logging_kms_key_id = optional(string, null)
lambda_s3_bucket = optional(string, null)
lambda_s3_key = optional(string, null)
lambda_s3_object_version = optional(string, null)
lambda_apigateway_access_log_settings = optional(object({
destination_arn = string
format = string
}), null)
repository_white_list = optional(list(string), [])
kms_key_arn = optional(string, null)
log_level = optional(string, "info")
lambda_runtime = optional(string, "nodejs22.x")
aws_partition = optional(string, "aws")
lambda_architecture = optional(string, "arm64")
github_app_parameters = object({
webhook_secret = map(string)
})
tracing_config = optional(object({
mode = optional(string, null)
capture_http_requests = optional(bool, false)
capture_error = optional(bool, false)
}), {})
lambda_tags = optional(map(string), {})
api_gw_source_arn = string
ssm_parameter_runner_matcher_config = list(object({
name = string
arn = string
version = string
}))
})
| n/a | yes | +| [config](#input\_config) | Configuration object for all variables. |
object({
prefix = string
archive = optional(object({
enable = optional(bool, true)
retention_days = optional(number, 7)
}), {})
tags = optional(map(string), {})

lambda_subnet_ids = optional(list(string), [])
lambda_security_group_ids = optional(list(string), [])
sqs_job_queues_arns = list(string)
lambda_zip = optional(string, null)
lambda_memory_size = optional(number, 256)
lambda_timeout = optional(number, 10)
role_permissions_boundary = optional(string, null)
role_path = optional(string, null)
logging_retention_in_days = optional(number, 180)
logging_kms_key_id = optional(string, null)
lambda_s3_bucket = optional(string, null)
lambda_s3_key = optional(string, null)
lambda_s3_object_version = optional(string, null)
lambda_apigateway_access_log_settings = optional(object({
destination_arn = string
format = string
}), null)
repository_white_list = optional(list(string), [])
kms_key_arn = optional(string, null)
log_level = optional(string, "info")
lambda_runtime = optional(string, "nodejs24.x")
aws_partition = optional(string, "aws")
lambda_architecture = optional(string, "arm64")
github_app_parameters = object({
webhook_secret = map(string)
})
tracing_config = optional(object({
mode = optional(string, null)
capture_http_requests = optional(bool, false)
capture_error = optional(bool, false)
}), {})
lambda_tags = optional(map(string), {})
api_gw_source_arn = string
ssm_parameter_runner_matcher_config = list(object({
name = string
arn = string
version = string
}))
})
| n/a | yes | ## Outputs diff --git a/modules/webhook/direct/variables.tf b/modules/webhook/direct/variables.tf index 2a1b559c92..5da98e548a 100644 --- a/modules/webhook/direct/variables.tf +++ b/modules/webhook/direct/variables.tf @@ -28,7 +28,7 @@ variable "config" { repository_white_list = optional(list(string), []) kms_key_arn = optional(string, null) log_level = optional(string, "info") - lambda_runtime = optional(string, "nodejs22.x") + lambda_runtime = optional(string, "nodejs24.x") aws_partition = optional(string, "aws") lambda_architecture = optional(string, "arm64") github_app_parameters = object({ diff --git a/modules/webhook/direct/versions.tf b/modules/webhook/direct/versions.tf index 3f6adcb64a..82776fc618 100644 --- a/modules/webhook/direct/versions.tf +++ b/modules/webhook/direct/versions.tf @@ -4,7 +4,7 @@ terraform { required_providers { aws = { source = "hashicorp/aws" - version = ">= 5.27" + version = ">= 6.21" } null = { diff --git a/modules/webhook/eventbridge/README.md b/modules/webhook/eventbridge/README.md index 329ac3c232..5c22c69010 100644 --- a/modules/webhook/eventbridge/README.md +++ b/modules/webhook/eventbridge/README.md @@ -4,14 +4,14 @@ | Name | Version | |------|---------| | [terraform](#requirement\_terraform) | >= 1.3.0 | -| [aws](#requirement\_aws) | >= 5.0 | +| [aws](#requirement\_aws) | >= 6.21 | | [null](#requirement\_null) | ~> 3.2 | ## Providers | Name | Version | |------|---------| -| [aws](#provider\_aws) | >= 5.0 | +| [aws](#provider\_aws) | >= 6.21 | | [null](#provider\_null) | ~> 3.2 | ## Modules @@ -54,7 +54,7 @@ No modules. | Name | Description | Type | Default | Required | |------|-------------|------|---------|:--------:| -| [config](#input\_config) | Configuration object for all variables. |
object({
prefix = string
archive = optional(object({
enable = optional(bool, true)
retention_days = optional(number, 7)
}), {})
tags = optional(map(string), {})

lambda_subnet_ids = optional(list(string), [])
lambda_security_group_ids = optional(list(string), [])
sqs_job_queues_arns = list(string)
lambda_zip = optional(string, null)
lambda_memory_size = optional(number, 256)
lambda_timeout = optional(number, 10)
role_permissions_boundary = optional(string, null)
role_path = optional(string, null)
logging_retention_in_days = optional(number, 180)
logging_kms_key_id = optional(string, null)
lambda_s3_bucket = optional(string, null)
lambda_s3_key = optional(string, null)
lambda_s3_object_version = optional(string, null)
lambda_apigateway_access_log_settings = optional(object({
destination_arn = string
format = string
}), null)
repository_white_list = optional(list(string), [])
kms_key_arn = optional(string, null)
log_level = optional(string, "info")
lambda_runtime = optional(string, "nodejs22.x")
aws_partition = optional(string, "aws")
lambda_architecture = optional(string, "arm64")
github_app_parameters = object({
webhook_secret = map(string)
})
tracing_config = optional(object({
mode = optional(string, null)
capture_http_requests = optional(bool, false)
capture_error = optional(bool, false)
}), {})
lambda_tags = optional(map(string), {})
api_gw_source_arn = string
ssm_parameter_runner_matcher_config = list(object({
name = string
arn = string
version = string
}))
accept_events = optional(list(string), null)
})
| n/a | yes | +| [config](#input\_config) | Configuration object for all variables. |
object({
prefix = string
archive = optional(object({
enable = optional(bool, true)
retention_days = optional(number, 7)
}), {})
tags = optional(map(string), {})

lambda_subnet_ids = optional(list(string), [])
lambda_security_group_ids = optional(list(string), [])
sqs_job_queues_arns = list(string)
lambda_zip = optional(string, null)
lambda_memory_size = optional(number, 256)
lambda_timeout = optional(number, 10)
role_permissions_boundary = optional(string, null)
role_path = optional(string, null)
logging_retention_in_days = optional(number, 180)
logging_kms_key_id = optional(string, null)
lambda_s3_bucket = optional(string, null)
lambda_s3_key = optional(string, null)
lambda_s3_object_version = optional(string, null)
lambda_apigateway_access_log_settings = optional(object({
destination_arn = string
format = string
}), null)
repository_white_list = optional(list(string), [])
kms_key_arn = optional(string, null)
log_level = optional(string, "info")
lambda_runtime = optional(string, "nodejs24.x")
aws_partition = optional(string, "aws")
lambda_architecture = optional(string, "arm64")
github_app_parameters = object({
webhook_secret = map(string)
})
tracing_config = optional(object({
mode = optional(string, null)
capture_http_requests = optional(bool, false)
capture_error = optional(bool, false)
}), {})
lambda_tags = optional(map(string), {})
api_gw_source_arn = string
ssm_parameter_runner_matcher_config = list(object({
name = string
arn = string
version = string
}))
accept_events = optional(list(string), null)
})
| n/a | yes | ## Outputs diff --git a/modules/webhook/eventbridge/variables.tf b/modules/webhook/eventbridge/variables.tf index 8a884a6ba3..e39f24ab6d 100644 --- a/modules/webhook/eventbridge/variables.tf +++ b/modules/webhook/eventbridge/variables.tf @@ -28,7 +28,7 @@ variable "config" { repository_white_list = optional(list(string), []) kms_key_arn = optional(string, null) log_level = optional(string, "info") - lambda_runtime = optional(string, "nodejs22.x") + lambda_runtime = optional(string, "nodejs24.x") aws_partition = optional(string, "aws") lambda_architecture = optional(string, "arm64") github_app_parameters = object({ diff --git a/modules/webhook/eventbridge/versions.tf b/modules/webhook/eventbridge/versions.tf index a3c66b7b40..82776fc618 100644 --- a/modules/webhook/eventbridge/versions.tf +++ b/modules/webhook/eventbridge/versions.tf @@ -4,7 +4,7 @@ terraform { required_providers { aws = { source = "hashicorp/aws" - version = ">= 5.0" + version = ">= 6.21" } null = { diff --git a/modules/webhook/variables.tf b/modules/webhook/variables.tf index c1683f2d3c..5f0a39c0d2 100644 --- a/modules/webhook/variables.tf +++ b/modules/webhook/variables.tf @@ -142,7 +142,7 @@ variable "log_level" { variable "lambda_runtime" { description = "AWS Lambda runtime." type = string - default = "nodejs22.x" + default = "nodejs24.x" } variable "aws_partition" { diff --git a/modules/webhook/versions.tf b/modules/webhook/versions.tf index 14f948428d..e864c4f9ed 100644 --- a/modules/webhook/versions.tf +++ b/modules/webhook/versions.tf @@ -4,7 +4,7 @@ terraform { required_providers { aws = { source = "hashicorp/aws" - version = ">= 5.27" + version = ">= 6.21" } null = { diff --git a/variables.tf b/variables.tf index 7ff6ecece4..17ea50bfcf 100644 --- a/variables.tf +++ b/variables.tf @@ -781,7 +781,7 @@ variable "disable_runner_autoupdate" { variable "lambda_runtime" { description = "AWS Lambda runtime." type = string - default = "nodejs22.x" + default = "nodejs24.x" } variable "lambda_architecture" { diff --git a/versions.tf b/versions.tf index f1bce2af72..77f4d7b326 100644 --- a/versions.tf +++ b/versions.tf @@ -4,7 +4,7 @@ terraform { required_providers { aws = { source = "hashicorp/aws" - version = ">= 5.77" + version = ">= 6.21" } random = { source = "hashicorp/random" From ce198bfba7fc33b5ff22bc61580f9cebc6e752c2 Mon Sep 17 00:00:00 2001 From: Niek Palm Date: Tue, 9 Dec 2025 18:46:20 +0100 Subject: [PATCH 3/4] fix!: remove deprecated terraform variables (#4945) This pull request removes support for several deprecated AMI-related variables across all modules, fully migrating the configuration to the consolidated `ami` object. This change simplifies how AMI settings are managed, improves consistency, and reduces confusion for users. All references to the old variables (`ami_filter`, `ami_owners`, `ami_id_ssm_parameter_name`, `ami_kms_key_arn`) have been removed from module inputs, outputs, templates, documentation, and internal logic. **Migration to consolidated AMI configuration:** * Removed all deprecated AMI variables (`ami_filter`, `ami_owners`, `ami_id_ssm_parameter_name`, `ami_kms_key_arn`) from module variable definitions, outputs, and internal usage in `variables.tf`, `outputs.tf`, and related files. 
[[1]](diffhunk://#diff-05b5a57c136b6ff596500bcbfdcff145ef6cddea2a0e86d184d9daa9a65a288eL396-L424) [[2]](diffhunk://#diff-23e8f44c0f21971190244acdb8a35eaa21af7578ed5f1b97bef83f1a566d979cL138-L165) [[3]](diffhunk://#diff-de6c47c2496bd028a84d55ab12d8a4f90174ebfb6544b8b5c7b07a7ee4f27ec7L78-L90) [[4]](diffhunk://#diff-2daea3e8167ce5d859f6f1bee08138dbe216003262325490e8b90477277c104aL70-L89) [[5]](diffhunk://#diff-52d0673ff466b6445542e17038ea73a1cf41b8112f49ee57da4cebf8f0cb99c5L73-R73) [[6]](diffhunk://#diff-52d0673ff466b6445542e17038ea73a1cf41b8112f49ee57da4cebf8f0cb99c5L186-L187) [[7]](diffhunk://#diff-951f6bd1e32c3d27dd90e2dfb1f5232a704ef01fd925f3ee4323d6adc2dcdf5aL15-L20) [[8]](diffhunk://#diff-3937b99021390c0192952207dd2e26a409e0c03446478fb09ac3cd360bb60ee5L9-L14) * Updated example and documentation files to use the new `ami` object structure, replacing previous usage of the deprecated variables. [[1]](diffhunk://#diff-ef2038e9f8d807236d2acebe3c3a191039f8021cc4a0188f4778de908f0d453bL36-R40) [[2]](diffhunk://#diff-ef2038e9f8d807236d2acebe3c3a191039f8021cc4a0188f4778de908f0d453bL52-R57) [[3]](diffhunk://#diff-0a0d2ecd774e69a1397a913b6230f45692b49c0b17ccb103d318f6ab078353e2L48-R51) [[4]](diffhunk://#diff-b2b9df08c45240d599f6260d246bad6e67129932174131db209341d8464247a8L18-R19) [[5]](diffhunk://#diff-61032a0bb5f9d7ae65ba5155b2e58e12901f39bb7068f16b419d94c6f7a5b922L86-R89) * Refactored module runner logic to only use the new `ami` object, removing all fallback and compatibility code for the old variables. [[1]](diffhunk://#diff-dc46acf24afd63ef8c556b77c126ccc6e578bc87e3aa09a931f33d9bf2532fbbL182-L185) [[2]](diffhunk://#diff-57f00cdf57feef92ffcb35d6618e62e14ad652d5d520f331068332d5f3cade51L30-L32) [[3]](diffhunk://#diff-e9624b388e62ca51cf1fe073eb0919588e8c36a1143ecdb49580996a89f13bebL40-R51) * Updated internal references to AMI SSM parameter names and related policies to use the new configuration, ensuring all resource and environment variable logic is aligned with the consolidated approach. [[1]](diffhunk://#diff-bc00c0efa92f360635d026350da2fb775718514e2b1ae718281400e661b7469bL13-R13) [[2]](diffhunk://#diff-bc00c0efa92f360635d026350da2fb775718514e2b1ae718281400e661b7469bL28-R28) [[3]](diffhunk://#diff-5921c9e3315946068538b290966e7e4e51b6e49d04c2466b0bdd4b298629b29dL56-R57) [[4]](diffhunk://#diff-a598ba79d09e4770d55ed09e6b1d51e68c4a54562a3e3cbb46619d625a609d23L28-R28) [[5]](diffhunk://#diff-a598ba79d09e4770d55ed09e6b1d51e68c4a54562a3e3cbb46619d625a609d23L151-R151) With these updates, all AMI configuration is now handled through the unified `ami` object, making runner setup more straightforward and future-proof. 
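For anyone migrating, a minimal before/after sketch of the change described above (the `filter` and `owners` values are illustrative placeholders, not module defaults):

```hcl
# Before (no longer supported): individual AMI variables
# ami_filter                = { name = ["github-runner-al2023-x86_64-*"], state = ["available"] }
# ami_owners                = ["self"]
# ami_id_ssm_parameter_name = null
# ami_kms_key_arn           = null

# After: the consolidated `ami` object
ami = {
  filter               = { name = ["github-runner-al2023-x86_64-*"], state = ["available"] }
  owners               = ["self"]
  id_ssm_parameter_arn = null # optionally, the ARN of an SSM parameter holding the AMI ID
  kms_key_arn          = null # set when the AMI is encrypted with a customer managed key
}
```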
## Tested - [x] default example - [x] multi runner example --------- Co-authored-by: github-aws-runners-pr|bot Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> --- README.md | 5 ---- docs/configuration.md | 2 -- examples/default/README.md | 1 - examples/default/outputs.tf | 6 ---- examples/ephemeral/main.tf | 6 ++-- examples/multi-runner/README.md | 1 - examples/multi-runner/outputs.tf | 6 ---- .../templates/runner-configs/windows-x64.yaml | 11 +++---- examples/prebuilt/README.md | 13 ++++++--- examples/prebuilt/main.tf | 6 ++-- main.tf | 8 ++--- modules/multi-runner/README.md | 3 +- modules/multi-runner/outputs.tf | 20 ------------- modules/multi-runner/runners.tf | 3 -- modules/multi-runner/variables.tf | 9 +----- modules/runners/README.md | 4 --- modules/runners/main.tf | 10 ++++--- modules/runners/policies-lambda-common.tf | 4 +-- modules/runners/pool.tf | 4 +-- modules/runners/scale-up.tf | 4 +-- modules/runners/variables.tf | 28 ------------------ outputs.tf | 13 --------- variables.tf | 29 ------------------- 23 files changed, 39 insertions(+), 157 deletions(-) diff --git a/README.md b/README.md index 75e4727fc1..a3c55d33fd 100644 --- a/README.md +++ b/README.md @@ -107,16 +107,12 @@ Join our discord community via [this invite link](https://discord.gg/bxgXW8jJGh) | Name | Description | Type | Default | Required | |------|-------------|------|---------|:--------:| | [ami](#input\_ami) | AMI configuration for the action runner instances. This object allows you to specify all AMI-related settings in one place.

Parameters:
- `filter`: Map of lists to filter AMIs by various criteria (e.g., { name = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-*"], state = ["available"] })
- `owners`: List of AMI owners to limit the search. Common values: ["amazon"], ["self"], or specific AWS account IDs
- `id_ssm_parameter_arn`: ARN of an SSM parameter containing the AMI ID. If specified, this overrides both AMI filter and parameter name
- `kms_key_arn`: Optional KMS key ARN if the AMI is encrypted with a customer managed key

Defaults to null, in which case the module falls back to individual AMI variables (deprecated). |
object({
filter = optional(map(list(string)), { state = ["available"] })
owners = optional(list(string), ["amazon"])
id_ssm_parameter_arn = optional(string, null)
kms_key_arn = optional(string, null)
})
| `null` | no | -| [ami\_filter](#input\_ami\_filter) | [DEPRECATED: Use ami.filter] Map of lists used to create the AMI filter for the action runner AMI. | `map(list(string))` |
{
"state": [
"available"
]
}
| no | | [ami\_housekeeper\_cleanup\_config](#input\_ami\_housekeeper\_cleanup\_config) | Configuration for AMI cleanup.

`amiFilters` - Filters to use when searching for AMIs to cleanup. Default filter for images owned by the account and that are available.
`dryRun` - If true, no AMIs will be deregistered. Default false.
`launchTemplateNames` - Launch template names to use when searching for AMIs to cleanup. Default no launch templates.
`maxItems` - The maximum number of AMIs that will be queried for cleanup. Default no maximum.
`minimumDaysOld` - Minimum number of days old an AMI must be to be considered for cleanup. Default 30.
`ssmParameterNames` - SSM parameter names to use when searching for AMIs to cleanup. This parameter should be set when using SSM to configure the AMI to use. Default no SSM parameters. |
object({
amiFilters = optional(list(object({
Name = string
Values = list(string)
})),
[{
Name : "state",
Values : ["available"],
},
{
Name : "image-type",
Values : ["machine"],
}]
)
dryRun = optional(bool, false)
launchTemplateNames = optional(list(string))
maxItems = optional(number)
minimumDaysOld = optional(number, 30)
ssmParameterNames = optional(list(string))
})
| `{}` | no | | [ami\_housekeeper\_lambda\_s3\_key](#input\_ami\_housekeeper\_lambda\_s3\_key) | S3 key for syncer lambda function. Required if using S3 bucket to specify lambdas. | `string` | `null` | no | | [ami\_housekeeper\_lambda\_s3\_object\_version](#input\_ami\_housekeeper\_lambda\_s3\_object\_version) | S3 object version for syncer lambda function. Useful if S3 versioning is enabled on source bucket. | `string` | `null` | no | | [ami\_housekeeper\_lambda\_schedule\_expression](#input\_ami\_housekeeper\_lambda\_schedule\_expression) | Scheduler expression for action runner binary syncer. | `string` | `"rate(1 day)"` | no | | [ami\_housekeeper\_lambda\_timeout](#input\_ami\_housekeeper\_lambda\_timeout) | Time out of the lambda in seconds. | `number` | `300` | no | | [ami\_housekeeper\_lambda\_zip](#input\_ami\_housekeeper\_lambda\_zip) | File location of the lambda zip file. | `string` | `null` | no | -| [ami\_id\_ssm\_parameter\_name](#input\_ami\_id\_ssm\_parameter\_name) | [DEPRECATED: Use ami.id\_ssm\_parameter\_arn] String used to construct the SSM parameter name used to resolve the latest AMI ID for the runner instances. The SSM parameter should be of type String and contain a valid AMI ID. The default behavior is to use the latest Ubuntu 22.04 AMI. | `string` | `null` | no | -| [ami\_kms\_key\_arn](#input\_ami\_kms\_key\_arn) | [DEPRECATED: Use ami.kms\_key\_arn] Optional CMK Key ARN to be used to launch an instance from a shared encrypted AMI | `string` | `null` | no | -| [ami\_owners](#input\_ami\_owners) | [DEPRECATED: Use ami.owners] The list of owners that should be used to find the AMI. | `list(string)` |
[
"amazon"
]
| no | | [associate\_public\_ipv4\_address](#input\_associate\_public\_ipv4\_address) | Associate public IPv4 with the runner. Only tested with IPv4 | `bool` | `false` | no | | [aws\_partition](#input\_aws\_partition) | (optiona) partition in the arn namespace to use if not 'aws' | `string` | `"aws"` | no | | [aws\_region](#input\_aws\_region) | AWS region. | `string` | n/a | yes | @@ -244,7 +240,6 @@ Join our discord community via [this invite link](https://discord.gg/bxgXW8jJGh) | Name | Description | |------|-------------| | [binaries\_syncer](#output\_binaries\_syncer) | n/a | -| [deprecated\_variables\_warning](#output\_deprecated\_variables\_warning) | Warning for deprecated variables usage. These variables will be removed in a future release. Please migrate to using the consolidated 'ami' object. | | [instance\_termination\_handler](#output\_instance\_termination\_handler) | n/a | | [instance\_termination\_watcher](#output\_instance\_termination\_watcher) | n/a | | [queues](#output\_queues) | SQS queues. | diff --git a/docs/configuration.md b/docs/configuration.md index 9437688b57..8ec7e4caef 100644 --- a/docs/configuration.md +++ b/docs/configuration.md @@ -205,8 +205,6 @@ ami = { } ``` -> **Note:** The old way of configuring AMIs using individual variables (`ami_filter`, `ami_owners`, `ami_kms_key_arn`, `ami_id_ssm_parameter_arn`, `ami_id_ssm_parameter_name`) is deprecated and will be removed in a future version. It is recommended to migrate to the new consolidated `ami` object. Support for `ami_id_ssm_parameter_name` will be dropped, please specify an arn via `ami.id_ssm_parameter_arn` instead. - ## Logging The module uses [AWS Lambda Powertools](https://awslabs.github.io/aws-lambda-powertools-typescript/latest/) for logging. By default the log level is set to `info`, by setting the log level to `debug` the incoming events of the Lambda are logged as well. 
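The note removed above pointed users from the name-based `ami_id_ssm_parameter_name` variable to `ami.id_ssm_parameter_arn`. A minimal sketch of that specific migration, assuming a public SSM parameter (the parameter path and region in the ARN are illustrative):

```hcl
# Before (no longer supported): SSM parameter referenced by name
# ami_id_ssm_parameter_name = "/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-default-x86_64"

# After: the same parameter referenced by ARN via the consolidated `ami` object
ami = {
  id_ssm_parameter_arn = "arn:aws:ssm:eu-west-1::parameter/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-default-x86_64"
}
```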
diff --git a/examples/default/README.md b/examples/default/README.md index 28d1baa141..2eae797fd7 100644 --- a/examples/default/README.md +++ b/examples/default/README.md @@ -70,7 +70,6 @@ terraform output -raw webhook_secret | Name | Description | |------|-------------| -| [deprecated\_variables\_warning](#output\_deprecated\_variables\_warning) | n/a | | [runners](#output\_runners) | n/a | | [webhook\_endpoint](#output\_webhook\_endpoint) | n/a | | [webhook\_secret](#output\_webhook\_secret) | n/a | diff --git a/examples/default/outputs.tf b/examples/default/outputs.tf index fb9dccc223..2a0f9a311e 100644 --- a/examples/default/outputs.tf +++ b/examples/default/outputs.tf @@ -12,9 +12,3 @@ output "webhook_secret" { sensitive = true value = random_id.random.hex } - -output "deprecated_variables_warning" { - value = join("", [ - module.runners.deprecated_variables_warning, - ]) -} diff --git a/examples/ephemeral/main.tf b/examples/ephemeral/main.tf index 4a83282177..2b9403ca4d 100644 --- a/examples/ephemeral/main.tf +++ b/examples/ephemeral/main.tf @@ -83,8 +83,10 @@ module "runners" { # configure your pre-built AMI # enable_userdata = false - # ami_filter = { name = ["github-runner-al2023-x86_64-*"], state = ["available"] } - # ami_owners = [data.aws_caller_identity.current.account_id] + # ami = { + # filter = { name = ["github-runner-al2023-x86_64-*"], state = ["available"] } + # owners = [data.aws_caller_identity.current.account_id] + # } # or use the default AMI # enable_userdata = true diff --git a/examples/multi-runner/README.md b/examples/multi-runner/README.md index 7b0798fa21..8f14b48503 100644 --- a/examples/multi-runner/README.md +++ b/examples/multi-runner/README.md @@ -94,7 +94,6 @@ terraform output -raw webhook_secret | Name | Description | |------|-------------| -| [deprecated\_variables\_warning](#output\_deprecated\_variables\_warning) | n/a | | [webhook\_endpoint](#output\_webhook\_endpoint) | n/a | | [webhook\_secret](#output\_webhook\_secret) | n/a | diff --git a/examples/multi-runner/outputs.tf b/examples/multi-runner/outputs.tf index 8a1c330077..1feaf2e671 100644 --- a/examples/multi-runner/outputs.tf +++ b/examples/multi-runner/outputs.tf @@ -6,9 +6,3 @@ output "webhook_secret" { sensitive = true value = random_id.random.hex } - -output "deprecated_variables_warning" { - value = join("", [ - module.runners.deprecated_variables_warning, - ]) -} diff --git a/examples/multi-runner/templates/runner-configs/windows-x64.yaml b/examples/multi-runner/templates/runner-configs/windows-x64.yaml index fdf8be6533..0bd3486a42 100644 --- a/examples/multi-runner/templates/runner-configs/windows-x64.yaml +++ b/examples/multi-runner/templates/runner-configs/windows-x64.yaml @@ -15,8 +15,9 @@ runner_config: delay_webhook_event: 5 scale_down_schedule_expression: cron(* * * * ? *) runner_boot_time_in_minutes: 20 - ami_filter: - name: - - Windows_Server-2022-English-Full-ECS_Optimized-* - state: - - available + ami: + filter: + name: + - Windows_Server-2022-English-Full-ECS_Optimized-* + state: + - available diff --git a/examples/prebuilt/README.md b/examples/prebuilt/README.md index 882388783b..b24f47a01d 100644 --- a/examples/prebuilt/README.md +++ b/examples/prebuilt/README.md @@ -33,9 +33,11 @@ Assuming you have built the `linux-al2023` image which has a pre-defined AMI nam module "runners" { ... 
# set the name of the ami to use - ami_filter = { name = ["github-runner-al2023-x86_64-2023*"], state = ["available"] } - # provide the owner id of - ami_owners = [""] + ami = { + filter = { name = ["github-runner-al2023-x86_64-2023*"], state = ["available"] } + # provide the owner id of + owners = [""] + } enable_userdata = false ... @@ -49,7 +51,10 @@ data "aws_caller_identity" "current" {} module "runners" { ... - ami_owners = [data.aws_caller_identity.current.account_id] + ami = { + filter = { name = ["github-runner-al2023-x86_64-2023*"], state = ["available"] } + owners = [data.aws_caller_identity.current.account_id] + } ... } ``` diff --git a/examples/prebuilt/main.tf b/examples/prebuilt/main.tf index 85f2dd0b63..62434f3f61 100644 --- a/examples/prebuilt/main.tf +++ b/examples/prebuilt/main.tf @@ -45,8 +45,10 @@ module "runners" { # configure your pre-built AMI enable_userdata = false - ami_filter = { name = [var.ami_name_filter], state = ["available"] } - ami_owners = [data.aws_caller_identity.current.account_id] + ami = { + filter = { name = [var.ami_name_filter], state = ["available"] } + owners = [data.aws_caller_identity.current.account_id] + } # disable binary syncer since github agent is already installed in the AMI. enable_runner_binaries_syncer = false diff --git a/main.tf b/main.tf index f0dadd6b66..c7ce20d158 100644 --- a/main.tf +++ b/main.tf @@ -177,12 +177,8 @@ module "runners" { instance_max_spot_price = var.instance_max_spot_price block_device_mappings = var.block_device_mappings - runner_architecture = var.runner_architecture - ami = var.ami - ami_filter = var.ami_filter - ami_owners = var.ami_owners - ami_id_ssm_parameter_name = var.ami_id_ssm_parameter_name - ami_kms_key_arn = var.ami_kms_key_arn + runner_architecture = var.runner_architecture + ami = var.ami sqs_build_queue = aws_sqs_queue.queued_builds github_app_parameters = local.github_app_parameters diff --git a/modules/multi-runner/README.md b/modules/multi-runner/README.md index 32dab7e7c6..58fd81ffed 100644 --- a/modules/multi-runner/README.md +++ b/modules/multi-runner/README.md @@ -150,7 +150,7 @@ module "multi-runner" { | [logging\_retention\_in\_days](#input\_logging\_retention\_in\_days) | Specifies the number of days you want to retain log events for the lambda log group. Possible values are: 0, 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653. | `number` | `180` | no | | [matcher\_config\_parameter\_store\_tier](#input\_matcher\_config\_parameter\_store\_tier) | The tier of the parameter store for the matcher configuration. Valid values are `Standard`, and `Advanced`. | `string` | `"Standard"` | no | | [metrics](#input\_metrics) | Configuration for metrics created by the module, by default metrics are disabled to avoid additional costs. When metrics are enable all metrics are created unless explicit configured otherwise. |
object({
enable = optional(bool, false)
namespace = optional(string, "GitHub Runners")
metric = optional(object({
enable_github_app_rate_limit = optional(bool, true)
enable_job_retry = optional(bool, true)
enable_spot_termination_warning = optional(bool, true)
}), {})
})
| `{}` | no | -| [multi\_runner\_config](#input\_multi\_runner\_config) | multi\_runner\_config = {
runner\_config: {
runner\_os: "The EC2 Operating System type to use for action runner instances (linux,windows)."
runner\_architecture: "The platform architecture of the runner instance\_type."
runner\_metadata\_options: "(Optional) Metadata options for the ec2 runner instances."
ami: "(Optional) AMI configuration for the action runner instances. This object allows you to specify all AMI-related settings in one place."
ami\_filter: "(Optional) List of maps used to create the AMI filter for the action runner AMI. By default amazon linux 2 is used."
ami\_owners: "(Optional) The list of owners used to select the AMI of action runner instances."
create\_service\_linked\_role\_spot: (Optional) create the serviced linked role for spot instances that is required by the scale-up lambda.
credit\_specification: "(Optional) The credit specification of the runner instance\_type. Can be unset, `standard` or `unlimited`.
delay\_webhook\_event: "The number of seconds the event accepted by the webhook is invisible on the queue before the scale up lambda will receive the event."
disable\_runner\_autoupdate: "Disable the auto update of the github runner agent. Be aware there is a grace period of 30 days, see also the [GitHub article](https://github.blog/changelog/2022-02-01-github-actions-self-hosted-runners-can-now-disable-automatic-updates/)"
ebs\_optimized: "The EC2 EBS optimized configuration."
enable\_ephemeral\_runners: "Enable ephemeral runners, runners will only be used once."
enable\_job\_queued\_check: "Enables JIT configuration for creating runners instead of registration token based registraton. JIT configuration will only be applied for ephemeral runners. By default JIT configuration is enabled for ephemeral runners an can be disabled via this override. When running on GHES without support for JIT configuration this variable should be set to true for ephemeral runners."
enable\_on\_demand\_failover\_for\_errors: "Enable on-demand failover. For example to fall back to on demand when no spot capacity is available the variable can be set to `InsufficientInstanceCapacity`. When not defined the default behavior is to retry later."
enable\_organization\_runners: "Register runners to organization, instead of repo level"
enable\_runner\_binaries\_syncer: "Option to disable the lambda to sync GitHub runner distribution, useful when using a pre-build AMI."
enable\_ssm\_on\_runners: "Enable to allow access the runner instances for debugging purposes via SSM. Note that this adds additional permissions to the runner instances."
enable\_userdata: "Should the userdata script be enabled for the runner. Set this to false if you are using your own prebuilt AMI."
instance\_allocation\_strategy: "The allocation strategy for spot instances. AWS recommends to use `capacity-optimized` however the AWS default is `lowest-price`."
instance\_max\_spot\_price: "Max price price for spot instances per hour. This variable will be passed to the create fleet as max spot price for the fleet."
instance\_target\_capacity\_type: "Default lifecycle used for runner instances, can be either `spot` or `on-demand`."
instance\_types: "List of instance types for the action runner. Defaults are based on runner\_os (al2023 for linux and Windows Server Core for win)."
job\_queue\_retention\_in\_seconds: "The number of seconds the job is held in the queue before it is purged"
minimum\_running\_time\_in\_minutes: "The time an ec2 action runner should be running at minimum before terminated if not busy."
pool\_runner\_owner: "The pool will deploy runners to the GitHub org ID, set this value to the org to which you want the runners deployed. Repo level is not supported."
runner\_additional\_security\_group\_ids: "List of additional security groups IDs to apply to the runner. If added outside the multi\_runner\_config block, the additional security group(s) will be applied to all runner configs. If added inside the multi\_runner\_config, the additional security group(s) will be applied to the individual runner."
runner\_as\_root: "Run the action runner under the root user. Variable `runner_run_as` will be ignored."
runner\_boot\_time\_in\_minutes: "The minimum time for an EC2 runner to boot and register as a runner."
runner\_disable\_default\_labels: "Disable default labels for the runners (os, architecture and `self-hosted`). If enabled, the runner will only have the extra labels provided in `runner_extra_labels`. In case you on own start script is used, this configuration parameter needs to be parsed via SSM."
runner\_extra\_labels: "Extra (custom) labels for the runners (GitHub). Separate each label by a comma. Labels checks on the webhook can be enforced by setting `multi_runner_config.matcherConfig.exactMatch`. GitHub read-only labels should not be provided."
runner\_group\_name: "Name of the runner group."
runner\_name\_prefix: "Prefix for the GitHub runner name."
runner\_run\_as: "Run the GitHub actions agent as user."
runners\_maximum\_count: "The maximum number of runners that will be created. Setting the variable to `-1` desiables the maximum check."
scale\_down\_schedule\_expression: "Scheduler expression to check every x for scale down."
scale\_up\_reserved\_concurrent\_executions: "Amount of reserved concurrent executions for the scale-up lambda function. A value of 0 disables lambda from being triggered and -1 removes any concurrency limitations."
userdata\_template: "Alternative user-data template, replacing the default template. By providing your own user\_data you have to take care of installing all required software, including the action runner. Variables userdata\_pre/post\_install are ignored."
enable\_jit\_config "Overwrite the default behavior for JIT configuration. By default JIT configuration is enabled for ephemeral runners and disabled for non-ephemeral runners. In case of GHES check first if the JIT config API is available. In case you are upgrading from 3.x to 4.x you can set `enable_jit_config` to `false` to avoid a breaking change when having your own AMI."
enable\_runner\_detailed\_monitoring: "Should detailed monitoring be enabled for the runner. Set this to true if you want to use detailed monitoring. See https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-cloudwatch-new.html for details."
enable\_cloudwatch\_agent: "Enabling the cloudwatch agent on the ec2 runner instances, the runner contains default config. Configuration can be overridden via `cloudwatch_config`."
cloudwatch\_config: "(optional) Replaces the module default cloudwatch log config. See https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-Configuration-File-Details.html for details."
userdata\_pre\_install: "Script to be ran before the GitHub Actions runner is installed on the EC2 instances"
userdata\_post\_install: "Script to be ran after the GitHub Actions runner is installed on the EC2 instances"
runner\_hook\_job\_started: "Script to be ran in the runner environment at the beginning of every job"
runner\_hook\_job\_completed: "Script to be ran in the runner environment at the end of every job"
runner\_ec2\_tags: "Map of tags that will be added to the launch template instance tag specifications."
runner\_iam\_role\_managed\_policy\_arns: "Attach AWS or customer-managed IAM policies (by ARN) to the runner IAM role"
vpc\_id: "The VPC for security groups of the action runners. If not set uses the value of `var.vpc_id`."
subnet\_ids: "List of subnets in which the action runners will be launched, the subnets needs to be subnets in the `vpc_id`. If not set, uses the value of `var.subnet_ids`."
idle\_config: "List of time period that can be defined as cron expression to keep a minimum amount of runners active instead of scaling down to 0. By defining this list you can ensure that in time periods that match the cron expression within 5 seconds a runner is kept idle."
runner\_log\_files: "(optional) Replaces the module default cloudwatch log config. See https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-Configuration-File-Details.html for details."
block\_device\_mappings: "The EC2 instance block device configuration. Takes the following keys: `device_name`, `delete_on_termination`, `volume_type`, `volume_size`, `encrypted`, `iops`, `throughput`, `kms_key_id`, `snapshot_id`."
job\_retry: "Experimental! Can be removed / changed without trigger a major release. Configure job retries. The configuration enables job retries (for ephemeral runners). After creating the instances a message will be published to a job retry queue. The job retry check lambda is checking after a delay if the job is queued. If not the message will be published again on the scale-up (build queue). Using this feature can impact the rate limit of the GitHub app."
pool\_config: "The configuration for updating the pool. The `pool_size` to adjust to by the events triggered by the `schedule_expression`. For example you can configure a cron expression for week days to adjust the pool to 10 and another expression for the weekend to adjust the pool to 1. Use `schedule_expression_timezone` to override the schedule time zone (defaults to UTC)."
}
matcherConfig: {
labelMatchers: "The list of list of labels supported by the runner configuration. `[[self-hosted, linux, x64, example]]`"
exactMatch: "If set to true all labels in the workflow job must match the GitHub labels (os, architecture and `self-hosted`). When false if __any__ workflow label matches it will trigger the webhook."
priority: "If set it defines the priority of the matcher, the matcher with the lowest priority will be evaluated first. Default is 999, allowed values 0-999."
}
redrive\_build\_queue: "Set options to attach (optional) a dead letter queue to the build queue, the queue between the webhook and the scale up lambda. You have the following options. 1. Disable by setting `enabled` to false. 2. Enable by setting `enabled` to `true`, `maxReceiveCount` to a number of max retries."
} |
map(object({
runner_config = object({
runner_os = string
runner_architecture = string
runner_metadata_options = optional(map(any), {
instance_metadata_tags = "enabled"
http_endpoint = "enabled"
http_tokens = "required"
http_put_response_hop_limit = 1
})
ami = optional(object({
filter = optional(map(list(string)), { state = ["available"] })
owners = optional(list(string), ["amazon"])
id_ssm_parameter_arn = optional(string, null)
kms_key_arn = optional(string, null)
}), null) # Defaults to null, in which case the module falls back to individual AMI variables (deprecated)
# Deprecated: Use ami object instead
ami_filter = optional(map(list(string)), { state = ["available"] })
ami_owners = optional(list(string), ["amazon"])
ami_id_ssm_parameter_name = optional(string, null)
ami_kms_key_arn = optional(string, "")
create_service_linked_role_spot = optional(bool, false)
credit_specification = optional(string, null)
delay_webhook_event = optional(number, 30)
disable_runner_autoupdate = optional(bool, false)
ebs_optimized = optional(bool, false)
enable_ephemeral_runners = optional(bool, false)
enable_job_queued_check = optional(bool, null)
enable_on_demand_failover_for_errors = optional(list(string), [])
enable_organization_runners = optional(bool, false)
enable_runner_binaries_syncer = optional(bool, true)
enable_ssm_on_runners = optional(bool, false)
enable_userdata = optional(bool, true)
instance_allocation_strategy = optional(string, "lowest-price")
instance_max_spot_price = optional(string, null)
instance_target_capacity_type = optional(string, "spot")
instance_types = list(string)
job_queue_retention_in_seconds = optional(number, 86400)
minimum_running_time_in_minutes = optional(number, null)
pool_runner_owner = optional(string, null)
runner_as_root = optional(bool, false)
runner_boot_time_in_minutes = optional(number, 5)
runner_disable_default_labels = optional(bool, false)
runner_extra_labels = optional(list(string), [])
runner_group_name = optional(string, "Default")
runner_name_prefix = optional(string, "")
runner_run_as = optional(string, "ec2-user")
runners_maximum_count = number
runner_additional_security_group_ids = optional(list(string), [])
scale_down_schedule_expression = optional(string, "cron(*/5 * * * ? *)")
scale_up_reserved_concurrent_executions = optional(number, 1)
userdata_template = optional(string, null)
userdata_content = optional(string, null)
enable_jit_config = optional(bool, null)
enable_runner_detailed_monitoring = optional(bool, false)
enable_cloudwatch_agent = optional(bool, true)
cloudwatch_config = optional(string, null)
userdata_pre_install = optional(string, "")
userdata_post_install = optional(string, "")
runner_hook_job_started = optional(string, "")
runner_hook_job_completed = optional(string, "")
runner_ec2_tags = optional(map(string), {})
runner_iam_role_managed_policy_arns = optional(list(string), [])
vpc_id = optional(string, null)
subnet_ids = optional(list(string), null)
idle_config = optional(list(object({
cron = string
timeZone = string
idleCount = number
evictionStrategy = optional(string, "oldest_first")
})), [])
cpu_options = optional(object({
core_count = number
threads_per_core = number
}), null)
runner_log_files = optional(list(object({
log_group_name = string
prefix_log_group = bool
file_path = string
log_stream_name = string
})), null)
block_device_mappings = optional(list(object({
delete_on_termination = optional(bool, true)
device_name = optional(string, "/dev/xvda")
encrypted = optional(bool, true)
iops = optional(number)
kms_key_id = optional(string)
snapshot_id = optional(string)
throughput = optional(number)
volume_size = number
volume_type = optional(string, "gp3")
})), [{
volume_size = 30
}])
pool_config = optional(list(object({
schedule_expression = string
schedule_expression_timezone = optional(string)
size = number
})), [])
job_retry = optional(object({
enable = optional(bool, false)
delay_in_seconds = optional(number, 300)
delay_backoff = optional(number, 2)
lambda_memory_size = optional(number, 256)
lambda_timeout = optional(number, 30)
max_attempts = optional(number, 1)
}), {})
})
matcherConfig = object({
labelMatchers = list(list(string))
exactMatch = optional(bool, false)
priority = optional(number, 999)
})
redrive_build_queue = optional(object({
enabled = bool
maxReceiveCount = number
}), {
enabled = false
maxReceiveCount = null
})
}))
| n/a | yes | +| [multi\_runner\_config](#input\_multi\_runner\_config) | multi\_runner\_config = {
runner\_config: {
runner\_os: "The EC2 Operating System type to use for action runner instances (linux,windows)."
runner\_architecture: "The platform architecture of the runner instance\_type."
runner\_metadata\_options: "(Optional) Metadata options for the ec2 runner instances."
ami: "(Optional) AMI configuration for the action runner instances. This object allows you to specify all AMI-related settings in one place."
create\_service\_linked\_role\_spot: (Optional) create the service-linked role for spot instances that is required by the scale-up lambda.
credit\_specification: "(Optional) The credit specification of the runner instance\_type. Can be unset, `standard` or `unlimited`."
delay\_webhook\_event: "The number of seconds the event accepted by the webhook is invisible on the queue before the scale up lambda will receive the event."
disable\_runner\_autoupdate: "Disable the auto update of the github runner agent. Be aware there is a grace period of 30 days, see also the [GitHub article](https://github.blog/changelog/2022-02-01-github-actions-self-hosted-runners-can-now-disable-automatic-updates/)"
ebs\_optimized: "The EC2 EBS optimized configuration."
enable\_ephemeral\_runners: "Enable ephemeral runners; runners will only be used once."
enable\_job\_queued\_check: "Only scale if the job event received by the scale up lambda is in the queued state. By default enabled for non-ephemeral runners and disabled for ephemeral runners. Set this variable to `true` or `false` to overwrite the default."
enable\_on\_demand\_failover\_for\_errors: "Enable on-demand failover. For example to fall back to on demand when no spot capacity is available the variable can be set to `InsufficientInstanceCapacity`. When not defined the default behavior is to retry later."
enable\_organization\_runners: "Register runners to organization, instead of repo level"
enable\_runner\_binaries\_syncer: "Option to disable the lambda to sync GitHub runner distribution, useful when using a pre-built AMI."
enable\_ssm\_on\_runners: "Enable to allow access to the runner instances for debugging purposes via SSM. Note that this adds additional permissions to the runner instances."
enable\_userdata: "Should the userdata script be enabled for the runner. Set this to false if you are using your own prebuilt AMI."
instance\_allocation\_strategy: "The allocation strategy for spot instances. AWS recommends using `capacity-optimized`; however, the AWS default is `lowest-price`."
instance\_max\_spot\_price: "Max price for spot instances per hour. This variable will be passed to the create fleet as max spot price for the fleet."
instance\_target\_capacity\_type: "Default lifecycle used for runner instances, can be either `spot` or `on-demand`."
instance\_types: "List of instance types for the action runner. Defaults are based on runner\_os (al2023 for linux and Windows Server Core for win)."
job\_queue\_retention\_in\_seconds: "The number of seconds the job is held in the queue before it is purged"
minimum\_running\_time\_in\_minutes: "The time an ec2 action runner should be running at minimum before being terminated if not busy."
pool\_runner\_owner: "The pool will deploy runners to the GitHub org ID; set this value to the org to which you want the runners deployed. Repo level is not supported."
runner\_additional\_security\_group\_ids: "List of additional security groups IDs to apply to the runner. If added outside the multi\_runner\_config block, the additional security group(s) will be applied to all runner configs. If added inside the multi\_runner\_config, the additional security group(s) will be applied to the individual runner."
runner\_as\_root: "Run the action runner under the root user. Variable `runner_run_as` will be ignored."
runner\_boot\_time\_in\_minutes: "The minimum time for an EC2 runner to boot and register as a runner."
runner\_disable\_default\_labels: "Disable default labels for the runners (os, architecture and `self-hosted`). If enabled, the runner will only have the extra labels provided in `runner_extra_labels`. In case your own start script is used, this configuration parameter needs to be parsed via SSM."
runner\_extra\_labels: "Extra (custom) labels for the runners (GitHub). Separate each label by a comma. Labels checks on the webhook can be enforced by setting `multi_runner_config.matcherConfig.exactMatch`. GitHub read-only labels should not be provided."
runner\_group\_name: "Name of the runner group."
runner\_name\_prefix: "Prefix for the GitHub runner name."
runner\_run\_as: "Run the GitHub actions agent as user."
runners\_maximum\_count: "The maximum number of runners that will be created. Setting the variable to `-1` disables the maximum check."
scale\_down\_schedule\_expression: "Scheduler expression to check every x for scale down."
scale\_up\_reserved\_concurrent\_executions: "Amount of reserved concurrent executions for the scale-up lambda function. A value of 0 disables lambda from being triggered and -1 removes any concurrency limitations."
userdata\_template: "Alternative user-data template, replacing the default template. By providing your own user\_data you have to take care of installing all required software, including the action runner. Variables userdata\_pre/post\_install are ignored."
enable\_jit\_config "Overwrite the default behavior for JIT configuration. By default JIT configuration is enabled for ephemeral runners and disabled for non-ephemeral runners. In case of GHES check first if the JIT config API is available. In case you are upgrading from 3.x to 4.x you can set `enable_jit_config` to `false` to avoid a breaking change when having your own AMI."
enable\_runner\_detailed\_monitoring: "Should detailed monitoring be enabled for the runner. Set this to true if you want to use detailed monitoring. See https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-cloudwatch-new.html for details."
enable\_cloudwatch\_agent: "Enable the cloudwatch agent on the ec2 runner instances; the runner contains a default config. Configuration can be overridden via `cloudwatch_config`."
cloudwatch\_config: "(optional) Replaces the module default cloudwatch log config. See https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-Configuration-File-Details.html for details."
userdata\_pre\_install: "Script to be run before the GitHub Actions runner is installed on the EC2 instances"
userdata\_post\_install: "Script to be run after the GitHub Actions runner is installed on the EC2 instances"
runner\_hook\_job\_started: "Script to be run in the runner environment at the beginning of every job"
runner\_hook\_job\_completed: "Script to be run in the runner environment at the end of every job"
runner\_ec2\_tags: "Map of tags that will be added to the launch template instance tag specifications."
runner\_iam\_role\_managed\_policy\_arns: "Attach AWS or customer-managed IAM policies (by ARN) to the runner IAM role"
vpc\_id: "The VPC for security groups of the action runners. If not set uses the value of `var.vpc_id`."
subnet\_ids: "List of subnets in which the action runners will be launched; the subnets need to be in the `vpc_id`. If not set, uses the value of `var.subnet_ids`."
idle\_config: "List of time periods, defined as cron expressions, to keep a minimum amount of runners active instead of scaling down to 0. By defining this list you can ensure that in time periods that match the cron expression a runner is kept idle within 5 seconds."
runner\_log\_files: "(optional) Replaces the module default cloudwatch log config. See https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-Configuration-File-Details.html for details."
block\_device\_mappings: "The EC2 instance block device configuration. Takes the following keys: `device_name`, `delete_on_termination`, `volume_type`, `volume_size`, `encrypted`, `iops`, `throughput`, `kms_key_id`, `snapshot_id`."
job\_retry: "Experimental! Can be removed / changed without trigger a major release. Configure job retries. The configuration enables job retries (for ephemeral runners). After creating the instances a message will be published to a job retry queue. The job retry check lambda is checking after a delay if the job is queued. If not the message will be published again on the scale-up (build queue). Using this feature can impact the rate limit of the GitHub app."
pool\_config: "The configuration for updating the pool. The `pool_size` to adjust to by the events triggered by the `schedule_expression`. For example you can configure a cron expression for week days to adjust the pool to 10 and another expression for the weekend to adjust the pool to 1. Use `schedule_expression_timezone` to override the schedule time zone (defaults to UTC)."
}
matcherConfig: {
labelMatchers: "The list of list of labels supported by the runner configuration. `[[self-hosted, linux, x64, example]]`"
exactMatch: "If set to true all labels in the workflow job must match the GitHub labels (os, architecture and `self-hosted`). When false if __any__ workflow label matches it will trigger the webhook."
priority: "If set it defines the priority of the matcher, the matcher with the lowest priority will be evaluated first. Default is 999, allowed values 0-999."
}
redrive\_build\_queue: "Set options to attach (optional) a dead letter queue to the build queue, the queue between the webhook and the scale up lambda. You have the following options. 1. Disable by setting `enabled` to false. 2. Enable by setting `enabled` to `true`, `maxReceiveCount` to a number of max retries."
} |
map(object({
runner_config = object({
runner_os = string
runner_architecture = string
runner_metadata_options = optional(map(any), {
instance_metadata_tags = "enabled"
http_endpoint = "enabled"
http_tokens = "required"
http_put_response_hop_limit = 1
})
ami = optional(object({
filter = optional(map(list(string)), { state = ["available"] })
owners = optional(list(string), ["amazon"])
id_ssm_parameter_arn = optional(string, null)
kms_key_arn = optional(string, null)
}), null)
create_service_linked_role_spot = optional(bool, false)
credit_specification = optional(string, null)
delay_webhook_event = optional(number, 30)
disable_runner_autoupdate = optional(bool, false)
ebs_optimized = optional(bool, false)
enable_ephemeral_runners = optional(bool, false)
enable_job_queued_check = optional(bool, null)
enable_on_demand_failover_for_errors = optional(list(string), [])
enable_organization_runners = optional(bool, false)
enable_runner_binaries_syncer = optional(bool, true)
enable_ssm_on_runners = optional(bool, false)
enable_userdata = optional(bool, true)
instance_allocation_strategy = optional(string, "lowest-price")
instance_max_spot_price = optional(string, null)
instance_target_capacity_type = optional(string, "spot")
instance_types = list(string)
job_queue_retention_in_seconds = optional(number, 86400)
minimum_running_time_in_minutes = optional(number, null)
pool_runner_owner = optional(string, null)
runner_as_root = optional(bool, false)
runner_boot_time_in_minutes = optional(number, 5)
runner_disable_default_labels = optional(bool, false)
runner_extra_labels = optional(list(string), [])
runner_group_name = optional(string, "Default")
runner_name_prefix = optional(string, "")
runner_run_as = optional(string, "ec2-user")
runners_maximum_count = number
runner_additional_security_group_ids = optional(list(string), [])
scale_down_schedule_expression = optional(string, "cron(*/5 * * * ? *)")
scale_up_reserved_concurrent_executions = optional(number, 1)
userdata_template = optional(string, null)
userdata_content = optional(string, null)
enable_jit_config = optional(bool, null)
enable_runner_detailed_monitoring = optional(bool, false)
enable_cloudwatch_agent = optional(bool, true)
cloudwatch_config = optional(string, null)
userdata_pre_install = optional(string, "")
userdata_post_install = optional(string, "")
runner_hook_job_started = optional(string, "")
runner_hook_job_completed = optional(string, "")
runner_ec2_tags = optional(map(string), {})
runner_iam_role_managed_policy_arns = optional(list(string), [])
vpc_id = optional(string, null)
subnet_ids = optional(list(string), null)
idle_config = optional(list(object({
cron = string
timeZone = string
idleCount = number
evictionStrategy = optional(string, "oldest_first")
})), [])
cpu_options = optional(object({
core_count = number
threads_per_core = number
}), null)
runner_log_files = optional(list(object({
log_group_name = string
prefix_log_group = bool
file_path = string
log_stream_name = string
})), null)
block_device_mappings = optional(list(object({
delete_on_termination = optional(bool, true)
device_name = optional(string, "/dev/xvda")
encrypted = optional(bool, true)
iops = optional(number)
kms_key_id = optional(string)
snapshot_id = optional(string)
throughput = optional(number)
volume_size = number
volume_type = optional(string, "gp3")
})), [{
volume_size = 30
}])
pool_config = optional(list(object({
schedule_expression = string
schedule_expression_timezone = optional(string)
size = number
})), [])
job_retry = optional(object({
enable = optional(bool, false)
delay_in_seconds = optional(number, 300)
delay_backoff = optional(number, 2)
lambda_memory_size = optional(number, 256)
lambda_timeout = optional(number, 30)
max_attempts = optional(number, 1)
}), {})
})
matcherConfig = object({
labelMatchers = list(list(string))
exactMatch = optional(bool, false)
priority = optional(number, 999)
})
redrive_build_queue = optional(object({
enabled = bool
maxReceiveCount = number
}), {
enabled = false
maxReceiveCount = null
})
}))
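Putting the schema above together, a minimal `multi_runner_config` entry might look like the following sketch (all values are illustrative, not defaults):

```hcl
multi_runner_config = {
  "linux-x64" = {
    matcherConfig = {
      labelMatchers = [["self-hosted", "linux", "x64", "example"]]
      exactMatch    = true
    }
    runner_config = {
      runner_os                = "linux"
      runner_architecture      = "x64"
      instance_types           = ["m5.large", "m5a.large"]
      runners_maximum_count    = 5
      enable_ephemeral_runners = true

      # Keep one runner idle on weekday office hours
      # (cron fields: second minute hour day-of-month month day-of-week).
      idle_config = [{
        cron      = "* * 9-17 * * 1-5"
        timeZone  = "UTC"
        idleCount = 1
      }]

      # Experimental job retry, as described above.
      job_retry = {
        enable           = true
        delay_in_seconds = 300
      }
    }
  }
}
```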
| n/a | yes | | [pool\_lambda\_reserved\_concurrent\_executions](#input\_pool\_lambda\_reserved\_concurrent\_executions) | Amount of reserved concurrent executions for the scale-up lambda function. A value of 0 disables lambda from being triggered and -1 removes any concurrency limitations. | `number` | `1` | no | | [pool\_lambda\_timeout](#input\_pool\_lambda\_timeout) | Time out for the pool lambda in seconds. | `number` | `60` | no | | [prefix](#input\_prefix) | The prefix used for naming resources | `string` | `"github-actions"` | no | @@ -195,7 +195,6 @@ module "multi-runner" { | Name | Description | |------|-------------| | [binaries\_syncer\_map](#output\_binaries\_syncer\_map) | n/a | -| [deprecated\_variables\_warning](#output\_deprecated\_variables\_warning) | Warning for deprecated variables usage. These variables will be removed in a future release. Please migrate to using the consolidated 'ami' object in each runner configuration. | | [instance\_termination\_handler](#output\_instance\_termination\_handler) | n/a | | [instance\_termination\_watcher](#output\_instance\_termination\_watcher) | n/a | | [runners\_map](#output\_runners\_map) | n/a | diff --git a/modules/multi-runner/outputs.tf b/modules/multi-runner/outputs.tf index 2f2b1d3458..7ce7171faf 100644 --- a/modules/multi-runner/outputs.tf +++ b/modules/multi-runner/outputs.tf @@ -67,23 +67,3 @@ output "instance_termination_handler" { lambda_role = module.instance_termination_watcher[0].spot_termination_handler.lambda_role } : null } - -output "deprecated_variables_warning" { - description = "Warning for deprecated variables usage. These variables will be removed in a future release. Please migrate to using the consolidated 'ami' object in each runner configuration." - value = join("", [ - for key, runner_config in var.multi_runner_config : ( - join("", [ - # Show object migration warning only when ami is null and old variables are used - try(runner_config.runner_config.ami, null) == null ? ( - (try(runner_config.runner_config.ami_filter, { state = ["available"] }) != { state = ["available"] } || - try(runner_config.runner_config.ami_owners, ["amazon"]) != ["amazon"] || - try(runner_config.runner_config.ami_kms_key_arn, "") != "") ? - "DEPRECATION WARNING: Runner '${key}' is using deprecated AMI variables (ami_filter, ami_owners, ami_kms_key_arn). These variables will be removed in a future version. Please migrate to using the consolidated 'ami' object.\n" : "" - ) : "", - # Always show warning for ami_id_ssm_parameter_name to migrate to ami_id_ssm_parameter_arn - try(runner_config.runner_config.ami_id_ssm_parameter_name, null) != null ? - "DEPRECATION WARNING: Runner '${key}' is using deprecated variable 'ami_id_ssm_parameter_name'. 
Please use 'ami.id_ssm_parameter_arn' instead.\n" : "" - ]) - ) - ]) -} diff --git a/modules/multi-runner/runners.tf b/modules/multi-runner/runners.tf index d58e61f6ac..7616c5c9e7 100644 --- a/modules/multi-runner/runners.tf +++ b/modules/multi-runner/runners.tf @@ -27,9 +27,6 @@ module "runners" { runner_architecture = each.value.runner_config.runner_architecture ami = each.value.runner_config.ami - ami_filter = each.value.runner_config.ami_filter - ami_owners = each.value.runner_config.ami_owners - ami_kms_key_arn = each.value.runner_config.ami_kms_key_arn sqs_build_queue = { "arn" : each.value.arn, "url" : each.value.url } github_app_parameters = local.github_app_parameters diff --git a/modules/multi-runner/variables.tf b/modules/multi-runner/variables.tf index 6ceab81ed6..df2c0729cd 100644 --- a/modules/multi-runner/variables.tf +++ b/modules/multi-runner/variables.tf @@ -70,12 +70,7 @@ variable "multi_runner_config" { owners = optional(list(string), ["amazon"]) id_ssm_parameter_arn = optional(string, null) kms_key_arn = optional(string, null) - }), null) # Defaults to null, in which case the module falls back to individual AMI variables (deprecated) - # Deprecated: Use ami object instead - ami_filter = optional(map(list(string)), { state = ["available"] }) - ami_owners = optional(list(string), ["amazon"]) - ami_id_ssm_parameter_name = optional(string, null) - ami_kms_key_arn = optional(string, "") + }), null) create_service_linked_role_spot = optional(bool, false) credit_specification = optional(string, null) delay_webhook_event = optional(number, 30) @@ -183,8 +178,6 @@ variable "multi_runner_config" { runner_architecture: "The platform architecture of the runner instance_type." runner_metadata_options: "(Optional) Metadata options for the ec2 runner instances." ami: "(Optional) AMI configuration for the action runner instances. This object allows you to specify all AMI-related settings in one place." - ami_filter: "(Optional) List of maps used to create the AMI filter for the action runner AMI. By default amazon linux 2 is used." - ami_owners: "(Optional) The list of owners used to select the AMI of action runner instances." create_service_linked_role_spot: (Optional) create the serviced linked role for spot instances that is required by the scale-up lambda. credit_specification: "(Optional) The credit specification of the runner instance_type. Can be unset, `standard` or `unlimited`. delay_webhook_event: "The number of seconds the event accepted by the webhook is invisible on the queue before the scale up lambda will receive the event." diff --git a/modules/runners/README.md b/modules/runners/README.md index 4ad4825113..f5d11c6c09 100644 --- a/modules/runners/README.md +++ b/modules/runners/README.md @@ -137,10 +137,6 @@ yarn run dist | Name | Description | Type | Default | Required | |------|-------------|------|---------|:--------:| | [ami](#input\_ami) | AMI configuration for the action runner instances. This object allows you to specify all AMI-related settings in one place.

Parameters:
- `filter`: Map of lists to filter AMIs by various criteria (e.g., { name = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-*"], state = ["available"] })
- `owners`: List of AMI owners to limit the search. Common values: ["amazon"], ["self"], or specific AWS account IDs
- `id_ssm_parameter_arn`: ARN of an SSM parameter containing the AMI ID. If specified, this overrides the AMI filter
- `kms_key_arn`: Optional KMS key ARN if the AMI is encrypted with a customer managed key

Defaults to null, in which case the module uses the default AMI for the configured runner OS. |
object({
filter = optional(map(list(string)), { state = ["available"] })
owners = optional(list(string), ["amazon"])
id_ssm_parameter_arn = optional(string, null)
kms_key_arn = optional(string, null)
})
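As an illustration, pinning runners to a specific Ubuntu AMI via the filter could look like this sketch (the name pattern is the example value from the parameter list above; the owner ID is given for illustration):

```hcl
ami = {
  filter = {
    name  = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-*"]
    state = ["available"]
  }
  owners = ["099720109477"] # Canonical's account ID (illustrative)
}
```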
| `null` | no | -| [ami\_filter](#input\_ami\_filter) | [DEPRECATED: Use ami.filter] Map of lists used to create the AMI filter for the action runner AMI. | `map(list(string))` |
{
"state": [
"available"
]
}
| no | -| [ami\_id\_ssm\_parameter\_name](#input\_ami\_id\_ssm\_parameter\_name) | [DEPRECATED: Use ami.id\_ssm\_parameter\_name] Externally managed SSM parameter (of data type aws:ec2:image) that contains the AMI ID to launch runner instances from. Overrides ami\_filter | `string` | `null` | no | -| [ami\_kms\_key\_arn](#input\_ami\_kms\_key\_arn) | [DEPRECATED: Use ami.kms\_key\_arn] Optional CMK Key ARN to be used to launch an instance from a shared encrypted AMI | `string` | `null` | no | -| [ami\_owners](#input\_ami\_owners) | [DEPRECATED: Use ami.owners] The list of owners used to select the AMI of action runner instances. | `list(string)` |
[
"amazon"
]
| no | | [associate\_public\_ipv4\_address](#input\_associate\_public\_ipv4\_address) | Associate public IPv4 with the runner. Only tested with IPv4 | `bool` | `false` | no | | [aws\_partition](#input\_aws\_partition) | (optional) partition for the base arn if not 'aws' | `string` | `"aws"` | no | | [aws\_region](#input\_aws\_region) | AWS region. | `string` | n/a | yes | diff --git a/modules/runners/main.tf b/modules/runners/main.tf index 68365f2280..3c27935206 100644 --- a/modules/runners/main.tf +++ b/modules/runners/main.tf @@ -37,16 +37,18 @@ locals { "linux" = "${path.module}/templates/start-runner.sh" } - # Handle AMI configuration from either the new object or old variables + # Handle AMI configuration ami_config = var.ami != null ? var.ami : { - filter = var.ami_filter - owners = var.ami_owners + filter = local.default_ami[var.runner_os] + owners = ["amazon"] id_ssm_parameter_arn = null - kms_key_arn = var.ami_kms_key_arn + kms_key_arn = null } ami_kms_key_arn = local.ami_config.kms_key_arn != null ? local.ami_config.kms_key_arn : "" ami_filter = merge(local.default_ami[var.runner_os], local.ami_config.filter) ami_id_ssm_module_managed = local.ami_config.id_ssm_parameter_arn == null + # Extract parameter name from ARN (format: arn:aws:ssm:region:account:parameter/path/to/param) + ami_id_ssm_parameter_name = local.ami_id_ssm_module_managed ? null : try(regex("parameter/(.+)$", local.ami_config.id_ssm_parameter_arn)[0], null) enable_job_queued_check = var.enable_job_queued_check == null ? !var.enable_ephemeral_runners : var.enable_job_queued_check diff --git a/modules/runners/policies-lambda-common.tf b/modules/runners/policies-lambda-common.tf index feb0d39fd9..0e9b2eace9 100644 --- a/modules/runners/policies-lambda-common.tf +++ b/modules/runners/policies-lambda-common.tf @@ -10,7 +10,7 @@ data "aws_iam_policy_document" "lambda_assume_role_policy" { } resource "aws_iam_policy" "ami_id_ssm_parameter_read" { - count = var.ami_id_ssm_parameter_name != null ? 1 : 0 + count = local.ami_id_ssm_parameter_name != null ? 1 : 0 name = "${var.prefix}-ami-id-ssm-parameter-read" path = local.role_path description = "Allows for reading ${var.prefix} GitHub runner AMI ID from an SSM parameter" @@ -25,7 +25,7 @@ resource "aws_iam_policy" "ami_id_ssm_parameter_read" { "ssm:GetParameter" ], "Resource": [ - "arn:${var.aws_partition}:ssm:${var.aws_region}:${data.aws_caller_identity.current.account_id}:parameter/${trimprefix(var.ami_id_ssm_parameter_name, "/")}" + "arn:${var.aws_partition}:ssm:${var.aws_region}:${data.aws_caller_identity.current.account_id}:parameter/${trimprefix(local.ami_id_ssm_parameter_name, "/")}" ] } ] diff --git a/modules/runners/pool.tf b/modules/runners/pool.tf index 2762008ebf..2019ebbc6f 100644 --- a/modules/runners/pool.tf +++ b/modules/runners/pool.tf @@ -53,8 +53,8 @@ module "pool" { subnet_ids = var.subnet_ids ssm_token_path = "${var.ssm_paths.root}/${var.ssm_paths.tokens}" ssm_config_path = "${var.ssm_paths.root}/${var.ssm_paths.config}" - ami_id_ssm_parameter_name = var.ami_id_ssm_parameter_name - ami_id_ssm_parameter_read_policy_arn = var.ami_id_ssm_parameter_name != null ? aws_iam_policy.ami_id_ssm_parameter_read[0].arn : null + ami_id_ssm_parameter_name = local.ami_id_ssm_parameter_name + ami_id_ssm_parameter_read_policy_arn = local.ami_id_ssm_parameter_name != null ? 
aws_iam_policy.ami_id_ssm_parameter_read[0].arn : null tags = local.tags lambda_tags = var.lambda_tags arn_ssm_parameters_path_config = local.arn_ssm_parameters_path_config diff --git a/modules/runners/scale-up.tf b/modules/runners/scale-up.tf index b1ea88652d..b97fefed4f 100644 --- a/modules/runners/scale-up.tf +++ b/modules/runners/scale-up.tf @@ -25,7 +25,7 @@ resource "aws_lambda_function" "scale_up" { architectures = [var.lambda_architecture] environment { variables = { - AMI_ID_SSM_PARAMETER_NAME = var.ami_id_ssm_parameter_name + AMI_ID_SSM_PARAMETER_NAME = local.ami_id_ssm_parameter_name DISABLE_RUNNER_AUTOUPDATE = var.disable_runner_autoupdate ENABLE_EPHEMERAL_RUNNERS = var.enable_ephemeral_runners ENABLE_JIT_CONFIG = var.enable_jit_config @@ -148,7 +148,7 @@ resource "aws_iam_role_policy_attachment" "scale_up_vpc_execution_role" { } resource "aws_iam_role_policy_attachment" "ami_id_ssm_parameter_read" { - count = var.ami_id_ssm_parameter_name != null ? 1 : 0 + count = local.ami_id_ssm_parameter_name != null ? 1 : 0 role = aws_iam_role.scale_up.name policy_arn = aws_iam_policy.ami_id_ssm_parameter_read[0].arn } diff --git a/modules/runners/variables.tf b/modules/runners/variables.tf index 846ddeafc6..a527c3e87d 100644 --- a/modules/runners/variables.tf +++ b/modules/runners/variables.tf @@ -135,34 +135,6 @@ variable "instance_types" { default = null } -variable "ami_filter" { - description = "[DEPRECATED: Use ami.filter] Map of lists used to create the AMI filter for the action runner AMI." - type = map(list(string)) - default = { state = ["available"] } - validation { - # check the availability of the AMI - condition = contains(keys(var.ami_filter), "state") - error_message = "The \"ami_filter\" variable must contain the \"state\" key with the value \"available\"." - } -} - -variable "ami_owners" { - description = "[DEPRECATED: Use ami.owners] The list of owners used to select the AMI of action runner instances." - type = list(string) - default = ["amazon"] -} - -variable "ami_id_ssm_parameter_name" { - description = "[DEPRECATED: Use ami.id_ssm_parameter_name] Externally managed SSM parameter (of data type aws:ec2:image) that contains the AMI ID to launch runner instances from. Overrides ami_filter" - type = string - default = null -} - -variable "ami_kms_key_arn" { - description = "[DEPRECATED: Use ami.kms_key_arn] Optional CMK Key ARN to be used to launch an instance from a shared encrypted AMI" - type = string - default = null -} variable "enable_userdata" { description = "Should the userdata script be enabled for the runner. Set this to false if you are using your own prebuilt AMI" diff --git a/outputs.tf b/outputs.tf index 84d1842256..fdf4a37801 100644 --- a/outputs.tf +++ b/outputs.tf @@ -75,16 +75,3 @@ output "instance_termination_handler" { lambda_role = module.instance_termination_watcher[0].spot_termination_handler.lambda_role } : null } - -output "deprecated_variables_warning" { - description = "Warning for deprecated variables usage. These variables will be removed in a future release. Please migrate to using the consolidated 'ami' object." - value = join("", [ - # Show object migration warning only when ami is null and old variables are used - var.ami == null ? join("", [ - (var.ami_filter != { state = ["available"] } || var.ami_owners != ["amazon"] || var.ami_kms_key_arn != null) ? - "DEPRECATION WARNING: You are using the deprecated AMI variables (ami_filter, ami_owners, ami_kms_key_arn). These variables will be removed in a future version. 
Please migrate to using the consolidated 'ami' object.\n" : "", - ]) : "", - # Always show warning for ami_id_ssm_parameter_name to migrate to ami_id_ssm_parameter_arn - var.ami_id_ssm_parameter_name != null ? "DEPRECATION WARNING: The variable 'ami_id_ssm_parameter_name' is deprecated and will be removed in a future version. Please use 'ami.id_ssm_parameter_arn' instead.\n" : "" - ]) -} diff --git a/variables.tf b/variables.tf index 17ea50bfcf..f0b23dfb39 100644 --- a/variables.tf +++ b/variables.tf @@ -393,35 +393,6 @@ EOT default = null } -variable "ami_filter" { - description = "[DEPRECATED: Use ami.filter] Map of lists used to create the AMI filter for the action runner AMI." - type = map(list(string)) - default = { state = ["available"] } - validation { - # check the availability of the AMI - condition = contains(keys(var.ami_filter), "state") - error_message = "The AMI filter must contain the state filter." - } -} - -variable "ami_owners" { - description = "[DEPRECATED: Use ami.owners] The list of owners that should be used to find the AMI." - type = list(string) - default = ["amazon"] -} - -variable "ami_id_ssm_parameter_name" { - description = "[DEPRECATED: Use ami.id_ssm_parameter_arn] String used to construct the SSM parameter name used to resolve the latest AMI ID for the runner instances. The SSM parameter should be of type String and contain a valid AMI ID. The default behavior is to use the latest Ubuntu 22.04 AMI." - type = string - default = null -} - -variable "ami_kms_key_arn" { - description = "[DEPRECATED: Use ami.kms_key_arn] Optional CMK Key ARN to be used to launch an instance from a shared encrypted AMI" - type = string - default = null -} - variable "lambda_s3_bucket" { description = "S3 bucket from which to specify lambda functions. This is an alternative to providing local files directly." type = string From 621cf5da0bff346543783797c3af4c7816cf29cc Mon Sep 17 00:00:00 2001 From: Ederson Brilhante Date: Tue, 9 Dec 2025 18:55:30 +0100 Subject: [PATCH 4/4] feat: add support to use placement group in launch template (#4929) ## Description This PR adds support for configuring EC2 placement groups for GitHub Actions runners in the multi-runner module. It plumbs a new placement option from the runner configuration through to the underlying EC2 runner module. ## Details Updated modules/multi-runner/runners.tf to pass placement = each.value.runner_config.placement into the runners module. This allows specifying AWS placement groups for EC2 runners, enabling tighter control over instance placement. The change is backwards compatible: if placement is unset in runner_config, behavior remains unchanged. ## Motivation / Future work Placement groups are a prerequisite for supporting macOS runners, which require a host_id. A follow-up PR will add explicit macOS support leveraging this new placement wiring. 
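As an illustration of the new wiring, a hypothetical configuration (the placement group resource and names below are examples, not part of this PR) could look like:

```hcl
# Hypothetical cluster placement group for the runners.
resource "aws_placement_group" "runners" {
  name     = "gh-runners"
  strategy = "cluster"
}

module "runners" {
  # ... source, version and required inputs elided ...

  # New: forwarded into the launch template's placement block.
  runner_placement = {
    group_name = aws_placement_group.runners.name
  }
}
```

For the macOS use case mentioned above, `host_id` (the ID of a dedicated host) would be set instead of `group_name`.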
--------- Co-authored-by: github-actions[bot] Co-authored-by: Niek Palm --- README.md | 1 + main.tf | 1 + modules/multi-runner/README.md | 2 +- modules/multi-runner/runners.tf | 1 + modules/multi-runner/variables.tf | 11 +++++++++++ modules/runners/README.md | 1 + modules/runners/main.tf | 15 +++++++++++++++ modules/runners/variables.tf | 16 ++++++++++++++++ variables.tf | 16 ++++++++++++++++ 9 files changed, 63 insertions(+), 1 deletion(-) diff --git a/README.md b/README.md index a3c55d33fd..1ca2e82e22 100644 --- a/README.md +++ b/README.md @@ -202,6 +202,7 @@ Join our discord community via [this invite link](https://discord.gg/bxgXW8jJGh) | [runner\_metadata\_options](#input\_runner\_metadata\_options) | Metadata options for the ec2 runner instances. By default, the module uses metadata tags for bootstrapping the runner, only disable `instance_metadata_tags` when using custom scripts for starting the runner. | `map(any)` |
{
"http_endpoint": "enabled",
"http_put_response_hop_limit": 1,
"http_tokens": "required",
"instance_metadata_tags": "enabled"
}
| no | | [runner\_name\_prefix](#input\_runner\_name\_prefix) | The prefix used for the GitHub runner name. The prefix will be used in the default start script to prefix the instance name when registering the runner in GitHub. The value is available via an EC2 tag 'ghr:runner\_name\_prefix'. | `string` | `""` | no | | [runner\_os](#input\_runner\_os) | The EC2 Operating System type to use for action runner instances (linux,windows). | `string` | `"linux"` | no | +| [runner\_placement](#input\_runner\_placement) | The placement options for the instance. See https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/launch_template#placement for details. |
object({
affinity = optional(string)
availability_zone = optional(string)
group_id = optional(string)
group_name = optional(string)
host_id = optional(string)
host_resource_group_arn = optional(string)
spread_domain = optional(string)
tenancy = optional(string)
partition_number = optional(number)
})
| `null` | no | | [runner\_run\_as](#input\_runner\_run\_as) | Run the GitHub actions agent as user. | `string` | `"ec2-user"` | no | | [runners\_ebs\_optimized](#input\_runners\_ebs\_optimized) | Enable EBS optimization for the runner instances. | `bool` | `false` | no | | [runners\_lambda\_s3\_key](#input\_runners\_lambda\_s3\_key) | S3 key for runners lambda function. Required if using S3 bucket to specify lambdas. | `string` | `null` | no | diff --git a/main.tf b/main.tf index c7ce20d158..a8c501bc9a 100644 --- a/main.tf +++ b/main.tf @@ -205,6 +205,7 @@ module "runners" { metadata_options = var.runner_metadata_options credit_specification = var.runner_credit_specification cpu_options = var.runner_cpu_options + placement = var.runner_placement enable_runner_binaries_syncer = var.enable_runner_binaries_syncer lambda_s3_bucket = var.lambda_s3_bucket diff --git a/modules/multi-runner/README.md b/modules/multi-runner/README.md index 58fd81ffed..7920092afa 100644 --- a/modules/multi-runner/README.md +++ b/modules/multi-runner/README.md @@ -150,7 +150,7 @@ module "multi-runner" { | [logging\_retention\_in\_days](#input\_logging\_retention\_in\_days) | Specifies the number of days you want to retain log events for the lambda log group. Possible values are: 0, 1, 3, 5, 7, 14, 30, 60, 90, 120, 150, 180, 365, 400, 545, 731, 1827, and 3653. | `number` | `180` | no | | [matcher\_config\_parameter\_store\_tier](#input\_matcher\_config\_parameter\_store\_tier) | The tier of the parameter store for the matcher configuration. Valid values are `Standard`, and `Advanced`. | `string` | `"Standard"` | no | | [metrics](#input\_metrics) | Configuration for metrics created by the module, by default metrics are disabled to avoid additional costs. When metrics are enable all metrics are created unless explicit configured otherwise. |
object({
enable = optional(bool, false)
namespace = optional(string, "GitHub Runners")
metric = optional(object({
enable_github_app_rate_limit = optional(bool, true)
enable_job_retry = optional(bool, true)
enable_spot_termination_warning = optional(bool, true)
}), {})
})
| `{}` | no | -| [multi\_runner\_config](#input\_multi\_runner\_config) | multi\_runner\_config = {
runner\_config: {
runner\_os: "The EC2 Operating System type to use for action runner instances (linux,windows)."
runner\_architecture: "The platform architecture of the runner instance\_type."
runner\_metadata\_options: "(Optional) Metadata options for the ec2 runner instances."
ami: "(Optional) AMI configuration for the action runner instances. This object allows you to specify all AMI-related settings in one place."
create\_service\_linked\_role\_spot: "(Optional) create the service linked role for spot instances that is required by the scale-up lambda."
credit\_specification: "(Optional) The credit specification of the runner instance\_type. Can be unset, `standard` or `unlimited`."
delay\_webhook\_event: "The number of seconds the event accepted by the webhook is invisible on the queue before the scale up lambda will receive the event."
disable\_runner\_autoupdate: "Disable the auto update of the github runner agent. Be aware there is a grace period of 30 days, see also the [GitHub article](https://github.blog/changelog/2022-02-01-github-actions-self-hosted-runners-can-now-disable-automatic-updates/)"
ebs\_optimized: "The EC2 EBS optimized configuration."
enable\_ephemeral\_runners: "Enable ephemeral runners, runners will only be used once."
enable\_job\_queued\_check: "Only scale if the job event received by the scale-up lambda is in the queued state. By default enabled for non-ephemeral runners and disabled for ephemeral runners. Set this variable to overwrite the default behavior."
enable\_on\_demand\_failover\_for\_errors: "Enable on-demand failover. For example to fall back to on demand when no spot capacity is available the variable can be set to `InsufficientInstanceCapacity`. When not defined the default behavior is to retry later."
enable\_organization\_runners: "Register runners to organization, instead of repo level"
enable\_runner\_binaries\_syncer: "Option to disable the lambda to sync GitHub runner distribution, useful when using a pre-built AMI."
enable\_ssm\_on\_runners: "Enable to allow access to the runner instances for debugging purposes via SSM. Note that this adds additional permissions to the runner instances."
enable\_userdata: "Should the userdata script be enabled for the runner. Set this to false if you are using your own prebuilt AMI."
instance\_allocation\_strategy: "The allocation strategy for spot instances. AWS recommends to use `capacity-optimized` however the AWS default is `lowest-price`."
instance\_max\_spot\_price: "Max price for spot instances per hour. This variable will be passed to the create fleet as max spot price for the fleet."
instance\_target\_capacity\_type: "Default lifecycle used for runner instances, can be either `spot` or `on-demand`."
instance\_types: "List of instance types for the action runner. Defaults are based on runner\_os (al2023 for linux and Windows Server Core for win)."
job\_queue\_retention\_in\_seconds: "The number of seconds the job is held in the queue before it is purged"
minimum\_running\_time\_in\_minutes: "The time an ec2 action runner should be running at minimum before being terminated if not busy."
pool\_runner\_owner: "The pool will deploy runners to the GitHub org ID, set this value to the org to which you want the runners deployed. Repo level is not supported."
runner\_additional\_security\_group\_ids: "List of additional security groups IDs to apply to the runner. If added outside the multi\_runner\_config block, the additional security group(s) will be applied to all runner configs. If added inside the multi\_runner\_config, the additional security group(s) will be applied to the individual runner."
runner\_as\_root: "Run the action runner under the root user. Variable `runner_run_as` will be ignored."
runner\_boot\_time\_in\_minutes: "The minimum time for an EC2 runner to boot and register as a runner."
runner\_disable\_default\_labels: "Disable default labels for the runners (os, architecture and `self-hosted`). If enabled, the runner will only have the extra labels provided in `runner_extra_labels`. In case your own start script is used, this configuration parameter needs to be parsed via SSM."
runner\_extra\_labels: "Extra (custom) labels for the runners (GitHub). Separate each label by a comma. Label checks on the webhook can be enforced by setting `multi_runner_config.matcherConfig.exactMatch`. GitHub read-only labels should not be provided."
runner\_group\_name: "Name of the runner group."
runner\_name\_prefix: "Prefix for the GitHub runner name."
runner\_run\_as: "Run the GitHub actions agent as user."
runners\_maximum\_count: "The maximum number of runners that will be created. Setting the variable to `-1` disables the maximum check."
scale\_down\_schedule\_expression: "Scheduler expression defining how often to check for runners to scale down."
scale\_up\_reserved\_concurrent\_executions: "Amount of reserved concurrent executions for the scale-up lambda function. A value of 0 disables lambda from being triggered and -1 removes any concurrency limitations."
userdata\_template: "Alternative user-data template, replacing the default template. By providing your own user\_data you have to take care of installing all required software, including the action runner. Variables userdata\_pre/post\_install are ignored."
enable\_jit\_config "Overwrite the default behavior for JIT configuration. By default JIT configuration is enabled for ephemeral runners and disabled for non-ephemeral runners. In case of GHES check first if the JIT config API is available. In case you are upgrading from 3.x to 4.x you can set `enable_jit_config` to `false` to avoid a breaking change when having your own AMI."
enable\_runner\_detailed\_monitoring: "Should detailed monitoring be enabled for the runner. Set this to true if you want to use detailed monitoring. See https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-cloudwatch-new.html for details."
enable\_cloudwatch\_agent: "Enable the cloudwatch agent on the ec2 runner instances. The runner contains a default config, which can be overridden via `cloudwatch_config`."
cloudwatch\_config: "(optional) Replaces the module default cloudwatch log config. See https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-Configuration-File-Details.html for details."
userdata\_pre\_install: "Script to be run before the GitHub Actions runner is installed on the EC2 instances"
userdata\_post\_install: "Script to be run after the GitHub Actions runner is installed on the EC2 instances"
runner\_hook\_job\_started: "Script to be run in the runner environment at the beginning of every job"
runner\_hook\_job\_completed: "Script to be run in the runner environment at the end of every job"
runner\_ec2\_tags: "Map of tags that will be added to the launch template instance tag specifications."
runner\_iam\_role\_managed\_policy\_arns: "Attach AWS or customer-managed IAM policies (by ARN) to the runner IAM role"
vpc\_id: "The VPC for security groups of the action runners. If not set uses the value of `var.vpc_id`."
subnet\_ids: "List of subnets in which the action runners will be launched. The subnets need to be in the `vpc_id`. If not set, uses the value of `var.subnet_ids`."
idle\_config: "List of time periods, defined as cron expressions, during which a minimum amount of runners is kept active instead of scaling down to 0. By defining this list you can ensure that a runner is kept idle in time periods that match the cron expression (matched within 5 seconds)."
runner\_log\_files: "(optional) Replaces the module default cloudwatch log config. See https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-Configuration-File-Details.html for details."
block\_device\_mappings: "The EC2 instance block device configuration. Takes the following keys: `device_name`, `delete_on_termination`, `volume_type`, `volume_size`, `encrypted`, `iops`, `throughput`, `kms_key_id`, `snapshot_id`."
job\_retry: "Experimental! Can be removed / changed without triggering a major release. Configure job retries. The configuration enables job retries (for ephemeral runners). After creating the instances a message will be published to a job retry queue. After a delay the job retry check lambda checks whether the job is still queued; if so, the message will be published again on the scale-up (build) queue. Using this feature can impact the rate limit of the GitHub app."
pool\_config: "The configuration for updating the pool. The `size` is the number of runners the pool is adjusted to on the events triggered by the `schedule_expression`. For example you can configure a cron expression for week days to adjust the pool to 10 and another expression for the weekend to adjust the pool to 1. Use `schedule_expression_timezone` to override the schedule time zone (defaults to UTC)."
}
matcherConfig: {
labelMatchers: "The list of lists of labels supported by the runner configuration. `[[self-hosted, linux, x64, example]]`"
exactMatch: "If set to true all labels in the workflow job must match the GitHub labels (os, architecture and `self-hosted`). When false if __any__ workflow label matches it will trigger the webhook."
priority: "If set it defines the priority of the matcher, the matcher with the lowest priority will be evaluated first. Default is 999, allowed values 0-999."
}
redrive\_build\_queue: "Set options to attach an optional dead letter queue to the build queue, the queue between the webhook and the scale-up lambda. You have the following options: 1. Disable by setting `enabled` to `false`. 2. Enable by setting `enabled` to `true` and `maxReceiveCount` to the maximum number of retries."
} |
map(object({
runner_config = object({
runner_os = string
runner_architecture = string
runner_metadata_options = optional(map(any), {
instance_metadata_tags = "enabled"
http_endpoint = "enabled"
http_tokens = "required"
http_put_response_hop_limit = 1
})
ami = optional(object({
filter = optional(map(list(string)), { state = ["available"] })
owners = optional(list(string), ["amazon"])
id_ssm_parameter_arn = optional(string, null)
kms_key_arn = optional(string, null)
}), null)
create_service_linked_role_spot = optional(bool, false)
credit_specification = optional(string, null)
delay_webhook_event = optional(number, 30)
disable_runner_autoupdate = optional(bool, false)
ebs_optimized = optional(bool, false)
enable_ephemeral_runners = optional(bool, false)
enable_job_queued_check = optional(bool, null)
enable_on_demand_failover_for_errors = optional(list(string), [])
enable_organization_runners = optional(bool, false)
enable_runner_binaries_syncer = optional(bool, true)
enable_ssm_on_runners = optional(bool, false)
enable_userdata = optional(bool, true)
instance_allocation_strategy = optional(string, "lowest-price")
instance_max_spot_price = optional(string, null)
instance_target_capacity_type = optional(string, "spot")
instance_types = list(string)
job_queue_retention_in_seconds = optional(number, 86400)
minimum_running_time_in_minutes = optional(number, null)
pool_runner_owner = optional(string, null)
runner_as_root = optional(bool, false)
runner_boot_time_in_minutes = optional(number, 5)
runner_disable_default_labels = optional(bool, false)
runner_extra_labels = optional(list(string), [])
runner_group_name = optional(string, "Default")
runner_name_prefix = optional(string, "")
runner_run_as = optional(string, "ec2-user")
runners_maximum_count = number
runner_additional_security_group_ids = optional(list(string), [])
scale_down_schedule_expression = optional(string, "cron(*/5 * * * ? *)")
scale_up_reserved_concurrent_executions = optional(number, 1)
userdata_template = optional(string, null)
userdata_content = optional(string, null)
enable_jit_config = optional(bool, null)
enable_runner_detailed_monitoring = optional(bool, false)
enable_cloudwatch_agent = optional(bool, true)
cloudwatch_config = optional(string, null)
userdata_pre_install = optional(string, "")
userdata_post_install = optional(string, "")
runner_hook_job_started = optional(string, "")
runner_hook_job_completed = optional(string, "")
runner_ec2_tags = optional(map(string), {})
runner_iam_role_managed_policy_arns = optional(list(string), [])
vpc_id = optional(string, null)
subnet_ids = optional(list(string), null)
idle_config = optional(list(object({
cron = string
timeZone = string
idleCount = number
evictionStrategy = optional(string, "oldest_first")
})), [])
cpu_options = optional(object({
core_count = number
threads_per_core = number
}), null)
runner_log_files = optional(list(object({
log_group_name = string
prefix_log_group = bool
file_path = string
log_stream_name = string
})), null)
block_device_mappings = optional(list(object({
delete_on_termination = optional(bool, true)
device_name = optional(string, "/dev/xvda")
encrypted = optional(bool, true)
iops = optional(number)
kms_key_id = optional(string)
snapshot_id = optional(string)
throughput = optional(number)
volume_size = number
volume_type = optional(string, "gp3")
})), [{
volume_size = 30
}])
pool_config = optional(list(object({
schedule_expression = string
schedule_expression_timezone = optional(string)
size = number
})), [])
job_retry = optional(object({
enable = optional(bool, false)
delay_in_seconds = optional(number, 300)
delay_backoff = optional(number, 2)
lambda_memory_size = optional(number, 256)
lambda_timeout = optional(number, 30)
max_attempts = optional(number, 1)
}), {})
})
matcherConfig = object({
labelMatchers = list(list(string))
exactMatch = optional(bool, false)
priority = optional(number, 999)
})
redrive_build_queue = optional(object({
enabled = bool
maxReceiveCount = number
}), {
enabled = false
maxReceiveCount = null
})
}))
| n/a | yes | +| [multi\_runner\_config](#input\_multi\_runner\_config) | multi\_runner\_config = {
runner\_config: {
runner\_os: "The EC2 Operating System type to use for action runner instances (linux,windows)."
runner\_architecture: "The platform architecture of the runner instance\_type."
runner\_metadata\_options: "(Optional) Metadata options for the ec2 runner instances."
ami: "(Optional) AMI configuration for the action runner instances. This object allows you to specify all AMI-related settings in one place."
create\_service\_linked\_role\_spot: "(Optional) create the service linked role for spot instances that is required by the scale-up lambda."
credit\_specification: "(Optional) The credit specification of the runner instance\_type. Can be unset, `standard` or `unlimited`."
delay\_webhook\_event: "The number of seconds the event accepted by the webhook is invisible on the queue before the scale up lambda will receive the event."
disable\_runner\_autoupdate: "Disable the auto update of the github runner agent. Be aware there is a grace period of 30 days, see also the [GitHub article](https://github.blog/changelog/2022-02-01-github-actions-self-hosted-runners-can-now-disable-automatic-updates/)"
ebs\_optimized: "The EC2 EBS optimized configuration."
enable\_ephemeral\_runners: "Enable ephemeral runners, runners will only be used once."
enable\_job\_queued\_check: "Only scale if the job event received by the scale-up lambda is in the queued state. By default enabled for non-ephemeral runners and disabled for ephemeral runners. Set this variable to overwrite the default behavior."
enable\_on\_demand\_failover\_for\_errors: "Enable on-demand failover. For example to fall back to on demand when no spot capacity is available the variable can be set to `InsufficientInstanceCapacity`. When not defined the default behavior is to retry later."
enable\_organization\_runners: "Register runners to organization, instead of repo level"
enable\_runner\_binaries\_syncer: "Option to disable the lambda to sync GitHub runner distribution, useful when using a pre-built AMI."
enable\_ssm\_on\_runners: "Enable to allow access to the runner instances for debugging purposes via SSM. Note that this adds additional permissions to the runner instances."
enable\_userdata: "Should the userdata script be enabled for the runner. Set this to false if you are using your own prebuilt AMI."
instance\_allocation\_strategy: "The allocation strategy for spot instances. AWS recommends to use `capacity-optimized` however the AWS default is `lowest-price`."
instance\_max\_spot\_price: "Max price for spot instances per hour. This variable will be passed to the create fleet as max spot price for the fleet."
instance\_target\_capacity\_type: "Default lifecycle used for runner instances, can be either `spot` or `on-demand`."
instance\_types: "List of instance types for the action runner. Defaults are based on runner\_os (al2023 for linux and Windows Server Core for win)."
job\_queue\_retention\_in\_seconds: "The number of seconds the job is held in the queue before it is purged"
minimum\_running\_time\_in\_minutes: "The time an ec2 action runner should be running at minimum before being terminated if not busy."
pool\_runner\_owner: "The pool will deploy runners to the GitHub org ID, set this value to the org to which you want the runners deployed. Repo level is not supported."
runner\_additional\_security\_group\_ids: "List of additional security groups IDs to apply to the runner. If added outside the multi\_runner\_config block, the additional security group(s) will be applied to all runner configs. If added inside the multi\_runner\_config, the additional security group(s) will be applied to the individual runner."
runner\_as\_root: "Run the action runner under the root user. Variable `runner_run_as` will be ignored."
runner\_boot\_time\_in\_minutes: "The minimum time for an EC2 runner to boot and register as a runner."
runner\_disable\_default\_labels: "Disable default labels for the runners (os, architecture and `self-hosted`). If enabled, the runner will only have the extra labels provided in `runner_extra_labels`. In case your own start script is used, this configuration parameter needs to be parsed via SSM."
runner\_extra\_labels: "Extra (custom) labels for the runners (GitHub). Separate each label by a comma. Label checks on the webhook can be enforced by setting `multi_runner_config.matcherConfig.exactMatch`. GitHub read-only labels should not be provided."
runner\_group\_name: "Name of the runner group."
runner\_name\_prefix: "Prefix for the GitHub runner name."
runner\_run\_as: "Run the GitHub actions agent as user."
runners\_maximum\_count: "The maximum number of runners that will be created. Setting the variable to `-1` disables the maximum check."
scale\_down\_schedule\_expression: "Scheduler expression defining how often to check for runners to scale down."
scale\_up\_reserved\_concurrent\_executions: "Amount of reserved concurrent executions for the scale-up lambda function. A value of 0 disables lambda from being triggered and -1 removes any concurrency limitations."
userdata\_template: "Alternative user-data template, replacing the default template. By providing your own user\_data you have to take care of installing all required software, including the action runner. Variables userdata\_pre/post\_install are ignored."
enable\_jit\_config "Overwrite the default behavior for JIT configuration. By default JIT configuration is enabled for ephemeral runners and disabled for non-ephemeral runners. In case of GHES check first if the JIT config API is available. In case you are upgrading from 3.x to 4.x you can set `enable_jit_config` to `false` to avoid a breaking change when having your own AMI."
enable\_runner\_detailed\_monitoring: "Should detailed monitoring be enabled for the runner. Set this to true if you want to use detailed monitoring. See https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-cloudwatch-new.html for details."
enable\_cloudwatch\_agent: "Enable the cloudwatch agent on the ec2 runner instances. The runner contains a default config, which can be overridden via `cloudwatch_config`."
cloudwatch\_config: "(optional) Replaces the module default cloudwatch log config. See https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-Configuration-File-Details.html for details."
userdata\_pre\_install: "Script to be run before the GitHub Actions runner is installed on the EC2 instances"
userdata\_post\_install: "Script to be run after the GitHub Actions runner is installed on the EC2 instances"
runner\_hook\_job\_started: "Script to be run in the runner environment at the beginning of every job"
runner\_hook\_job\_completed: "Script to be run in the runner environment at the end of every job"
runner\_ec2\_tags: "Map of tags that will be added to the launch template instance tag specifications."
runner\_iam\_role\_managed\_policy\_arns: "Attach AWS or customer-managed IAM policies (by ARN) to the runner IAM role"
vpc\_id: "The VPC for security groups of the action runners. If not set uses the value of `var.vpc_id`."
subnet\_ids: "List of subnets in which the action runners will be launched. The subnets need to be in the `vpc_id`. If not set, uses the value of `var.subnet_ids`."
idle\_config: "List of time periods, defined as cron expressions, during which a minimum amount of runners is kept active instead of scaling down to 0. By defining this list you can ensure that a runner is kept idle in time periods that match the cron expression (matched within 5 seconds)."
runner\_log\_files: "(optional) Replaces the module default cloudwatch log config. See https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-Agent-Configuration-File-Details.html for details."
block\_device\_mappings: "The EC2 instance block device configuration. Takes the following keys: `device_name`, `delete_on_termination`, `volume_type`, `volume_size`, `encrypted`, `iops`, `throughput`, `kms_key_id`, `snapshot_id`."
job\_retry: "Experimental! Can be removed / changed without triggering a major release. Configure job retries. The configuration enables job retries (for ephemeral runners). After creating the instances a message will be published to a job retry queue. After a delay the job retry check lambda checks whether the job is still queued; if so, the message will be published again on the scale-up (build) queue. Using this feature can impact the rate limit of the GitHub app."
pool\_config: "The configuration for updating the pool. The `size` is the number of runners the pool is adjusted to on the events triggered by the `schedule_expression`. For example you can configure a cron expression for week days to adjust the pool to 10 and another expression for the weekend to adjust the pool to 1. Use `schedule_expression_timezone` to override the schedule time zone (defaults to UTC)."
}
matcherConfig: {
labelMatchers: "The list of lists of labels supported by the runner configuration. `[[self-hosted, linux, x64, example]]`"
exactMatch: "If set to true all labels in the workflow job must match the GitHub labels (os, architecture and `self-hosted`). When false if __any__ workflow label matches it will trigger the webhook."
priority: "If set it defines the priority of the matcher, the matcher with the lowest priority will be evaluated first. Default is 999, allowed values 0-999."
}
redrive\_build\_queue: "Set options to attach an optional dead letter queue to the build queue, the queue between the webhook and the scale-up lambda. You have the following options: 1. Disable by setting `enabled` to `false`. 2. Enable by setting `enabled` to `true` and `maxReceiveCount` to the maximum number of retries."
} |
map(object({
runner_config = object({
runner_os = string
runner_architecture = string
runner_metadata_options = optional(map(any), {
instance_metadata_tags = "enabled"
http_endpoint = "enabled"
http_tokens = "required"
http_put_response_hop_limit = 1
})
ami = optional(object({
filter = optional(map(list(string)), { state = ["available"] })
owners = optional(list(string), ["amazon"])
id_ssm_parameter_arn = optional(string, null)
kms_key_arn = optional(string, null)
}), null)
create_service_linked_role_spot = optional(bool, false)
credit_specification = optional(string, null)
delay_webhook_event = optional(number, 30)
disable_runner_autoupdate = optional(bool, false)
ebs_optimized = optional(bool, false)
enable_ephemeral_runners = optional(bool, false)
enable_job_queued_check = optional(bool, null)
enable_on_demand_failover_for_errors = optional(list(string), [])
enable_organization_runners = optional(bool, false)
enable_runner_binaries_syncer = optional(bool, true)
enable_ssm_on_runners = optional(bool, false)
enable_userdata = optional(bool, true)
instance_allocation_strategy = optional(string, "lowest-price")
instance_max_spot_price = optional(string, null)
instance_target_capacity_type = optional(string, "spot")
instance_types = list(string)
job_queue_retention_in_seconds = optional(number, 86400)
minimum_running_time_in_minutes = optional(number, null)
pool_runner_owner = optional(string, null)
runner_as_root = optional(bool, false)
runner_boot_time_in_minutes = optional(number, 5)
runner_disable_default_labels = optional(bool, false)
runner_extra_labels = optional(list(string), [])
runner_group_name = optional(string, "Default")
runner_name_prefix = optional(string, "")
runner_run_as = optional(string, "ec2-user")
runners_maximum_count = number
runner_additional_security_group_ids = optional(list(string), [])
scale_down_schedule_expression = optional(string, "cron(*/5 * * * ? *)")
scale_up_reserved_concurrent_executions = optional(number, 1)
userdata_template = optional(string, null)
userdata_content = optional(string, null)
enable_jit_config = optional(bool, null)
enable_runner_detailed_monitoring = optional(bool, false)
enable_cloudwatch_agent = optional(bool, true)
cloudwatch_config = optional(string, null)
userdata_pre_install = optional(string, "")
userdata_post_install = optional(string, "")
runner_hook_job_started = optional(string, "")
runner_hook_job_completed = optional(string, "")
runner_ec2_tags = optional(map(string), {})
runner_iam_role_managed_policy_arns = optional(list(string), [])
vpc_id = optional(string, null)
subnet_ids = optional(list(string), null)
idle_config = optional(list(object({
cron = string
timeZone = string
idleCount = number
evictionStrategy = optional(string, "oldest_first")
})), [])
cpu_options = optional(object({
core_count = number
threads_per_core = number
}), null)
placement = optional(object({
affinity = optional(string)
availability_zone = optional(string)
group_id = optional(string)
group_name = optional(string)
host_id = optional(string)
host_resource_group_arn = optional(string)
spread_domain = optional(string)
tenancy = optional(string)
partition_number = optional(number)
}), null)
runner_log_files = optional(list(object({
log_group_name = string
prefix_log_group = bool
file_path = string
log_stream_name = string
})), null)
block_device_mappings = optional(list(object({
delete_on_termination = optional(bool, true)
device_name = optional(string, "/dev/xvda")
encrypted = optional(bool, true)
iops = optional(number)
kms_key_id = optional(string)
snapshot_id = optional(string)
throughput = optional(number)
volume_size = number
volume_type = optional(string, "gp3")
})), [{
volume_size = 30
}])
pool_config = optional(list(object({
schedule_expression = string
schedule_expression_timezone = optional(string)
size = number
})), [])
job_retry = optional(object({
enable = optional(bool, false)
delay_in_seconds = optional(number, 300)
delay_backoff = optional(number, 2)
lambda_memory_size = optional(number, 256)
lambda_timeout = optional(number, 30)
max_attempts = optional(number, 1)
}), {})
})
matcherConfig = object({
labelMatchers = list(list(string))
exactMatch = optional(bool, false)
priority = optional(number, 999)
})
redrive_build_queue = optional(object({
enabled = bool
maxReceiveCount = number
}), {
enabled = false
maxReceiveCount = null
})
}))
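Concretely, the new `placement` attribute added above slots into a runner configuration as in this sketch (values are illustrative; the dedicated host ID is hypothetical):

```hcl
runner_config = {
  runner_os             = "linux"
  runner_architecture   = "x64"
  instance_types        = ["mac2.metal"]
  runners_maximum_count = 1

  # Pin the runner to a dedicated host, as needed for macOS instances.
  placement = {
    tenancy = "host"
    host_id = "h-0123456789abcdef0" # hypothetical dedicated host
  }
}
```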
| n/a | yes | | [pool\_lambda\_reserved\_concurrent\_executions](#input\_pool\_lambda\_reserved\_concurrent\_executions) | Amount of reserved concurrent executions for the scale-up lambda function. A value of 0 disables lambda from being triggered and -1 removes any concurrency limitations. | `number` | `1` | no | | [pool\_lambda\_timeout](#input\_pool\_lambda\_timeout) | Time out for the pool lambda in seconds. | `number` | `60` | no | | [prefix](#input\_prefix) | The prefix used for naming resources | `string` | `"github-actions"` | no | diff --git a/modules/multi-runner/runners.tf b/modules/multi-runner/runners.tf index 7616c5c9e7..ea8fb17dce 100644 --- a/modules/multi-runner/runners.tf +++ b/modules/multi-runner/runners.tf @@ -54,6 +54,7 @@ module "runners" { metadata_options = each.value.runner_config.runner_metadata_options credit_specification = each.value.runner_config.credit_specification cpu_options = each.value.runner_config.cpu_options + placement = each.value.runner_config.placement enable_runner_binaries_syncer = each.value.runner_config.enable_runner_binaries_syncer lambda_s3_bucket = var.lambda_s3_bucket diff --git a/modules/multi-runner/variables.tf b/modules/multi-runner/variables.tf index df2c0729cd..0ca473ecf2 100644 --- a/modules/multi-runner/variables.tf +++ b/modules/multi-runner/variables.tf @@ -125,6 +125,17 @@ variable "multi_runner_config" { core_count = number threads_per_core = number }), null) + placement = optional(object({ + affinity = optional(string) + availability_zone = optional(string) + group_id = optional(string) + group_name = optional(string) + host_id = optional(string) + host_resource_group_arn = optional(number) + spread_domain = optional(string) + tenancy = optional(string) + partition_number = optional(number) + }), null) runner_log_files = optional(list(object({ log_group_name = string prefix_log_group = bool diff --git a/modules/runners/README.md b/modules/runners/README.md index f5d11c6c09..9bb3a6f4e6 100644 --- a/modules/runners/README.md +++ b/modules/runners/README.md @@ -192,6 +192,7 @@ yarn run dist | [metrics](#input\_metrics) | Configuration for metrics created by the module, by default metrics are disabled to avoid additional costs. When metrics are enable all metrics are created unless explicit configured otherwise. |
object({
enable = optional(bool, false)
namespace = optional(string, "GitHub Runners")
metric = optional(object({
enable_github_app_rate_limit = optional(bool, true)
enable_job_retry = optional(bool, true)
enable_spot_termination_warning = optional(bool, true)
}), {})
})
 | `{}` | no |
| [minimum\_running\_time\_in\_minutes](#input\_minimum\_running\_time\_in\_minutes) | The minimum time an ec2 action runner should be running before being terminated, if not busy. If not set, the default is calculated based on the OS. | `number` | `null` | no |
| [overrides](#input\_overrides) | This map provides the possibility to override some defaults. The following attributes are supported: `name_sg` overrides the `Name` tag for all security groups created by this module. `name_runner_agent_instance` overrides the `Name` tag for the ec2 instance defined in the auto launch configuration. `name_docker_machine_runners` overrides the `Name` tag for spot instances created by the runner agent. | `map(string)` |
{
"name_runner": "",
"name_sg": ""
}
 | no |
+| [placement](#input\_placement) | The placement options for the instance. See https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/launch_template#placement for details. |
object({
affinity = optional(string)
availability_zone = optional(string)
group_id = optional(string)
group_name = optional(string)
host_id = optional(string)
host_resource_group_arn = optional(string)
spread_domain = optional(string)
tenancy = optional(string)
partition_number = optional(number)
})
 | `null` | no |
| [pool\_config](#input\_pool\_config) | The configuration for updating the pool. The `pool_size` is adjusted at the times triggered by the `schedule_expression`. For example, you can configure a cron expression for weekdays to adjust the pool to 10 and another expression for the weekend to adjust the pool to 1. Use `schedule_expression_timezone` to override the schedule time zone (defaults to UTC). |
list(object({
schedule_expression = string
schedule_expression_timezone = optional(string)
size = number
}))
 | `[]` | no |
| [pool\_lambda\_memory\_size](#input\_pool\_lambda\_memory\_size) | Lambda memory size limit in MB for the pool lambda | `number` | `512` | no |
| [pool\_lambda\_reserved\_concurrent\_executions](#input\_pool\_lambda\_reserved\_concurrent\_executions) | Amount of reserved concurrent executions for the pool lambda function. A value of 0 disables the lambda from being triggered and -1 removes any concurrency limitations. | `number` | `1` | no |
diff --git a/modules/runners/main.tf b/modules/runners/main.tf
index 3c27935206..5522c5fb45 100644
--- a/modules/runners/main.tf
+++ b/modules/runners/main.tf
@@ -171,6 +171,21 @@ resource "aws_launch_template" "runner" {
    }
  }

+  dynamic "placement" {
+    for_each = var.placement != null ? [var.placement] : []
+    content {
+      affinity                = try(placement.value.affinity, null)
+      availability_zone       = try(placement.value.availability_zone, null)
+      group_id                = try(placement.value.group_id, null)
+      group_name              = try(placement.value.group_name, null)
+      host_id                 = try(placement.value.host_id, null)
+      host_resource_group_arn = try(placement.value.host_resource_group_arn, null)
+      spread_domain           = try(placement.value.spread_domain, null)
+      tenancy                 = try(placement.value.tenancy, null)
+      partition_number        = try(placement.value.partition_number, null)
+    }
+  }
+
  monitoring {
    enabled = var.enable_runner_detailed_monitoring
  }
diff --git a/modules/runners/variables.tf b/modules/runners/variables.tf
index a527c3e87d..8c8a1a136c 100644
--- a/modules/runners/variables.tf
+++ b/modules/runners/variables.tf
@@ -643,6 +643,22 @@ variable "cpu_options" {
  default = null
}

+variable "placement" {
+  description = "The placement options for the instance. See https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/launch_template#placement for details."
+  type = object({
+    affinity                = optional(string)
+    availability_zone       = optional(string)
+    group_id                = optional(string)
+    group_name              = optional(string)
+    host_id                 = optional(string)
+    host_resource_group_arn = optional(string)
+    spread_domain           = optional(string)
+    tenancy                 = optional(string)
+    partition_number        = optional(number)
+  })
+  default = null
+}
+
variable "enable_jit_config" {
  description = "Override the default behavior for JIT configuration. By default JIT configuration is enabled for ephemeral runners and disabled for non-ephemeral runners. In case of GHES, check first if the JIT config API is available. In case you are upgrading from 3.x to 4.x you can set `enable_jit_config` to `false` to avoid a breaking change when having your own AMI."
  type        = bool
diff --git a/variables.tf b/variables.tf
index f0b23dfb39..bc9d3abac5 100644
--- a/variables.tf
+++ b/variables.tf
@@ -858,6 +858,22 @@ variable "runner_cpu_options" {
  default = null
}

+variable "runner_placement" {
+  description = "The placement options for the instance. See https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/launch_template#placement for details."
+  type = object({
+    affinity                = optional(string)
+    availability_zone       = optional(string)
+    group_id                = optional(string)
+    group_name              = optional(string)
+    host_id                 = optional(string)
+    host_resource_group_arn = optional(string)
+    spread_domain           = optional(string)
+    tenancy                 = optional(string)
+    partition_number        = optional(number)
+  })
+  default = null
+}
+
variable "enable_jit_config" {
  description = "Override the default behavior for JIT configuration. By default JIT configuration is enabled for ephemeral runners and disabled for non-ephemeral runners. In case of GHES, check first if the JIT config API is available. In case you are upgrading from 3.x to 4.x you can set `enable_jit_config` to `false` to avoid a breaking change when having your own AMI."
  type        = bool
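
For reference, a minimal sketch of how the new placement input could be consumed from the root module. The placement group resource, its `partition` strategy, and the module source address are illustrative assumptions and not part of this patch; only the `runner_placement` attribute names come from the variable defined above.

```hcl
# Sketch only: the placement group and module source address are assumptions;
# the attribute names mirror the new `runner_placement` variable added above.
resource "aws_placement_group" "runners" {
  name     = "gh-runners"
  strategy = "partition" # spread runner hosts across partitions
}

module "runners" {
  source = "philips-labs/github-runner/aws" # assumed registry address

  # ... other required module inputs elided ...

  runner_placement = {
    group_name       = aws_placement_group.runners.name
    partition_number = 1
    tenancy          = "default"
  }
}
```

In the multi-runner module the same object is set per runner configuration, via the new `placement` attribute of `runner_config`, which is passed through to the runners module as shown in the `modules/multi-runner/runners.tf` hunk above.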