This repository is a starter for developers building scheduled AWS Lambda jobs with lambdacron.
You supply the job logic, notification content, and the Terraform module interfaces that a separate deployment repository will consume.
This repository is not the place where the function is normally deployed. The expected model is:
- This repository defines the Lambda image source, the root Terraform module for the function, and the bootstrap-deploy-repo helper module.
- A separate deployment repository publishes configuration for a particular deployment of the function and points at the modules in this repository.
- Deployment of the Lambda image in lambda/ is managed separately.
A key idea is that this model lets one team maintain the job logic and publish updated images, while other teams consume those images and deploy into their own AWS accounts.
- main.tf, variables.tf, outputs.tf, versions.tf: Root Terraform module that a separate deployment repository uses to deploy the scheduled Lambda with lambdacron.
- lambda/: Separate Terraform root for publishing the job image in lambda/docker to public ECR. This image-publishing flow is intentionally separate from downstream deployment repos.
- lambda/docker/lambda_handler.py: Starter task implementation. Replace the placeholder result payload with your actual job logic.
- lambda/docker/requirements.txt: Runtime dependencies for the scheduled Lambda image. Add any project-specific libraries here.
- bootstrap-deploy-repo/: Reusable Terraform module for a separate deployment repository to create its GitHub OIDC deploy role plus any project-specific GitHub Actions secrets it needs.
- templates/: Jinja email templates. Keep these files, but replace the content to match your result payload.
- tests/: Basic Python smoke tests for the Lambda skeleton and the starter templates.
Update lambda/docker/lambda_handler.py so _perform_task returns the result types and payloads your job needs.
The starter currently publishes a single placeholder result type:
```json
{
  "EXAMPLE": {
    "message": "Replace this message with your task output.",
    "details": "Update lambda_handler.py to emit the payload your templates expect."
  }
}
```

The outermost key ("EXAMPLE") is the result type. The inner keys ("message" and "details") are the payload. Any shape is valid as long as the templates can find the keys they need to render the notification emails. The result type is injected into the payload with key result_type for template logic.
Notification templates are Jinja2 templates that render the payload (with result_type injected). Update the templates in templates/ to match the payload your task emits.
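For example, a template can branch on the injected result_type. This is a hypothetical sketch (the actual filenames in templates/ and the exact way payload keys are exposed depend on the starter's rendering code; here they are assumed to be top-level variables):

```jinja
{# Hypothetical email body template; payload keys assumed top-level. #}
{% if result_type == "EXAMPLE" %}
{{ message }}

Details: {{ details }}
{% else %}
Unhandled result type: {{ result_type }}
{% endif %}
```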
Update lambda/docker/requirements.txt when your Lambda needs extra Python packages.
The starter only includes boto3 and lambdacron.
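For instance, a job that calls an HTTP API might extend requirements.txt like this (the added package is illustrative):

```text
boto3
lambdacron
requests  # project-specific addition (illustrative)
```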
Update the root Terraform files when your project needs different infrastructure behavior. A separate deployment repository will consume this module, so the job here is to expose the right inputs, defaults, and extension points:
- Set the default image tag to the published image that the deployment repository should use.
- Expect users to set the schedule, email sender/recipients, and other inputs that have no obvious defaults.
- Use environment variables to inject configuration that may differ between deployments, like S3 bucket names or API endpoints.
- Add any project-specific Terraform inputs or locals.
- Attach extra IAM permissions to the scheduled Lambda role if the job needs more AWS access.
- Add reasonable defaults for any lambdacron inputs that make sense for your project. Note that many variables that are required in the unmodified template will often have a reasonable project-specific default (e.g., using the project's name as a prefix for resource names).
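The points above might be sketched like this (variable names, defaults, and the upstream input names are illustrative assumptions, not the template's actual interface):

```hcl
# Hypothetical excerpt from variables.tf / main.tf; names are illustrative.
variable "image_tag" {
  description = "Published image tag the deployment repository should use."
  type        = string
  default     = "2024-01-01" # pin to a known published image
}

variable "schedule_expression" {
  description = "No obvious default, so deployments must set it."
  type        = string
}

variable "data_bucket_name" {
  description = "Deployment-specific S3 bucket, injected as an env var."
  type        = string
}

module "lambdacron" {
  source = "..." # upstream lambdacron module source

  # Configuration that differs between deployments goes in as env vars
  # (the input name here is an assumption about the upstream interface).
  environment_variables = {
    DATA_BUCKET = var.data_bucket_name
  }
  # ...other lambdacron inputs, with project-specific defaults...
}
```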
Upstream lambdacron now supports scheduled_lambda_additional_policy_arns, which lets callers attach pre-existing managed policies to the scheduled Lambda role without writing their own role-attachment resources. If you expect downstream deployment repositories to extend runtime permissions, prefer exposing that upstream input from this repo's root module.
This template currently also includes an optional scheduled_lambda_additional_policy_json hook in the root module. That is still usable, but it is now more of a local compatibility convenience than the preferred upstream pattern. For anything more complex, add extra IAM resources next to the root module and attach them to module.lambdacron.scheduled_lambda_role_arn. Do not edit lambdacron itself for project-specific permissions.
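Exposing the upstream input from this repo's root module could look like this (a sketch; only the input name scheduled_lambda_additional_policy_arns comes from upstream lambdacron):

```hcl
variable "scheduled_lambda_additional_policy_arns" {
  description = "Pre-existing managed policy ARNs to attach to the scheduled Lambda role."
  type        = list(string)
  default     = []
}

# Inside the lambdacron module block, forward it to the upstream input:
#   scheduled_lambda_additional_policy_arns = var.scheduled_lambda_additional_policy_arns
```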
Update bootstrap-deploy-repo/main.tf and bootstrap-deploy-repo/variables.tf when downstream deployment repositories will need extra GitHub Actions secrets or permissions.
This module always creates AWS_DEPLOY_ROLE_ARN for the deployment repository that consumes it. Use github_actions_secrets for other project secrets required by that downstream deployment workflow.
This wrapper delegates to upstream lambdacron//modules/github-deployer-role.
Every app should also set allowed_resource_name_prefixes here to the application-specific prefixes used by that deployment repository. Do not rely on the upstream lambdacron default unless the downstream infrastructure is actually named that way.
Backend state configuration is intentionally out of scope for bootstrap-deploy-repo. The deployment repository should manage TF_STATE_BUCKET and optional TF_STATE_TABLE itself.
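A downstream deployment repository might consume bootstrap-deploy-repo/ roughly like this (the module source URL, prefix, and secret values are illustrative; allowed_resource_name_prefixes and github_actions_secrets are the inputs described above, and other required inputs are omitted):

```hcl
# Hypothetical bootstrap usage in a deployment repository.
module "bootstrap" {
  source = "git::https://github.com/your-org/this-repo.git//bootstrap-deploy-repo" # illustrative

  # Application-specific resource-name prefixes this deploy role may touch.
  allowed_resource_name_prefixes = ["myjob-"]

  # Extra GitHub Actions secrets the downstream deploy workflow needs;
  # AWS_DEPLOY_ROLE_ARN is always created by the module itself.
  github_actions_secrets = {
    DATA_BUCKET = "myjob-data-bucket"
  }
}
```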
A separate deployment repository will typically interact with this repo in three places:
- It uses this repo's root Terraform as a module source for the actual AWS deployment.
- It may use bootstrap-deploy-repo/ as a module source for its own GitHub OIDC/bootstrap setup.
- It points at an already-published image from lambda/ (or whatever separate image-publishing process the function uses).
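The first interaction could be sketched like this from the deployment repository's side (the source URL, ref, and input names are illustrative assumptions):

```hcl
# Hypothetical deployment-repo usage of this repo's root module.
module "scheduled_job" {
  source = "git::https://github.com/your-org/this-repo.git?ref=v1.0.0" # illustrative

  schedule_expression = "rate(1 day)"          # input names are assumptions
  email_sender        = "jobs@example.com"
  email_recipients    = ["team@example.com"]
}
```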
This repository intentionally does not define the deployment workflow for a specific installation of the function. That workflow belongs in the separate deployment repository.
Example files are provided for each Terraform root:
- terraform.tfvars.example
- lambda/terraform.tfvars.example
- bootstrap-deploy-repo/terraform.tfvars.example
These files are for local smoke-testing and interface development. They are not meant to replace a dedicated deployment repository.
Basic Python test dependencies live in requirements-dev.txt. After installing them, run:
python3 -m pytest

Validate Terraform/OpenTofu roots locally with:
tofu init -backend=false
tofu validate
tofu -chdir=lambda init -backend=false
tofu -chdir=lambda validate
tofu -chdir=bootstrap-deploy-repo init -backend=false
tofu -chdir=bootstrap-deploy-repo validate