`src/README.md` (8 additions, 10 deletions)
@@ -384,15 +384,13 @@ scale down to zero before finishing all the jobs, leaving some waiting indefinitely
 the `max_duration` to a time long enough to cover the full time a job may have to wait between the time it is queued and
 the time it finishes, assuming that the HRA scales up the pool by 1 and runs the job on the new runner.
 
-:::info
-
-If there are more jobs queued than there are runners allowed by `maxReplicas`, the timeout timer does not start on the
-capacity reservation until enough reservations ahead of it are removed for it to be considered as representing an
-active job. Although there are some edge cases regarding `max_duration` that seem not to be covered properly (see
-[actions-runner-controller issue #2466](https://github.com/actions/actions-runner-controller/issues/2466)), they only
-merit adding a few extra minutes to the timeout.
-
-:::
+> [!TIP]
+>
+> If there are more jobs queued than there are runners allowed by `maxReplicas`, the timeout timer does not start on the
+> capacity reservation until enough reservations ahead of it are removed for it to be considered as representing an
+> active job. Although there are some edge cases regarding `max_duration` that seem not to be covered properly (see
+> [actions-runner-controller issue #2466](https://github.com/actions/actions-runner-controller/issues/2466)), they only
+> merit adding a few extra minutes to the timeout.
 
 
 ### Recommended `max_duration` Duration
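To make the `max_duration` sizing above concrete, here is a minimal, hypothetical entry for this module's `runners` map; the runner name, scope, and the `90m` value are illustrative assumptions, not values taken from the module's documentation:

```hcl
runners = {
  # Hypothetical example values; adjust to your own workloads.
  org-runner = {
    type         = "organization"
    scope        = "ACME"
    min_replicas = 1
    max_replicas = 5
    # Longest expected job, plus the time a job may wait in the queue
    # behind a full pool, plus a few extra minutes for the edge cases
    # noted in the tip above.
    max_duration = "90m"
  }
}
```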
@@ -570,7 +568,7 @@ documentation for further details.
 | <a name="input_regex_replace_chars"></a> [regex\_replace\_chars](#input\_regex\_replace\_chars) | Terraform regular expression (regex) string.<br>Characters matching the regex will be removed from the ID elements.<br>If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits. | `string` | `null` | no |
 | <a name="input_resources"></a> [resources](#input\_resources) | The cpu and memory of the deployment's limits and requests. | <pre>object({<br> limits = object({<br> cpu = string<br> memory = string<br> })<br> requests = object({<br> cpu = string<br> memory = string<br> })<br> })</pre> | n/a | yes |
-| <a name="input_runners"></a> [runners](#input\_runners) | Map of Action Runner configurations, with the key being the name of the runner. Please note that the name must be in<br>kebab-case.<br><br>For example:<pre>hcl<br>organization_runner = {<br> type = "organization" # can be either 'organization' or 'repository'<br> dind_enabled: true # A Docker daemon will be started in the runner Pod<br> image: summerwind/actions-runner-dind # If dind_enabled=false, set this to 'summerwind/actions-runner'<br> scope = "ACME" # org name for Organization runners, repo name for Repository runners<br> group = "core-automation" # Optional. Assigns the runners to a runner group, for access control.<br> scale_down_delay_seconds = 300<br> min_replicas = 1<br> max_replicas = 5<br> labels = [<br> "Ubuntu",<br> "core-automation",<br> ]<br>}</pre> | <pre>map(object({<br> type = string<br> scope = string<br> group = optional(string, null)<br> image = optional(string, "summerwind/actions-runner-dind")<br> dind_enabled = optional(bool, true)<br> node_selector = optional(map(string), {})<br> pod_annotations = optional(map(string), {})<br><br> # running_pod_annotations are only applied to the pods once they start running a job<br> running_pod_annotations = optional(map(string), {})<br><br> # affinity is too complex to model. Whatever you assigned affinity will be copied<br> # to the runner Pod spec.<br> affinity = optional(any)<br><br> tolerations = optional(list(object({<br> key = string<br> operator = string<br> value = optional(string, null)<br> effect = string<br> })), [])<br> scale_down_delay_seconds = optional(number, 300)<br> min_replicas = number<br> max_replicas = number<br> # Scheduled overrides. See https://github.com/actions/actions-runner-controller/blob/master/docs/automatically-scaling-runners.md#scheduled-overrides<br> # Order is important. The earlier entry is prioritized higher than later entries. So you usually define<br> # one-time overrides at the top of your list, then yearly, monthly, weekly, and lastly daily overrides.<br> scheduled_overrides = optional(list(object({<br> start_time = string # ISO 8601 format, eg, "2021-06-01T00:00:00+09:00"<br> end_time = string # ISO 8601 format, eg, "2021-06-01T00:00:00+09:00"<br> min_replicas = optional(number)<br> max_replicas = optional(number)<br> recurrence_rule = optional(object({<br> frequency = string # One of Daily, Weekly, Monthly, Yearly<br> until_time = optional(string) # ISO 8601 format time after which the schedule will no longer apply<br> }))<br> })), [])<br> busy_metrics = optional(object({<br> scale_up_threshold = string<br> scale_down_threshold = string<br> scale_up_adjustment = optional(string)<br> scale_down_adjustment = optional(string)<br> scale_up_factor = optional(string)<br> scale_down_factor = optional(string)<br> }))<br> webhook_driven_scaling_enabled = optional(bool, true)<br> # max_duration is the duration after which a job will be considered completed,<br> # even if the webhook has not received a "job completed" event.<br> # This is to ensure that if an event is missed, it does not leave the runner running forever.<br> # Set it long enough to cover the longest job you expect to run and then some.<br> # See https://github.com/actions/actions-runner-controller/blob/9afd93065fa8b1f87296f0dcdf0c2753a0548cb7/docs/automatically-scaling-runners.md?plain=1#L264-L268<br> # Defaults to 1 hour programmatically (to be able to detect if both max_duration and webhook_startup_timeout are set).<br> max_duration = optional(string)<br> # The name `webhook_startup_timeout` was misleading and has been deprecated.<br> # It has been renamed `max_duration`.<br> webhook_startup_timeout = optional(string)<br> # Adjust the time (in seconds) to wait for the Docker in Docker daemon to become responsive.<br> wait_for_docker_seconds = optional(string, "")<br> pull_driven_scaling_enabled = optional(bool, false)<br> labels = optional(list(string), [])<br> # If not null, `docker_storage` specifies the size (as `go` string) of<br> # an ephemeral (default storage class) Persistent Volume to allocate for the Docker daemon.<br> # Takes precedence over `tmpfs_enabled` for the Docker daemon storage.<br> docker_storage = optional(string, null)<br> # storage is deprecated in favor of docker_storage, since it is only storage for the Docker daemon<br> storage = optional(string, null)<br> # If `pvc_enabled` is true, a Persistent Volume Claim will be created for the runner<br> # and mounted at /home/runner/work/shared. This is useful for sharing data between runners.<br> pvc_enabled = optional(bool, false)<br> # If `tmpfs_enabled` is `true`, both the runner and the docker daemon will use a tmpfs volume,<br> # meaning that all data will be stored in RAM rather than on disk, bypassing disk I/O limitations,<br> # but what would have been disk usage is now additional memory usage. You must specify memory<br> # requests and limits when using tmpfs or else the Pod will likely crash the Node.<br> tmpfs_enabled = optional(bool)<br> resources = optional(object({<br> limits = optional(object({<br> cpu = optional(string, "1")<br> memory = optional(string, "1Gi")<br> ephemeral_storage = optional(string, "10Gi")<br> }), {})<br> requests = optional(object({<br> cpu = optional(string, "500m")<br> memory = optional(string, "256Mi")<br> ephemeral_storage = optional(string, "1Gi")<br> }), {})<br> }), {})<br> }))</pre> | n/a | yes |
+| <a name="input_runners"></a> [runners](#input\_runners) | Map of Action Runner configurations, with the key being the name of the runner. Please note that the name must be in<br>kebab-case.<br><br>For example:<pre>hcl<br>organization_runner = {<br> type = "organization" # can be either 'organization' or 'repository'<br> dind_enabled: true # A Docker daemon will be started in the runner Pod<br> image: summerwind/actions-runner-dind # If dind_enabled=false, set this to 'summerwind/actions-runner'<br> scope = "ACME" # org name for Organization runners, repo name for Repository runners<br> group = "core-automation" # Optional. Assigns the runners to a runner group, for access control.<br> scale_down_delay_seconds = 300<br> min_replicas = 1<br> max_replicas = 5<br> labels = [<br> "Ubuntu",<br> "core-automation",<br> ]<br>}</pre> | <pre>map(object({<br> type = string<br> scope = string<br> group = optional(string, null)<br> image = optional(string, "summerwind/actions-runner-dind")<br> auto_update_enabled = optional(bool, true)<br> dind_enabled = optional(bool, true)<br> node_selector = optional(map(string), {})<br> pod_annotations = optional(map(string), {})<br><br> # running_pod_annotations are only applied to the pods once they start running a job<br> running_pod_annotations = optional(map(string), {})<br><br> # affinity is too complex to model. Whatever you assigned affinity will be copied<br> # to the runner Pod spec.<br> affinity = optional(any)<br><br> tolerations = optional(list(object({<br> key = string<br> operator = string<br> value = optional(string, null)<br> effect = string<br> })), [])<br> scale_down_delay_seconds = optional(number, 300)<br> min_replicas = number<br> max_replicas = number<br> # Scheduled overrides. See https://github.com/actions/actions-runner-controller/blob/master/docs/automatically-scaling-runners.md#scheduled-overrides<br> # Order is important. The earlier entry is prioritized higher than later entries. So you usually define<br> # one-time overrides at the top of your list, then yearly, monthly, weekly, and lastly daily overrides.<br> scheduled_overrides = optional(list(object({<br> start_time = string # ISO 8601 format, eg, "2021-06-01T00:00:00+09:00"<br> end_time = string # ISO 8601 format, eg, "2021-06-01T00:00:00+09:00"<br> min_replicas = optional(number)<br> max_replicas = optional(number)<br> recurrence_rule = optional(object({<br> frequency = string # One of Daily, Weekly, Monthly, Yearly<br> until_time = optional(string) # ISO 8601 format time after which the schedule will no longer apply<br> }))<br> })), [])<br> busy_metrics = optional(object({<br> scale_up_threshold = string<br> scale_down_threshold = string<br> scale_up_adjustment = optional(string)<br> scale_down_adjustment = optional(string)<br> scale_up_factor = optional(string)<br> scale_down_factor = optional(string)<br> }))<br> webhook_driven_scaling_enabled = optional(bool, true)<br> # max_duration is the duration after which a job will be considered completed,<br> # even if the webhook has not received a "job completed" event.<br> # This is to ensure that if an event is missed, it does not leave the runner running forever.<br> # Set it long enough to cover the longest job you expect to run and then some.<br> # See https://github.com/actions/actions-runner-controller/blob/9afd93065fa8b1f87296f0dcdf0c2753a0548cb7/docs/automatically-scaling-runners.md?plain=1#L264-L268<br> # Defaults to 1 hour programmatically (to be able to detect if both max_duration and webhook_startup_timeout are set).<br> max_duration = optional(string)<br> # The name `webhook_startup_timeout` was misleading and has been deprecated.<br> # It has been renamed `max_duration`.<br> webhook_startup_timeout = optional(string)<br> # Adjust the time (in seconds) to wait for the Docker in Docker daemon to become responsive.<br> wait_for_docker_seconds = optional(string, "")<br> pull_driven_scaling_enabled = optional(bool, false)<br> labels = optional(list(string), [])<br> # If not null, `docker_storage` specifies the size (as `go` string) of<br> # an ephemeral (default storage class) Persistent Volume to allocate for the Docker daemon.<br> # Takes precedence over `tmpfs_enabled` for the Docker daemon storage.<br> docker_storage = optional(string, null)<br> # storage is deprecated in favor of docker_storage, since it is only storage for the Docker daemon<br> storage = optional(string, null)<br> # If `pvc_enabled` is true, a Persistent Volume Claim will be created for the runner<br> # and mounted at /home/runner/work/shared. This is useful for sharing data between runners.<br> pvc_enabled = optional(bool, false)<br> # If `tmpfs_enabled` is `true`, both the runner and the docker daemon will use a tmpfs volume,<br> # meaning that all data will be stored in RAM rather than on disk, bypassing disk I/O limitations,<br> # but what would have been disk usage is now additional memory usage. You must specify memory<br> # requests and limits when using tmpfs or else the Pod will likely crash the Node.<br> tmpfs_enabled = optional(bool)<br> resources = optional(object({<br> limits = optional(object({<br> cpu = optional(string, "1")<br> memory = optional(string, "1Gi")<br> # ephemeral-storage is the Kubernetes name, but `ephemeral_storage` is the gomplate name,<br> # so allow either. If both are specified, `ephemeral-storage` takes precedence.<br> ephemeral-storage = optional(string)<br> ephemeral_storage = optional(string, "10Gi")<br> }), {})<br> requests = optional(object({<br> cpu = optional(string, "500m")<br> memory = optional(string, "256Mi")<br> # ephemeral-storage is the Kubernetes name, but `ephemeral_storage` is the gomplate name,<br> # so allow either. If both are specified, `ephemeral-storage` takes precedence.<br> ephemeral-storage = optional(string)<br> ephemeral_storage = optional(string, "1Gi")<br> }), {})<br> }), {})<br> }))</pre> | n/a | yes |
 | <a name="input_s3_bucket_arns"></a> [s3\_bucket\_arns](#input\_s3\_bucket\_arns) | List of ARNs of S3 Buckets to which the runners will have read-write access. | `list(string)` | `[]` | no |
 | <a name="input_ssm_docker_config_json_path"></a> [ssm\_docker\_config\_json\_path](#input\_ssm\_docker\_config\_json\_path) | SSM path to the Docker config JSON | `string` | `null` | no |
 | <a name="input_ssm_github_secret_path"></a> [ssm\_github\_secret\_path](#input\_ssm\_github\_secret\_path) | The path in SSM to the GitHub app private key file contents or GitHub PAT token. | `string` | `""` | no |
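As the `tmpfs_enabled` note in the `runners` input above warns, tmpfs turns what would be disk usage into RAM usage, so memory requests and limits must be set explicitly. A minimal sketch under those assumptions (the runner name, scope, and resource sizes are invented for illustration):

```hcl
fast-runner = {
  type         = "repository"
  scope        = "ACME/example-repo"
  min_replicas = 0
  max_replicas = 3
  # With tmpfs, all runner and Docker daemon data lives in RAM, so size
  # memory to cover the workspace plus the job's own usage, or the Pod
  # will likely crash the Node.
  tmpfs_enabled = true
  resources = {
    limits   = { cpu = "2", memory = "8Gi" }
    requests = { cpu = "2", memory = "8Gi" }
  }
}
```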