diff --git a/.gitignore b/.gitignore
index 6964514..edeabaf 100644
--- a/.gitignore
+++ b/.gitignore
@@ -7,6 +7,8 @@ aws-assumed-role/
*.iml
.direnv
.envrc
+.cache
+.atmos
# Compiled and auto-generated files
# Note that the leading "**/" appears necessary for Docker even if not for Git
diff --git a/AGENTS.md b/AGENTS.md
new file mode 100644
index 0000000..8deadc1
--- /dev/null
+++ b/AGENTS.md
@@ -0,0 +1,35 @@
+# Repository Guidelines
+
+## Project Structure & Module Organization
+- `src/`: Terraform component (`main.tf`, `variables.tf`, `outputs.tf`, `providers.tf`, `versions.tf`, `context.tf`). This is the source of truth.
+- `test/`: Go Terratest suite using Atmos fixtures (`component_test.go`, `fixtures/`, `test_suite.yaml`). Tests deploy/destroy real AWS resources.
+- `README.yaml`: Source for the generated `README.md` (via atmos + terraform-docs).
+- `.github/`: CI/CD, Renovate/Dependabot, labels, and automerge settings.
+- `docs/`: Project docs (if any). Keep lightweight and current.
+
+## Build, Test, and Development Commands
+- To install Atmos, follow the docs at https://github.com/cloudposse/atmos
+- `atmos docs generate readme`: Regenerate `README.md` from `README.yaml` and terraform source.
+- `atmos docs generate readme-simple`: Regenerate `src/README.md` from `README.yaml` and terraform source.
+- `atmos test run`: Run Terratest suite in `test/` (uses Atmos fixtures; creates and destroys AWS resources).
+- Pre-commit locally: `pre-commit install && pre-commit run -a` (runs `terraform_fmt`, `terraform_docs`, `tflint`).
+- TFLint plugin setup: `tflint --init` (uses `.tflint.hcl`).
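+
+A typical local loop before pushing, sketched from the commands above:
+
+```sh
+pre-commit install            # one-time git hook setup
+tflint --init                 # one-time TFLint plugin setup from .tflint.hcl
+pre-commit run -a             # runs terraform_fmt, terraform_docs, tflint
+atmos docs generate readme    # regenerate README.md after input/output changes
+```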
+
+## Coding Style & Naming Conventions
+- Indentation: 2 spaces for Terraform, YAML, and Markdown.
+- Terraform: prefer lower_snake_case for variables/locals; keep resources/data sources descriptive and aligned with Cloud Posse null-label patterns.
+- Lint/format: `terraform fmt -recursive`, TFLint rules per `.tflint.hcl`. Do not commit formatting or lint violations.
+
+## Testing Guidelines
+- Framework: Go Terratest with `github.com/cloudposse/test-helpers` and `atmos` fixtures.
+- Location/naming: put tests in `test/` and name files `*_test.go`. Add scenarios under `test/fixtures/stacks/catalog/usecase/`.
+- Run: `atmos test run`. Ensure AWS credentials are configured; tests may incur AWS costs and will clean up after themselves.
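+
+A minimal skeleton of the underlying Terratest pattern (hypothetical directory and assertion; the real suite drives Atmos fixtures via `test-helpers`):
+
+```go
+package test
+
+import (
+	"testing"
+
+	"github.com/gruntwork-io/terratest/modules/terraform"
+	"github.com/stretchr/testify/assert"
+)
+
+// TestLogsArchive applies the component, checks an output, and destroys it.
+func TestLogsArchive(t *testing.T) {
+	t.Parallel()
+
+	opts := &terraform.Options{
+		TerraformDir: "../src", // hypothetical: point at the component under test
+	}
+	defer terraform.Destroy(t, opts) // always clean up real AWS resources
+
+	terraform.InitAndApply(t, opts)
+
+	// bucket_id is one of the component's documented outputs
+	assert.NotEmpty(t, terraform.Output(t, opts, "bucket_id"))
+}
+```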
+
+## Commit & Pull Request Guidelines
+- Commits: follow Conventional Commits (e.g., `feat:`, `fix:`, `chore(deps):`, `docs:`). Keep messages concise and scoped.
+- PRs: include a clear description, linked issues, and any behavioral changes. Update `README.yaml` when inputs/outputs change and run `atmos docs generate readme`.
+- CI: ensure pre-commit, TFLint, and tests pass. Avoid unrelated changes in the same PR.
+
+## Security & Configuration Tips
+- Never commit secrets. Configure AWS credentials/role assumption externally; the provider setup in `src/providers.tf` supports role assumption via the `iam_roles` module.
+- Global quotas must be applied in `us-east-1`; place the component in the `gbl` stack and set `region: us-east-1` in `vars`.
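+
+For reference, the role-assumption pattern in `src/providers.tf` follows the standard Cloud Posse shape (abbreviated sketch, not the verbatim file):
+
+```hcl
+provider "aws" {
+  region = var.region
+
+  # Assume a role only when the iam_roles module resolves one
+  dynamic "assume_role" {
+    for_each = compact([module.iam_roles.terraform_role_arn])
+    content {
+      role_arn = assume_role.value
+    }
+  }
+}
+```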
diff --git a/Makefile b/Makefile
deleted file mode 100644
index 8a6d902..0000000
--- a/Makefile
+++ /dev/null
@@ -1,8 +0,0 @@
--include $(shell curl -sSL -o .build-harness "https://cloudposse.tools/build-harness"; echo .build-harness)
-
-all: init readme
-
-test::
- @echo "🚀 Starting tests..."
- ./test/run.sh
- @echo "✅ All tests passed."
diff --git a/README.md b/README.md
index df93fcb..0e27c14 100644
--- a/README.md
+++ b/README.md
@@ -2,8 +2,11 @@

-
-

+
+
+

+
+
-This component is responsible for provisioning Datadog Log Archives. It creates a single log archive pipeline for each
-AWS account. If the `catchall` flag is set, it creates a catchall archive within the same S3 bucket.
+This component provisions Datadog Log Archives. It creates a single log archive pipeline for each AWS account. If the `catchall` flag is set, it creates a catchall archive within the same S3 bucket.
+
-Each log archive filters for the tag `env:$env` where $env is the environment/account name (ie sbx, prd, tools, etc), as
-well as any tags identified in the additional_tags key. The `catchall` archive, as the name implies, filters for '\*'.
+Each log archive filters for the tag `env:$env` where `$env` is the environment/account name (e.g. `sbx`, `prd`, `tools`), as well as any tags identified in the `additional_query_tags` key. The `catchall` archive, as the name implies, filters for `*`.
+
-A second bucket is created for cloudtrail, and a cloudtrail is configured to monitor the log archive bucket and log
-activity to the cloudtrail bucket. To forward these cloudtrail logs to datadog, the cloudtrail bucket's id must be added
-to the s3_buckets key for our datadog-lambda-forwarder component.
+A second bucket is created for CloudTrail, and a CloudTrail is configured to monitor the log archive bucket and log activity to the CloudTrail bucket. To forward these CloudTrail logs to Datadog, the CloudTrail bucket's ID must be added to the `s3_buckets` key for our `datadog-lambda-forwarder` component.
+
-Both buckets support object lock, with overridable defaults of COMPLIANCE mode with a duration of 7 days.
+Both buckets support object lock, with overridable defaults of COMPLIANCE mode and a duration of 7 days.
+
-## Prerequisites
-- Datadog integration set up in target environment
-  - We rely on the datadog api and app keys added by our datadog integration component
+Prerequisites
+- Datadog integration set up in the target environment
+  - Relies on the Datadog API and App keys added by our Datadog integration component
+
-## Issues, Gotchas, Good-to-Knows
-### Destroy/reprovision process
-Because of the protections for S3 buckets, if we want to destroy/replace our bucket, we need to do so in two passes or
-destroy the bucket manually and then use terraform to clean up the rest. If reprovisioning a recently provisioned
-bucket, the two-pass process works well. If the bucket has a full day or more of logs, though, deleting it manually
-first will avoid terraform timeouts, and then the terraform process can be used to clean up everything else.
-#### Two step process to destroy via terraform
-- first set `s3_force_destroy` var to true and apply
-- next set `enabled` to false and apply or use tf destroy
+Issues, Gotchas, Good-to-Knows
+- Destroy/reprovision process
+  - Because of the protections for S3 buckets, destroying/replacing the bucket may require two passes or a manual bucket delete followed by Terraform cleanup. If the bucket has a full day or more of logs, deleting it manually first helps avoid Terraform timeouts.
+  - Two-step process to destroy via Terraform:
+    1) Set `s3_force_destroy` to `true` and apply
+    2) Set `enabled` to `false` and apply, or run `terraform destroy`
+
+> [!TIP]
+> #### 👽 Use Atmos with Terraform
+> Cloud Posse uses [`atmos`](https://atmos.tools) to easily orchestrate multiple environments using Terraform.
+> Works with [Github Actions](https://atmos.tools/integrations/github-actions/), [Atlantis](https://atmos.tools/integrations/atlantis), or [Spacelift](https://atmos.tools/integrations/spacelift).
+>
+>
+> Watch demo of using Atmos with Terraform
+> 
+> Example of running atmos to manage infrastructure from our Quick Start tutorial.
+>
## Usage
-**Stack Level**: Global
+Stack Level: Global
-Here's an example snippet for how to use this component. It's suggested to apply this component to all accounts from
-which Datadog receives logs.
+It's suggested to apply this component to all accounts from which Datadog receives logs.
+
+Example Atmos snippet:
```yaml
components:
@@ -74,106 +82,115 @@ components:
workspace_enabled: true
vars:
enabled: true
- # additional_query_tags:
- # - "forwardername:*-dev-datadog-lambda-forwarder-logs"
- # - "account:123456789012"
+ # additional_query_tags:
+ # - "forwardername:*-dev-datadog-lambda-forwarder-logs"
+ # - "account:123456789012"
```
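+
+A fuller variant showing the object-lock and catchall inputs (hypothetical values; see the Inputs table below):
+
+```yaml
+components:
+  terraform:
+    datadog-logs-archive:
+      vars:
+        enabled: true
+        catchall_enabled: true          # enable in exactly one account
+        object_lock_mode_archive: "COMPLIANCE"
+        object_lock_days_archive: 7
+        lifecycle_rules_enabled: true
+```
+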
-## Requirements
-| Name | Version |
-| --------- | --------- |
-| terraform | >= 0.13.0 |
-| aws | >= 2.0 |
-| datadog | >= 3.3.0 |
-| local | >= 1.3 |
-## Providers
-| Name | Version |
-| ------- | -------- |
-| aws | >= 2.0 |
-| datadog | >= 3.7.0 |
-| http | >= 2.1.0 |
-## Modules
-| Name | Source | Version |
-| -------------------- | ----------------------------------- | ------- |
-| cloudtrail | cloudposse/cloudtrail/aws | 0.21.0 |
-| cloudtrail_s3_bucket | cloudposse/cloudtrail-s3-bucket/aws | 0.23.1 |
-| iam_roles | ../account-map/modules/iam-roles | n/a |
-| s3_bucket | cloudposse/s3-bucket/aws | 0.46.0 |
-| this | cloudposse/label/null | 0.25.0 |
-## Resources
-
-| Name | Type |
-| --------------------------------------- | ----------- |
-| aws_caller_identity.current | data source |
-| aws_partition.current | data source |
-| aws_ssm_parameter.datadog_api_key | data source |
-| aws_ssm_parameter.datadog_app_key | data source |
-| aws_ssm_parameter.datadog_aws_role_name | data source |
-| aws_ssm_parameter.datadog_external_id | data source |
-| datadog_logs_archive.catchall_archive | resource |
-| datadog_logs_archive.logs_archive | resource |
-| http.current_order | data source |
-## Inputs
-| Name | Description | Type | Default | Required |
-| --------------------------- | ----------------------------------------------------------------------------------------------------------------------- | -------- | ------------ | ---------------- |
-| additional_query_tags | Additional tags to include in query for logs for this archive | `list` | [] | no |
-| catchall | Set to true to enable a catchall for logs unmatched by any queries. This should only be used in one environment/account | `bool` | false | no |
-| datadog_aws_account_id | The AWS account ID Datadog's integration servers use for all integrations | `string` | 464622532012 | no |
-| enable_glacier_transition | Enable/disable transition to glacier. Has no effect unless `lifecycle_rules_enabled` set to true | `bool` | true | no |
-| glacier_transition_days | Number of days after which to transition objects to glacier storage | `number` | 365 | no |
-| lifecycle_rules_enabled | Enable/disable lifecycle management rules for s3 objects | `bool` | true | no |
-| object_lock_days_archive | Set duration of archive bucket object lock | `number` | 7 | yes |
-| object_lock_days_cloudtrail | Set duration of cloudtrail bucket object lock | `number` | 7 | yes |
-| object_lock_mode_archive | Set mode of archive bucket object lock | `string` | COMPLIANCE | yes |
-| object_lock_mode_cloudtrail | Set mode of cloudtrail bucket object lock | `string` | COMPLIANCE | yes |
-| s3_force_destroy | Set to true to delete non-empty buckets when `enabled` is set to false | `bool` | false | for destroy only |
-## Outputs
-| Name | Description |
-| ----------------------------- | ----------------------------------------------------------- |
-| archive_id | The ID of the environment-specific log archive |
-| bucket_arn | The ARN of the bucket used for log archive storage |
-| bucket_domain_name | The FQDN of the bucket used for log archive storage |
-| bucket_id | The ID (name) of the bucket used for log archive storage |
-| bucket_region | The region of the bucket used for log archive storage |
-| cloudtrail_bucket_arn | The ARN of the bucket used for cloudtrail log storage |
-| cloudtrail_bucket_domain_name | The FQDN of the bucket used for cloudtrail log storage |
-| cloudtrail_bucket_id | The ID (name) of the bucket used for cloudtrail log storage |
-| catchall_id | The ID of the catchall log archive |
-## References
-- [cloudposse/s3-bucket/aws](https://registry.terraform.io/modules/cloudposse/s3-bucket/aws/latest) - Cloud Posse's S3
-  component
-- [datadog_logs_archive resource]
-  (https://registry.terraform.io/providers/DataDog/datadog/latest/docs/resources/logs_archive) - Datadog's provider
-  documentation for the datadog_logs_archive resource
-> [!TIP]
-> #### 👽 Use Atmos with Terraform
-> Cloud Posse uses [`atmos`](https://atmos.tools) to easily orchestrate multiple environments using Terraform.
-> Works with [Github Actions](https://atmos.tools/integrations/github-actions/), [Atlantis](https://atmos.tools/integrations/atlantis), or [Spacelift](https://atmos.tools/integrations/spacelift).
->
->
-> Watch demo of using Atmos with Terraform
-> 
-> Example of running atmos to manage infrastructure from our Quick Start tutorial.
->
+> [!IMPORTANT]
+> In Cloud Posse's examples, we avoid pinning modules to specific versions to prevent discrepancies between the documentation
+> and the latest released versions. However, for your own projects, we strongly advise pinning each module to the exact version
+> you're using. This practice ensures the stability of your infrastructure. Additionally, we recommend implementing a systematic
+> approach for updating versions to avoid unexpected changes.
+
+## Requirements
+| Name | Version |
+|------|---------|
+| [terraform](#requirement\_terraform) | >= 0.13.0 |
+| [aws](#requirement\_aws) | >= 4.9.0, < 6.0.0 |
+| [datadog](#requirement\_datadog) | >= 3.19 |
+| [http](#requirement\_http) | >= 2.1.0 |
+
+## Providers
+| Name | Version |
+|------|---------|
+| [aws](#provider\_aws) | >= 4.9.0, < 6.0.0 |
+| [datadog](#provider\_datadog) | >= 3.19 |
+| [http](#provider\_http) | >= 2.1.0 |
+
+## Modules
+| Name | Source | Version |
+|------|--------|---------|
+| [archive\_bucket](#module\_archive\_bucket) | cloudposse/s3-bucket/aws | 4.10.0 |
+| [bucket\_policy](#module\_bucket\_policy) | cloudposse/iam-policy/aws | 2.0.2 |
+| [cloudtrail](#module\_cloudtrail) | cloudposse/cloudtrail/aws | 0.24.0 |
+| [cloudtrail\_s3\_bucket](#module\_cloudtrail\_s3\_bucket) | cloudposse/s3-bucket/aws | 4.10.0 |
+| [datadog\_configuration](#module\_datadog\_configuration) | github.com/cloudposse-terraform-components/aws-datadog-credentials//src/modules/datadog_keys | v1.535.7 |
+| [iam\_roles](#module\_iam\_roles) | github.com/cloudposse-terraform-components/aws-account-map//src/modules/iam-roles | v1.535.5 |
+| [this](#module\_this) | cloudposse/label/null | 0.25.0 |
+
+## Resources
+| Name | Type |
+|------|------|
+| [datadog_logs_archive.catchall_archive](https://registry.terraform.io/providers/datadog/datadog/latest/docs/resources/logs_archive) | resource |
+| [datadog_logs_archive.logs_archive](https://registry.terraform.io/providers/datadog/datadog/latest/docs/resources/logs_archive) | resource |
+| [datadog_logs_archive_order.archive_order](https://registry.terraform.io/providers/datadog/datadog/latest/docs/resources/logs_archive_order) | resource |
+| [aws_caller_identity.current](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/caller_identity) | data source |
+| [aws_iam_policy_document.default](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/iam_policy_document) | data source |
+| [aws_partition.current](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/partition) | data source |
+| [aws_ssm_parameter.datadog_aws_role_name](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/data-sources/ssm_parameter) | data source |
+| [http_http.current_order](https://registry.terraform.io/providers/hashicorp/http/latest/docs/data-sources/http) | data source |
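+
+For orientation, the `datadog_logs_archive` resources above map roughly to this shape (illustrative values only, not the component's exact configuration):
+
+```hcl
+resource "datadog_logs_archive" "logs_archive" {
+  name  = "prd-logs-archive" # hypothetical name
+  query = "env:prd"          # per-account env tag filter
+
+  s3_archive {
+    bucket     = "acme-prd-datadog-logs-archive" # hypothetical bucket
+    path       = "/"
+    account_id = "123456789012"
+    role_name  = "DatadogAWSIntegrationRole"
+  }
+}
+```
+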
+## Inputs
+| Name | Description | Type | Default | Required |
+|------|-------------|------|---------|:--------:|
+| [additional\_query\_tags](#input\_additional\_query\_tags) | Additional tags to be used in the query for this archive | `list(any)` | `[]` | no |
+| [additional\_tag\_map](#input\_additional\_tag\_map) | Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`.<br>This is for some rare cases where resources want additional configuration of tags<br>and therefore take a list of maps with tag key, value, and additional configuration. | `map(string)` | `{}` | no |
+| [archive\_lifecycle\_config](#input\_archive\_lifecycle\_config) | Lifecycle configuration for the archive S3 bucket | <pre>object({<br>  abort_incomplete_multipart_upload_days = optional(number, null)<br>  enable_glacier_transition = optional(bool, true)<br>  glacier_transition_days = optional(number, 365)<br>  noncurrent_version_glacier_transition_days = optional(number, 30)<br>  enable_deeparchive_transition = optional(bool, false)<br>  deeparchive_transition_days = optional(number, 0)<br>  noncurrent_version_deeparchive_transition_days = optional(number, 0)<br>  enable_standard_ia_transition = optional(bool, false)<br>  standard_transition_days = optional(number, 0)<br>  expiration_days = optional(number, 0)<br>  noncurrent_version_expiration_days = optional(number, 0)<br>})</pre> | `{}` | no |
+| [attributes](#input\_attributes) | ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`,<br>in the order they appear in the list. New attributes are appended to the<br>end of the list. The elements of the list are joined by the `delimiter`<br>and treated as a single ID element. | `list(string)` | `[]` | no |
+| [catchall\_enabled](#input\_catchall\_enabled) | Set to true to enable a catchall for logs unmatched by any queries. This should only be used in one environment/account | `bool` | `false` | no |
+| [cloudtrail\_lifecycle\_config](#input\_cloudtrail\_lifecycle\_config) | Lifecycle configuration for the cloudtrail S3 bucket | <pre>object({<br>  abort_incomplete_multipart_upload_days = optional(number, null)<br>  enable_glacier_transition = optional(bool, true)<br>  glacier_transition_days = optional(number, 365)<br>  noncurrent_version_glacier_transition_days = optional(number, 365)<br>  enable_deeparchive_transition = optional(bool, false)<br>  deeparchive_transition_days = optional(number, 0)<br>  noncurrent_version_deeparchive_transition_days = optional(number, 0)<br>  enable_standard_ia_transition = optional(bool, false)<br>  standard_transition_days = optional(number, 0)<br>  expiration_days = optional(number, 0)<br>  noncurrent_version_expiration_days = optional(number, 0)<br>})</pre> | `{}` | no |
+| [context](#input\_context) | Single object for setting entire context at once.<br>See description of individual variables for details.<br>Leave string and numeric variables as `null` to use default value.<br>Individual variable settings (non-null) override settings in context object,<br>except for attributes, tags, and additional\_tag\_map, which are merged. | `any` | <pre>{<br>  "additional_tag_map": {},<br>  "attributes": [],<br>  "delimiter": null,<br>  "descriptor_formats": {},<br>  "enabled": true,<br>  "environment": null,<br>  "id_length_limit": null,<br>  "label_key_case": null,<br>  "label_order": [],<br>  "label_value_case": null,<br>  "labels_as_tags": [<br>    "unset"<br>  ],<br>  "name": null,<br>  "namespace": null,<br>  "regex_replace_chars": null,<br>  "stage": null,<br>  "tags": {},<br>  "tenant": null<br>}</pre> | no |
+| [delimiter](#input\_delimiter) | Delimiter to be used between ID elements.<br>Defaults to `-` (hyphen). Set to `""` to use no delimiter at all. | `string` | `null` | no |
+| [descriptor\_formats](#input\_descriptor\_formats) | Describe additional descriptors to be output in the `descriptors` output map.<br>Map of maps. Keys are names of descriptors. Values are maps of the form<br>`{<br>  format = string<br>  labels = list(string)<br>}`<br>(Type is `any` so the map values can later be enhanced to provide additional options.)<br>`format` is a Terraform format string to be passed to the `format()` function.<br>`labels` is a list of labels, in order, to pass to `format()` function.<br>Label values will be normalized before being passed to `format()` so they will be<br>identical to how they appear in `id`.<br>Default is `{}` (`descriptors` output will be empty). | `any` | `{}` | no |
+| [enabled](#input\_enabled) | Set to false to prevent the module from creating any resources | `bool` | `null` | no |
+| [environment](#input\_environment) | ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT' | `string` | `null` | no |
+| [id\_length\_limit](#input\_id\_length\_limit) | Limit `id` to this many characters (minimum 6).<br>Set to `0` for unlimited length.<br>Set to `null` for keep the existing setting, which defaults to `0`.<br>Does not affect `id_full`. | `number` | `null` | no |
+| [label\_key\_case](#input\_label\_key\_case) | Controls the letter case of the `tags` keys (label names) for tags generated by this module.<br>Does not affect keys of tags passed in via the `tags` input.<br>Possible values: `lower`, `title`, `upper`.<br>Default value: `title`. | `string` | `null` | no |
+| [label\_order](#input\_label\_order) | The order in which the labels (ID elements) appear in the `id`.<br>Defaults to ["namespace", "environment", "stage", "name", "attributes"].<br>You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present. | `list(string)` | `null` | no |
+| [label\_value\_case](#input\_label\_value\_case) | Controls the letter case of ID elements (labels) as included in `id`,<br>set as tag values, and output by this module individually.<br>Does not affect values of tags passed in via the `tags` input.<br>Possible values: `lower`, `title`, `upper` and `none` (no transformation).<br>Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs.<br>Default value: `lower`. | `string` | `null` | no |
+| [labels\_as\_tags](#input\_labels\_as\_tags) | Set of labels (ID elements) to include as tags in the `tags` output.<br>Default is to include all labels.<br>Tags with empty values will not be included in the `tags` output.<br>Set to `[]` to suppress all generated tags.<br>**Notes:**<br>The value of the `name` tag, if included, will be the `id`, not the `name`.<br>Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be<br>changed in later chained modules. Attempts to change it will be silently ignored. | `set(string)` | <pre>[<br>  "default"<br>]</pre> | no |
+| [lifecycle\_rules\_enabled](#input\_lifecycle\_rules\_enabled) | Enable/disable lifecycle management rules for log archive s3 objects | `bool` | `true` | no |
+| [name](#input\_name) | ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'.<br>This is the only ID element not also included as a `tag`.<br>The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input. | `string` | `null` | no |
+| [namespace](#input\_namespace) | ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique | `string` | `null` | no |
+| [object\_lock\_days\_archive](#input\_object\_lock\_days\_archive) | Object lock duration for archive buckets in days | `number` | `7` | no |
+| [object\_lock\_days\_cloudtrail](#input\_object\_lock\_days\_cloudtrail) | Object lock duration for cloudtrail buckets in days | `number` | `7` | no |
+| [object\_lock\_mode\_archive](#input\_object\_lock\_mode\_archive) | Object lock mode for archive bucket. Possible values are COMPLIANCE or GOVERNANCE | `string` | `"COMPLIANCE"` | no |
+| [object\_lock\_mode\_cloudtrail](#input\_object\_lock\_mode\_cloudtrail) | Object lock mode for cloudtrail bucket. Possible values are COMPLIANCE or GOVERNANCE | `string` | `"COMPLIANCE"` | no |
+| [regex\_replace\_chars](#input\_regex\_replace\_chars) | Terraform regular expression (regex) string.<br>Characters matching the regex will be removed from the ID elements.<br>If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits. | `string` | `null` | no |
+| [region](#input\_region) | AWS Region | `string` | n/a | yes |
+| [s3\_force\_destroy](#input\_s3\_force\_destroy) | Set to true to delete non-empty buckets when enabled is set to false | `bool` | `false` | no |
+| [stage](#input\_stage) | ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release' | `string` | `null` | no |
+| [tags](#input\_tags) | Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`).<br>Neither the tag keys nor the tag values will be modified by this module. | `map(string)` | `{}` | no |
+| [tenant](#input\_tenant) | ID element \_(Rarely used, not included by default)\_. A customer identifier, indicating who this instance of a resource is for | `string` | `null` | no |
+## Outputs
+| Name | Description |
+|------|-------------|
+| [archive\_id](#output\_archive\_id) | The ID of the environment-specific log archive |
+| [bucket\_arn](#output\_bucket\_arn) | The ARN of the bucket used for log archive storage |
+| [bucket\_domain\_name](#output\_bucket\_domain\_name) | The FQDN of the bucket used for log archive storage |
+| [bucket\_id](#output\_bucket\_id) | The ID (name) of the bucket used for log archive storage |
+| [bucket\_region](#output\_bucket\_region) | The region of the bucket used for log archive storage |
+| [catchall\_id](#output\_catchall\_id) | The ID of the catchall log archive |
+| [cloudtrail\_bucket\_arn](#output\_cloudtrail\_bucket\_arn) | The ARN of the bucket used for access logging via cloudtrail |
+| [cloudtrail\_bucket\_domain\_name](#output\_cloudtrail\_bucket\_domain\_name) | The FQDN of the bucket used for access logging via cloudtrail |
+| [cloudtrail\_bucket\_id](#output\_cloudtrail\_bucket\_id) | The ID (name) of the bucket used for access logging via cloudtrail |
+
@@ -188,6 +205,15 @@ Check out these related projects.
- [Atmos](https://atmos.tools) - Atmos is like docker-compose but for your infrastructure
+## References
+
+For additional context, refer to some of these links.
+
+- [cloudposse/s3-bucket/aws](https://registry.terraform.io/modules/cloudposse/s3-bucket/aws/latest) - Cloud Posse's S3 component
+- [datadog_logs_archive resource](https://registry.terraform.io/providers/DataDog/datadog/latest/docs/resources/logs_archive) - Datadog's provider documentation for the datadog_logs_archive resource
+
> [!TIP]
> #### Use Terraform Reference Architectures for AWS
>
@@ -252,6 +278,38 @@ In general, PRs are welcome. We follow the typical "fork-and-pull" Git workflow.
**NOTE:** Be sure to merge the latest changes from "upstream" before making a pull request!
+
+## Running Terraform Tests
+
+We use [Atmos](https://atmos.tools) to streamline how Terraform tests are run. It centralizes configuration and wraps common test workflows with easy-to-use commands.
+
+All tests are located in the [`test/`](test) folder.
+
+Under the hood, tests are powered by Terratest together with our internal [Test Helpers](https://github.com/cloudposse/test-helpers) library, providing robust infrastructure validation.
+
+Setup dependencies:
+- Install Atmos ([installation guide](https://atmos.tools/install/))
+- Install [Go 1.24 or newer](https://go.dev/doc/install)
+- Install Terraform or OpenTofu
+
+To run tests:
+
+- Run all tests:
+ ```sh
+ atmos test run
+ ```
+- Clean up test artifacts:
+ ```sh
+ atmos test clean
+ ```
+- Explore additional test options:
+ ```sh
+ atmos test --help
+ ```
+The configuration for test commands is centrally managed. To review what's being imported, see the [`atmos.yaml`](https://raw.githubusercontent.com/cloudposse/.github/refs/heads/main/.github/atmos/terraform-module.yaml) file.
+
+Learn more about our [automated testing in our documentation](https://docs.cloudposse.com/community/contribute/automated-testing/) or implementing [custom commands](https://atmos.tools/core-concepts/custom-commands/) with atmos.
+
### 🌎 Slack Community
Join our [Open Source Community](https://cpco.io/slack?utm_source=github&utm_medium=readme&utm_campaign=cloudposse-terraform-components/aws-datadog-logs-archive&utm_content=slack) on Slack. It's **FREE** for everyone! Our "SweetOps" community is where you get to talk with others who share a similar vision for how to rollout and manage infrastructure. This is the best place to talk shop, ask questions, solicit feedback, and work together as a community to build totally *sweet* infrastructure.
diff --git a/README.yaml b/README.yaml
index 584785c..8e24a3d 100644
--- a/README.yaml
+++ b/README.yaml
@@ -3,43 +3,30 @@ name: "aws-datadog-logs-archive"
github_repo: "cloudposse-terraform-components/aws-datadog-logs-archive"
# Short description of this project
description: |-
- This component is responsible for provisioning Datadog Log Archives. It creates a single log archive pipeline for each
- AWS account. If the `catchall` flag is set, it creates a catchall archive within the same S3 bucket.
+ This component provisions Datadog Log Archives. It creates a single log archive pipeline for each AWS account. If the `catchall` flag is set, it creates a catchall archive within the same S3 bucket.
- Each log archive filters for the tag `env:$env` where $env is the environment/account name (ie sbx, prd, tools, etc), as
- well as any tags identified in the additional_tags key. The `catchall` archive, as the name implies, filters for '\*'.
+ Each log archive filters for the tag `env:$env` where `$env` is the environment/account name (e.g. `sbx`, `prd`, `tools`), as well as any tags identified in the `additional_query_tags` key. The `catchall` archive, as the name implies, filters for `*`.
- A second bucket is created for cloudtrail, and a cloudtrail is configured to monitor the log archive bucket and log
- activity to the cloudtrail bucket. To forward these cloudtrail logs to datadog, the cloudtrail bucket's id must be added
- to the s3_buckets key for our datadog-lambda-forwarder component.
+ A second bucket is created for CloudTrail, and a CloudTrail is configured to monitor the log archive bucket and log activity to the CloudTrail bucket. To forward these CloudTrail logs to Datadog, the CloudTrail bucket's ID must be added to the `s3_buckets` key for our `datadog-lambda-forwarder` component.
- Both buckets support object lock, with overridable defaults of COMPLIANCE mode with a duration of 7 days.
+ Both buckets support object lock, with overridable defaults of COMPLIANCE mode and a duration of 7 days.
-  ## Prerequisites
-  - Datadog integration set up in target environment
-    - We rely on the datadog api and app keys added by our datadog integration component
+  Prerequisites
+  - Datadog integration set up in the target environment
+    - Relies on the Datadog API and App keys added by our Datadog integration component
+
-  ## Issues, Gotchas, Good-to-Knows
-  ### Destroy/reprovision process
-
-  Because of the protections for S3 buckets, if we want to destroy/replace our bucket, we need to do so in two passes or
-  destroy the bucket manually and then use terraform to clean up the rest. If reprovisioning a recently provisioned
-  bucket, the two-pass process works well. If the bucket has a full day or more of logs, though, deleting it manually
-  first will avoid terraform timeouts, and then the terraform process can be used to clean up everything else.
-
-  #### Two step process to destroy via terraform
-
-  - first set `s3_force_destroy` var to true and apply
-  - next set `enabled` to false and apply or use tf destroy
-
+  Issues, Gotchas, Good-to-Knows
+  - Destroy/reprovision process
+    - Because of the protections for S3 buckets, destroying/replacing the bucket may require two passes or a manual bucket delete followed by Terraform cleanup. If the bucket has a full day or more of logs, deleting it manually first helps avoid Terraform timeouts.
+    - Two-step process to destroy via Terraform:
+      1) Set `s3_force_destroy` to `true` and apply
+      2) Set `enabled` to `false` and apply, or run `terraform destroy`
-  ## Usage
-
-  **Stack Level**: Global
-
-  Here's an example snippet for how to use this component. It's suggested to apply this component to all accounts from
-  which Datadog receives logs.
+usage: |-
+  Stack Level: Global
+
+  It's suggested to apply this component to all accounts from which Datadog receives logs.
+
+  Example Atmos snippet:
```yaml
components:
@@ -50,89 +37,17 @@ description: |-
workspace_enabled: true
vars:
enabled: true
- # additional_query_tags:
- # - "forwardername:*-dev-datadog-lambda-forwarder-logs"
- # - "account:123456789012"
+ # additional_query_tags:
+ # - "forwardername:*-dev-datadog-lambda-forwarder-logs"
+ # - "account:123456789012"
```
-
- ## Requirements
-
- | Name | Version |
- | --------- | --------- |
- | terraform | >= 0.13.0 |
- | aws | >= 2.0 |
- | datadog | >= 3.3.0 |
- | local | >= 1.3 |
-
- ## Providers
-
- | Name | Version |
- | ------- | -------- |
- | aws | >= 2.0 |
- | datadog | >= 3.7.0 |
- | http | >= 2.1.0 |
-
- ## Modules
-
- | Name | Source | Version |
- | -------------------- | ----------------------------------- | ------- |
- | cloudtrail | cloudposse/cloudtrail/aws | 0.21.0 |
- | cloudtrail_s3_bucket | cloudposse/cloudtrail-s3-bucket/aws | 0.23.1 |
- | iam_roles | ../account-map/modules/iam-roles | n/a |
- | s3_bucket | cloudposse/s3-bucket/aws | 0.46.0 |
- | this | cloudposse/label/null | 0.25.0 |
-
- ## Resources
-
- | Name | Type |
- | --------------------------------------- | ----------- |
- | aws_caller_identity.current | data source |
- | aws_partition.current | data source |
- | aws_ssm_parameter.datadog_api_key | data source |
- | aws_ssm_parameter.datadog_app_key | data source |
- | aws_ssm_parameter.datadog_aws_role_name | data source |
- | aws_ssm_parameter.datadog_external_id | data source |
- | datadog_logs_archive.catchall_archive | resource |
- | datadog_logs_archive.logs_archive | resource |
- | http.current_order | data source |
-
- ## Inputs
-
- | Name | Description | Type | Default | Required |
- | --------------------------- | ----------------------------------------------------------------------------------------------------------------------- | -------- | ------------ | ---------------- |
- | additional_query_tags | Additional tags to include in query for logs for this archive | `list` | [] | no |
- | catchall | Set to true to enable a catchall for logs unmatched by any queries. This should only be used in one environment/account | `bool` | false | no |
- | datadog_aws_account_id | The AWS account ID Datadog's integration servers use for all integrations | `string` | 464622532012 | no |
- | enable_glacier_transition | Enable/disable transition to glacier. Has no effect unless `lifecycle_rules_enabled` set to true | `bool` | true | no |
- | glacier_transition_days | Number of days after which to transition objects to glacier storage | `number` | 365 | no |
- | lifecycle_rules_enabled | Enable/disable lifecycle management rules for s3 objects | `bool` | true | no |
- | object_lock_days_archive | Set duration of archive bucket object lock | `number` | 7 | yes |
- | object_lock_days_cloudtrail | Set duration of cloudtrail bucket object lock | `number` | 7 | yes |
- | object_lock_mode_archive | Set mode of archive bucket object lock | `string` | COMPLIANCE | yes |
- | object_lock_mode_cloudtrail | Set mode of cloudtrail bucket object lock | `string` | COMPLIANCE | yes |
- | s3_force_destroy | Set to true to delete non-empty buckets when `enabled` is set to false | `bool` | false | for destroy only |
-
- ## Outputs
-
- | Name | Description |
- | ----------------------------- | ----------------------------------------------------------- |
- | archive_id | The ID of the environment-specific log archive |
- | bucket_arn | The ARN of the bucket used for log archive storage |
- | bucket_domain_name | The FQDN of the bucket used for log archive storage |
- | bucket_id | The ID (name) of the bucket used for log archive storage |
- | bucket_region | The region of the bucket used for log archive storage |
- | cloudtrail_bucket_arn | The ARN of the bucket used for cloudtrail log storage |
- | cloudtrail_bucket_domain_name | The FQDN of the bucket used for cloudtrail log storage |
- | cloudtrail_bucket_id | The ID (name) of the bucket used for cloudtrail log storage |
- | catchall_id | The ID of the catchall log archive |
-
- ## References
-
- - [cloudposse/s3-bucket/aws](https://registry.terraform.io/modules/cloudposse/s3-bucket/aws/latest) - Cloud Posse's S3
- component
- - [datadog_logs_archive resource]
- (https://registry.terraform.io/providers/DataDog/datadog/latest/docs/resources/logs_archive) - Datadog's provider
- documentation for the datadog_logs_archive resource
+references:
+ - name: cloudposse/s3-bucket/aws
+ description: "Cloud Posse's S3 component"
+ url: https://registry.terraform.io/modules/cloudposse/s3-bucket/aws/latest
+ - name: datadog_logs_archive resource
+ description: "Datadog's provider documentation for the datadog_logs_archive resource"
+ url: https://registry.terraform.io/providers/DataDog/datadog/latest/docs/resources/logs_archive
tags:
- component/datadog-logs-archive
- layer/datadog
diff --git a/atmos.yaml b/atmos.yaml
new file mode 100644
index 0000000..481c199
--- /dev/null
+++ b/atmos.yaml
@@ -0,0 +1,11 @@
+# Atmos Configuration — powered by https://atmos.tools
+#
+# This configuration enables centralized, DRY, and consistent project scaffolding using Atmos.
+#
+# Included features:
+# - Organizational custom commands: https://atmos.tools/core-concepts/custom-commands
+# - Automated README generation: https://atmos.tools/cli/commands/docs/generate
+#
+# Import shared configuration used by all modules
+import:
+ - https://raw.githubusercontent.com/cloudposse-terraform-components/.github/refs/heads/main/.github/atmos/terraform-component.yaml
diff --git a/src/README.md b/src/README.md
index def05ca..14c7f97 100644
--- a/src/README.md
+++ b/src/README.md
@@ -8,43 +8,31 @@ tags:
# Component: `datadog-logs-archive`
-This component is responsible for provisioning Datadog Log Archives. It creates a single log archive pipeline for each
-AWS account. If the `catchall` flag is set, it creates a catchall archive within the same S3 bucket.
+This component provisions Datadog Log Archives. It creates a single log archive pipeline for each AWS account. If the `catchall` flag is set, it creates a catchall archive within the same S3 bucket.
-Each log archive filters for the tag `env:$env` where $env is the environment/account name (ie sbx, prd, tools, etc), as
-well as any tags identified in the additional_tags key. The `catchall` archive, as the name implies, filters for '\*'.
+Each log archive filters for the tag `env:$env` where `$env` is the environment/account name (e.g. `sbx`, `prd`, `tools`), as well as any tags identified in the `additional_query_tags` key. The `catchall` archive, as the name implies, filters for `*`.
-A second bucket is created for cloudtrail, and a cloudtrail is configured to monitor the log archive bucket and log
-activity to the cloudtrail bucket. To forward these cloudtrail logs to datadog, the cloudtrail bucket's id must be added
-to the s3_buckets key for our datadog-lambda-forwarder component.
+A second bucket is created for CloudTrail, and a CloudTrail is configured to monitor the log archive bucket and log activity to the CloudTrail bucket. To forward these CloudTrail logs to Datadog, the CloudTrail bucket's ID must be added to the `s3_buckets` key for our `datadog-lambda-forwarder` component.
-Both buckets support object lock, with overridable defaults of COMPLIANCE mode with a duration of 7 days.
+Both buckets support object lock, with overridable defaults of COMPLIANCE mode and a duration of 7 days.
-## Prerequisites
-
-- Datadog integration set up in target environment
- - We rely on the datadog api and app keys added by our datadog integration component
-
-## Issues, Gotchas, Good-to-Knows
-
-### Destroy/reprovision process
-
-Because of the protections for S3 buckets, if we want to destroy/replace our bucket, we need to do so in two passes or
-destroy the bucket manually and then use terraform to clean up the rest. If reprovisioning a recently provisioned
-bucket, the two-pass process works well. If the bucket has a full day or more of logs, though, deleting it manually
-first will avoid terraform timeouts, and then the terraform process can be used to clean up everything else.
-
-#### Two step process to destroy via terraform
-
-- first set `s3_force_destroy` var to true and apply
-- next set `enabled` to false and apply or use tf destroy
+Prerequisites
+- Datadog integration set up in the target environment
+ - Relies on the Datadog API and App keys added by our Datadog integration component
+
+Issues, Gotchas, Good-to-Knows
+- Destroy/reprovision process
+ - Because of the protections for S3 buckets, destroying/replacing the bucket may require two passes or a manual bucket delete followed by Terraform cleanup. If the bucket has a full day or more of logs, deleting it manually first helps avoid Terraform timeouts.
+ - Two-step process to destroy via Terraform:
+ 1) Set `s3_force_destroy` to `true` and apply
+ 2) Set `enabled` to `false` and apply, or run `terraform destroy`
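+
+For example, with Atmos (hypothetical stack name):
+
+```sh
+# Pass 1: in the stack YAML, set s3_force_destroy: true, then apply
+atmos terraform apply datadog-logs-archive -s acme-gbl-prd
+
+# Pass 2: set enabled: false and apply, or destroy outright
+atmos terraform destroy datadog-logs-archive -s acme-gbl-prd
+```
+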
## Usage
-**Stack Level**: Global
+Stack Level: Global
-Here's an example snippet for how to use this component. It's suggested to apply this component to all accounts from
-which Datadog receives logs.
+It's suggested to apply this component to all accounts from which Datadog receives logs.
+
+Example Atmos snippet:
```yaml
components:
@@ -55,93 +43,13 @@ components:
workspace_enabled: true
vars:
enabled: true
- # additional_query_tags:
- # - "forwardername:*-dev-datadog-lambda-forwarder-logs"
- # - "account:123456789012"
+ # additional_query_tags:
+ # - "forwardername:*-dev-datadog-lambda-forwarder-logs"
+ # - "account:123456789012"
```
-## Requirements
-
-| Name | Version |
-| --------- | --------- |
-| terraform | >= 0.13.0 |
-| aws | >= 2.0 |
-| datadog | >= 3.3.0 |
-| local | >= 1.3 |
-
-## Providers
-
-| Name | Version |
-| ------- | -------- |
-| aws | >= 2.0 |
-| datadog | >= 3.7.0 |
-| http | >= 2.1.0 |
-
-## Modules
-
-| Name | Source | Version |
-| -------------------- | ----------------------------------- | ------- |
-| cloudtrail | cloudposse/cloudtrail/aws | 0.21.0 |
-| cloudtrail_s3_bucket | cloudposse/cloudtrail-s3-bucket/aws | 0.23.1 |
-| iam_roles | ../account-map/modules/iam-roles | n/a |
-| s3_bucket | cloudposse/s3-bucket/aws | 0.46.0 |
-| this | cloudposse/label/null | 0.25.0 |
-
-## Resources
-
-| Name | Type |
-| --------------------------------------- | ----------- |
-| aws_caller_identity.current | data source |
-| aws_partition.current | data source |
-| aws_ssm_parameter.datadog_api_key | data source |
-| aws_ssm_parameter.datadog_app_key | data source |
-| aws_ssm_parameter.datadog_aws_role_name | data source |
-| aws_ssm_parameter.datadog_external_id | data source |
-| datadog_logs_archive.catchall_archive | resource |
-| datadog_logs_archive.logs_archive | resource |
-| http.current_order | data source |
-
-## Inputs
-
-| Name | Description | Type | Default | Required |
-| --------------------------- | ----------------------------------------------------------------------------------------------------------------------- | -------- | ------------ | ---------------- |
-| additional_query_tags | Additional tags to include in query for logs for this archive | `list` | [] | no |
-| catchall | Set to true to enable a catchall for logs unmatched by any queries. This should only be used in one environment/account | `bool` | false | no |
-| datadog_aws_account_id | The AWS account ID Datadog's integration servers use for all integrations | `string` | 464622532012 | no |
-| enable_glacier_transition | Enable/disable transition to glacier. Has no effect unless `lifecycle_rules_enabled` set to true | `bool` | true | no |
-| glacier_transition_days | Number of days after which to transition objects to glacier storage | `number` | 365 | no |
-| lifecycle_rules_enabled | Enable/disable lifecycle management rules for s3 objects | `bool` | true | no |
-| object_lock_days_archive | Set duration of archive bucket object lock | `number` | 7 | yes |
-| object_lock_days_cloudtrail | Set duration of cloudtrail bucket object lock | `number` | 7 | yes |
-| object_lock_mode_archive | Set mode of archive bucket object lock | `string` | COMPLIANCE | yes |
-| object_lock_mode_cloudtrail | Set mode of cloudtrail bucket object lock | `string` | COMPLIANCE | yes |
-| s3_force_destroy | Set to true to delete non-empty buckets when `enabled` is set to false | `bool` | false | for destroy only |
-## Outputs
-
-| Name | Description |
-| ----------------------------- | ----------------------------------------------------------- |
-| archive_id | The ID of the environment-specific log archive |
-| bucket_arn | The ARN of the bucket used for log archive storage |
-| bucket_domain_name | The FQDN of the bucket used for log archive storage |
-| bucket_id | The ID (name) of the bucket used for log archive storage |
-| bucket_region | The region of the bucket used for log archive storage |
-| cloudtrail_bucket_arn | The ARN of the bucket used for cloudtrail log storage |
-| cloudtrail_bucket_domain_name | The FQDN of the bucket used for cloudtrail log storage |
-| cloudtrail_bucket_id | The ID (name) of the bucket used for cloudtrail log storage |
-| catchall_id | The ID of the catchall log archive |
-
-## References
-
-- [cloudposse/s3-bucket/aws](https://registry.terraform.io/modules/cloudposse/s3-bucket/aws/latest) - Cloud Posse's S3
- component
-- [datadog_logs_archive resource]
- (https://registry.terraform.io/providers/DataDog/datadog/latest/docs/resources/logs_archive) - Datadog's provider
- documentation for the datadog_logs_archive resource
-
-[
-](https://cpco.io/homepage?utm_source=github&utm_medium=readme&utm_campaign=cloudposse-terraform-components/aws-datadog-logs-archive&utm_content=)
-
-
+
## Requirements
| Name | Version |
@@ -231,4 +139,19 @@ components:
| [cloudtrail\_bucket\_arn](#output\_cloudtrail\_bucket\_arn) | The ARN of the bucket used for access logging via cloudtrail |
| [cloudtrail\_bucket\_domain\_name](#output\_cloudtrail\_bucket\_domain\_name) | The FQDN of the bucket used for access logging via cloudtrail |
| [cloudtrail\_bucket\_id](#output\_cloudtrail\_bucket\_id) | The ID (name) of the bucket used for access logging via cloudtrail |
-
+
+## References
+
+
+- [cloudposse/s3-bucket/aws](https://registry.terraform.io/modules/cloudposse/s3-bucket/aws/latest) - Cloud Posse's S3 component
+
+- [datadog_logs_archive resource](https://registry.terraform.io/providers/DataDog/datadog/latest/docs/resources/logs_archive) - Datadog's provider documentation for the datadog_logs_archive resource
+
+[
+](https://cpco.io/homepage?utm_source=github&utm_medium=readme&utm_campaign=cloudposse-terraform-components/aws-datadog-logs-archive&utm_content=)
+