2 changes: 2 additions & 0 deletions .gitignore
@@ -7,6 +7,8 @@ aws-assumed-role/
*.iml
.direnv
.envrc
.cache
.atmos

# Compiled and auto-generated files
# Note that the leading "**/" appears necessary for Docker even if not for Git
35 changes: 35 additions & 0 deletions AGENTS.md
@@ -0,0 +1,35 @@
# Repository Guidelines

## Project Structure & Module Organization
- `src/`: Terraform component (`main.tf`, `variables.tf`, `outputs.tf`, `providers.tf`, `versions.tf`, `context.tf`). This is the source of truth.
- `test/`: Go Terratest suite using Atmos fixtures (`component_test.go`, `fixtures/`, `test_suite.yaml`). Tests deploy/destroy real AWS resources.
- `README.yaml`: Source for the generated `README.md` (via atmos + terraform-docs).
- `.github/`: CI/CD, Renovate/Dependabot, labels, and automerge settings.
- `docs/`: Project docs (if any). Keep lightweight and current.

## Build, Test, and Development Commands
- To install Atmos, follow the docs at https://github.com/cloudposse/atmos
- `atmos docs generate readme`: Regenerate `README.md` from `README.yaml` and terraform source.
- `atmos docs generate readme-simple`: Regenerate `src/README.md` from `README.yaml` and terraform source.
- `atmos test run`: Run Terratest suite in `test/` (uses Atmos fixtures; creates and destroys AWS resources).
- Pre-commit locally: `pre-commit install && pre-commit run -a` (runs `terraform_fmt`, `terraform_docs`, `tflint`; see the config sketch after this list).
- TFLint plugin setup: `tflint --init` (uses `.tflint.hcl`).
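
A minimal sketch of the pre-commit setup, assuming the `antonbabenko/pre-commit-terraform` hook repo; the pinned `rev` is illustrative, so check this repo's actual `.pre-commit-config.yaml`:

```yaml
# Hypothetical .pre-commit-config.yaml
repos:
  - repo: https://github.com/antonbabenko/pre-commit-terraform
    rev: v1.96.1  # illustrative pin; use the version this repo tracks
    hooks:
      - id: terraform_fmt
      - id: terraform_docs
      - id: terraform_tflint  # tflint, run via pre-commit-terraform
```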

## Coding Style & Naming Conventions
- Indentation: Terraform 2 spaces; YAML/Markdown 2 spaces.
- Terraform: prefer lower_snake_case for variables/locals; keep resources/data sources descriptive and aligned with Cloud Posse null-label patterns.
- Lint/format: `terraform fmt -recursive`, TFLint rules per `.tflint.hcl`. Do not commit formatting or lint violations.

## Testing Guidelines
- Framework: Go Terratest with `github.com/cloudposse/test-helpers` and `atmos` fixtures.
- Location/naming: put tests in `test/` and name files `*_test.go`. Add scenarios under `test/fixtures/stacks/catalog/usecase/` (see the sketch after this list).
- Run: `atmos test run`. Ensure AWS credentials are configured; tests may incur AWS costs and will clean up after themselves.
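
A minimal fixture sketch, assuming the `target` component alias that Cloud Posse test fixtures conventionally use; the file name and vars are illustrative:

```yaml
# test/fixtures/stacks/catalog/usecase/basic.yaml (hypothetical)
components:
  terraform:
    target:
      metadata:
        component: target
      vars:
        enabled: true
```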

## Commit & Pull Request Guidelines
- Commits: follow Conventional Commits (e.g., `feat:`, `fix:`, `chore(deps):`, `docs:`). Keep messages concise and scoped.
- PRs: include a clear description, linked issues, and any behavioral changes. Update `README.yaml` when inputs/outputs change and run `atmos docs generate readme`.
- CI: ensure pre-commit, TFLint, and tests pass. Avoid unrelated changes in the same PR.

## Security & Configuration Tips
- Never commit secrets. Configure AWS credentials/role assumption externally; the provider setup in `src/providers.tf` supports role assumption via the `iam_roles` module.
- Global quotas must be applied in `us-east-1`; place in the `gbl` stack and set `region: us-east-1` in `vars`.
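
A hedged sketch of that global placement; the stack file path and component instance name are illustrative:

```yaml
# e.g. stacks/orgs/acme/plat/gbl.yaml (hypothetical path)
components:
  terraform:
    datadog-logs-archive:
      vars:
        region: us-east-1
        enabled: true
```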
8 changes: 0 additions & 8 deletions Makefile

This file was deleted.

266 changes: 162 additions & 104 deletions README.md

Large diffs are not rendered by default.

139 changes: 27 additions & 112 deletions README.yaml
@@ -3,43 +3,30 @@ name: "aws-datadog-logs-archive"
github_repo: "cloudposse-terraform-components/aws-datadog-logs-archive"
# Short description of this project
description: |-
This component is responsible for provisioning Datadog Log Archives. It creates a single log archive pipeline for each
AWS account. If the `catchall` flag is set, it creates a catchall archive within the same S3 bucket.
This component provisions Datadog Log Archives. It creates a single log archive pipeline for each AWS account. If the `catchall` flag is set, it creates a catchall archive within the same S3 bucket.

Each log archive filters for the tag `env:$env` where $env is the environment/account name (ie sbx, prd, tools, etc), as
well as any tags identified in the additional_tags key. The `catchall` archive, as the name implies, filters for '\*'.
Each log archive filters for the tag `env:$env` where `$env` is the environment/account name (e.g. `sbx`, `prd`, `tools`), as well as any tags identified in the `additional_query_tags` key. The `catchall` archive, as the name implies, filters for `*`.

A second bucket is created for cloudtrail, and a cloudtrail is configured to monitor the log archive bucket and log
activity to the cloudtrail bucket. To forward these cloudtrail logs to datadog, the cloudtrail bucket's id must be added
to the s3_buckets key for our datadog-lambda-forwarder component.
A second bucket is created for CloudTrail, and a CloudTrail is configured to monitor the log archive bucket and log activity to the CloudTrail bucket. To forward these CloudTrail logs to Datadog, the CloudTrail bucket's ID must be added to the `s3_buckets` key for our `datadog-lambda-forwarder` component.
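
  For example, a minimal sketch of that forwarder configuration; the bucket ID is a hypothetical placeholder:

  ```yaml
  components:
    terraform:
      datadog-lambda-forwarder:
        vars:
          s3_buckets:
            - acme-ue1-prod-cloudtrail  # hypothetical CloudTrail bucket ID (name)
  ```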

Both buckets support object lock, with overridable defaults of COMPLIANCE mode with a duration of 7 days.
Both buckets support object lock, with overridable defaults of COMPLIANCE mode and a duration of 7 days.

## Prerequisites
Prerequisites
- Datadog integration set up in the target environment
- Relies on the Datadog API and App keys added by our Datadog integration component

- Datadog integration set up in target environment
- We rely on the datadog api and app keys added by our datadog integration component
Issues, Gotchas, Good-to-Knows
- Destroy/reprovision process
- Because of the protections for S3 buckets, destroying/replacing the bucket may require two passes or a manual bucket delete followed by Terraform cleanup. If the bucket has a full day or more of logs, deleting it manually first helps avoid Terraform timeouts.
- Two-step process to destroy via Terraform (see the sketch after this list):
1) Set `s3_force_destroy` to `true` and apply
2) Set `enabled` to `false` and apply, or run `terraform destroy`
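
  A minimal sketch of the two passes as stack vars; the component instance name is illustrative:

  ```yaml
  # Pass 1 -- allow deletion of non-empty buckets, then apply:
  components:
    terraform:
      datadog-logs-archive:
        vars:
          s3_force_destroy: true
  # Pass 2 -- set `enabled: false` and apply, or run `terraform destroy`.
  ```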
usage: |-
Stack Level: Global

## Issues, Gotchas, Good-to-Knows
It's suggested to apply this component to all accounts from which Datadog receives logs.

### Destroy/reprovision process

Because of the protections for S3 buckets, if we want to destroy/replace our bucket, we need to do so in two passes or
destroy the bucket manually and then use terraform to clean up the rest. If reprovisioning a recently provisioned
bucket, the two-pass process works well. If the bucket has a full day or more of logs, though, deleting it manually
first will avoid terraform timeouts, and then the terraform process can be used to clean up everything else.

#### Two step process to destroy via terraform

- first set `s3_force_destroy` var to true and apply
- next set `enabled` to false and apply or use tf destroy

## Usage

**Stack Level**: Global

Here's an example snippet for how to use this component. It's suggested to apply this component to all accounts from
which Datadog receives logs.
Example Atmos snippet:

```yaml
components:
@@ -50,89 +37,17 @@ description: |-
workspace_enabled: true
vars:
enabled: true
# additional_query_tags:
# - "forwardername:*-dev-datadog-lambda-forwarder-logs"
# - "account:123456789012"
# additional_query_tags:
# - "forwardername:*-dev-datadog-lambda-forwarder-logs"
# - "account:123456789012"
```

## Requirements

| Name | Version |
| --------- | --------- |
| terraform | >= 0.13.0 |
| aws | >= 2.0 |
| datadog | >= 3.3.0 |
| local | >= 1.3 |

## Providers

| Name | Version |
| ------- | -------- |
| aws | >= 2.0 |
| datadog | >= 3.7.0 |
| http | >= 2.1.0 |

## Modules

| Name | Source | Version |
| -------------------- | ----------------------------------- | ------- |
| cloudtrail | cloudposse/cloudtrail/aws | 0.21.0 |
| cloudtrail_s3_bucket | cloudposse/cloudtrail-s3-bucket/aws | 0.23.1 |
| iam_roles | ../account-map/modules/iam-roles | n/a |
| s3_bucket | cloudposse/s3-bucket/aws | 0.46.0 |
| this | cloudposse/label/null | 0.25.0 |

## Resources

| Name | Type |
| --------------------------------------- | ----------- |
| aws_caller_identity.current | data source |
| aws_partition.current | data source |
| aws_ssm_parameter.datadog_api_key | data source |
| aws_ssm_parameter.datadog_app_key | data source |
| aws_ssm_parameter.datadog_aws_role_name | data source |
| aws_ssm_parameter.datadog_external_id | data source |
| datadog_logs_archive.catchall_archive | resource |
| datadog_logs_archive.logs_archive | resource |
| http.current_order | data source |

## Inputs

| Name | Description | Type | Default | Required |
| --------------------------- | ----------------------------------------------------------------------------------------------------------------------- | -------- | ------------ | ---------------- |
| additional_query_tags | Additional tags to include in query for logs for this archive | `list` | [] | no |
| catchall | Set to true to enable a catchall for logs unmatched by any queries. This should only be used in one environment/account | `bool` | false | no |
| datadog_aws_account_id | The AWS account ID Datadog's integration servers use for all integrations | `string` | 464622532012 | no |
| enable_glacier_transition | Enable/disable transition to glacier. Has no effect unless `lifecycle_rules_enabled` set to true | `bool` | true | no |
| glacier_transition_days | Number of days after which to transition objects to glacier storage | `number` | 365 | no |
| lifecycle_rules_enabled | Enable/disable lifecycle management rules for s3 objects | `bool` | true | no |
| object_lock_days_archive | Set duration of archive bucket object lock | `number` | 7 | yes |
| object_lock_days_cloudtrail | Set duration of cloudtrail bucket object lock | `number` | 7 | yes |
| object_lock_mode_archive | Set mode of archive bucket object lock | `string` | COMPLIANCE | yes |
| object_lock_mode_cloudtrail | Set mode of cloudtrail bucket object lock | `string` | COMPLIANCE | yes |
| s3_force_destroy | Set to true to delete non-empty buckets when `enabled` is set to false | `bool` | false | for destroy only |

## Outputs

| Name | Description |
| ----------------------------- | ----------------------------------------------------------- |
| archive_id | The ID of the environment-specific log archive |
| bucket_arn | The ARN of the bucket used for log archive storage |
| bucket_domain_name | The FQDN of the bucket used for log archive storage |
| bucket_id | The ID (name) of the bucket used for log archive storage |
| bucket_region | The region of the bucket used for log archive storage |
| cloudtrail_bucket_arn | The ARN of the bucket used for cloudtrail log storage |
| cloudtrail_bucket_domain_name | The FQDN of the bucket used for cloudtrail log storage |
| cloudtrail_bucket_id | The ID (name) of the bucket used for cloudtrail log storage |
| catchall_id | The ID of the catchall log archive |

## References

- [cloudposse/s3-bucket/aws](https://registry.terraform.io/modules/cloudposse/s3-bucket/aws/latest) - Cloud Posse's S3 component
- [datadog_logs_archive resource](https://registry.terraform.io/providers/DataDog/datadog/latest/docs/resources/logs_archive) - Datadog's provider documentation for the datadog_logs_archive resource
references:
- name: cloudposse/s3-bucket/aws
description: "Cloud Posse's S3 component"
url: https://registry.terraform.io/modules/cloudposse/s3-bucket/aws/latest
- name: datadog_logs_archive resource
description: "Datadog's provider documentation for the datadog_logs_archive resource"
url: https://registry.terraform.io/providers/DataDog/datadog/latest/docs/resources/logs_archive
tags:
- component/datadog-logs-archive
- layer/datadog
11 changes: 11 additions & 0 deletions atmos.yaml
@@ -0,0 +1,11 @@
# Atmos Configuration — powered by https://atmos.tools
#
# This configuration enables centralized, DRY, and consistent project scaffolding using Atmos.
#
# Included features:
# - Organizational custom commands: https://atmos.tools/core-concepts/custom-commands
# - Automated README generation: https://atmos.tools/cli/commands/docs/generate
#
# Import shared configuration used by all modules
import:
- https://raw.githubusercontent.com/cloudposse-terraform-components/.github/refs/heads/main/.github/atmos/terraform-component.yaml