
Commit d4cf73b

Migrate README generation to atmos (#63)
* chore: Migrate README generation to atmos
* Update README.yaml
1 parent 8263673 commit d4cf73b

File tree

7 files changed: +274, -338 lines changed


.gitignore

Lines changed: 2 additions & 0 deletions
@@ -7,6 +7,8 @@ aws-assumed-role/
 *.iml
 .direnv
 .envrc
+.cache
+.atmos

 # Compiled and auto-generated files
 # Note that the leading "**/" appears necessary for Docker even if not for Git

AGENTS.md

Lines changed: 35 additions & 0 deletions
@@ -0,0 +1,35 @@
+# Repository Guidelines
+
+## Project Structure & Module Organization
+- `src/`: Terraform component (`main.tf`, `variables.tf`, `outputs.tf`, `providers.tf`, `versions.tf`, `context.tf`). This is the source of truth.
+- `test/`: Go Terratest suite using Atmos fixtures (`component_test.go`, `fixtures/`, `test_suite.yaml`). Tests deploy/destroy real AWS resources.
+- `README.yaml`: Source for the generated `README.md` (via atmos + terraform-docs).
+- `.github/`: CI/CD, Renovate/Dependabot, labels, and automerge settings.
+- `docs/`: Project docs (if any). Keep lightweight and current.
+
+## Build, Test, and Development Commands
+- To install atmos, see the docs: https://github.com/cloudposse/atmos
+- `atmos docs generate readme`: Regenerate `README.md` from `README.yaml` and the Terraform source.
+- `atmos docs generate readme-simple`: Regenerate `src/README.md` from `README.yaml` and the Terraform source.
+- `atmos test run`: Run the Terratest suite in `test/` (uses Atmos fixtures; creates and destroys AWS resources).
+- Pre-commit locally: `pre-commit install && pre-commit run -a` (runs `terraform_fmt`, `terraform_docs`, `tflint`).
+- TFLint plugin setup: `tflint --init` (uses `.tflint.hcl`).
+
+## Coding Style & Naming Conventions
+- Indentation: Terraform 2 spaces; YAML/Markdown 2 spaces.
+- Terraform: prefer lower_snake_case for variables/locals; keep resources/data sources descriptive and aligned with Cloud Posse null-label patterns.
+- Lint/format: `terraform fmt -recursive`, TFLint rules per `.tflint.hcl`. Do not commit formatting or lint violations.
+
+## Testing Guidelines
+- Framework: Go Terratest with `github.com/cloudposse/test-helpers` and `atmos` fixtures.
+- Location/naming: put tests in `test/` and name files `*_test.go`. Add scenarios under `test/fixtures/stacks/catalog/usecase/`.
+- Run: `atmos test run`. Ensure AWS credentials are configured; tests may incur AWS costs and will clean up after themselves.
+
+## Commit & Pull Request Guidelines
+- Commits: follow Conventional Commits (e.g., `feat:`, `fix:`, `chore(deps):`, `docs:`). Keep messages concise and scoped.
+- PRs: include a clear description, linked issues, and any behavioral changes. Update `README.yaml` when inputs/outputs change and run `atmos docs generate readme`.
+- CI: ensure pre-commit, TFLint, and tests pass. Avoid unrelated changes in the same PR.
+
+## Security & Configuration Tips
+- Never commit secrets. Configure AWS credentials/role assumption externally; the provider setup in `src/providers.tf` supports role assumption via the `iam_roles` module.
+- Global quotas must be applied in `us-east-1`; place in the `gbl` stack and set `region: us-east-1` in `vars`.
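
The build commands above wire local linting through pre-commit hooks (`terraform_fmt`, `terraform_docs`, `tflint`). As a rough, non-authoritative sketch of what that setup can look like (the hook source, the pinned `rev`, and the `terraform_tflint` hook id are assumptions, not part of this commit):

```yaml
# Hypothetical .pre-commit-config.yaml sketch; the pinned version and hook ids are assumed
repos:
  - repo: https://github.com/antonbabenko/pre-commit-terraform
    rev: v1.96.1  # assumed pin; use whatever version the repo standardizes on
    hooks:
      - id: terraform_fmt     # formats Terraform under src/
      - id: terraform_docs    # regenerates module docs
      - id: terraform_tflint  # runs TFLint against the .tflint.hcl rules
```

With a config along those lines, `pre-commit install && pre-commit run -a` exercises the same checks the guidelines describe.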

Makefile

Lines changed: 0 additions & 8 deletions
This file was deleted.

README.md

Lines changed: 162 additions & 104 deletions
Generated file; diff not rendered.

README.yaml

Lines changed: 27 additions & 112 deletions
@@ -3,43 +3,30 @@ name: "aws-datadog-logs-archive"
 github_repo: "cloudposse-terraform-components/aws-datadog-logs-archive"

 # Short description of this project
 description: |-
-  This component is responsible for provisioning Datadog Log Archives. It creates a single log archive pipeline for each
-  AWS account. If the `catchall` flag is set, it creates a catchall archive within the same S3 bucket.
+  This component provisions Datadog Log Archives. It creates a single log archive pipeline for each AWS account. If the `catchall` flag is set, it creates a catchall archive within the same S3 bucket.

-  Each log archive filters for the tag `env:$env` where $env is the environment/account name (ie sbx, prd, tools, etc), as
-  well as any tags identified in the additional_tags key. The `catchall` archive, as the name implies, filters for '\*'.
+  Each log archive filters for the tag `env:$env` where `$env` is the environment/account name (e.g. `sbx`, `prd`, `tools`), as well as any tags identified in the `additional_query_tags` key. The `catchall` archive, as the name implies, filters for `*`.

-  A second bucket is created for cloudtrail, and a cloudtrail is configured to monitor the log archive bucket and log
-  activity to the cloudtrail bucket. To forward these cloudtrail logs to datadog, the cloudtrail bucket's id must be added
-  to the s3_buckets key for our datadog-lambda-forwarder component.
+  A second bucket is created for CloudTrail, and a CloudTrail is configured to monitor the log archive bucket and log activity to the CloudTrail bucket. To forward these CloudTrail logs to Datadog, the CloudTrail bucket's ID must be added to the `s3_buckets` key for our `datadog-lambda-forwarder` component.

-  Both buckets support object lock, with overridable defaults of COMPLIANCE mode with a duration of 7 days.
+  Both buckets support object lock, with overridable defaults of COMPLIANCE mode and a duration of 7 days.

-  ## Prerequisites
+  Prerequisites
+  - Datadog integration set up in the target environment
+  - Relies on the Datadog API and App keys added by our Datadog integration component

-  - Datadog integration set up in target environment
-  - We rely on the datadog api and app keys added by our datadog integration component
+  Issues, Gotchas, Good-to-Knows
+  - Destroy/reprovision process
+    - Because of the protections for S3 buckets, destroying/replacing the bucket may require two passes or a manual bucket delete followed by Terraform cleanup. If the bucket has a full day or more of logs, deleting it manually first helps avoid Terraform timeouts.
+  - Two-step process to destroy via Terraform:
+    1) Set `s3_force_destroy` to `true` and apply
+    2) Set `enabled` to `false` and apply, or run `terraform destroy`
+usage: |-
+  Stack Level: Global

-  ## Issues, Gotchas, Good-to-Knows
+  It's suggested to apply this component to all accounts from which Datadog receives logs.

-  ### Destroy/reprovision process
-
-  Because of the protections for S3 buckets, if we want to destroy/replace our bucket, we need to do so in two passes or
-  destroy the bucket manually and then use terraform to clean up the rest. If reprovisioning a recently provisioned
-  bucket, the two-pass process works well. If the bucket has a full day or more of logs, though, deleting it manually
-  first will avoid terraform timeouts, and then the terraform process can be used to clean up everything else.
-
-  #### Two step process to destroy via terraform
-
-  - first set `s3_force_destroy` var to true and apply
-  - next set `enabled` to false and apply or use tf destroy
-
-  ## Usage
-
-  **Stack Level**: Global
-
-  Here's an example snippet for how to use this component. It's suggested to apply this component to all accounts from
-  which Datadog receives logs.
+  Example Atmos snippet:

   ```yaml
   components:
@@ -50,89 +37,17 @@ description: |-
           workspace_enabled: true
         vars:
           enabled: true
-          # additional_query_tags:
-          #   - "forwardername:*-dev-datadog-lambda-forwarder-logs"
-          #   - "account:123456789012"
+          # additional_query_tags:
+          #   - "forwardername:*-dev-datadog-lambda-forwarder-logs"
+          #   - "account:123456789012"
   ```
-
-  ## Requirements
-
-  | Name      | Version   |
-  | --------- | --------- |
-  | terraform | >= 0.13.0 |
-  | aws       | >= 2.0    |
-  | datadog   | >= 3.3.0  |
-  | local     | >= 1.3    |
-
-  ## Providers
-
-  | Name    | Version  |
-  | ------- | -------- |
-  | aws     | >= 2.0   |
-  | datadog | >= 3.7.0 |
-  | http    | >= 2.1.0 |
-
-  ## Modules
-
-  | Name                 | Source                              | Version |
-  | -------------------- | ----------------------------------- | ------- |
-  | cloudtrail           | cloudposse/cloudtrail/aws           | 0.21.0  |
-  | cloudtrail_s3_bucket | cloudposse/cloudtrail-s3-bucket/aws | 0.23.1  |
-  | iam_roles            | ../account-map/modules/iam-roles    | n/a     |
-  | s3_bucket            | cloudposse/s3-bucket/aws            | 0.46.0  |
-  | this                 | cloudposse/label/null               | 0.25.0  |
-
-  ## Resources
-
-  | Name                                     | Type        |
-  | ---------------------------------------- | ----------- |
-  | aws_caller_identity.current              | data source |
-  | aws_partition.current                    | data source |
-  | aws_ssm_parameter.datadog_api_key        | data source |
-  | aws_ssm_parameter.datadog_app_key        | data source |
-  | aws_ssm_parameter.datadog_aws_role_name  | data source |
-  | aws_ssm_parameter.datadog_external_id    | data source |
-  | datadog_logs_archive.catchall_archive    | resource    |
-  | datadog_logs_archive.logs_archive        | resource    |
-  | http.current_order                       | data source |
-
-  ## Inputs
-
-  | Name | Description | Type | Default | Required |
-  | ---- | ----------- | ---- | ------- | -------- |
-  | additional_query_tags | Additional tags to include in query for logs for this archive | `list` | [] | no |
-  | catchall | Set to true to enable a catchall for logs unmatched by any queries. This should only be used in one environment/account | `bool` | false | no |
-  | datadog_aws_account_id | The AWS account ID Datadog's integration servers use for all integrations | `string` | 464622532012 | no |
-  | enable_glacier_transition | Enable/disable transition to glacier. Has no effect unless `lifecycle_rules_enabled` set to true | `bool` | true | no |
-  | glacier_transition_days | Number of days after which to transition objects to glacier storage | `number` | 365 | no |
-  | lifecycle_rules_enabled | Enable/disable lifecycle management rules for s3 objects | `bool` | true | no |
-  | object_lock_days_archive | Set duration of archive bucket object lock | `number` | 7 | yes |
-  | object_lock_days_cloudtrail | Set duration of cloudtrail bucket object lock | `number` | 7 | yes |
-  | object_lock_mode_archive | Set mode of archive bucket object lock | `string` | COMPLIANCE | yes |
-  | object_lock_mode_cloudtrail | Set mode of cloudtrail bucket object lock | `string` | COMPLIANCE | yes |
-  | s3_force_destroy | Set to true to delete non-empty buckets when `enabled` is set to false | `bool` | false | for destroy only |
-
-  ## Outputs
-
-  | Name | Description |
-  | ---- | ----------- |
-  | archive_id | The ID of the environment-specific log archive |
-  | bucket_arn | The ARN of the bucket used for log archive storage |
-  | bucket_domain_name | The FQDN of the bucket used for log archive storage |
-  | bucket_id | The ID (name) of the bucket used for log archive storage |
-  | bucket_region | The region of the bucket used for log archive storage |
-  | cloudtrail_bucket_arn | The ARN of the bucket used for cloudtrail log storage |
-  | cloudtrail_bucket_domain_name | The FQDN of the bucket used for cloudtrail log storage |
-  | cloudtrail_bucket_id | The ID (name) of the bucket used for cloudtrail log storage |
-  | catchall_id | The ID of the catchall log archive |
-
-  ## References
-
-  - [cloudposse/s3-bucket/aws](https://registry.terraform.io/modules/cloudposse/s3-bucket/aws/latest) - Cloud Posse's S3
-    component
-  - [datadog_logs_archive resource](https://registry.terraform.io/providers/DataDog/datadog/latest/docs/resources/logs_archive) - Datadog's provider
-    documentation for the datadog_logs_archive resource
+references:
+  - name: cloudposse/s3-bucket/aws
+    description: "Cloud Posse's S3 component"
+    url: https://registry.terraform.io/modules/cloudposse/s3-bucket/aws/latest
+  - name: datadog_logs_archive resource
+    description: "Datadog's provider documentation for the datadog_logs_archive resource"
+    url: https://registry.terraform.io/providers/DataDog/datadog/latest/docs/resources/logs_archive
 tags:
   - component/datadog-logs-archive
   - layer/datadog

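The usage snippet above only sets `enabled: true`. As a hedged illustration built from the inputs that were removed from the old README tables (the component name `datadog-logs-archive` and the stack layout are assumptions, not shown by this diff), enabling the catchall archive in a single account might look like:

```yaml
# Hypothetical stack configuration; component name and values are assumed
components:
  terraform:
    datadog-logs-archive:
      vars:
        enabled: true
        catchall: true  # per the inputs, only one environment/account should enable this
        additional_query_tags:
          - "forwardername:*-dev-datadog-lambda-forwarder-logs"
          - "account:123456789012"
```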
atmos.yaml

Lines changed: 11 additions & 0 deletions
@@ -0,0 +1,11 @@
+# Atmos Configuration — powered by https://atmos.tools
+#
+# This configuration enables centralized, DRY, and consistent project scaffolding using Atmos.
+#
+# Included features:
+# - Organizational custom commands: https://atmos.tools/core-concepts/custom-commands
+# - Automated README generation: https://atmos.tools/cli/commands/docs/generate
+#
+# Import shared configuration used by all modules
+import:
+  - https://raw.githubusercontent.com/cloudposse-terraform-components/.github/refs/heads/main/.github/atmos/terraform-component.yaml
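
The imported shared configuration is what supplies organization-wide behavior such as the `atmos test run` and README generation commands referenced in `AGENTS.md`. Purely as a hedged sketch of the Atmos custom-command mechanism linked above (not the actual contents of the imported `terraform-component.yaml`), a command of that shape could be declared like this:

```yaml
# Hypothetical Atmos custom command; the real definition lives in the imported file
commands:
  - name: test
    description: Component test helpers
    commands:
      - name: run
        description: Run the Go Terratest suite under test/ against Atmos fixtures
        steps:
          - cd test && go test -v -timeout 2h ./...
```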
