- PRs: include a clear description, linked issues, and any behavioral changes. Update `README.yaml` when inputs/outputs change and run `atmos docs generate readme`.
- CI: ensure pre-commit, TFLint, and tests pass. Avoid unrelated changes in the same PR.

## Security & Configuration Tips

- Never commit secrets. Configure AWS credentials/role assumption externally; the provider setup in `src/providers.tf` supports role assumption via the `iam_roles` module.
- Global quotas must be applied in `us-east-1`; place in the `gbl` stack and set `region: us-east-1` in `vars`.

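For instance, a `gbl` stack manifest might pin the region like this (the file path and component name are illustrative, not part of this repository):

```yaml
# stacks/gbl/<account>.yaml (illustrative path)
components:
  terraform:
    my-global-component:   # hypothetical component name
      vars:
        region: us-east-1
```
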
This component provisions Datadog Log Archives. It creates a single log archive pipeline for each AWS account. If the `catchall` flag is set, it creates a catchall archive within the same S3 bucket.

Each log archive filters for the tag `env:$env`, where `$env` is the environment/account name (e.g. `sbx`, `prd`, `tools`), as well as any tags identified in the `additional_query_tags` key. The `catchall` archive, as the name implies, filters for `*`.

A second bucket is created for CloudTrail, and a CloudTrail is configured to monitor the log archive bucket and log activity to the CloudTrail bucket. To forward these CloudTrail logs to Datadog, the CloudTrail bucket's ID must be added to the `s3_buckets` key for our `datadog-lambda-forwarder` component.

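To illustrate, a `datadog-lambda-forwarder` stack configuration might reference the bucket like this (the bucket name is a placeholder for the actual CloudTrail bucket ID created by this component):

```yaml
components:
  terraform:
    datadog-lambda-forwarder:
      vars:
        s3_buckets:
          - "acme-ue1-prd-cloudtrail"   # placeholder; use the CloudTrail bucket ID output by this component
```
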
Both buckets support object lock, with overridable defaults of COMPLIANCE mode and a duration of 7 days.

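For example, the defaults could be overridden through the component's object lock variables (the values shown are illustrative):

```yaml
vars:
  object_lock_mode_archive: GOVERNANCE
  object_lock_days_archive: 30
  object_lock_mode_cloudtrail: GOVERNANCE
  object_lock_days_cloudtrail: 30
```
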
## Prerequisites

- Datadog integration set up in the target environment
- Relies on the Datadog API and App keys added by our Datadog integration component

## Issues, Gotchas, Good-to-Knows

### Destroy/reprovision process

Because of the protections for S3 buckets, destroying or replacing the bucket requires either two passes or deleting the bucket manually and then using Terraform to clean up the rest. If reprovisioning a recently provisioned bucket, the two-pass process works well. If the bucket has a full day or more of logs, though, deleting it manually first will avoid Terraform timeouts, and Terraform can then be used to clean up everything else.

#### Two-step process to destroy via Terraform

1. Set the `s3_force_destroy` variable to `true` and apply (see the sketch below).
2. Set `enabled` to `false` and apply, or run `terraform destroy`.

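As a sketch, the first pass is just a vars override followed by an apply:

```yaml
# Pass 1: allow deletion of the non-empty buckets on the next apply
vars:
  s3_force_destroy: true
```

Once that apply completes, set `enabled: false` and apply again (or run `terraform destroy`) to remove the remaining resources.
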
## Usage

**Stack Level**: Global

Here's an example snippet for how to use this component. It's suggested to apply this component to all accounts from which Datadog receives logs.

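A minimal sketch, assuming the component is registered as `datadog-logs-archive` in your stacks (the component name and tag value below are illustrative):

```yaml
components:
  terraform:
    datadog-logs-archive:
      vars:
        enabled: true
        # Extra tags to include in this account's archive query (illustrative value)
        additional_query_tags:
          - "source:cloudtrail"
        # Only one environment/account should enable the catchall archive
        catchall: false
```

The object lock and lifecycle variables listed below can be added to the same `vars` block to override their defaults.
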
| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| additional_query_tags | Additional tags to include in query for logs for this archive | `list` | [] | no |
| catchall | Set to true to enable a catchall for logs unmatched by any queries. This should only be used in one environment/account | `bool` | false | no |
| datadog_aws_account_id | The AWS account ID Datadog's integration servers use for all integrations | `string` | 464622532012 | no |
| enable_glacier_transition | Enable/disable transition to Glacier. Has no effect unless `lifecycle_rules_enabled` is set to true | `bool` | true | no |
| glacier_transition_days | Number of days after which to transition objects to Glacier storage | `number` | 365 | no |
| lifecycle_rules_enabled | Enable/disable lifecycle management rules for S3 objects | `bool` | true | no |
| object_lock_days_archive | Set duration of archive bucket object lock | `number` | 7 | yes |
| object_lock_days_cloudtrail | Set duration of CloudTrail bucket object lock | `number` | 7 | yes |
| object_lock_mode_archive | Set mode of archive bucket object lock | `string` | COMPLIANCE | yes |
| object_lock_mode_cloudtrail | Set mode of CloudTrail bucket object lock | `string` | COMPLIANCE | yes |
| s3_force_destroy | Set to true to delete non-empty buckets when `enabled` is set to false | `bool` | false | for destroy only |