diff --git a/.github/banner.png b/.github/banner.png
index c958881..68a542e 100644
Binary files a/.github/banner.png and b/.github/banner.png differ
diff --git a/README.md b/README.md
index f28568e..bd252cf 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,7 @@

-

+
-Description of this component
+This component is responsible for provisioning Datadog Log Archives. It creates a single log archive pipeline for each
+AWS account. If the `catchall` flag is set, it also creates a catchall archive within the same S3 bucket.
+Each log archive filters for the tag `env:$env`, where `$env` is the environment/account name (e.g. sbx, prd, tools), as
+well as any tags identified in the `additional_query_tags` key. The `catchall` archive, as the name implies, filters for `*`.
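+
+For example, in a hypothetical `prd` account with one additional query tag, the resulting archive queries would look
+roughly like the following (the account number, tag values, and exact query syntax are illustrative, not produced
+verbatim by this component):
+
+```yaml
+# Illustrative only: hypothetical archive queries for a `prd` account
+logs_archive_query: "env:prd OR account:123456789012"  # env tag plus any additional_query_tags entries
+catchall_archive_query: "*"                            # created only when `catchall` is true
+```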
----
-> [!NOTE]
-> This project is part of Cloud Posse's comprehensive ["SweetOps"](https://cpco.io/homepage?utm_source=github&utm_medium=readme&utm_campaign=cloudposse-terraform-components/template&utm_content=) approach towards DevOps.
-> Learn More
->
-> It's 100% Open Source and licensed under the [APACHE2](LICENSE).
->
->
+A second bucket is created for CloudTrail, and a trail is configured to monitor the log archive bucket and log
+activity to the CloudTrail bucket. To forward these CloudTrail logs to Datadog, the CloudTrail bucket's ID must be added
+to the `s3_buckets` key for our `datadog-lambda-forwarder` component.
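+
+For example (the bucket name and stack layout are illustrative; confirm the exact variable shape against the
+`datadog-lambda-forwarder` component's own inputs):
+
+```yaml
+components:
+  terraform:
+    datadog-lambda-forwarder:
+      vars:
+        # hypothetical CloudTrail bucket ID, i.e. this component's `cloudtrail_bucket_id` output
+        s3_buckets:
+          - "acme-gbl-prd-datadog-logs-archive-cloudtrail"
+```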
-
+Both buckets support object lock, with overridable defaults of COMPLIANCE mode and a 7-day retention period.
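+
+These defaults can be overridden per stack, for example (values shown are examples only):
+
+```yaml
+components:
+  terraform:
+    datadog-logs-archive:
+      vars:
+        object_lock_mode_archive: COMPLIANCE      # default mode
+        object_lock_days_archive: 30              # example override of the 7-day default
+        object_lock_mode_cloudtrail: COMPLIANCE   # default mode
+        object_lock_days_cloudtrail: 30           # example override of the 7-day default
+```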
+## Prerequisites
+- Datadog integration set up in the target environment
+  - We rely on the Datadog API and app keys added by our `datadog-integration` component (see the sketch below)
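+
+For example, the target account's stack is assumed to already contain something along these lines (the component name
+and settings are assumptions about your existing Datadog setup, not requirements imposed by this component):
+
+```yaml
+components:
+  terraform:
+    datadog-integration:  # assumed component that stores the Datadog API/app keys this component reads
+      vars:
+        enabled: true
+```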
+## Issues, Gotchas, Good-to-Knows
-## Usage
+### Destroy/reprovision process
+
+Because of the protections on the S3 buckets, destroying or replacing a bucket requires either two passes of Terraform,
+or deleting the bucket manually and then using Terraform to clean up the rest. If reprovisioning a recently provisioned
+bucket, the two-pass process works well. If the bucket holds a full day or more of logs, though, delete it manually
+first to avoid Terraform timeouts, then use Terraform to clean up everything else.
+#### Two-step process to destroy via Terraform
+- First, set the `s3_force_destroy` variable to `true` and apply.
+- Next, set `enabled` to `false` and apply, or run `terraform destroy` (see the sketch below).
+
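+Expressed as stack overrides, the two passes look roughly like this (the stack layout is illustrative; apply between
+the passes, and the `---` separator below only marks where the second pass begins):
+
+```yaml
+# Pass 1: keep the component enabled but allow non-empty buckets to be destroyed, then apply
+components:
+  terraform:
+    datadog-logs-archive:
+      vars:
+        enabled: true
+        s3_force_destroy: true
+---
+# Pass 2: disable the component (or run a destroy instead), then apply again
+components:
+  terraform:
+    datadog-logs-archive:
+      vars:
+        enabled: false
+        s3_force_destroy: true
+```
+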
+## Usage
-**Stack Level**: Regional or Global
+**Stack Level**: Global
-Here's an example snippet for how to use this component.
+Here's an example snippet for how to use this component. It's suggested to apply this component to all accounts from
+which Datadog receives logs.
```yaml
components:
terraform:
- foo:
+ datadog-logs-archive:
+ settings:
+ spacelift:
+ workspace_enabled: true
vars:
enabled: true
+ # additional_query_tags:
+ # - "forwardername:*-dev-datadog-lambda-forwarder-logs"
+ # - "account:123456789012"
```
-
-
-
-
-
-
## Requirements
-| Name | Version |
-|------|---------|
-| [terraform](#requirement\_terraform) | >= 1.0.0 |
+| Name | Version |
+| --------- | --------- |
+| terraform | >= 0.13.0 |
+| aws | >= 2.0 |
+| datadog | >= 3.3.0 |
+| local | >= 1.3 |
## Providers
-No providers.
+| Name | Version |
+| ------- | -------- |
+| aws | >= 2.0 |
+| datadog | >= 3.7.0 |
+| http | >= 2.1.0 |
## Modules
-| Name | Source | Version |
-|------|--------|---------|
-| [this](#module\_this) | cloudposse/label/null | 0.25.0 |
+| Name | Source | Version |
+| -------------------- | ----------------------------------- | ------- |
+| cloudtrail | cloudposse/cloudtrail/aws | 0.21.0 |
+| cloudtrail_s3_bucket | cloudposse/cloudtrail-s3-bucket/aws | 0.23.1 |
+| iam_roles | ../account-map/modules/iam-roles | n/a |
+| s3_bucket | cloudposse/s3-bucket/aws | 0.46.0 |
+| this | cloudposse/label/null | 0.25.0 |
## Resources
-No resources.
+| Name | Type |
+| --------------------------------------- | ----------- |
+| aws_caller_identity.current | data source |
+| aws_partition.current | data source |
+| aws_ssm_parameter.datadog_api_key | data source |
+| aws_ssm_parameter.datadog_app_key | data source |
+| aws_ssm_parameter.datadog_aws_role_name | data source |
+| aws_ssm_parameter.datadog_external_id | data source |
+| datadog_logs_archive.catchall_archive | resource |
+| datadog_logs_archive.logs_archive | resource |
+| http.current_order | data source |
## Inputs
-| Name | Description | Type | Default | Required |
-|------|-------------|------|---------|:--------:|
-| [additional\_tag\_map](#input\_additional\_tag\_map) | Additional key-value pairs to add to each map in `tags_as_list_of_maps`. Not added to `tags` or `id`. This is for some rare cases where resources want additional configuration of tags and therefore take a list of maps with tag key, value, and additional configuration. | `map(string)` | `{}` | no |
-| [attributes](#input\_attributes) | ID element. Additional attributes (e.g. `workers` or `cluster`) to add to `id`, in the order they appear in the list. New attributes are appended to the end of the list. The elements of the list are joined by the `delimiter` and treated as a single ID element. | `list(string)` | `[]` | no |
-| [context](#input\_context) | Single object for setting entire context at once. See description of individual variables for details. Leave string and numeric variables as `null` to use default value. Individual variable settings (non-null) override settings in context object, except for attributes, tags, and additional\_tag\_map, which are merged. | `any` | { "additional_tag_map": {}, "attributes": [], "delimiter": null, "descriptor_formats": {}, "enabled": true, "environment": null, "id_length_limit": null, "label_key_case": null, "label_order": [], "label_value_case": null, "labels_as_tags": [ "unset" ], "name": null, "namespace": null, "regex_replace_chars": null, "stage": null, "tags": {}, "tenant": null } | no |
-| [delimiter](#input\_delimiter) | Delimiter to be used between ID elements. Defaults to `-` (hyphen). Set to `""` to use no delimiter at all. | `string` | `null` | no |
-| [descriptor\_formats](#input\_descriptor\_formats) | Describe additional descriptors to be output in the `descriptors` output map. Map of maps. Keys are names of descriptors. Values are maps of the form `{ format = string labels = list(string) }` (Type is `any` so the map values can later be enhanced to provide additional options.) `format` is a Terraform format string to be passed to the `format()` function. `labels` is a list of labels, in order, to pass to `format()` function. Label values will be normalized before being passed to `format()` so they will be identical to how they appear in `id`. Default is `{}` (`descriptors` output will be empty). | `any` | `{}` | no |
-| [enabled](#input\_enabled) | Set to false to prevent the module from creating any resources | `bool` | `null` | no |
-| [environment](#input\_environment) | ID element. Usually used for region e.g. 'uw2', 'us-west-2', OR role 'prod', 'staging', 'dev', 'UAT' | `string` | `null` | no |
-| [id\_length\_limit](#input\_id\_length\_limit) | Limit `id` to this many characters (minimum 6). Set to `0` for unlimited length. Set to `null` for keep the existing setting, which defaults to `0`. Does not affect `id_full`. | `number` | `null` | no |
-| [label\_key\_case](#input\_label\_key\_case) | Controls the letter case of the `tags` keys (label names) for tags generated by this module. Does not affect keys of tags passed in via the `tags` input. Possible values: `lower`, `title`, `upper`. Default value: `title`. | `string` | `null` | no |
-| [label\_order](#input\_label\_order) | The order in which the labels (ID elements) appear in the `id`. Defaults to ["namespace", "environment", "stage", "name", "attributes"]. You can omit any of the 6 labels ("tenant" is the 6th), but at least one must be present. | `list(string)` | `null` | no |
-| [label\_value\_case](#input\_label\_value\_case) | Controls the letter case of ID elements (labels) as included in `id`, set as tag values, and output by this module individually. Does not affect values of tags passed in via the `tags` input. Possible values: `lower`, `title`, `upper` and `none` (no transformation). Set this to `title` and set `delimiter` to `""` to yield Pascal Case IDs. Default value: `lower`. | `string` | `null` | no |
-| [labels\_as\_tags](#input\_labels\_as\_tags) | Set of labels (ID elements) to include as tags in the `tags` output. Default is to include all labels. Tags with empty values will not be included in the `tags` output. Set to `[]` to suppress all generated tags. **Notes:** The value of the `name` tag, if included, will be the `id`, not the `name`. Unlike other `null-label` inputs, the initial setting of `labels_as_tags` cannot be changed in later chained modules. Attempts to change it will be silently ignored. | `set(string)` | [ "default" ] | no |
-| [name](#input\_name) | ID element. Usually the component or solution name, e.g. 'app' or 'jenkins'. This is the only ID element not also included as a `tag`. The "name" tag is set to the full `id` string. There is no tag with the value of the `name` input. | `string` | `null` | no |
-| [namespace](#input\_namespace) | ID element. Usually an abbreviation of your organization name, e.g. 'eg' or 'cp', to help ensure generated IDs are globally unique | `string` | `null` | no |
-| [regex\_replace\_chars](#input\_regex\_replace\_chars) | Terraform regular expression (regex) string. Characters matching the regex will be removed from the ID elements. If not set, `"/[^a-zA-Z0-9-]/"` is used to remove all characters other than hyphens, letters and digits. | `string` | `null` | no |
-| [stage](#input\_stage) | ID element. Usually used to indicate role, e.g. 'prod', 'staging', 'source', 'build', 'test', 'deploy', 'release' | `string` | `null` | no |
-| [tags](#input\_tags) | Additional tags (e.g. `{'BusinessUnit': 'XYZ'}`). Neither the tag keys nor the tag values will be modified by this module. | `map(string)` | `{}` | no |
-| [tenant](#input\_tenant) | ID element \_(Rarely used, not included by default)\_. A customer identifier, indicating who this instance of a resource is for | `string` | `null` | no |
+| Name | Description | Type | Default | Required |
+| --------------------------- | ----------------------------------------------------------------------------------------------------------------------- | -------- | ------------ | ---------------- |
+| additional_query_tags | Additional tags to include in query for logs for this archive | `list` | [] | no |
+| catchall | Set to true to enable a catchall for logs unmatched by any queries. This should only be used in one environment/account | `bool` | false | no |
+| datadog_aws_account_id | The AWS account ID Datadog's integration servers use for all integrations | `string` | 464622532012 | no |
+| enable_glacier_transition | Enable/disable transition to Glacier. Has no effect unless `lifecycle_rules_enabled` is set to `true` | `bool` | true | no |
+| glacier_transition_days | Number of days after which to transition objects to glacier storage | `number` | 365 | no |
+| lifecycle_rules_enabled | Enable/disable lifecycle management rules for s3 objects | `bool` | true | no |
+| object_lock_days_archive | Set duration of archive bucket object lock | `number` | 7 | yes |
+| object_lock_days_cloudtrail | Set duration of cloudtrail bucket object lock | `number` | 7 | yes |
+| object_lock_mode_archive | Set mode of archive bucket object lock | `string` | COMPLIANCE | yes |
+| object_lock_mode_cloudtrail | Set mode of cloudtrail bucket object lock | `string` | COMPLIANCE | yes |
+| s3_force_destroy | Set to true to delete non-empty buckets when `enabled` is set to false | `bool` | false | for destroy only |
## Outputs
-| Name | Description |
-|------|-------------|
-| [mock](#output\_mock) | Mock output example for the Cloud Posse Terraform component template |
-
+| Name | Description |
+| ----------------------------- | ----------------------------------------------------------- |
+| archive_id | The ID of the environment-specific log archive |
+| bucket_arn | The ARN of the bucket used for log archive storage |
+| bucket_domain_name | The FQDN of the bucket used for log archive storage |
+| bucket_id | The ID (name) of the bucket used for log archive storage |
+| bucket_region | The region of the bucket used for log archive storage |
+| cloudtrail_bucket_arn | The ARN of the bucket used for cloudtrail log storage |
+| cloudtrail_bucket_domain_name | The FQDN of the bucket used for cloudtrail log storage |
+| cloudtrail_bucket_id | The ID (name) of the bucket used for cloudtrail log storage |
+| catchall_id | The ID of the catchall log archive |
+## References
-## Related Projects
+- [cloudposse/s3-bucket/aws](https://registry.terraform.io/modules/cloudposse/s3-bucket/aws/latest) - Cloud Posse's S3
+ component
+- [datadog_logs_archive resource](https://registry.terraform.io/providers/DataDog/datadog/latest/docs/resources/logs_archive) -
+  Datadog's provider documentation for the datadog_logs_archive resource
-Check out these related projects.
-- [Cloud Posse Terraform Modules](https://docs.cloudposse.com/modules/) - Our collection of reusable Terraform modules used by our reference architectures.
-- [Atmos](https://atmos.tools) - Atmos is like docker-compose but for your infrastructure
+---
+> [!NOTE]
+> This project is part of Cloud Posse's comprehensive ["SweetOps"](https://cpco.io/homepage?utm_source=github&utm_medium=readme&utm_campaign=cloudposse-terraform-components/aws-datadog-logs-archive&utm_content=) approach towards DevOps.
+> Learn More
+>
+> It's 100% Open Source and licensed under the [APACHE2](LICENSE).
+>
+>
+
+
+
-## References
-For additional context, refer to some of these links.
-- [Cloud Posse Documentation](https://docs.cloudposse.com) - Complete documentation for the Cloud Posse solution
-- [Reference Architectures](https://cloudposse.com/) - Launch effortlessly with our turnkey reference architectures, built either by your team or ours.
+
+
+
+
+## Related Projects
+
+Check out these related projects.
+
+- [Cloud Posse Terraform Modules](https://docs.cloudposse.com/modules/) - Our collection of reusable Terraform modules used by our reference architectures.
+- [Atmos](https://atmos.tools) - Atmos is like docker-compose but for your infrastructure
+
## ✨ Contributing
This project is under active development, and we encourage contributions from our community.
Many thanks to our outstanding contributors:
-
-
+
+
### 🐛 Bug Reports & Feature Requests
-Please use the [issue tracker](https://github.com/cloudposse-terraform-components/template/issues) to report any bugs or file feature requests.
+Please use the [issue tracker](https://github.com/cloudposse-terraform-components/aws-datadog-logs-archive/issues) to report any bugs or file feature requests.
### 💻 Developing
If you are interested in being a contributor and want to get involved in developing this project or help out with Cloud Posse's other projects, we would love to hear from you!
-Hit us up in [Slack](https://cpco.io/slack?utm_source=github&utm_medium=readme&utm_campaign=cloudposse-terraform-components/template&utm_content=slack), in the `#cloudposse` channel.
+Hit us up in [Slack](https://cpco.io/slack?utm_source=github&utm_medium=readme&utm_campaign=cloudposse-terraform-components/aws-datadog-logs-archive&utm_content=slack), in the `#cloudposse` channel.
In general, PRs are welcome. We follow the typical "fork-and-pull" Git workflow.
- 1. Review our [Code of Conduct](https://github.com/cloudposse-terraform-components/template/?tab=coc-ov-file#code-of-conduct) and [Contributor Guidelines](https://github.com/cloudposse/.github/blob/main/CONTRIBUTING.md).
+ 1. Review our [Code of Conduct](https://github.com/cloudposse-terraform-components/aws-datadog-logs-archive/?tab=coc-ov-file#code-of-conduct) and [Contributor Guidelines](https://github.com/cloudposse/.github/blob/main/CONTRIBUTING.md).
2. **Fork** the repo on GitHub
3. **Clone** the project to your own machine
4. **Commit** changes to your own branch
@@ -161,28 +215,28 @@ In general, PRs are welcome. We follow the typical "fork-and-pull" Git workflow.
### 🌎 Slack Community
-Join our [Open Source Community](https://cpco.io/slack?utm_source=github&utm_medium=readme&utm_campaign=cloudposse-terraform-components/template&utm_content=slack) on Slack. It's **FREE** for everyone! Our "SweetOps" community is where you get to talk with others who share a similar vision for how to rollout and manage infrastructure. This is the best place to talk shop, ask questions, solicit feedback, and work together as a community to build totally *sweet* infrastructure.
+Join our [Open Source Community](https://cpco.io/slack?utm_source=github&utm_medium=readme&utm_campaign=cloudposse-terraform-components/aws-datadog-logs-archive&utm_content=slack) on Slack. It's **FREE** for everyone! Our "SweetOps" community is where you get to talk with others who share a similar vision for how to roll out and manage infrastructure. This is the best place to talk shop, ask questions, solicit feedback, and work together as a community to build totally *sweet* infrastructure.
### 📰 Newsletter
-Sign up for [our newsletter](https://cpco.io/newsletter?utm_source=github&utm_medium=readme&utm_campaign=cloudposse-terraform-components/template&utm_content=newsletter) and join 3,000+ DevOps engineers, CTOs, and founders who get insider access to the latest DevOps trends, so you can always stay in the know.
+Sign up for [our newsletter](https://cpco.io/newsletter?utm_source=github&utm_medium=readme&utm_campaign=cloudposse-terraform-components/aws-datadog-logs-archive&utm_content=newsletter) and join 3,000+ DevOps engineers, CTOs, and founders who get insider access to the latest DevOps trends, so you can always stay in the know.
Dropped straight into your Inbox every week — and usually a 5-minute read.
-### 📆 Office Hours
+### 📆 Office Hours
-[Join us every Wednesday via Zoom](https://cloudposse.com/office-hours?utm_source=github&utm_medium=readme&utm_campaign=cloudposse-terraform-components/template&utm_content=office_hours) for your weekly dose of insider DevOps trends, AWS news and Terraform insights, all sourced from our SweetOps community, plus a _live Q&A_ that you can’t find anywhere else.
+[Join us every Wednesday via Zoom](https://cloudposse.com/office-hours?utm_source=github&utm_medium=readme&utm_campaign=cloudposse-terraform-components/aws-datadog-logs-archive&utm_content=office_hours) for your weekly dose of insider DevOps trends, AWS news and Terraform insights, all sourced from our SweetOps community, plus a _live Q&A_ that you can’t find anywhere else.
It's **FREE** for everyone!
## About
-This project is maintained by Cloud Posse, LLC.
-
+This project is maintained by Cloud Posse, LLC.
+
-We are a [**DevOps Accelerator**](https://cpco.io/commercial-support?utm_source=github&utm_medium=readme&utm_campaign=cloudposse-terraform-components/template&utm_content=commercial_support) for funded startups and enterprises.
+We are a [**DevOps Accelerator**](https://cpco.io/commercial-support?utm_source=github&utm_medium=readme&utm_campaign=cloudposse-terraform-components/aws-datadog-logs-archive&utm_content=commercial_support) for funded startups and enterprises.
Use our ready-to-go terraform architecture blueprints for AWS to get up and running quickly.
We build it with you. You own everything. Your team wins. Plus, we stick around until you succeed.
-
+
*Your team can operate like a pro today.*
@@ -203,7 +257,7 @@ Ensure that your team succeeds by using our proven process and turnkey blueprint
- **Bug Fixes.** We'll rapidly work with you to fix any bugs in our projects.
-
+
## License
@@ -242,6 +296,6 @@ All other trademarks referenced herein are the property of their respective owne
Copyright Β© 2017-2024 [Cloud Posse, LLC](https://cpco.io/copyright)
-
+
-
+