
Commit f604bfa — Merge pull request #5 from RohitSquareops/patch-1

"Log bucket can now have CW logs and added retention period on bucket"

2 parents: 5ce52e3 + 5c43c75

6 files changed: 110 additions, 38 deletions

README.md — 23 additions, 9 deletions

````diff
@@ -11,12 +11,17 @@ Terraform module to create Remote State Storage resources for workload deploymen
 
 ```hcl
 module "backend" {
-  source             = "squareops/tfstate/aws"
-  logging            = true
-  environment        = "Production"
-  bucket_name        = "tfstate"
-  force_destroy      = true
-  versioning_enabled = true
+  source                       = "squareops/tfstate/aws"
+  logging                      = true
+  bucket_name                  = "production-tfstate-bucket" #unique global s3 bucket name
+  environment                  = "prod"
+  force_destroy                = true
+  versioning_enabled           = true
+  cloudwatch_logging_enabled   = true
+  log_retention_in_days        = 90
+  log_bucket_lifecycle_enabled = true
+  s3_ia_retention_in_days      = 90
+  s3_galcier_retention_in_days = 180
 }
 
 ```
@@ -30,6 +35,9 @@ Terraform state locking is a mechanism used to prevent multiple users from simul
 
 An Amazon S3 bucket and a DynamoDB table can be used as a remote backend to store and manage the Terraform state file, and also to implement state locking. The S3 bucket is used to store the state file, while the DynamoDB table is used to store the lock information, such as who acquired the lock and when. Terraform will check the lock state in the DynamoDB table before making changes to the state file in the S3 bucket, and will wait or retry if the lock is already acquired by another instance. This provides a centralized and durable mechanism for managing the Terraform state and ensuring that changes are made in a controlled and safe manner.
 
+Additionally, you may have a log bucket configured to store CloudTrail and CloudWatch logs. This log bucket can have a bucket lifecycle policy in place to automatically manage the lifecycle of log data. For example, log data can be transitioned to Amazon S3 Infrequent Access storage after a certain period, and eventually to Amazon S3 Glacier for long-term storage. This helps in optimizing storage costs and ensures that log data is retained according to your organization's compliance requirements.
+
+
 ## Security & Compliance [<img src=" https://prowler.pro/wp-content/themes/prowler-pro/assets/img/logo.svg" width="250" align="right" />](https://prowler.pro/)
 
 Security scanning is graciously provided by Prowler. Prowler is the leading fully hosted, cloud-native solution providing continuous cluster security and compliance.
@@ -85,17 +93,23 @@ In this module, we have implemented the following CIS Compliance checks for S3:
 | Name | Description | Type | Default | Required |
 |------|-------------|------|---------|:--------:|
 | <a name="input_bucket_name"></a> [bucket\_name](#input\_bucket\_name) | Name of the S3 bucket to be created. | `string` | `""` | no |
-| <a name="input_environment"></a> [environment](#input\_environment) | Specify the type of environment(dev, demo, prod) in which the S3 bucket will be created. | `string` | `"demo"` | no |
+| <a name="input_cloudwatch_logging_enabled"></a> [cloudwatch\_logging\_enabled](#input\_cloudwatch\_logging\_enabled) | Enable or disable CloudWatch log group logging. | `bool` | `true` | no |
+| <a name="input_environment"></a> [environment](#input\_environment) | Specify the type of environment(dev, demo, prod) in which the S3 bucket will be created. | `string` | `""` | no |
 | <a name="input_force_destroy"></a> [force\_destroy](#input\_force\_destroy) | Whether or not to delete all objects from the bucket to allow for destruction of the bucket without error. | `bool` | `false` | no |
+| <a name="input_log_bucket_lifecycle_enabled"></a> [log\_bucket\_lifecycle\_enabled](#input\_log\_bucket\_lifecycle\_enabled) | Enable or disable the S3 bucket's lifecycle rule for log data. | `bool` | `true` | no |
+| <a name="input_log_retention_in_days"></a> [log\_retention\_in\_days](#input\_log\_retention\_in\_days) | Retention period (in days) for CloudWatch log groups. | `number` | `90` | no |
 | <a name="input_logging"></a> [logging](#input\_logging) | Configuration for S3 bucket access logging. | `bool` | `true` | no |
+| <a name="input_s3_galcier_retention_in_days"></a> [s3\_galcier\_retention\_in\_days](#input\_s3\_galcier\_retention\_in\_days) | Retention period (in days) for moving S3 log data to Glacier storage. | `number` | `180` | no |
+| <a name="input_s3_ia_retention_in_days"></a> [s3\_ia\_retention\_in\_days](#input\_s3\_ia\_retention\_in\_days) | Retention period (in days) for moving S3 log data to Infrequent Access storage. | `number` | `90` | no |
 | <a name="input_versioning_enabled"></a> [versioning\_enabled](#input\_versioning\_enabled) | Whether or not to enable versioning for the S3 bucket, which allows multiple versions of an object to be stored in the same bucket. | `bool` | `false` | no |
 
 ## Outputs
 
 | Name | Description |
 |------|-------------|
-| <a name="output_dynamodb_table_name"></a> [dynamodb\_table\_name](#output\_dynamodb\_table\_name) | Name of the DynamoDB table that will be used to manage locking and unlocking of the Terraform state file. |
-| <a name="output_log_bucket_name"></a> [log\_bucket\_name](#output\_log\_bucket\_name) | Name of the S3 bucket that will be used to store logs for this module. |
+| <a name="output_dynamodb_table_name"></a> [dynamodb\_table\_name](#output\_dynamodb\_table\_name) | Name of the DynamoDB table that will be used to manage locking and unlocking of the terraform state file. |
+| <a name="output_log_bucket_name"></a> [log\_bucket\_name](#output\_log\_bucket\_name) | Name of the S3 bucket that will be used to store logs. |
+| <a name="output_region"></a> [region](#output\_region) | Name of the region in which Cloudtrail is created |
 | <a name="output_state_bucket_name"></a> [state\_bucket\_name](#output\_state\_bucket\_name) | Specify the region in which an S3 bucket will be created by the module. |
 <!-- END OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
 
````
examples/state-storage-backend/main.tf — 11 additions, 6 deletions

```diff
@@ -9,10 +9,15 @@ locals {
 }
 
 module "backend" {
-  source             = "squareops/tfstate/aws"
-  logging            = true
-  environment        = local.environment
-  bucket_name        = "production-tfstate-bucket" #unique global s3 bucket name
-  force_destroy      = true
-  versioning_enabled = true
+  source                       = "squareops/tfstate/aws"
+  logging                      = true
+  bucket_name                  = "production-tfstate-bucket" #unique global s3 bucket name
+  environment                  = local.environment
+  force_destroy                = true
+  versioning_enabled           = true
+  cloudwatch_logging_enabled   = true
+  log_retention_in_days        = 90
+  log_bucket_lifecycle_enabled = true
+  s3_ia_retention_in_days      = 90
+  s3_galcier_retention_in_days = 180
 }
```

examples/state-storage-backend/outputs.tf — 1 addition, 1 deletion

```diff
@@ -1,6 +1,6 @@
 output "bucket_region" {
   description = "Specify the region in which an S3 bucket will be created by the module."
-  value       = local.region
+  value       = module.backend.region
 }
 
 output "state_bucket_name" {
```

logging.tf — 37 additions, 19 deletions

```diff
@@ -7,8 +7,8 @@ resource "aws_cloudtrail" "s3_cloudtrail" {
   include_global_service_events = false
   enable_logging                = true
   enable_log_file_validation    = true
-  cloud_watch_logs_role_arn     = aws_iam_role.s3_cloudtrail_cloudwatch_role[0].arn
-  cloud_watch_logs_group_arn    = "${aws_cloudwatch_log_group.s3_cloudwatch[0].arn}:*"
+  cloud_watch_logs_role_arn     = var.cloudwatch_logging_enabled ? aws_iam_role.s3_cloudtrail_cloudwatch_role[0].arn : null
+  cloud_watch_logs_group_arn    = var.cloudwatch_logging_enabled ? "${aws_cloudwatch_log_group.s3_cloudwatch[0].arn}:*" : null
   kms_key_id                    = module.kms_key[0].key_arn
   event_selector {
     read_write_type = "All"
@@ -17,6 +17,10 @@ resource "aws_cloudtrail" "s3_cloudtrail" {
       type   = "AWS::S3::Object"
       values = ["arn:aws:s3"]
     }
+    data_resource {
+      type   = "AWS::Lambda::Function"
+      values = ["arn:aws:lambda"]
+    }
   }
   tags = merge(
     { "Name" = format("%s-%s-S3", var.bucket_name, data.aws_caller_identity.current.account_id) },
@@ -25,20 +29,18 @@ resource "aws_cloudtrail" "s3_cloudtrail" {
 }
 
 resource "aws_cloudwatch_log_group" "s3_cloudwatch" {
-  count      = var.logging ? 1 : 0
-  name       = format("%s-%s-S3", var.bucket_name, data.aws_caller_identity.current.account_id)
-  kms_key_id = module.kms_key[0].key_arn
-  provisioner "local-exec" {
-    command = "sleep 10"
-  }
+  count             = var.logging && var.cloudwatch_logging_enabled ? 1 : 0
+  name              = format("%s-%s-S3", var.bucket_name, data.aws_caller_identity.current.account_id)
+  kms_key_id        = module.kms_key[0].key_arn
+  retention_in_days = var.log_retention_in_days
   tags = merge(
     { "Name" = format("%s-%s-S3", var.bucket_name, data.aws_caller_identity.current.account_id) },
     local.tags,
   )
 }
 
 resource "aws_iam_role" "s3_cloudtrail_cloudwatch_role" {
-  count              = var.logging ? 1 : 0
+  count              = var.logging && var.cloudwatch_logging_enabled ? 1 : 0
   name               = format("%s-cloudtrail-cloudwatch-S3", var.bucket_name)
   assume_role_policy = data.aws_iam_policy_document.cloudtrail_assume_role[0].json
   tags = merge(
@@ -61,7 +63,7 @@ data "aws_iam_policy_document" "cloudtrail_assume_role" {
 }
 
 resource "aws_iam_policy" "s3_cloudtrail_cloudwatch_policy" {
-  count  = var.logging ? 1 : 0
+  count  = var.logging && var.cloudwatch_logging_enabled ? 1 : 0
   name   = format("%s-cloudtrail-cloudwatch-S3", var.bucket_name)
   policy = <<EOF
 {
@@ -99,14 +101,11 @@ EOF
 
 
 resource "aws_iam_role_policy_attachment" "s3_cloudtrail_policy_attachment" {
-  count      = var.logging ? 1 : 0
+  count      = var.logging && var.cloudwatch_logging_enabled ? 1 : 0
   role       = aws_iam_role.s3_cloudtrail_cloudwatch_role[0].name
   policy_arn = aws_iam_policy.s3_cloudtrail_cloudwatch_policy[0].arn
 }
 
-
-
-
 module "log_bucket" {
   count  = var.logging ? 1 : 0
   source = "terraform-aws-modules/s3-bucket/aws"
@@ -116,13 +115,32 @@ module "log_bucket" {
   attach_elb_log_delivery_policy        = true
   attach_lb_log_delivery_policy         = true
   attach_deny_insecure_transport_policy = true
+  versioning = {
+    enabled = var.versioning_enabled
+  }
   # S3 bucket-level Public Access Block configuration
   block_public_acls       = true
   block_public_policy     = true
   ignore_public_acls      = true
   restrict_public_buckets = true
-  attach_policy = true
-  policy        = <<POLICY
+  lifecycle_rule = [
+    {
+      id      = "log"
+      enabled = var.log_bucket_lifecycle_enabled
+
+      transition = [
+        {
+          days          = var.s3_ia_retention_in_days
+          storage_class = "ONEZONE_IA"
+        }, {
+          days          = var.s3_galcier_retention_in_days
+          storage_class = "GLACIER"
+        }
+      ]
+    }
+  ]
+  attach_policy = true
+  policy        = <<POLICY
 {
   "Version": "2012-10-17",
   "Statement": [
@@ -189,7 +207,7 @@ data "aws_iam_policy_document" "default" {
     condition {
       test     = "StringLike"
       variable = "kms:EncryptionContext:aws:cloudtrail:arn"
-      values   = ["arn:aws:cloudtrail:*:XXXXXXXXXXXX:trail/*"]
+      values   = ["arn:aws:cloudtrail:*:${data.aws_caller_identity.current.account_id}:trail/*"]
     }
   }
 
@@ -220,12 +238,12 @@ data "aws_iam_policy_document" "default" {
       test     = "StringEquals"
      variable = "kms:CallerAccount"
       values = [
-        "XXXXXXXXXXXX"]
+        "${data.aws_caller_identity.current.account_id}"]
     }
     condition {
       test     = "StringLike"
       variable = "kms:EncryptionContext:aws:cloudtrail:arn"
-      values   = ["arn:aws:cloudtrail:*:XXXXXXXXXXXX:trail/*"]
+      values   = ["arn:aws:cloudtrail:*:${data.aws_caller_identity.current.account_id}:trail/*"]
     }
   }
```
outputs.tf — 7 additions, 2 deletions

```diff
@@ -4,11 +4,16 @@ output "state_bucket_name" {
 }
 
 output "dynamodb_table_name" {
-  description = "Name of the DynamoDB table that will be used to manage locking and unlocking of the Terraform state file."
+  description = "Name of the DynamoDB table that will be used to manage locking and unlocking of the terraform state file."
   value       = aws_dynamodb_table.dynamodb_table.id
 }
 
 output "log_bucket_name" {
-  description = "Name of the S3 bucket that will be used to store logs for this module."
+  description = "Name of the S3 bucket that will be used to store logs."
   value       = var.logging ? module.log_bucket[0].s3_bucket_id : null
 }
+
+output "region" {
+  description = "Name of the region in which Cloudtrail is created"
+  value       = var.logging ? aws_cloudtrail.s3_cloudtrail[0].home_region : null
+}
```
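Once a root configuration exposes these module outputs as its own outputs (as the example stack does with `bucket_region` and `state_bucket_name`), a downstream stack can read them through a `terraform_remote_state` data source. A sketch under assumed names (bucket, key, and region are hypothetical):

```hcl
# Hypothetical downstream stack reading the backend stack's root outputs.
data "terraform_remote_state" "backend" {
  backend = "s3"
  config = {
    bucket = "production-tfstate-bucket"      # assumed state bucket
    key    = "backend/terraform.tfstate"      # assumed key of the backend stack's state
    region = "us-east-1"                      # assumed region
  }
}

# Example reference, assuming the root module exposes state_bucket_name:
# data.terraform_remote_state.backend.outputs.state_bucket_name
```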

variables.tf — 31 additions, 1 deletion

```diff
@@ -24,6 +24,36 @@ variable "logging" {
 
 variable "environment" {
   description = "Specify the type of environment(dev, demo, prod) in which the S3 bucket will be created. "
-  default     = "demo"
+  default     = ""
   type        = string
 }
+
+variable "cloudwatch_logging_enabled" {
+  description = "Enable or disable CloudWatch log group logging."
+  default     = true
+  type        = bool
+}
+
+variable "log_retention_in_days" {
+  description = "Retention period (in days) for CloudWatch log groups."
+  default     = 90
+  type        = number
+}
+
+variable "s3_galcier_retention_in_days" {
+  description = "Retention period (in days) for moving S3 log data to Glacier storage."
+  default     = 180
+  type        = number
+}
+
+variable "s3_ia_retention_in_days" {
+  description = "Retention period (in days) for moving S3 log data to Infrequent Access storage."
+  default     = 90
+  type        = number
+}
+
+variable "log_bucket_lifecycle_enabled" {
+  description = "Enable or disable the S3 bucket's lifecycle rule for log data."
+  default     = true
+  type        = bool
+}
```
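Since this commit changes the `environment` default from `"demo"` to `""`, callers can now pass an empty environment silently. One possible hardening, not part of this commit and shown only as a sketch, is a `validation` block on the variable:

```hcl
# Hypothetical hardening: restrict environment to the values the description lists.
variable "environment" {
  description = "Specify the type of environment(dev, demo, prod) in which the S3 bucket will be created."
  default     = ""
  type        = string

  validation {
    condition     = contains(["", "dev", "demo", "prod"], var.environment)
    error_message = "environment must be empty or one of: dev, demo, prod."
  }
}
```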
