Terraform module to create Remote State Storage resources for workload deployment.

```hcl
module "backend" {
  source                       = "squareops/tfstate/aws"
  logging                      = true
  bucket_name                  = "production-tfstate-bucket" # unique global S3 bucket name
  environment                  = "prod"
  force_destroy                = true
  versioning_enabled           = true
  cloudwatch_logging_enabled   = true
  log_retention_in_days        = 90
  log_bucket_lifecycle_enabled = true
  s3_ia_retention_in_days      = 90
  s3_galcier_retention_in_days = 180
}
```
Terraform state locking is a mechanism used to prevent multiple users from simultaneously modifying the same state file.

An Amazon S3 bucket and a DynamoDB table can be used as a remote backend to store and manage the Terraform state file, and also to implement state locking. The S3 bucket is used to store the state file, while the DynamoDB table is used to store the lock information, such as who acquired the lock and when. Terraform will check the lock state in the DynamoDB table before making changes to the state file in the S3 bucket, and will wait or retry if the lock is already acquired by another instance. This provides a centralized and durable mechanism for managing the Terraform state and ensuring that changes are made in a controlled and safe manner.
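
As a sketch of what that backend wiring looks like in a workload repository (the bucket, key, region, and table names below are placeholders; in practice you would take them from this module's `state_bucket_name`, `region`, and `dynamodb_table_name` outputs):

```hcl
terraform {
  backend "s3" {
    bucket         = "production-tfstate-bucket"        # state bucket created by the module
    key            = "workloads/demo/terraform.tfstate" # path of the state file inside the bucket
    region         = "us-east-1"                        # region the bucket lives in
    dynamodb_table = "production-tfstate-lock"          # lock table created by the module
    encrypt        = true                               # encrypt the state file at rest
  }
}
```
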
Additionally, you may have a log bucket configured to store CloudTrail and CloudWatch logs. This log bucket can have a bucket lifecycle policy in place to automatically manage the lifecycle of log data. For example, log data can be transitioned to Amazon S3 Infrequent Access storage after a certain period, and eventually to Amazon S3 Glacier for long-term archival. This helps optimize storage costs and ensures that log data is retained according to your organization's compliance requirements.
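
For illustration, a lifecycle rule of the kind controlled by the `log_bucket_lifecycle_enabled`, `s3_ia_retention_in_days`, and `s3_galcier_retention_in_days` inputs might look like the following sketch; the resource and bucket name here are hypothetical, not the module's exact internals:

```hcl
resource "aws_s3_bucket_lifecycle_configuration" "log_bucket" {
  bucket = "production-tfstate-bucket-logs" # hypothetical log bucket name

  rule {
    id     = "log-data-tiering"
    status = "Enabled"

    filter {} # apply to every object in the bucket

    transition {
      days          = 90            # matches s3_ia_retention_in_days
      storage_class = "STANDARD_IA" # cheaper storage for rarely read logs
    }

    transition {
      days          = 180       # matches s3_galcier_retention_in_days
      storage_class = "GLACIER" # long-term archival
    }
  }
}
```
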
Security scanning is graciously provided by Prowler. Prowler is the leading fully hosted, cloud-native solution providing continuous cluster security and compliance.

In this module, we have implemented CIS Compliance checks for S3.

## Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| <a name="input_bucket_name"></a> [bucket\_name](#input\_bucket\_name) | Name of the S3 bucket to be created. | `string` | `""` | no |
| <a name="input_cloudwatch_logging_enabled"></a> [cloudwatch\_logging\_enabled](#input\_cloudwatch\_logging\_enabled) | Enable or disable CloudWatch log group logging. | `bool` | `true` | no |
| <a name="input_environment"></a> [environment](#input\_environment) | Specify the type of environment (dev, demo, prod) in which the S3 bucket will be created. | `string` | `""` | no |
| <a name="input_force_destroy"></a> [force\_destroy](#input\_force\_destroy) | Whether or not to delete all objects from the bucket to allow for destruction of the bucket without error. | `bool` | `false` | no |
| <a name="input_log_bucket_lifecycle_enabled"></a> [log\_bucket\_lifecycle\_enabled](#input\_log\_bucket\_lifecycle\_enabled) | Enable or disable the S3 bucket's lifecycle rule for log data. | `bool` | `true` | no |
| <a name="input_log_retention_in_days"></a> [log\_retention\_in\_days](#input\_log\_retention\_in\_days) | Retention period (in days) for CloudWatch log groups. | `number` | `90` | no |
| <a name="input_logging"></a> [logging](#input\_logging) | Configuration for S3 bucket access logging. | `bool` | `true` | no |
| <a name="input_s3_galcier_retention_in_days"></a> [s3\_galcier\_retention\_in\_days](#input\_s3\_galcier\_retention\_in\_days) | Retention period (in days) for moving S3 log data to Glacier storage. | `number` | `180` | no |
| <a name="input_s3_ia_retention_in_days"></a> [s3\_ia\_retention\_in\_days](#input\_s3\_ia\_retention\_in\_days) | Retention period (in days) for moving S3 log data to Infrequent Access storage. | `number` | `90` | no |
| <a name="input_versioning_enabled"></a> [versioning\_enabled](#input\_versioning\_enabled) | Whether or not to enable versioning for the S3 bucket, which allows multiple versions of an object to be stored in the same bucket. | `bool` | `false` | no |

## Outputs

| Name | Description |
|------|-------------|
| <a name="output_dynamodb_table_name"></a> [dynamodb\_table\_name](#output\_dynamodb\_table\_name) | Name of the DynamoDB table that will be used to manage locking and unlocking of the Terraform state file. |
| <a name="output_log_bucket_name"></a> [log\_bucket\_name](#output\_log\_bucket\_name) | Name of the S3 bucket that will be used to store logs. |
| <a name="output_region"></a> [region](#output\_region) | Name of the region in which CloudTrail is created. |
| <a name="output_state_bucket_name"></a> [state\_bucket\_name](#output\_state\_bucket\_name) | Name of the S3 bucket that will be used to store the Terraform state file. |