# terraform-example-foundation

This is an example repo showing how the CFT Terraform modules can be composed to build a secure GCP foundation, following the [Google Cloud security foundations guide](https://services.google.com/fh/files/misc/google-cloud-security-foundations-guide.pdf).
The supplied structure and code are intended to form a starting point for building your own foundation, with pragmatic defaults you can customize to meet your own requirements. Currently, step 0 is executed manually.
From step 1 onwards, the Terraform code is deployed by leveraging either Google Cloud Build (by default) or Jenkins.
Cloud Build has been chosen by default to allow teams to get started quickly without needing to deploy a CI/CD tool, although it is worth noting the code can easily be executed by your preferred tool.

## Overview

This repo contains several distinct Terraform projects, each within its own directory, that must be applied separately, but in sequence.
Each of these Terraform projects is to be layered on top of the others, running in the following order.

### [0. bootstrap](./0-bootstrap/)

This stage executes the [CFT Bootstrap module](https://github.com/terraform-google-modules/terraform-google-bootstrap), which bootstraps an existing GCP organization, creating all the required GCP resources & permissions to start using the Cloud Foundation Toolkit (CFT).
For CI/CD pipelines, you can use either Cloud Build (the default) or Jenkins. If you want to use Jenkins instead of Cloud Build, see [README-Jenkins](./0-bootstrap/README-Jenkins.md) for how to use the included Jenkins sub-module.

The bootstrap step includes:
- The `cft-seed` project, which contains:
  - Terraform state bucket
  - Custom Service Account used by Terraform to create new resources in GCP
- The `cft-cloudbuild` project (`prj-cicd` if using Jenkins), which contains:
  - A CI/CD pipeline implemented with either Cloud Build or Jenkins
  - If using Cloud Build:
    - Cloud Source Repository
  - If using Jenkins:
    - A GCE instance configured as a Jenkins Agent
    - Custom Service Account to run Jenkins Agent GCE instances
    - VPN connection with on-prem (or wherever your Jenkins Master is located)
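As a rough illustration of what this stage does, invoking the CFT Bootstrap module looks roughly like the sketch below. All values are placeholders, and the exact inputs used by 0-bootstrap may differ; check the stage's own configuration for the authoritative call.

```hcl
/* Illustrative sketch only: invoking the CFT Bootstrap module.
   Org ID, billing account, groups, and region are placeholders. */
module "seed_bootstrap" {
  source  = "terraform-google-modules/bootstrap/google"
  version = "~> 1.3"

  org_id               = "123456789012"               # numeric organization ID
  billing_account      = "000000-000000-000000"       # billing account ID
  group_org_admins     = "[email protected]"     # org admins Cloud Identity group
  group_billing_admins = "[email protected]" # billing admins group
  default_region       = "us-central1"
}
```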

It is a best practice to separate concerns by having two projects here: one for the CFT resources and one for the CI/CD tool.
The `cft-seed` project stores Terraform state and has the Service Account able to create / modify infrastructure.
The deployment of that infrastructure is coordinated by a CI/CD tool of your choice, allocated in a second project (named `cft-cloudbuild` if using Google Cloud Build and `prj-cicd` if using Jenkins).

To further separate concerns at the IAM level as well, the service account of the CI/CD tool is given different permissions than the Terraform account.
The CI/CD tool account (`@cloudbuild.gserviceaccount.com` if using Cloud Build and `[email protected]` if using Jenkins) is granted access to generate tokens over the Terraform custom service account.
In this configuration, the baseline permissions of the CI/CD tool are limited and the Terraform custom Service Account is granted the IAM permissions required to build the foundation.

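The token-generation grant described above can be sketched in Terraform as follows. This is a hypothetical minimal example, not the module's actual code; the project ID, project number, and account names are placeholders.

```hcl
/* Hypothetical sketch: let the CI/CD tool's service account mint tokens
   for the Terraform custom service account. Names are placeholders. */
resource "google_service_account" "terraform" {
  project      = "cft-seed"        # seed project (placeholder ID)
  account_id   = "org-terraform"
  display_name = "Terraform custom service account"
}

resource "google_service_account_iam_member" "cicd_impersonates_terraform" {
  service_account_id = google_service_account.terraform.name
  role               = "roles/iam.serviceAccountTokenCreator"
  # Cloud Build's default SA; use the Jenkins Agent SA instead if using Jenkins.
  member = "serviceAccount:[email protected]"
}
```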
After executing this step, you will have the following structure:

```
example-organization/
└── fldr-bootstrap
    ├── cft-cloudbuild (prj-cicd if using Jenkins)
    └── cft-seed
```

When this step uses the Cloud Build submodule, it sets up Cloud Build and Cloud Source Repositories for each of the stages below.
Triggers are configured to run a `terraform plan` for any non-environment branch and `terraform apply` when changes are merged to an environment branch (`development`, `non-production` & `production`).
Usage instructions are available in the 0-bootstrap [README](./0-bootstrap/README.md).

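The plan/apply trigger pair described above could be expressed roughly like this. The repo name, project ID, and build filenames below are illustrative placeholders, not necessarily what the submodule creates.

```hcl
/* Hypothetical sketch of the branch-based trigger pair. */
resource "google_cloudbuild_trigger" "plan" {
  project = "prj-example-cicd" # CI/CD project (placeholder)
  trigger_template {
    repo_name    = "gcp-org" # one of the stage repos (placeholder)
    branch_name  = "^(development|non-production|production)$"
    invert_regex = true # i.e. any branch that is NOT an environment branch
  }
  filename = "cloudbuild-tf-plan.yaml" # build config that runs `terraform plan`
}

resource "google_cloudbuild_trigger" "apply" {
  project = "prj-example-cicd"
  trigger_template {
    repo_name   = "gcp-org"
    branch_name = "^(development|non-production|production)$"
  }
  filename = "cloudbuild-tf-apply.yaml" # build config that runs `terraform apply`
}
```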
### [1. org](./1-org/)
#### Logs

Among the six projects created under the common folder, two projects (`prj-c-logging`, `prj-c-billing-logs`) are used for logging.
The first is for organization-wide audit logs and the latter for billing logs.
In both cases the logs are collected into BigQuery datasets, which can then be used for general querying, dashboarding & reporting. Logs are also exported to Pub/Sub and a GCS bucket.
_The various audit log types being captured in BigQuery are retained for 30 days._

For billing data, a BigQuery dataset is created with permissions attached; however, you will need to configure a billing export [manually](https://cloud.google.com/billing/docs/how-to/export-data-bigquery), as there is no easy way to automate this at the moment.

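A minimal sketch of such a billing-logs dataset is shown below. The project ID, dataset ID, location, and group are placeholders; the billing export itself must still be pointed at the dataset manually in the console.

```hcl
/* Hypothetical sketch: BigQuery dataset that will receive the billing export. */
resource "google_bigquery_dataset" "billing_logs" {
  project    = "prj-c-billing-logs" # placeholder project ID
  dataset_id = "billing_data"
  location   = "US"

  # Example of attaching read access for a billing-admins group (placeholder).
  access {
    role           = "READER"
    group_by_email = "[email protected]"
  }
}
```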
#### DNS Hub

Under the common folder, one project is created. This project will host the Inte

#### SCC Notification

Under the common folder, one project is created. This project will host the SCC Notification resources at the organization level.
This project will contain a Pub/Sub topic and subscription, and an [SCC Notification](https://cloud.google.com/security-command-center/docs/how-to-notifications) configured to send all new Findings to the created topic.
You can adjust the filter when deploying this step.
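For illustration, an SCC notification with a custom filter could be expressed roughly as below. The organization ID, project, topic name, and filter are placeholders, not the values this step actually uses.

```hcl
/* Hypothetical sketch: Pub/Sub topic plus an SCC notification config. */
resource "google_pubsub_topic" "scc" {
  project = "prj-c-scc" # placeholder project ID
  name    = "top-scc-notification"
}

resource "google_scc_notification_config" "scc_notification" {
  config_id    = "scc-notify-active"
  organization = "123456789012" # numeric org ID (placeholder)
  description  = "Send active findings to Pub/Sub"
  pubsub_topic = google_pubsub_topic.scc.id

  streaming_config {
    # Adjust this filter to narrow which findings are published.
    filter = "state = \"ACTIVE\""
  }
}
```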
#### Monitoring

Under the environment folder, a project is created per environment (`development`, `non-production` & `production`), which is intended to be used as a [Cloud Monitoring workspace](https://cloud.google.com/monitoring/workspaces) for all projects in that environment.
Please note that creating the [workspace and linking projects](https://cloud.google.com/monitoring/workspaces/create) can currently only be completed through the Cloud Console.
If you have strong IAM requirements for these monitoring workspaces, it is worth considering creating them at a more granular level, such as per business unit or per application.

#### Networking

Under the environment folder, two projects, one for the base and another for the restricted network, are created per environment (`development`, `non-production` & `production`); these are intended to be used as [Shared VPC host projects](https://cloud.google.com/vpc/docs/shared-vpc) for all projects in that environment.
This stage only creates the projects and enables the correct APIs; the following [networks stage](./3-networks/) creates the actual Shared VPC networks.
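Marking one of these projects as a Shared VPC host is a one-resource operation, sketched below with a placeholder project ID:

```hcl
/* Hypothetical sketch: enable a project as a Shared VPC host.
   The actual VPC networks are created later, in the 3-networks stage. */
resource "google_compute_shared_vpc_host_project" "base_host" {
  project = "prj-d-shared-base" # placeholder host project ID
}
```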
#### Secrets
Under the environment folder, one project is created. This is allocated for [GCP Secret Manager](https://cloud.google.com/secret-manager) for secrets shared by the environment.
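An environment-shared secret in that project might be declared as in the sketch below. The project ID and secret name are placeholders, and the `replication` syntax varies between Google provider versions (newer versions use an `auto {}` block).

```hcl
/* Hypothetical sketch: a secret shared across an environment. */
resource "google_secret_manager_secret" "example_api_key" {
  project   = "prj-d-secrets" # placeholder environment secrets project
  secret_id = "example-api-key"

  replication {
    automatic = true # newer provider versions use `auto {}` instead
  }
}
```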

Usage instructions are available for the environments step in the [README](./2-environments/README.md).

### [3. networks](./3-networks/)

This step focuses on creating a Shared VPC per environment (`development`, `non-production` & `production`) in a standard configuration with a reasonable security baseline. Currently this includes:

- Optional - Example subnets for `development`, `non-production` & `production`, inclusive of secondary ranges for those that want to use GKE.
- Optional - Default firewall rules created to allow remote access to VMs through IAP, without needing public IPs (`allow-iap-ssh` and `allow-iap-rdp` network tags respectively).
- Optional - Default firewall rule created to allow for load balancing using the `allow-lb` tag.
- [Private service networking](https://cloud.google.com/vpc/docs/configure-private-services-access) configured to enable workload-dependent resources like Cloud SQL.
- Base Shared VPC with [private.googleapis.com](https://cloud.google.com/vpc/docs/configure-private-google-access#private-domains) configured for base access to googleapis.com and gcr.io. Route added for the VIP so no internet access is required to access APIs.
- Restricted Shared VPC with [restricted.googleapis.com](https://cloud.google.com/vpc-service-controls/docs/supported-products) configured for restricted access to googleapis.com and gcr.io. Route added for the VIP so no internet access is required to access APIs.
- Default routes to the internet removed, with the tag-based route `egress-internet` required on VMs in order to reach the internet.
- Optional - Cloud NAT configured for all subnets with logging and static outbound IPs.
- Default Cloud DNS policy applied, with DNS logging and [inbound query forwarding](https://cloud.google.com/dns/docs/overview#dns-server-policy-in) turned on.

Usage instructions are available for the networks step in the [README](./3-networks/README.md).

### [4. projects](./4-projects/)

This step is focused on creating service projects with a standard configuration that are attached to the Shared VPC created in the previous step.
Running this code as-is should generate a structure as shown below:

```
…
```

The code in this step includes two options for creating projects.
The first is the standard projects module, which creates a project per environment; the second creates a standalone project for one environment.
If relevant for your use case, there are also two optional submodules which can be used to create a subnet per project and a dedicated private DNS zone per project.

Usage instructions are available for the projects step in the [README](./4-projects/README.md).

### Final View

```
example-organization
…
    ├── prj-p-shared-base
    └── prj-p-shared-restricted
└── fldr-bootstrap
    ├── cft-cloudbuild (prj-cicd if using Jenkins)
    └── cft-seed
```

### Branching strategy

There are three main named branches - `development`, `non-production` and `production` - that reflect the corresponding environments. These branches should be [protected](https://docs.github.com/en/github/administering-a-repository/about-protected-branches). When the CI/CD pipeline (Jenkins/Cloud Build) runs on a particular named branch (for instance, `development`), only the corresponding environment (`development`) is applied. An exception is the `shared` environment, which is only applied when triggered on the `production` branch. This is because any changes in the `shared` environment may affect resources in other environments and can have adverse effects if not validated correctly.
Development happens on feature/bugfix branches (which can be named `feature/new-foo`, `bugfix/fix-bar`, etc.) and when complete, a [pull request (PR)](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/about-pull-requests) or [merge request (MR)](https://docs.gitlab.com/ee/user/project/merge_requests/) can be opened targeting the `development` branch. This will trigger the CI pipeline to perform a plan and validate against all environments (`development`, `non-production`, `shared` and `production`). Once code review is complete and changes are validated, this branch can be merged into `development`. This will trigger a CI pipeline that applies the latest changes in the `development` branch on the `development` environment.

Once validated in `development`, changes can be promoted to `non-production` by opening a PR/MR targeting the `non-production` branch and merging them. Similarly, changes can be promoted from `non-production` to `production`.