
Commit 2574aa7

Fixing image links, ordering
We can (and should) use autogenerated sidebars for ease of maintenance. To migrate to that, this change adds a directory structure that mirrors the old content's static structure. Note: rather than linking to the same file from multiple places, which is a bit confusing, I've included each file only once, in the first pillar in which it was found. As a result, some pillars are empty and others have less content than the previous site. This is a feature, not a bug.
1 parent 2719f8c commit 2574aa7

22 files changed (+161, -258 lines)

docs/architecture-pillars/automated_remediation.md renamed to docs/architecture-pillars/1-Reliability/automated_remediation.md

Lines changed: 9 additions & 12 deletions
@@ -1,10 +1,7 @@
----
-id: automated_remediation
-title: AWS Config for enabling automated remediation
----
+# AWS Config for enabling automated remediation

 ### Context
-AWS Config is an AWS service that provides a method to assess if the configuration of AWS resources deployed in an account are compliant with a set of manually configured rules and allows us to keep track of all changes made.
+AWS Config is an AWS service that provides a method to assess if the configuration of AWS resources deployed in an account are compliant with a set of manually configured rules and allows us to keep track of all changes made.

 AWS provides a set of predefined rules that can be used, but also has the option of creating custom ones. The rules that are being evaluated are chosen by the customer on a per account/per resource basis based on the service and organisation needs and identify if the configuration of an existing or newly provisioned resource is compliant with common best practices - for example, if encryption is enabled for DynamoDB tables.

@@ -13,7 +10,7 @@ AWS Config official documentation that details additional features can be found
 In Hackney, we are adopting several mechanisms to ensure that deployed resources are compliant with our preferred configuration, e.g. no database should be publicly accessible. Those include:
 - Following the [least privilege principle](./dev_least_principles.md), where engineers are unable to manually create AWS resources but must instead automate the creation via IaC (infrastructure as code) and CI/CD pipelines.
 - Adopting [Terraform-compliance](https://playbook.hackney.gov.uk/API-Playbook/terraform_compliance) security and compliance testing framework for performing pre-deployment checks to ensure only compliant resources are deployed.
-- Using [serverless safeguards](https://playbook.hackney.gov.uk/API-Playbook/serverless_safegaurd) to ensure AWS resources provisioned via the Serverless framework are also compliant.
+- Using [serverless safeguards](https://playbook.hackney.gov.uk/API-Playbook/serverless_safegaurd) to ensure AWS resources provisioned via the Serverless framework are also compliant.

 However, some of those must be set up on a per-project basis, thus not guaranteeing security risks prevention by the proposed mechanisms due to no current way to ensure 100% take up amongst project teams.

@@ -22,16 +19,16 @@ Enabling AWS Config with automated remediation can be done on account level, ens
 ### Vision
 - Have a defined list of compliance rules for commonly used AWS resources that is frequently reviewed and updated as and when required so that we have consistency for rules applied as well as a process to iterate that list when we change our use of existing or new AWS services.
 - Continuously monitor and detect non-compliant AWS resources so that we have visibility over potential security risks and performance, availability and cost impacting resources in our environments .
-- Automate remediation for breached rules so that our teams can focus on the other stages of a service delivery lifecycle and we have assurance that non-compliant resources would be corrected even if teams are unavailable to action a given compliance breach.
+- Automate remediation for breached rules so that our teams can focus on the other stages of a service delivery lifecycle and we have assurance that non-compliant resources would be corrected even if teams are unavailable to action a given compliance breach.
 - Provide assurance for our security setup so that stakeholders have a strong confidence in the setup of our environments.
 - Ensure that resources provisioned are set up for performance, high availability and are cost optimised so that our services are more reliable and cost efficient.

 ### User needs
 As an **architect**, I want to ensure
 - that a solution’s design is implemented with the appropriate infrastructure resources configuration that is compliant with Hackney’s list of compliance requirements, which aim to promote better security, performance, reliability and availability at the lowest possible cost for a given service.

-As an **engineer**, I want
-- a way to automate remediation of non-compliant AWS resources so that I can focus on the rest of the software lifecycle / platform activities.
+As an **engineer**, I want
+- a way to automate remediation of non-compliant AWS resources so that I can focus on the rest of the software lifecycle / platform activities.

 As a **security analyst**, I want
 - assurance that services are configured as per agreed security requirements for infrastructure components.
@@ -41,7 +38,7 @@ As a **TDA member**, I want
 - to give assurance to Hackney senior stakeholders, staff and residents that services built are compliant and if audited, all criteria would be satisfied.

 ### Method
-Implement AWS Config rules and automated remediation, via Terraform, per account for the following types of resources (as part of the first iteration).
+Implement AWS Config rules and automated remediation, via Terraform, per account for the following types of resources (as part of the first iteration).
 - AWS S3 [add terraform module link]
   - Automatically block public access
 - AWS RDS[add terraform module link]
@@ -58,11 +55,11 @@ Implement AWS Config rules and automated remediation, via Terraform, per account

 *The list of resources will be expanded following the trial of the tool. The initial list was prepared based on AWS resources commonly used for the development of our services.*

-## Considerations
+## Considerations
 ### Cost
 As this is an AWS paid service and we have other compliance testing mechanisms in place (e.g. terraform-compliance), should we consider implementing AWS Config only for our Production environment?

 ### Implementation
-As a first iteration, this paper proposes to implement AWS Config with the set of rules and remediations already provided by AWS, instead of building custom ones. Should the tool work well for us, as well as if we identify a need for a custom rule/remediation, then that should be considered.
+As a first iteration, this paper proposes to implement AWS Config with the set of rules and remediations already provided by AWS, instead of building custom ones. Should the tool work well for us, as well as if we identify a need for a custom rule/remediation, then that should be considered.

 Further to this, can we use Control Tower to automatically enable those rules for new and existing AWS accounts instead of using terraform?
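
The Method above proposes AWS Config rules with automated remediation, managed per account via Terraform (the Hackney module links are still to be added, so they are not filled in here). As a rough, hypothetical sketch of the general pattern rather than the Hackney module itself, the snippet below pairs the AWS-managed S3_BUCKET_PUBLIC_WRITE_PROHIBITED rule with the AWS-owned AWS-DisableS3BucketPublicReadWrite SSM automation document; the remediation IAM role name is a placeholder and an AWS Config recorder is assumed to already be enabled in the account.

```hcl
# Hypothetical sketch only: AWS-managed Config rule plus automated remediation for
# public S3 buckets. Assumes a Config recorder already exists in the account and that
# "config-remediation-role" is a pre-existing IAM role the automation may assume.

data "aws_iam_role" "config_remediation" {
  name = "config-remediation-role" # placeholder role name
}

resource "aws_config_config_rule" "s3_public_write_prohibited" {
  name = "s3-bucket-public-write-prohibited"

  source {
    owner             = "AWS"
    source_identifier = "S3_BUCKET_PUBLIC_WRITE_PROHIBITED"
  }
}

resource "aws_config_remediation_configuration" "s3_block_public_access" {
  config_rule_name = aws_config_config_rule.s3_public_write_prohibited.name
  resource_type    = "AWS::S3::Bucket"
  target_type      = "SSM_DOCUMENT"
  target_id        = "AWS-DisableS3BucketPublicReadWrite"

  automatic                  = true
  maximum_automatic_attempts = 3
  retry_attempt_seconds      = 60

  # The non-compliant bucket flagged by Config is passed to the automation document.
  parameter {
    name           = "S3BucketName"
    resource_value = "RESOURCE_ID"
  }

  # Role the SSM automation runs as (placeholder).
  parameter {
    name         = "AutomationAssumeRole"
    static_value = data.aws_iam_role.config_remediation.arn
  }
}
```

With automatic = true, a breach is corrected without human intervention, which is the behaviour the Vision section describes for when teams are unavailable; set it to false and the remediation instead becomes a manual action in the Config console.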

docs/architecture-pillars/core_resource_compliance.md renamed to docs/architecture-pillars/1-Reliability/core_resource_compliance.md

Lines changed: 6 additions & 9 deletions
@@ -1,18 +1,15 @@
----
-id: core_resource_compliance
-title: Core AWS resources compliance checks
----
+# Core AWS resources compliance checks

 ### Context
 At Hackney, we follow an infrastructure-as-code (IaC) approach and use Terraform to provision most of our AWS cloud resources. For our APIs, which are Lambda functions exposed via AWS API Gateway, we use the Serverless framework as it significantly speeds up the delivery and resource creation. For more information please refer to [our playbook](https://playbook.hackney.gov.uk/API-Playbook/).

-From a Development perspective, each project manages its own Terraform files(or Serverless configuration) to provision resources for our microservices and frontend applications. Terraform is then applied automatically as part of the CI/CD pipeline workflow during deployment.
+From a Development perspective, each project manages its own Terraform files(or Serverless configuration) to provision resources for our microservices and frontend applications. Terraform is then applied automatically as part of the CI/CD pipeline workflow during deployment.

-As Terraform files live in the same repository as the service, for which they are used to create cloud resources, we use our Pull Request process to identify any potential issues with changes to AWS resources or the configuration for adding new ones. Despite having a very thorough pull request process, there is still room for error if something gets missed during a review.
+As Terraform files live in the same repository as the service, for which they are used to create cloud resources, we use our Pull Request process to identify any potential issues with changes to AWS resources or the configuration for adding new ones. Despite having a very thorough pull request process, there is still room for error if something gets missed during a review.

-To ensure we have security assurance in every step, we started using terraform-compliance - a security and compliance test framework, and Serverless Safeguards to assess if changes are compliant with a predefined set of compliance rules and terminate deployment in case of a failure.
+To ensure we have security assurance in every step, we started using terraform-compliance - a security and compliance test framework, and Serverless Safeguards to assess if changes are compliant with a predefined set of compliance rules and terminate deployment in case of a failure.

-This document aims to outline the core compliance checks that **must** be performed for the various AWS resources provisioned as part of the Software Delivery Lifecycle **only** and not the wider platform.
+This document aims to outline the core compliance checks that **must** be performed for the various AWS resources provisioned as part of the Software Delivery Lifecycle **only** and not the wider platform.


 ## Resources
@@ -24,7 +21,7 @@ Terraform compliance checks:

 ### API Gateway
 Serverless safeguards policies:
-1. Require Lambda authorizer for all API endpoints exposed by API Gateway with the exception of Swagger endpoints.
+1. Require Lambda authorizer for all API endpoints exposed by API Gateway with the exception of Swagger endpoints.
   - Example implementation [here](https://github.com/LBHackney-IT/asset-information-api/pull/51/files).
 2. Ensure cors is enabled.
 3. WAF is enabled.

docs/architecture-pillars/preferred_data_source.md renamed to docs/architecture-pillars/1-Reliability/preferred_data_source.md

Lines changed: 6 additions & 9 deletions
@@ -1,20 +1,17 @@
----
-id: preferred_data_source
-title: Preferred types of databases and when to use
----
+# Preferred types of databases and when to use

 ### Creating a new database

 **Before creating a new database**, please consult one of the Senior Engineers and/or confirm at the Data meetup if this type of data is stored elsewhere already.
 - Data might already exist and can be reused as per our approaches.
-- Existing data entities could potentially be re-iterated and expanded to include additional data properties, instead of creating a new data source making it less restrictive for reusability.
+- Existing data entities could potentially be re-iterated and expanded to include additional data properties, instead of creating a new data source making it less restrictive for reusability.
 - Check our [SwaggerHub page](https://app.swaggerhub.com/organizations/Hackney) and [Developer Hub](https://developer-api.hackney.gov.uk/), which lists all of our APIs.

 **If a new data store is required:**
 - Perform an evaluation if SQL or NoSQL is more suitable for your project’s needs.
   - [Guidance provided further down in this document.](#choosing-the-right-type-of-database-technology)
-- Design the API that will interact with the data and present it at the Data Meetup as per our [API specifications assessment process.](https://playbook.hackney.gov.uk/api-specifications/assessment_process/)
-- Use Terraform to provision the new database resource in AWS.
+- Design the API that will interact with the data and present it at the Data Meetup as per our [API specifications assessment process.](https://playbook.hackney.gov.uk/api-specifications/assessment_process/)
+- Use Terraform to provision the new database resource in AWS.
   - Use one of the [Terraform common repository](https://github.com/LBHackney-IT/aws-hackney-common-terraform) templates (if applicable)


@@ -30,8 +27,8 @@ Assuming knowledge of the main differences between a SQL and a NoSQL database, t
 - *No: DynamoDB* - When our data structure changes and there is a need to have flexibility, NoSQL is a good choice, especially for continuously evolving data entities as part of agile development.

 **3. Do we need to support a lot of queries on different entity’s properties (*this question excludes search functionalities*)?**
-- *Yes: PostgreSQL* - Queries on properties different than Id and PartitionKey are anti-pattern for scaling out and so not suitable for NoSQL DB, those queries should be executed occasionally.
-- *No: DynamoDB*
+- *Yes: PostgreSQL* - Queries on properties different than Id and PartitionKey are anti-pattern for scaling out and so not suitable for NoSQL DB, those queries should be executed occasionally.
+- *No: DynamoDB*

 **4. Do we need low latency/sub-second data access? (this question excludes search functionalities)?**
 - *Yes: DynamoDB* - When there is a need for low latency data access, NoSQL tends to be really fast and in the order of ~10ms
