Commit 2719f8c

Migrate architecture-pillars
This commit migrates the content of https://github.com/LBHackney-IT/architecture-pillars at commit LBHackney-IT/architecture-pillars-microsite@d1d9a2a.
1 parent df8bad1 commit 2719f8c

32 files changed: +1560 −18 lines

docs/architecture-pillars/adr.md

Lines changed: 49 additions & 0 deletions
---
id: adr
title: Architecture Decision Records
---
### Context

Previously, technical decisions were captured as part of spike documentation kept in project-specific Google Drive folders. They were not open for other projects to review, adopt and adapt, and new developers were often unaware of decisions because they did not know where to look for the documentation.

We therefore agreed to create Architecture Decision Records (ADRs) and add them to a single GitHub repository, [lbh-adrs](https://github.com/LBHackney-IT/lbh-adrs), to ensure that we have sufficient documentation, that all decisions are kept in one easy-to-find location, and that we record how and why each decision was reached within a codebase.

This also brings governance and uniformity across all projects, and ADRs give context around the decisions that were taken so that we can revisit them later. Other benefits we have identified when using ADRs are:
- Improves onboarding for new developers
- Improves agility when handing over project ownership from an external team to an internal one, or vice versa
- Improves alignment across teams on best practices
### What is an Architectural Decision Record?

An architecture decision record (ADR) is a document that captures an important architectural decision, together with its context and the consequences of adopting it.
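
The exact template is for each team (and the lbh-adrs repository) to agree; purely as an illustration, a minimal ADR in the commonly used Status/Context/Decision/Consequences style might look like the sketch below. The service and decision named here are hypothetical.

```md
# ADR-001: Use PostgreSQL for the tenancy API

## Status
Accepted

## Context
The service needs a persistent store for tenancy records. The data is
relational and is queried by several downstream services.

## Decision
We will use Amazon RDS PostgreSQL rather than DynamoDB for this service.

## Consequences
- Schema migrations must be managed as part of the deployment pipeline.
- An RDS instance must be provisioned, monitored and backed up.
```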
### When to write an ADR?

An ADR should be written whenever a decision of significant impact is made; it is up to each project team to align on what counts as significant impact. Examples include:

- Backfilling a decision that was made previously.
- Proposing large changes to a solution/spike.
- Proposing no or small changes for a spike.
- Proposing changes that differ from the overall agreed standard across our current ecosystem.
### How to start using ADRs

*Decision identification:*
- How urgent and how important is the architecture decision?
- Design methods and practices can assist with decision identification and decision making.
- Ideally, maintain a decision to-do list that aligns with the service to-do list.
*Decision making:*
- Group decision making - for example via communities of practice or project team workshops that validate the findings - can help in reaching decisions.
- Decisions are better informed when ADRs are openly available and people can collaborate on them.
*Decision enactment and enforcement:*
- ADRs are used in software design; hence they have to be communicated to, and accepted by, the various stakeholders that fund, develop, consume and operate the services.
- Architecturally evident coding styles and code reviews that focus on architectural concerns and decisions are two related practices.
- ADRs also have to be (re-)considered when modernizing a software system as it evolves.
*Decision sharing (optional):*
- Many ADRs recur across different projects.
- Experiences from other projects and reusable components can reinforce our knowledge management strategy and contribute towards our emerging community of practice meetups, such as Data and Architecture.
- Dependency matrix evaluation.
Lines changed: 31 additions & 0 deletions
---
id: api_compliance
title: API Compliance Checklist
---
### Context

Every API deployed to the development, staging and production environments must be compliant with the set of standards listed in this document. An API should not be promoted from one environment to another if it does not satisfy all of the requirements listed.

The compliance items that form this checklist are in place to ensure that any API developed does not duplicate effort, is built in a reusable way, follows security best practices, is consistent with other APIs and follows all defined development standards.

The API compliance checklist will be used as part of future Service Standard Assessments and ongoing check-ins to ensure that any identified issues are tackled early on and no technical debt is accumulated.
### Checklist

1. The API has corresponding SwaggerHub documentation for all of the API endpoints it exposes.
2. The API has completed the [API specification assessment process](https://playbook.hackney.gov.uk/api-specifications/assessment_process/).
3. The API has been developed in Hackney’s preferred tech stack, unless otherwise agreed, and as per the standards defined in our [API playbook](https://playbook.hackney.gov.uk/API-Playbook/).
4. The API has been developed following the TDD approach and has end-to-end tests in place:
   - End-to-end tests guide for DynamoDB
   - End-to-end tests guide for PostgreSQL
5. The API has monitoring and logging tools enabled, as per the defined standards:
   - X-Ray is enabled for request tracing (see the Terraform sketch after this checklist).
   - Canaries are created for availability monitoring.
   - CloudWatch is used for application logging.
6. The API has vulnerability scanning enabled via SonarCloud.
   - The API should also have no pending findings to review in SonarCloud.
7. The API has the following infrastructure compliance checks in place:
   - Terraform-compliance for the AWS resources provisioned as part of the same API deployment.
   - Serverless safeguards to ensure that the API is using an authorizer for security.
8. In Production, the API has been tested as per the ‘Production testing checklist’.
9. In Production, the API’s infrastructure is built as per [Production deployment - Live service infrastructure requirements](https://docs.google.com/document/d/1UrT6u4j8AlyPf-aD_E4c30uH27MJgIJoVxYR9kKGzFw/edit).
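
As an illustration of the X-Ray item above, and assuming the function is provisioned with the Terraform AWS provider rather than the Serverless Framework, tracing can be enabled via the function's `tracing_config` block. The function name, handler, runtime and role variable below are hypothetical.

```hcl
# Hypothetical names; shows only the X-Ray tracing configuration.
variable "lambda_execution_role_arn" {
  type = string
}

resource "aws_lambda_function" "example_api" {
  function_name = "example-api"
  role          = var.lambda_execution_role_arn
  handler       = "ExampleApi::ExampleApi.LambdaEntryPoint::FunctionHandlerAsync"
  runtime       = "dotnet6"
  filename      = "example-api.zip"

  # "Active" samples incoming requests and sends traces to AWS X-Ray.
  tracing_config {
    mode = "Active"
  }
}
```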
Lines changed: 62 additions & 0 deletions
---
id: api_spec_asessment_process
title: API Specifications Assessment Process
---
## Purpose

The purpose of this process is to provide a clear, open and consistent method of producing new and amended API specifications, evaluating them and publishing them in a way that is easy for a wider audience to access.

Our API specifications have become a fundamental part of our API development process; all new APIs begin from a set of design specifications. As the number of APIs we are developing continues to grow, we have begun to identify areas of inconsistency that we want to improve on, including:

- Where the specifications are stored - specs tend to be stored in different locations by different projects, making them difficult to track down when needed, which leads to re-inventing the wheel. If they cannot be found, we lose their value.
- The review and approval process - once a specification has been approved, any subsequent changes to it need a consistent approach, or we risk implementing changes that could present a number of risks to consumers of the API, including development against outdated specifications and systems not working as expected.

Our API specifications are different from the API documentation in Swagger, whose main purpose is to describe the various endpoints of the APIs. The specifications shed more light on the design process and attempt to capture the decisions behind approaches or changes to an API, along with the user and data needs. This further contributes to building our APIs in a consistent way, as departures from the standard set of tools and methodologies can be clearly documented here.

We believe in the value of collaboration, and having colleagues contribute to the design process allows for shared learning and potentially improves the quality of these reusable APIs. Having this process in place allows us to inject a diverse set of knowledge into the design of an API, leading to positive and consistent outcomes for our development.
## Vision

- A single, centralised repository holding all of our API design specifications.
- A way to easily and consistently access published API specifications in a way that is familiar to people.
- A way to standardise the management of changes to specifications.
- A way to link the API specifications to the API catalog going forward.
## Developer Needs

- As a developer, I want a way to easily find API specifications so that I can better understand and use these APIs effectively, as well as stay up to date on any changes that may affect my product.
- As a developer, I want to be able to publish specifications in a way that colleagues, stakeholders and interested parties can easily access and provide feedback on.
## The Assessment Process

Below is a diagram illustrating the process for assessing and evaluating our API specifications. Each step in the process is expanded on further below.

Click the image to open it in a new tab.

[![](./docs-images/api_spec_assessment_process.png)](./docs-images/api_spec_assessment_process.png)
**Draft Specification** - The individual or project will draft the API specification as part of their internal design process, ensuring that the specification meets the needs of the identified users and follows our development standards as outlined in the API Playbook. Once the draft is completed it should be exported to an MD (markdown) file and added to the API specifications repository [repository url to be added].

**Raise the PR** - Once the specification is ready for review, a pull request is raised to facilitate review and merging into the main branch. If this is a new document, a link to it should be added to the repository’s navigation component. A pull request can be reviewed either collaboratively at our tech meetups or offline by the development manager, with a retrospective review if the pull request brings significant change.

**Agreement Stage** - If the specification is agreed, the pull request is approved and merged to the main branch. Once merged, it will trigger an update to the specifications web page, publishing any changes. If, however, the specification is not agreed, any comments or concerns raised are added to the pull request and it is returned to the individual or project proposing the specification. The individual or project will then review and address the requested changes, either by making improvements or by providing justification for maintaining the original design.

**Subsequent Discoveries or Reviews** - It is expected that API specifications will continue to evolve as further discoveries are made. This process allows specifications to be iterated on and improved. The changes will be made on a new branch, separate from the main branch. Once the changes have been completed, a pull request will be raised and the review process will be triggered again.
## Versioning

While we iterate and improve on our API specifications, there needs to be a way to refer to previously agreed versions. For this we will use the features of GitHub to track changes to our designs. With this in place, consumers of our APIs will be able to see how the APIs have evolved and address any changes that may impact their use.

## Data Meetup Feedback

https://ideaflip.com/b/wa4zzqf97nke/
## Decision

This process was proposed at our technical architecture meetup on Tuesday 20th July 2021. The process was reviewed and agreed, with the following items to consider:

- Whether we will be able to fully monitor who is using each API, and where, across the system(s), so that we can update users when, for instance, there are changes to a specification.
- Whether the use of APIs will be added to a fuller system diagram.
- We currently have Swagger docs generated by the APIs themselves as well as manually added to SwaggerHub. Having a third source of documentation might be a bit much. We will need to look at why we have Swagger documents generated in multiple locations.
- We need to check that we do not add any new security risks by publishing more detailed information about our designs.
- We will need to clearly distinguish between target specifications and actual specifications; the idea being that these specification documents are more of a target, while the Swagger docs represent the actual behaviour.
- We need to ensure that there is a standardised and recorded testing process for changes to the API.
## Consequences

Additional effort will be required to convert each document from a Google Doc (or its original format) to the Markdown (MD) file format. This could potentially contribute to documents falling out of sync.
Lines changed: 53 additions & 0 deletions
---
id: auto_scaling
title: Auto-scaling for AWS resources used in Software Engineering
---
### Context

To achieve cost optimization, all AWS services provisioned as part of the software delivery lifecycle, where capacity is pre-configured at the point of provisioning, must have an associated auto-scaling policy to ensure that we only pay for what we use.

Auto-scaling of cloud resources is the process of automatically scaling capacity up or down based on traffic patterns and usage.

Enabling auto-scaling reduces the possibility of over-provisioning due to incorrect predictions of what capacity will be required.

This document outlines the common AWS resources used as part of delivering software that require an auto-scaling policy to be applied.

**Note:** EC2 has been excluded from the list as we strive to use serverless technologies when delivering digital services. EC2 is only used for our bastion hosts, which are already provisioned with minimal capacity.
### RDS

AWS RDS currently supports only [storage autoscaling](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_PIOPS.StorageTypes.html#USER_PIOPS.Autoscaling) as an automated way to scale database instances.

Auto-scaling applies to an RDS database instance when all of the following conditions are true:

- Free available space is less than 10 percent of the allocated storage.
- The low-storage condition lasts at least five minutes.
- At least six hours have passed since the last storage modification, or storage optimization has completed on the instance, whichever is longer.

To provide cost optimization, RDS database instances should be provisioned with a smaller allocated storage space and auto-scaling enabled. The provisioned storage capacity **should not** be based on long-term predictions of future service needs - it should instead reflect the foreseeable data storage needs and scale up automatically only if required.
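
Purely as an illustration of this approach, a minimal Terraform sketch might look like the following; the identifiers and sizes are hypothetical, and storage auto-scaling is enabled by setting `max_allocated_storage` above `allocated_storage`.

```hcl
# Hypothetical names and sizes; shows only the storage auto-scaling settings.
variable "db_password" {
  type      = string
  sensitive = true
}

resource "aws_db_instance" "example" {
  identifier     = "example-service-db"
  engine         = "postgres"
  instance_class = "db.t3.small"
  username       = "example_admin"
  password       = var.db_password

  # Provision a small amount of storage and let RDS grow it automatically,
  # up to the ceiling, when the low-storage conditions above are met.
  allocated_storage     = 20  # GiB provisioned initially
  max_allocated_storage = 100 # GiB ceiling for storage auto-scaling
}
```

Leaving `max_allocated_storage` unset (or equal to `allocated_storage`) disables storage auto-scaling in the AWS provider.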
**Considerations:**
- Always set a reasonable minimum and maximum storage capacity for auto-scaling, to avoid scenarios where data storage usage grows exponentially for a reason other than genuine service needs.
  - An example of such a scenario is large logs stored in Postgres due to a failing DMS task, resulting in database storage usage increasing continuously. In such situations, alarms should be in place to prompt investigation.
- If you are not sure how much storage space a database will require, start with the default amount suggested by AWS for the given database instance type and enable auto-scaling.
### DynamoDB

You can use the [AWS Application Auto Scaling](https://docs.aws.amazon.com/autoscaling/application/userguide/what-is-application-auto-scaling.html) service to set up automated scaling policies for DynamoDB.

DynamoDB auto-scaling can be applied to both a table and a Global Secondary Index (GSI).

At Hackney, DynamoDB tables use provisioned capacity mode, which means that we are billed based on the read and write capacity units that a table is provisioned with.

To achieve cost optimization, all DynamoDB tables must have an associated scaling policy that increases or decreases the read capacity units (RCUs) and write capacity units (WCUs) based on traffic patterns.

A guide to enabling auto-scaling for DynamoDB tables using Terraform can be found [here](https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/appautoscaling_policy).
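
Following the guide linked above, a minimal sketch of a target-tracking policy for a hypothetical table's read capacity might look like this; write capacity is configured the same way using the `WriteCapacityUnits` dimension.

```hcl
# Hypothetical table name and limits; read capacity only - writes are analogous.
resource "aws_appautoscaling_target" "table_read" {
  service_namespace  = "dynamodb"
  resource_id        = "table/example-table"
  scalable_dimension = "dynamodb:table:ReadCapacityUnits"
  min_capacity       = 1
  max_capacity       = 20
}

resource "aws_appautoscaling_policy" "table_read" {
  name               = "example-table-read-utilisation"
  policy_type        = "TargetTrackingScaling"
  service_namespace  = aws_appautoscaling_target.table_read.service_namespace
  resource_id        = aws_appautoscaling_target.table_read.resource_id
  scalable_dimension = aws_appautoscaling_target.table_read.scalable_dimension

  target_tracking_scaling_policy_configuration {
    predefined_metric_specification {
      predefined_metric_type = "DynamoDBReadCapacityUtilization"
    }
    # Scale RCUs to keep consumed capacity at roughly 70% of provisioned.
    target_value = 70
  }
}
```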
### AWS Lambda

Hackney does not use provisioned concurrency for the majority of its Lambda functions. This means that Lambda handles concurrency and scaling automatically, as described in the [official documentation](https://docs.aws.amazon.com/lambda/latest/dg/invocation-scaling.html). For this reason, there is no need to apply auto-scaling, as it is already handled by default.

For any Lambda functions that do use provisioned concurrency, an auto-scaling policy must be applied; the AWS Application Auto Scaling service can be used to automate the scaling process, as described [here](https://docs.aws.amazon.com/autoscaling/application/userguide/services-that-can-integrate-lambda.html).
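
For that rarer provisioned-concurrency case, a sketch using the same Application Auto Scaling resources, with a hypothetical function and alias, might look like this:

```hcl
# Hypothetical function/alias names; scales provisioned concurrency between 1 and 10.
resource "aws_appautoscaling_target" "lambda_pc" {
  service_namespace  = "lambda"
  resource_id        = "function:example-function:live"
  scalable_dimension = "lambda:function:ProvisionedConcurrency"
  min_capacity       = 1
  max_capacity       = 10
}

resource "aws_appautoscaling_policy" "lambda_pc" {
  name               = "example-function-pc-utilisation"
  policy_type        = "TargetTrackingScaling"
  service_namespace  = aws_appautoscaling_target.lambda_pc.service_namespace
  resource_id        = aws_appautoscaling_target.lambda_pc.resource_id
  scalable_dimension = aws_appautoscaling_target.lambda_pc.scalable_dimension

  target_tracking_scaling_policy_configuration {
    predefined_metric_specification {
      predefined_metric_type = "LambdaProvisionedConcurrencyUtilization"
    }
    # Keep provisioned concurrency utilisation at roughly 70%.
    target_value = 0.7
  }
}
```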
