
Commit 194d6ce

ivorc and andyblundell authored
Reorder assessment and tweak testing section (#62)
* Reorder assessment and tweak testing section
* Testing review tweaks
* Various review tweaks based on feedback from reviews
* Cattle, not pets

Co-authored-by: andyblundell <[email protected]>
1 parent 89d1ffc commit 194d6ce

File tree

1 file changed: +58 -55 lines changed

review.md

Lines changed: 58 additions & 55 deletions
@@ -75,18 +75,18 @@ Finally (and most importantly) identify actions to move the score upward.
 
 ### 1. Mission
 * We have clear goals.
-* User needs are well understood. They validated through user research.
 * We know the metrics which will measure success and how they will be measured.
+* User needs are well understood and validated through user research.
 * Non-functional requirements are understood and based on user needs.
 
 ### 2. Plan
 * We have a plan which is visible to all of us.
-* Our plan guides us.
+* The plan is at the right level and shows what we expect to be delivered each sprint/month &mdash; usually 2–5 items in each increment.
 * It is up to date and complete.
 * It changes when it should but is stable enough.
 * It gives our stakeholders a clear forecast of what is most likely to happen over the coming time periods.
 * It makes sure we work on the right things first and helps us predict and avoid issues.
-* Functionality is delivered in [thin vertical slices](https://docs.google.com/document/u/1/d/1TCuuu-8Mm14oxsOnlk8DqfZAA1cvtYu9WGv67Yj_sSk/pub), starting by building a [steel thread](https://www.agiledevelopment.org/agile-talk/111-defining-acceptance-criteria-using-the-steel-thread-concept) / [walking skeleton](https://www.henricodolfing.com/2018/04/start-your-project-with-walking-skeleton.html).
+* Functionality is delivered in [thin vertical slices](https://docs.google.com/document/u/1/d/1TCuuu-8Mm14oxsOnlk8DqfZAA1cvtYu9WGv67Yj_sSk/pub), starting by building a [steel thread](https://www.agiledevelopment.org/agile-talk/111-defining-acceptance-criteria-using-the-steel-thread-concept) / [walking skeleton](https://www.henricodolfing.com/2018/04/start-your-project-with-walking-skeleton.html)
 * Risky items and dependencies are clearly indicated and work to reduce risk is prioritised.
 * The plan gets the right balance between delivering features and operational aspects.
 * We track risks, issues, assumptions and dependencies ('RAID') and work creatively to resolve them.
@@ -123,12 +123,21 @@ Finally (and most importantly) identify actions to move the score upward.
 
 ### 5. Pawns or players
 * As a team, we are in control of our destiny!
-* We decide what to build and how to build it.
+* We are given problems to solve, not just solutions to implement.
+* We decide how to build it.
 
 ### 6. Outside support
 * We always get great support and help from outside the team when we ask for it!
 * We are listened to and our ideas are used to improve the organisation.
 
+### 7. Skills and knowledge
+* We have the skills and knowledge we need.
+* We are familiar with the tech in use and know how to use it well.
+* We know the codebase and are comfortable making changes in it.
+* We know how to operate the live system reliably and diagnose and fix things when they break.
+* We have the skills and knowledge for what we will be doing next.
+* Skills and knowledge are well spread between team members.
+
 ## Individual component or system
 You may wish to score each individual component or system separately for these aspects.
 > Identify components based on natural seams in the system. Ultimately, the aim is to make it easy to decide what the appropriate score is for each "component". If you can't decide between a low and high score for an aspect then this may indicate that component should be broken down to allow finer grained scoring.
@@ -139,74 +148,49 @@ You may wish to score each individual component or system separately for these a
 > * The cloud platform
 > * The CI/CD system
 
-### 7. Skills and knowledge
-* We have the skills and knowledge we need.
-* Skills and knowledge are well spread between team members.
-* We are familiar with the tech in use and know how to use it well.
-* We know the codebase and are comfortable making changes in it.
-* We know how to operate the live system reliably and diagnose and fix things when they break.
-* We have the skills and knowledge for what we will be doing next.
-
-### 8. Tech and architecture
-* The tech helps us deliver value.
-* We enjoy working with it and it supports fast, reliable and safe delivery.
-* Our system is built as a set of independent services/components where appropriate (see [Architect for Flow](patterns/architect-for-flow.md)).
-* The architecture is clean.
-* The tech and architecture make testing, local development and live operations easy.
-* We use serverless or ephemeral infrastructure.
-
-### 9. Healthy code base
+### 8. Healthy code base
 * We're proud of the quality of our code!
-* It is clean, easy to read, and safe to work with.
+* It is clean, easy to read, well structured and safe to work with.
 
-### 10. Testing
+### 9. Testing
 * We have great test coverage.
 * Testing is everyone's responsibility.
-* The time we spend on testing is really worthwhile.
-* We use the right mixture of tools and techniques, e.g.
-  * code-level unit and integration tests, and maybe behaviour-driven development
-  * running-system component, integration and whole-system tests
-* Our tests focus on individual components and the contracts between them, not on testing the whole system together.
-* We use stubs to insulate our tests from other components and systems.
+* Repetitive tests are automated.
+* Testing is considered before each work item is started and throughout its delivery.
+* We use the right mix of testing techniques including automated checks and exploratory testing.
+* We consider whether the system genuinely meets user needs, rather than just following specifications blindly.
+* We have code-level unit and integration tests, and maybe practice behaviour-driven development.
+* We have component, integration and whole-system tests which interact with a running system.
+* Our automated checks focus on individual components and the contracts between them, not on testing the whole system together.
+* We use stubs to insulate our tests from other components and systems &mdash; always for automated tests, sometimes for exploratory testing.
+* We understand user needs and non-functional requirements and our tests prove they are being met.
+  * e.g. accessibility, browser compatibility, performance, capacity, resilience.
 * Our components have versioned APIs.
   * Breaking changes are detected and clearly indicated.
     * e.g. using Consumer-Driven Contract testing and semantic versioning.
-* We understand user needs and non-functional requirements and our tests prove they are being met.
-  * e.g. accessibility, browser compatibility, performance, capacity, resilience.
-* Test data is automatically generated and has the right properties and scale.
+* We use the right combination of automatically generated test data and anonymised live data and our data has the right properties and scale.
+
+### 10. Tech and architecture
+* We use modern technologies which work well for us.
+  * e.g. serverless or ephemeral/immutable instances ([cattle, not pets](http://cloudscaling.com/blog/cloud-computing/the-history-of-pets-vs-cattle)).
+* We enjoy working with them and they support fast, reliable and safe delivery.
+* The tech and architecture make testing, local development and live operations easy.
+* The architecture is clean.
+* Our system is built as a set of independent services/components where appropriate (see [Architect for Flow](patterns/architect-for-flow.md)).
 
-### 11. Easy to release
+### 11. Easy and safe to release
 * It is easy and straightforward to release a change to production.
 * We can release on demand, typically multiple times per day.
+* Our deployments are automated, including infrastructure and everything needed to build an environment from scratch.
 * Every code merge triggers the creation of a potentially releasable build artifact.
   * That same artifact is deployed to each environment (e.g. dev, test, prod) rather than a new build being done for each.
 * We can deploy any recent version.
-* Our deployments are automated, including everything needed to build an environment from scratch.
 * Our test and production environments are all in a known state, including configuration parameters.
 * The CI/CD system has secure access control and credentials to deploy to each environment are handled securely.
 * We use blue-green/canary deployments to safely verify each deployment before fully switching over to the new version.
 * Our non-prod environments are cleared down automatically when they're no longer needed.
 
-### 12. Operations
-* We consider operations from day one and design the system to be easy to operate.
-* We include operability features throughout delivery, treating them as user needs of the support team.
-  * e.g. monitoring and log aggregation.
-* Our systems are reliable.
-* We have great insight into how live systems are functioning.
-  * e.g. metrics dashboards, request tracing and application logs.
-* We detect potential issues and take action to prevent them.
-  * e.g. TLS certificate expiry, hitting quota limits.
-* We detect incidents before our users tell us about them and have a slick process for resolving them.
-* We classify incidents and work to agreed protocols according to the Service Level Agreement (SLA) for each.
-* We learn from incidents using blameless postmortems.
-* We use Service Level Objectives (SLOs) and error budgets to balance speed of change with operational reliability.
-* We design for failure and we're confident our service will self-heal from most issues.
-* Our service is immutable: rather than make changes, we tear down and rebuild every time.
-* We can see what is currently deployed in each environment, including configuration and feature flags, and can see the history of changes.
-* Our infrastructure scales automatically.
-* We have clear visibility of our environment costs, and we regularly check for waste.
-
-### 13. Security and compliance
+### 12. Security and compliance
 * We are confident our systems are secure.
 * We model threats and design systems to be secure.
 * Security is baked into our software delivery process.
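
As a concrete illustration of the stubbing item under Testing above: a minimal Python sketch in which the automated test exercises the logic without ever calling the real downstream system. The names (`count_open_referrals`, `list_referrals`) are invented for the example and are not part of the framework.

```python
# Hedged sketch of stub-based test isolation. The downstream component is
# replaced with a stub so the test is fast, deterministic and self-contained.
from unittest.mock import Mock


def count_open_referrals(client) -> int:
    """Logic under test: count the referrals still marked as open."""
    return sum(1 for r in client.list_referrals() if r["status"] == "open")


def test_count_open_referrals_with_a_stubbed_client():
    client = Mock()  # stands in for the real component's API client
    client.list_referrals.return_value = [
        {"id": 1, "status": "open"},
        {"id": 2, "status": "closed"},
        {"id": 3, "status": "open"},
    ]
    assert count_open_referrals(client) == 2
```

Because the stub pins the downstream response, the test stays reliable regardless of the state of other components or systems.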
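The versioned-APIs item works the same way in any stack: under semantic versioning, consumers treat a major-version bump as the signal that a breaking change has landed. A minimal sketch, with an invented helper name:

```python
# Hedged sketch: under semantic versioning (MAJOR.MINOR.PATCH), a breaking
# API change must be accompanied by a major version bump.
def is_breaking_change(old_version: str, new_version: str) -> bool:
    old_major = int(old_version.split(".")[0])
    new_major = int(new_version.split(".")[0])
    return new_major > old_major


assert is_breaking_change("1.4.2", "2.0.0")      # major bump: breaking
assert not is_breaking_change("1.4.2", "1.5.0")  # minor bump: compatible
```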
@@ -225,7 +209,26 @@ You may wish to score each individual component or system separately for these a
 * Automated checks are in place for vulnerabilities in dependencies such as code libraries and container or VM base images.
 * There is strong separation (e.g. different AWS accounts) for test and production systems.
 * Humans don't have write access to production, except via time-limited "break-glass" permissions.
-* We keep the versions of technology in our service up to date.
+* We keep the versions of technology in our system up to date.
+
+### 13. Operability and live support
+* We consider operations from day one and design the system to be easy to operate.
+* We include operability features throughout delivery, treating them as user needs of the support team.
+  * e.g. monitoring and log aggregation.
+* Our systems are reliable.
+* We have great insight into how live systems are functioning.
+  * e.g. metrics dashboards, request tracing and application logs.
+* We detect potential issues and take action to prevent them.
+  * e.g. TLS certificate expiry, hitting quota limits.
+* We detect incidents before our users tell us about them and have a slick process for resolving them.
+* We classify incidents and work to agreed protocols according to the Service Level Agreement (SLA) for each.
+* We learn from incidents using blameless postmortems.
+* We use Service Level Objectives (SLOs) and error budgets to balance speed of change with operational reliability.
+* We design for failure and we're confident our service will self-heal from most issues.
+* Our components are immutable: every deployment creates new instances which replace the old ones.
+* We can see what is currently deployed in each environment, including configuration and feature flags, and can see the history of changes.
+* Our infrastructure scales automatically.
+* We have clear visibility of our environment costs, and we regularly check for waste.
 
 # How to facilitate
 
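The SLO and error-budget item under "Operability and live support" rests on simple arithmetic: the budget is the number of failures the SLO tolerates over a window, and whatever is unspent can be risked on change. A minimal sketch; the 99.9% target and the request counts are invented for illustration:

```python
# Hedged sketch of SLO/error-budget arithmetic with an invented target.
SLO_TARGET = 0.999  # fraction of requests that must succeed


def error_budget_remaining(total_requests: int, failed_requests: int) -> float:
    """Fraction of the window's error budget still unspent (negative if blown)."""
    allowed_failures = total_requests * (1 - SLO_TARGET)
    return 1 - failed_requests / allowed_failures


# 2,000,000 requests at 99.9% allow ~2,000 failures; 500 failures so far
# leaves roughly 75% of the budget to spend on risky deployments.
print(error_budget_remaining(2_000_000, 500))  # ~0.75
```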