
Commit d3042e5

SPI metrics-post (#261)
* metrics-post: post describing our approach to measuring safety
* Fix typos: fixed some typos and removed reference to "quantifying" as it's more qual type research.

File tree

3 files changed: +32 -0 lines changed

---
title: Metrics
description: How we're measuring the service
date: 2025-09-29
tags:
- metrics
- measuring
- safety
- service design
---

The outcome of the service is to send invitations to members of the public. For the national vaccination campaigns we work on, this can mean millions of invitations. It is extremely important that we invite the right people with the right invitation: it would be costly, both reputationally and financially, if we were to send invitations to the wrong people. Increasing safety is therefore an important goal for us. By safety we mean mitigating the risk of making a mistake in the configuration and accidentally sending erroneous invitations.

## Measuring safety

We work with a clinical safety team, who perform regular reviews to ensure the service doesn't contain any undue risk. But we also wanted a way to track perceived safety with users as we test, so that we can check we are increasing the safety of the service.

Working with users, we designed a task for them to create a configuration that would invite people for a hypothetical RSV vaccination. After they had performed the task using the SPI user interface, we asked them to rate how safe they perceived the process to be in comparison to the old process (before we introduced the user interface, users had to manually create JSON files to set the configuration).
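As purely illustrative context for that old process, a hand-written configuration file might have looked something like the sketch below. The field names and values here are hypothetical examples, not the real SPI configuration schema.

```json
{
  "campaign": "rsv-example",
  "cohort": {
    "minAgeYears": 75,
    "maxAgeYears": 79
  },
  "invitation": {
    "template": "rsv-standard-letter",
    "channel": "letter"
  }
}
```

A single mistyped value in a file like this changes who gets invited, which is the kind of configuration mistake the user interface is intended to guard against.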
The chart below shows how users felt about the safety of the process using the SPI user interface in comparison to the previous process (represented by the "equivalent" line). Although this did not give us an absolute scale, it showed us that users felt the process was safer overall, and we could see which areas had improved most.

[![perceived safety round 1](metrics1.png)](metrics1.png)

## Iterating and measuring again

A few weeks later, after doing work on the [Rule library](/select-people-for-invitation/2025/03/rule-library/), we ran a similar exercise, this time with a more complicated hypothetical Covid invitation configuration. The chart below shows how users' safety ratings improved again.

[![perceived safety round 2](metrics2.png)](metrics2.png)

This is an approach we can continue to use, alongside the regular reviews with the clinical safety team, to ensure that we don't introduce any features which reduce safety and that we stay focussed on eliminating risk where possible.
Image files added: metrics1.png and metrics2.png (36.7 KB and 37.2 KB).
