
Commit 87ec314

Getting there - findings and conclusions
1 parent dc2e735 commit 87ec314

4 files changed: +71 additions, -29 deletions

app/posts/personalised-prevention-platform/2025/09/2025-09-01-name.md

Lines changed: 71 additions & 29 deletions
@@ -26,14 +26,14 @@ Our [previous post](/personalised-prevention-platform/2025/04/onboarding-users/)

In this post let’s look at the next stage: presenting next steps.

## Why present next steps?

Everyone we have spoken to told us they believed they could be doing more to maintain their health.

> [!NOTE]
> However they remained largely unaware of the range of help **already** available to them.

We are confident that any given individual can be presented with a range of options that could be well suited to them. For example:

* solo self-directed apps
* in-person group programmes

@@ -58,14 +58,14 @@ It is not news to state that directories of services represent “hard yards”.

However it’s also not news to state that information such as this is required to underpin all kinds of transformational capabilities (not just a weight management journey).

In our case we need to work out how to establish a source of information about the options we might present to a user in a pilot area.

We’ve made some experimental inroads with some help from the AI Health Coach team (thank you!), asking: can algos and agents

* rapidly assemble a “starter for 10” of relevant local services based on set criteria?
* represent a more sophisticated “automated link checker” maintenance approach to changes in information? (a rough sketch follows this list)
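
As a rough illustration of that second question, a maintenance check could start as something very small: periodically re-request every URL we hold and flag anything that has disappeared or moved, for a human (or an agent) to review. A minimal sketch in TypeScript, assuming a hypothetical in-memory `services` list and placeholder URLs rather than our actual data source:

```typescript
// Minimal link-checker sketch. The `services` list and URLs are placeholders –
// in practice the records would come from whatever holds the service information.
type ServiceRecord = { name: string; url: string };

const services: ServiceRecord[] = [
  { name: "Example walking group", url: "https://example.org/walking-group" },
  { name: "Example weight management app", url: "https://example.org/wm-app" },
];

async function checkLinks(records: ServiceRecord[]): Promise<string[]> {
  const findings: string[] = [];
  for (const record of records) {
    try {
      const response = await fetch(record.url, { redirect: "manual" });
      if (response.status >= 400) {
        findings.push(`${record.name}: broken (${response.status})`);
      } else if (response.status >= 300) {
        findings.push(`${record.name}: moved (${response.status} -> ${response.headers.get("location")})`);
      }
    } catch (error) {
      findings.push(`${record.name}: unreachable (${String(error)})`);
    }
  }
  return findings; // hand these to a person, or to an agent that drafts updates
}

checkLinks(services).then((findings) => console.log(findings));
```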

Looking forwards we need to bear in mind that what we’re designing is not the only thing that such information provides value to. How do we design our data for re-use as agnostically as possible?
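
One way to keep that re-use in mind is to describe each option in terms that don’t assume any particular journey – what it is, who provides it, where and how it runs, how someone gets started – rather than in terms of our weight management flow. A speculative sketch of that kind of record, where the field names are purely illustrative and not an agreed schema:

```typescript
// Speculative, journey-agnostic shape for a "next step" record.
// Field names are illustrative only – not an agreed or real schema.
interface NextStepOption {
  id: string;
  name: string;
  summary: string;
  format: "app" | "online-programme" | "in-person-group" | "facility" | "event";
  provider: { name: string; national: boolean }; // national vs local is metadata, not a separate model
  location?: { area?: string; onlineOnly?: boolean };
  schedule?: { frequency?: string; durationWeeks?: number };
  access: { referralNeeded: boolean; cost?: string };
  links: { moreInfo: string; booking?: string };
  lastVerified: string; // ISO date – feeds the link-checking / maintenance loop above
}
```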

On top of this, it’s critical to acknowledge that the mechanics of some next steps could be complex, even if their central proposition is not. For example any given option could have multiple:

@@ -92,11 +92,11 @@ Do people understand the connection between the:
* results themselves
* available filters in the results listing?

How easy is it for the user to explore all available options?

### 4. Can we gauge intent?

A central piece of our proposition (and prevention strategy) is the idea of a feedback loop. We need to be able to check in and support people during their activities, playing the role of “interested friend”.

Yet again it is not news to state that “things are not joined up”. There is no consistent underlying capability that allows us to rely on “knowing via tech” what a user has decided (or not) to do next.

@@ -106,8 +106,6 @@ How can we know if a user has:
* attended a community event?
* used a public facility?

## What we did

### Expanding the prototype user journey

@@ -153,7 +151,7 @@ to:
* looser content retaining a strong structure
* minimal imagery
* a non-blocking approach to getting clues to intent

{% from "nhsuk/components/images/macro.njk" import image as nhsukImage %}
{{ nhsukImage({

@@ -167,50 +165,94 @@ to:

![A sticker with the question 'has it got legs?'](has-it-got-legs@2x.png)

### Blend national and local

Since [discovery](/personalised-prevention-platform/2025/03/discovery-summary/) we’ve continuously proved that presenting a blend of national and local has real value to people. Throughout our sessions people asked if the options were real (they all were), and then made notes to look them up afterwards.

“National” and “local” are false distinctions, very visible to us, as we operate within organisational structures.

But where a thing “comes from” is utterly irrelevant to a user. You can be interested in Active 10, interested in your local Parkrun, and interested in the public gym in your local park.

### Strike a balance between needs and wants

Earlier onboarding prototypes included goal and priority setting segments, along with asking about barriers – things that could get in the way.

We removed these segments, instead asking a series of questions directly mapped to filters in the results listing, for example:

![A question page asking 'how do you like to be taught or coached?' alongside a column of filters displaying the same](example-filter-question@2x.png 'Onboarding questions mapped to filters')
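
To make the “questions mapped to filters” mechanic concrete: at this stage the mapping was close to one-to-one, with each answer simply pre-selecting the equivalent filter in the results listing. A rough sketch of that behaviour, where the question ids, answer values and filter names are made up for illustration:

```typescript
// Rough sketch: onboarding answers pre-select the equivalent results filters.
// Question ids, answer values and filter names are made up for illustration.
type Filters = { coachingStyle?: string[]; format?: string[] };

const questionToFilter: Record<string, keyof Filters> = {
  "how-do-you-like-to-be-taught-or-coached": "coachingStyle",
  "how-do-you-prefer-to-take-part": "format",
};

function filtersFromAnswers(answers: Record<string, string[]>): Filters {
  const filters: Filters = {};
  for (const [questionId, selected] of Object.entries(answers)) {
    const filterName = questionToFilter[questionId];
    if (filterName && selected.length > 0) {
      filters[filterName] = selected;
    }
  }
  return filters;
}

// Answering "in a group" to the coaching question pre-selects that filter.
console.log(filtersFromAnswers({ "how-do-you-like-to-be-taught-or-coached": ["in a group"] }));
```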

This reductive approach led to some good evidence.

Reliance on asking preference alone means any “recommendation” or even “from left field” aspect is negated. We’re in pure service finder territory, and we’ve left no room for the unexpected or left-field that might spark engagement.

Some people told us they were basing their preferences on past experience – but that past experience was rooted in activities that had lapsed. So arguably we’re running a risk of simply presenting similar options to those that may have failed the user in the past.

We’ve proven that we need to become more opinionated in the options we present. These options must be mapped to a user’s declared goals, priorities, barriers, and preferences, but also weighted by our (systemic) opinion.

### Handle the relationship between volume, variety, and granularity

With 18 services in the “live” API, one or two users combined preferences that led to zero results. This led to immediate disengagement and, in real life, dropoff.

There is a strong relationship between the volume and variety of information we hold, and the ability to tune (or “personalise”) a set of results. If you only have a few options to offer, you can only offer so much granular control in your interface.

Our 18 services represent a generic baseline that we know to be suitable for all areas and a wide range of people. In our pilot we expect to layer local offerings on top of this baseline, and so our volume and variety increases. With an idea of that increase, we get a better idea of how much granularity we can introduce.

Having a localised layer also allows us to practice “no dead ends”. If our base selection is generic, then a minimum set of options would include all relevant generic options. For example a user’s priority to “exercise or move more” would at the absolute minimum return the Active 10 and Couch to 5k apps.
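
Expressed as logic, “no dead ends” is simply a floor on the results: if the user’s combined filters match nothing, fall back to the generic baseline options relevant to their priority rather than showing an empty listing. A minimal sketch under that assumption (types and names are placeholders):

```typescript
// "No dead ends" sketch: never return zero results – fall back to the
// generic baseline options for the user's declared priority instead.
interface Option {
  name: string;
  generic: boolean;     // part of the national baseline (e.g. Active 10, Couch to 5k)
  priorities: string[]; // e.g. ["exercise or move more"]
}

function resultsFor(priority: string, filtered: Option[], all: Option[]): Option[] {
  if (filtered.length > 0) {
    return filtered;
  }
  // Minimum selection: every generic option relevant to the priority.
  return all.filter((option) => option.generic && option.priorities.includes(priority));
}
```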

### “Engagement” can be simple

Unsurprisingly the early presentation of options was not engaging, with users often mentioning how unexciting they were.

What was surprising was how effective deliberately small tweaks were. The addition of only a small amount of imagery (in some cases only a logo) along with a looser content structure alleviated any further comment.

![A service result listing before and after the addition of a logo](logo-addition@2x.png 'Small visual tweaks had marked effect')

### Intent is the next big challenge

A big challenge for us is to figure out how and where in the overall journey we can find out what a user is actually doing.

It’s very easy in the abstract to miss interaction gotchas like this. We show the user the options, they pick one, then we check in later to see how it’s going. Easy, right?

Not so fast there. Let’s take Parkrun as an example. A potential user journey could be:

1. notice Parkrun in the listing
2. read more in the details and get interested
3. go to the Parkrun site to find out more (leaving our site, right?)
4. get engaged and register with Parkrun
5. attend their first event

From point 3 onwards, we will have no idea what they’re doing.

There are two basic ways to approach this:

1. Gain a “declaration of intent” from the user.
2. Assemble as many clues and indications as we can during the user journey.

![Two screenshots, one showing an 'I want to do this' button, and the other showing app links and a 'what do you think?' question](intent-iteration@2x.png 'From blocking to gathering clues')

Our initial “blocking” design was essentially built with the expectation of failure. We insisted on a declaration: “I want to do this”.

We wanted to test it (hey, if it works then great, that would be neat), but we also wanted to draw out solid reasons why we should not do it. Tactics, right?

In order to get a declaration of intent to use an option, we’re asking for an _immediate_ commitment from the user. It’s unrealistic to demand that people commit to a change this quickly, and it creates fragility at a key point for us.

To create an interface that _demands_ a declaration means you have to withhold useful information (links to find out more, directions, any contact details). If you have a single high-importance CTA that you are massively relying on, you cannot provide routes around it – you are aiming to strongly funnel users.

The risk of this failing is a risk to the actual proposition.

All this aside, users were confused. Lots of people didn’t see or understand what the CTA was for. When prompted to explain what it was, answers varied from things like “it would launch the app, right?” to “it would display more details” (correct).

Even if you somehow nail this (unlikely – remember the service information problem), the risk of false positives remains.

The alternative is to gather clues and indicators, both via tracking and via opportunity – none of which is a single point of failure.
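
In practice that mostly means quietly recording the signals we can already see – the outbound click to Parkrun, an app link being followed, a “what do you think?” answer at check-in – and treating each as a weak clue rather than a gate. A sketch of the kind of clue log we have in mind, where the event names and storage are invented for illustration:

```typescript
// Sketch: record weak "intent clues" as they happen, without blocking the journey.
// Event names and the in-memory store are invented for illustration.
type IntentClue = {
  userId: string;
  optionId: string;
  clue: "viewed-details" | "clicked-outbound-link" | "clicked-app-link" | "positive-check-in";
  at: Date;
};

const clues: IntentClue[] = [];

function recordClue(clue: Omit<IntentClue, "at">): void {
  clues.push({ ...clue, at: new Date() }); // fire-and-forget – never a point of failure
}

// A later check-in weighs up the clues instead of relying on a single declaration.
function looksInterested(userId: string, optionId: string): boolean {
  return clues.some(
    (c) => c.userId === userId && c.optionId === optionId && c.clue !== "viewed-details"
  );
}
```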

---

Without a decent idea of even the nature of the option chosen – is it an app, is it in person, how long does it take, what’s the interval? – our ability to follow up, or to personalise that follow-up, is hampered.

Think “patient reported outcomes” for example.

We’ve got ideas about how to glean intent from various clues in this part of the journey, and as we produce our check-ins we’ll be stress testing them.

(Epilogue) What we’re doing next
