This reductive approach led to some good evidence.
Reliance on asking preference alone means any recommendation aspect is negated. We’re in pure service finder territory, and we’ve left no room for the unexpected or left-field that might spark engagement.
Some people told us they were basing their preferences on past experience – but that past experience was rooted in activities that had lapsed. So arguably we risk simply presenting similar options to those that may have failed the user in the past.
We’ve proven that we need to become more opinionated in the options we present. These options must be mapped to a user’s declared goals, priorities, barriers, and preferences, but also weighted by “systemic” opinion.
### Continuously assess the relationship between volume, variety, and granularity
With 18 services in our prototype API, one or two users combined preferences that led to zero results. This led to immediate disengagement – and, in real life, drop-off.
There is a strong relationship between the volume and variety of information we hold, and the ability to tune (or “personalise”) a set of results. If you only have a few options to offer, you can only offer so much granular control in your interface.
Our 18 services represent a generic baseline that we know to be suitable for all geographic areas and a wide range of people. In our pilot we expect to layer local offerings on top of this baseline, and so our volume and variety increases. With an idea of that increase, we get a better idea of how much granularity we can introduce.
Having a localised layer also allows us to practice “no dead ends”. If our base selection is generic, then a minimum set of options would include all relevant generic options. For example, a user’s priority to “exercise or move more” would at the absolute minimum return the Active 10 and Couch to 5k apps.
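As a thought experiment, the “no dead ends” rule can be sketched as a filter with a guaranteed fallback. Everything below – the service records, the `baseline` flag, the `options_for` helper – is invented for illustration, not our actual implementation:

```python
# Illustrative catalogue: names and fields are invented for this sketch.
BASELINE = [
    {"name": "Active 10", "priorities": {"move-more"}, "baseline": True},
    {"name": "Couch to 5k", "priorities": {"move-more"}, "baseline": True},
    {"name": "Local walking group", "priorities": {"move-more"}, "baseline": False},
]

def options_for(priority, preferences, catalogue):
    """Return options for a priority, never an empty list."""
    matched = [s for s in catalogue
               if priority in s["priorities"] and preferences(s)]
    if matched:
        return matched
    # Dead end reached: no service survived the preference filter,
    # so fall back to every baseline option relevant to this priority.
    return [s for s in catalogue
            if priority in s["priorities"] and s["baseline"]]

# Even a preference combination nothing satisfies still returns the baseline:
fallback = options_for("move-more", lambda s: False, BASELINE)
# → Active 10 and Couch to 5k
```

However granular the filters become, the user is always left with at least the generic options for their declared priority.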
### “Being engaging” can be quite simple
Unsurprisingly, the early presentation of options (above) was not engaging, with users often mentioning how unexciting they were.
What was surprising was how effective deliberately small tweaks were. The addition of only a small amount of imagery – in some cases just a logo – along with a looser content structure was enough to head off any further comment.
### Intent is the next big challenge
A big challenge for us is to figure out how and where in our overall prevention journey we can find out what a user is actually doing. We need to be able to do this in order to:
* check in with someone in a structured and personalised way – we approach the user with a “subject”
* match feedback to options in order to improve our recommendations to all users
* provide feedback to services themselves
* get a better picture of outcomes
In the simplest possible scenario, we show the user options, they pick one, then we check in later to see how it’s going. Easy, right?
Not so fast there. It’s very easy in the abstract to miss interaction gotchas like this.
Let’s take Parkrun as an example. A potential user journey could be:
1. notice Parkrun in the listing
2. read more in the details and get interested
3. click through to the Parkrun site to find out more
4. get engaged and register with Parkrun
5. attend their first event
From point 3, we have no idea what the user does next. The click-through does not represent “starting” or “choosing”; we can only infer that it represents a desire to find out a bit more about something before making a decision.
Remember, it would be unwise to attempt to replicate, host and maintain information about any possible option in its entirety.
There are three potential ways to approach this:
1. Gain a commitment from the user that they are going to take up an option that we’ve presented.
2. Receive information back from services themselves.
3. Work to assemble clues and indications as to intent during the user journey.
---
We felt we needed to work to disprove the first approach, to counter it being repeatedly raised going forward.
We started with an initial “blocking” design, insisting on a commitment: “I want to do this”.
In sessions, there was inconsistency in users’ understanding of the interaction. When prompted to explain, answers varied from things like “it would launch the app, right?” to “it would display more details” (correct).
Here’s why an approach like this isn’t realistic and won’t work for users (or us):
* We’re asking for an **immediate** commitment from the user.
* That commitment is required before the user has access to all the information they may need.
* Demanding commitment this quickly creates unreliability at a key point, risking false positives.
* To create an interface that requires a declaration means you must remove all affordances for onward journeys, creating friction in exactly the wrong place.
* There is literally no user need here – we’re making the user do the work to join things up for us.
---
The second potential approach is the ideal: we receive information back from services themselves about usage.
A reporting approach benefits from being reliable and removes unnecessary work from the user to join things up. It’s definitely something to explore, particularly with options that offer online referral or registration.
However, we must also consider:
* informal or small-scale community-based options, for example a litter-picking club
* services that don’t _want_ to report on an individual level, for example any service offering anonymity of any kind
* facilities which have zero registration or reporting, for example a public gym in a park
All these examples are completely viable – the lack of “being joined up” is not a reason to exclude them.
In short, this approach is strong – we’d likely consider it the best – but we need to be able to handle a range of service types.
---
Finally, we can work to assemble clues and indications as to someone’s intent during this onboarding journey.
Perhaps we can gain clues in the background by using analytics to:
* track and save result sets as the user explores options
* track visits into detail pages
* measure dwell time, scroll depth and so on in such pages
* track outbound clicks
We can also experiment more with providing opportunities for the user to communicate interest:
* favouriting or liking
* asking the user what they think of an option in-page
* providing tools to send or share option details
Using multiple techniques puts us in the realm of probabilities and likelihoods. This is more realistic and reflective of what we know about people’s lived experiences. It also prevents us from building a dependency on false points of truth.
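To make “probabilities and likelihoods” concrete, one naive way to combine such signals is a weighted score. The signal names and weights below are invented for illustration – a real model would be calibrated against observed outcomes:

```python
# Invented weights: no single signal alone implies commitment.
SIGNAL_WEIGHTS = {
    "viewed_details": 0.1,
    "long_dwell": 0.15,
    "outbound_click": 0.3,
    "favourited": 0.25,
    "positive_in_page_feedback": 0.2,
}

def intent_likelihood(signals):
    """Combine observed signals into a rough likelihood, capped at 1.0."""
    score = sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in set(signals))
    return min(score, 1.0)

# An outbound click on its own suggests curiosity, not commitment:
curiosity = intent_likelihood({"outbound_click"})  # 0.3
# Several weak signals together shift the likelihood meaningfully:
strong = intent_likelihood({"viewed_details", "long_dwell",
                            "outbound_click", "favourited"})  # roughly 0.8
```

Crucially, losing any one signal (say, the outbound click) only degrades the estimate rather than breaking it – there is no single false point of truth.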
---
### (Epilogue) What we’re doing next
- Latest work is around “the very first check in”
- Jumping the gap between presenting the options and figuring out if something’s being done – we’ll continue to explore the how
As we produce our check-ins, we’ll be stress testing them.