Commit c725c03 (“Nearly there”), 1 parent: 85210c7

1 file changed: 72 additions, 32 deletions

app/posts/personalised-prevention-platform/2025/09/2025-09-01-name.md

@@ -183,81 +183,121 @@ We removed these segments, instead asking a series of questions directly mapped

This reductive approach led to some good evidence.

Reliance on asking preference alone means any recommendation aspect is negated. We’re in pure service finder territory, and we’ve left no room for the unexpected or left-field that might spark engagement.

Some people told us they were basing their preferences on past experience – but that past experience was rooted in activities that had lapsed. So arguably we run the risk of presenting similar options to those that may have failed the user in the past.

We’ve proven that we need to become more opinionated in the options we present. These options must be mapped to a user’s declared goals, priorities, barriers, and preferences, but also weighted by systemic opinion.
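
As a sketch of what “weighted by systemic opinion” could mean in practice, the ranking below blends a user’s declared goals, priorities, barriers, and preferences with an editorial weight. All field names and weights here are illustrative assumptions, not our actual model:

```python
# Illustrative sketch only: field names and weights are assumptions,
# not the platform's real recommendation model.

def score_option(option: dict, user: dict) -> float:
    """Score an option against a user's declared answers, then apply
    a systemic (editorial) weight between 0 and 1."""
    score = 0.0
    score += 2.0 * len(set(option["goals"]) & set(user["goals"]))
    score += 1.5 * len(set(option["priorities"]) & set(user["priorities"]))
    # Options that collide with a declared barrier are pushed down
    score -= 2.0 * len(set(option["barriers"]) & set(user["barriers"]))
    score += 1.0 * len(set(option["preferences"]) & set(user["preferences"]))
    # "Systemic opinion" as a multiplier, so editorial judgement can
    # boost or dampen an otherwise similar match
    return score * option["editorial_weight"]

def rank_options(options: list, user: dict) -> list:
    return sorted(options, key=lambda o: score_option(o, user), reverse=True)
```

The point of the multiplier is that two options matching a user’s declared answers equally well can still be ordered by our opinion of them, rather than by preference alone.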

### Continuously assess the relationship between volume, variety, and granularity

With 18 services in our prototype API, one or two users combined preferences that led to zero results. This led to immediate disengagement and, in real life, drop-off.

There is a strong relationship between the volume and variety of information we hold, and the ability to tune (or “personalise”) a set of results. If you only have a few options to offer, you can only offer so much granular control in your interface.

Our 18 services represent a generic baseline that we know to be suitable for all geographic areas and a wide range of people. In our pilot we expect to layer local offerings on top of this baseline, and so our volume and variety increases. With an idea of that increase, we get a better idea of how much granularity we can introduce.

Having a localised layer also allows us to practise “no dead ends”. If our base selection is generic, then a minimum set of options would include all relevant generic options. For example, a user’s priority to “exercise or move more” would at the absolute minimum return the Active 10 and Couch to 5k apps.
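
A minimal sketch of the “no dead ends” rule, assuming a hypothetical baseline mapping from priorities to our generic services (the app names are real, the data shapes are invented for illustration):

```python
# Sketch of "no dead ends": never return zero results. If the user's
# combined preferences filter out every local service, fall back to the
# generic baseline for their priority. Data shapes are illustrative.

BASELINE = {
    "exercise or move more": ["Active 10", "Couch to 5k"],
}

def find_options(priority: str, preferences: set, local_services: list) -> list:
    matches = [
        s["name"]
        for s in local_services
        if priority in s["priorities"] and preferences <= set(s["tags"])
    ]
    # Fall back to the baseline rather than showing an empty result set
    return matches or BASELINE.get(priority, [])
```

However granular the preference controls become, the baseline guarantees the result set never drops to zero for a supported priority.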

### “Being engaging” can be quite simple

Unsurprisingly the early presentation of options (above) was not engaging, with users often mentioning how unexciting they were.

What was surprising was how effective deliberately small tweaks were. The addition of only a small amount of imagery – in some cases only a logo – along with a looser content structure alleviated any further comment.

![A service result listing before and after the addition of a logo](logo-addition@2x.png 'Small visual tweaks had marked effect')

### Intent is the next big challenge

A big challenge for us is to figure out how and where in our overall prevention journey we can find out what a user is actually doing. We need to be able to do this in order to:

* check in with someone in a structured and personalised way – we approach the user with a “subject”
* match feedback to options in order to improve our recommendations to all users
* provide feedback to services themselves
* get a better picture of outcomes

In the simplest possible scenario, we show the user options, they pick one, then we check in later to see how it’s going. Easy, right?

Not so fast there. It’s very easy in the abstract to miss interaction gotchas like this.

Let’s take Parkrun as an example. A potential user journey could be:

1. notice Parkrun in the listing
2. read more in the details and get interested
3. click through to the Parkrun site to find out more
4. get engaged and register with Parkrun
5. attend their first event

From point 3, we have no idea what the user does next. The click-through does not represent “starting” or “choosing”; we can only infer it represents a desire to find out a bit more about something before making a decision.

Remember, it would be unwise to attempt to replicate, host and maintain information about any possible option in its entirety.

There are three potential ways to approach this:

1. Gain a commitment from the user that they are going to take up an option that we’ve presented.
2. Receive information back from services themselves.
3. Work to assemble clues and indications as to intent during the user journey.

---

We felt we needed to properly disprove the first approach, in order to head off it being repeatedly suggested as we move forward.

![Two screenshots, one showing an 'I want to do this' button, and the other showing app links and a 'what do you think?' question](intent-iteration@2x.png 'From blocking to gathering clues')

We started with an initial “blocking” design, insisting on a commitment: “I want to do this”.

In sessions, there was inconsistency in users’ understanding of the interaction. When prompted to explain, answers varied from things like “it would launch the app, right?” to “it would display more details” (correct).

Here’s why an approach like this isn’t realistic and won’t work for users (or us):

* We’re asking for an **immediate** commitment from the user.
* That commitment is required before the user has access to all the information they may need.
* Demanding commitment this quickly creates unreliability at a key point, risking false positives.
* Creating an interface that requires a declaration means removing all affordances for onward journeys, creating friction in exactly the wrong place.
* There is literally no user need here; we’re making the user do the work to join things up for us.

---

The second potential approach is the ideal: we receive information back from services themselves about usage.

A reporting approach benefits from being reliable and removes unnecessary work from the user to join things up. It’s definitely something to explore, particularly with options that offer online referral or registration.

However, we must also consider:

* informal or small-scale community-based options, for example a litter picking club
* services that don’t _want_ to report on an individual level, for example any service offering anonymity of any kind
* facilities which have zero registration or reporting, for example a public gym in a park

All these examples are completely viable – the lack of “being joined up” is not a reason to exclude them.

In short, this approach is strong and we’d likely consider it the best, but we need to be able to handle the full range of cases above.

---

Finally, we can work to assemble clues and indications as to someone’s intent during this onboarding journey.

Perhaps we can gain clues in the background by using analytics to:

* track and save result sets as the user explores options
* track visits into detail pages
* measure dwell time, scroll depth and so on in such pages
* track outbound clicks

We can also experiment more with providing opportunities for the user to communicate interest:

* favouriting or liking
* asking the user what they think of an option in-page
* including tools to send or share option details

Using multiple techniques puts us in the realm of probabilities and likelihoods. This is more realistic and reflective of what we know about people’s lived experiences. It also prevents us from building a dependency on false points of truth.
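
To illustrate, here is a toy model of how such clues might combine into a likelihood rather than a point of truth. The signal names and weights are entirely hypothetical:

```python
# Toy model: combine observed clues into a likelihood of intent.
# Signal names and weights are hypothetical, for illustration only.

SIGNAL_WEIGHTS = {
    "viewed_details": 0.10,
    "long_dwell": 0.15,
    "outbound_click": 0.25,
    "favourited": 0.30,
    "positive_in_page_feedback": 0.35,
    "shared_details": 0.20,
}

def intent_likelihood(signals: list) -> float:
    """Crude additive score capped at 1.0. No single signal is a point
    of failure; each only nudges the likelihood up."""
    return min(1.0, sum(SIGNAL_WEIGHTS.get(s, 0.0) for s in signals))
```

Because no single clue decides the outcome, a missed outbound click or an un-favourited option degrades the estimate gracefully instead of breaking it.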

---

### (Epilogue) What we’re doing next

- Latest work is around “the very first check-in”
- Jumping the gap between presenting the options and figuring out if something’s being done: continuing to explore the “how”
- As we produce our check-ins, we’ll be stress testing our ideas for gleaning intent
