---
title: "Gauging intent"
description: "How do we know if someone’s actually going to do something based on the options we've shown them?"
date: 2025-10-02
author: Mat Johnson and Roz Strachan
tags:
  - prototyping
---

A central piece of our proposition (and that of PPS strategy in general) is the idea of a cycle. Our team is exploring how we might work as a “connecting capability”:

* introducing users to relevant next steps based on what we can know about them (finding support)
* encouraging them to make a start (take action)
* then checking in to see how things are going (maintain a healthier lifestyle)

{% from "nhsuk/components/images/macro.njk" import image as nhsukImage %}
{{ nhsukImage({
  classes: "app-media--full-width",
  src: "stages-highlighted.png",
  alt: "Diagram of a prevention cycle showing stages of information provided, risks explained, support to improve, taking action, maintaining. Find support, taking action, and maintain are highlighted",
  caption: "The segments of the cycle we’ve been exploring"
}) }}

We’ve explored a user’s very first contact with our service – [introducing and onboarding](/personalised-prevention-platform/2025/04/onboarding-users/), followed by [presenting relevant options](/personalised-prevention-platform/2025/10/presenting-opportunities-to-take-action/).

This post examines how we figure out which (if any) opportunities a user intends to take up, now that we’ve presented them. How can we know if someone has:

* downloaded and started to use an app, for example Active 10?
* attended a community event, for example a Parkrun?
* used a public facility, for example an outdoor gym?

We need to be able to do this so we can:

* check in with someone in a personalised way – we can approach the user with a “subject”
* understand what is working for someone and what isn’t
* examine feedback to improve our recommendations for all users
* understand which options are popular
* potentially provide feedback to services
* get a better picture of outcomes

In the abstract this might seem straightforward. We show the user options, they pick, then we check in later to see how it’s going. Easy, right?

Not so fast!

Let’s take Parkrun as an example. A user could:

1. notice Parkrun in our listings
2. read more in our details and become interested
3. click through to the Parkrun site to find out more
4. get engaged and register
5. attend their first event

At point 3, we have no idea what happens next. The click-through does not actually represent “starting” or “choosing”. At this moment, we can only infer a desire to find out more about something before making a decision.

Things are not joined up, particularly because the options we might present are so varied. There is no consistent underlying capability that allows us to rely on “knowing via tech” what a user has decided (or not) to do.

## Can we gauge intent?

As we’ve been prototyping, we’ve been considering ways we could get an indication from the user of what they intend to do.

### Interface experiment: asking for a commitment

We created an interface where a user would pick an option: “I want to do this”.

![Example Couch to 5k screen](blocking-example.png)

Logically, for this approach to work, we needed to remove all other ways to continue. For example, links and contact details were not displayed.

In our research sessions, we noted a lot of inconsistency in understanding this interaction. When prompted to explain, answers varied from “it would launch the app, right?” to “it would display more details so I could register”.

![Screen grab showing an I want to do this primary button](blocking-ui.png 'Our experiment required a declaration')

Aside from causing confusion, an approach like this isn’t realistic because:

* we’re asking for an **immediate** commitment from the user
* that commitment is required before the user has access to all the information they may need
* demanding commitment this quickly creates unreliability at a key point, risking false positives
* an interface that requires a declaration must remove all other ways to continue, creating friction in exactly the wrong place
* there is literally no user need here – we’re making the user do the work to join things up for us

The drawbacks of this approach are self-evident, but we felt we needed to demonstrate very clearly the fundamental difficulty (and problematic nature) of relying on the user and interface to do the work of joining up for us.

### Process: background reporting

A second potential approach does the work behind the scenes: we receive information back from the services themselves about usage.

A reporting approach benefits from being reliable and removes unnecessary work from the user to join things up. It’s definitely something to explore, particularly with options that offer online referral or registration.
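
To make this concrete, here is a minimal sketch of the kind of usage report a service might send back to us. Everything in it is hypothetical – the field names and events are illustrative, not an agreed schema:

```typescript
// Hypothetical shape of a usage report a partner service might return.
// None of these names are agreed standards; they only illustrate the idea.
interface UsageReport {
  serviceId: string;                              // e.g. "couch-to-5k"
  userReference: string;                          // pseudonymous token issued at click-through
  event: "registered" | "attended" | "completed"; // what the service observed
  occurredAt: string;                             // ISO 8601 timestamp
}

// A report like this would give a check-in conversation a concrete "subject":
const example: UsageReport = {
  serviceId: "couch-to-5k",
  userReference: "ppp-7f3a",
  event: "registered",
  occurredAt: "2025-10-02T09:30:00Z",
};
```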

However, we must also consider:

* informal or small-scale, community-based options, for example a litter-picking club
* services that don’t _want_ to report on an individual level, for example any service offering anonymity
* facilities with no registration or reporting at all, for example a public gym in a park

All these examples are completely viable – the lack of “being joined up” is not a reason to exclude them.

In short, this approach is strong, but we need to be able to handle variety.

### Approach: gathering clues

We can also work to assemble clues and indications about someone’s intent.

Perhaps we can gain clues in the background by using analytics to:

* track and save result sets as the user explores options
* track visits into detail pages
* measure dwell time, scroll depth and so on in such pages
* track outbound clicks
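
As a sketch only (the event names and fields are our own invention, not an existing analytics schema), these background clues could be modelled as a stream of typed events:

```typescript
// Hypothetical intent-signal events, one per clue in the list above.
// Names and fields are illustrative, not an agreed tracking plan.
type IntentSignal =
  | { kind: "results-viewed"; optionIds: string[] }  // a result set the user explored
  | { kind: "details-viewed"; optionId: string }     // a visit to a detail page
  | { kind: "dwell"; optionId: string; seconds: number; scrollDepth: number }
  | { kind: "outbound-click"; optionId: string; url: string };

// Stamp each signal and queue it for later analysis.
function record(signal: IntentSignal): void {
  const event = { ...signal, at: new Date().toISOString() };
  console.log("queued intent signal", event); // stand-in for a real analytics call
}

record({ kind: "outbound-click", optionId: "parkrun", url: "https://www.parkrun.org.uk" });
```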

We can also experiment more with providing opportunities for the user to communicate interest:

* favouriting or liking
* asking the user what they think of an option in-page
* including tools to send or share option details

![Screen grab showing multiple links including a 'what do you think' question](clues-ui.png 'We can encourage but not rely on user interaction')

Using multiple techniques puts us in the realm of probabilities and likelihoods. This is more realistic and reflective of what we know about people’s lived experiences. It also prevents us from building a dependency on false points of truth.
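
To show how clues might combine into a likelihood rather than a false point of truth, here is a toy scoring sketch – the signal names, weights and cap are all invented for illustration:

```typescript
// Illustrative only: these weights are invented, not a model we have built or validated.
const weights: Record<string, number> = {
  "details-viewed": 0.1,
  "long-dwell": 0.2,
  "outbound-click": 0.3,
  "favourited": 0.4,
  "service-reported": 1.0, // a report back from the service outweighs any inference
};

// Combine whatever clues we have into a rough likelihood of intent.
function intentLikelihood(observed: string[]): number {
  const score = observed.reduce((sum, signal) => sum + (weights[signal] ?? 0), 0);
  return Math.min(score, 1); // cap at certainty
}

// A detail view, a click-through and a favourite suggest intent, but not certainty.
console.log(intentLikelihood(["details-viewed", "outbound-click", "favourited"])); // 0.8
```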

## Intent is the third big challenge

So far we have assembled three big challenges:

1. curating a good selection of local and national options
2. understanding how to recommend those options based on user input (and what we might already know)
3. gleaning what a user decides to do based on those options

## What we’re doing next

We’re thinking about the first conversation we’ll have with a user after they’ve been presented with options. At the point of our first check-in, someone has potentially set up some goals, barriers and preferences. Hopefully they’ve been presented with one or more relevant options that are interesting and engaging.

That first check-in is the primary point where having knowledge of a user’s intent becomes crucial. Otherwise, what will we talk about?