doc/DISCOVERY.md: 17 additions & 0 deletions
@@ -18,3 +18,20 @@ The process can be more art than science. It's often messy, and can suffer from
- The application generates follow-up questions specific to each participant and example. Observations from other participants may be included to probe for interesting disagreements.
- The findings, aggregated by participant/example, are presented to the facilitator to _inform_ discussion (not to replace it). Since facilitators are often not domain experts, this helps reduce cognitive load. A minimal sketch of this aggregation follows the list.
- Participants autonomously own the process, with the facilitator and workshop providing only the framework. They will organically identify common clusters of findings and themes.
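
A concrete shape helps make the aggregation step above less abstract. Below is a minimal Python sketch of how findings could be grouped by example and then by participant before being shown to the facilitator; the `Finding` dataclass and `aggregate` helper are illustrative assumptions, not the application's actual data model.

```python
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Finding:
    participant: str  # who made the observation
    example_id: str   # which example was being reviewed
    text: str         # the observation itself


def aggregate(findings: list[Finding]) -> dict[str, dict[str, list[str]]]:
    """Group observations by example, then by participant, so the facilitator
    can scan one example at a time and see where participants diverge."""
    grouped: dict[str, dict[str, list[str]]] = defaultdict(lambda: defaultdict(list))
    for f in findings:
        grouped[f.example_id][f.participant].append(f.text)
    return grouped
```
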
## How assisted facilitation works (high level)
Assisted facilitation helps participants go deeper on each example and helps facilitators guide discussion without needing to be a domain expert.
### During participant review (per example)
- **Start simple, then go deeper**: each example begins with a baseline prompt (“what makes this effective or ineffective?”). As a participant responds, the application can propose a small number of follow-up questions that encourage deeper thinking (edge cases, missing info, boundary conditions, failure modes).
- **Probe disagreements intentionally**: when different participants notice different things about the same example, follow-up questions can be tailored to surface the disagreement and clarify the underlying definition of “good” vs “bad”.
- **Stop when coverage is good**: follow-up questions aren’t infinite; once the key angles have been explored (or a sensible limit is reached), the application stops proposing more so the group can move on. A minimal sketch of this loop follows the list.
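
As a sketch of how this per-example flow could hang together, the Python below is illustrative only: the `ask`, `propose_follow_up`, and `coverage_is_good` callables stand in for whatever the application actually uses (e.g. an LLM call), and the baseline prompt wording and cap of three follow-ups are assumptions, not documented behaviour.

```python
from typing import Callable

BASELINE_PROMPT = "What makes this example effective or ineffective?"
MAX_FOLLOW_UPS = 3  # assumed cap so the group can move on


def review_example(
    example: str,
    ask: Callable[[str], str],                           # collects the participant's answer to a prompt
    propose_follow_up: Callable[[str, list[str]], str],  # e.g. probes edge cases, missing info, failure modes
    coverage_is_good: Callable[[str, list[str]], bool],  # judges whether the key angles are covered
) -> list[str]:
    """One participant's review of one example: start with the baseline
    prompt, then propose a bounded number of deeper follow-up questions."""
    answers = [ask(BASELINE_PROMPT)]
    for _ in range(MAX_FOLLOW_UPS):
        if coverage_is_good(example, answers):
            break  # stop proposing questions once coverage looks sufficient
        answers.append(ask(propose_follow_up(example, answers)))
    return answers
```
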
### For the facilitator (across participants and examples)
- **Theme extraction and synthesis**: participant observations are summarized into a small set of themes and recurring patterns, both overall and broken down by participant and by example.
- **Discussion-ready outputs**: the app surfaces key disagreements and provides short discussion prompts that help the facilitator run a productive conversation.
- **Bridge to rubric creation**: the system can suggest candidate rubric questions (concrete “quality dimensions”) derived from the themes so the group can turn discovery insights into a rubric more quickly.
- **Progress signals**: simple convergence indicators (how consistently themes appear across participants) help the facilitator judge when the group has enough shared understanding to move into rubric definition and annotation. One possible convergence measure is sketched after this list.
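
One plausible reading of “convergence indicator” is the share of participants whose findings touch each extracted theme. The sketch below assumes exactly that reading; it is not necessarily how the app computes the signal.

```python
from collections import Counter


def theme_convergence(themes_by_participant: dict[str, set[str]]) -> dict[str, float]:
    """For each theme, the fraction of participants who raised it.
    Values near 1.0 suggest shared understanding; a long tail of low
    values suggests the group needs more discussion before moving on."""
    n = len(themes_by_participant)
    counts = Counter(t for themes in themes_by_participant.values() for t in themes)
    return {theme: count / n for theme, count in counts.items()}


# Example: three participants, one theme raised by everyone.
convergence = theme_convergence({
    "p1": {"missing citations", "tone"},
    "p2": {"missing citations"},
    "p3": {"missing citations", "tone", "formatting"},
})
# {"missing citations": 1.0, "tone": ≈0.67, "formatting": ≈0.33}
```
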