`.claude/skills/generate-questions/SKILL.md`: 79 additions, 15 deletions
````diff
@@ -1,7 +1,18 @@
+---
+name: generate-questions
+description: "Generate high-quality multiple-choice questions for a Knowledge Mapper domain. Use when asked to generate or regenerate questions for a domain (e.g., 'generate questions for biology', 'regenerate the physics question set'). Accepts a domain name as $ARGUMENTS (e.g., /generate-questions quantum-physics). Runs a 5-step iterative pipeline: generate Q+A → review Q+A → generate distractors → review distractors → compile JSON."
+---
+
 # Skill: Generate Domain Questions
 
 Generate high-quality multiple-choice questions for the Knowledge Mapper application using an iterative multi-agent pipeline.
 
+## Arguments
+
+This skill accepts a **domain ID** as `$ARGUMENTS` (e.g., `quantum-physics`, `astrophysics`, `biology`).
+
+If no argument is provided, ask the user which domain to generate questions for.
+
 ## When to Use
 
 Use this skill when asked to generate or regenerate questions for a domain (e.g., "generate questions for biology", "regenerate the physics question set").
````
````diff
@@ -10,7 +21,7 @@ Use this skill when asked to generate or regenerate questions for a domain (e.g.
 
 Knowledge Mapper is a GP-based knowledge estimation app. Users answer multiple-choice questions positioned on a 2D map of Wikipedia articles. Question quality directly impacts the usefulness of knowledge estimation.
 
-### Output Format (per question)
+### Working Output Format (per question during generation)
 
 ```json
 {
````
````diff
@@ -24,7 +35,7 @@ Knowledge Mapper is a GP-based knowledge estimation app. Users answer multiple-c
 }
 ```
 
-**Do NOT include**: `id`, `x`, `y`, `z`, `options`, or `correct_answer` slot letter. IDs and coordinates are assigned programmatically after generation. Option slot assignment (A/B/C/D) and randomization happen at display time.
+**Do NOT include during generation**: `id`, `x`, `y`, `z`, `options`, or `correct_answer` slot letter. These are assigned during Final Assembly.
 
 ### Formatting Rules
 - Questions: **50 words or fewer**
````
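The working-format constraints in this hunk (no `id`, `x`, `y`, `z`, `options`, or `correct_answer` key; questions capped at 50 words) are mechanical enough to lint before a question enters the working file. A minimal sketch of such a check, assuming the working question is a plain dict; the helper name `check_working_question` is hypothetical and not part of the skill itself:

```python
# Keys that must NOT appear in a working-format question
# (they are only assigned during Final Assembly).
FORBIDDEN_KEYS = {"id", "x", "y", "z", "options", "correct_answer"}
MAX_QUESTION_WORDS = 50  # formatting rule: 50 words or fewer


def check_working_question(q: dict) -> list[str]:
    """Return a list of problems; an empty list means the question passes."""
    problems = [f"forbidden key: {k}" for k in FORBIDDEN_KEYS & q.keys()]
    words = len(q.get("question_text", "").split())
    if words > MAX_QUESTION_WORDS:
        problems.append(f"question is {words} words (max {MAX_QUESTION_WORDS})")
    return problems
```

A reviewer agent could run this on each compiled question and refuse to write it to the working file until the list comes back empty.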
````diff
@@ -50,11 +61,11 @@ This enables resuming from working files if context runs out.
 
 ### Prerequisites
 
-The orchestrator provides each question generation with:
+The domain ID comes from `$ARGUMENTS`. The orchestrator provides each question generation with:
 - A **CONCEPT** (e.g., "photosynthesis")
 - A **WIKIPEDIA ARTICLE** (full text, fetched via WebFetch)
 - A **DIFFICULTY LEVEL** (integer 1-4)
-- A **DOMAIN ID** (e.g., "biology")
+- A **DOMAIN ID** (from `$ARGUMENTS`, e.g., "biology")
 
 ### Step 1: Generate Question + Correct Answer
 
````
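The hunk's context line notes that working files enable resuming if context runs out. One way that resume read could look, assuming the working file is a JSON array of completed questions as described later in the diff; `load_completed` is an illustrative name, not code from the repo:

```python
import json
from pathlib import Path


def load_completed(domain_id: str) -> list[dict]:
    """Read the working file and return the questions finished so far,
    so a fresh agent can resume instead of starting over."""
    path = Path(f"data/domains/.working/{domain_id}-questions.json")
    if not path.exists():
        return []  # nothing done yet: start from question 1
    return json.loads(path.read_text())


done = load_completed("astrophysics")
print(f"{len(done)} of 50 questions already complete")
```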
````diff
@@ -181,7 +192,7 @@ Revise any distractors that fail checks.
 
 **Instructions to agent**:
 
-Compile the final question JSON. Do NOT include `id`, `x`, `y`, `z`, `options`, or `correct_answer` slot letter — these are assigned programmatically.
+Compile the final question JSON for the working file. Do NOT include `id`, `x`, `y`, `z`, `options`, or `correct_answer` slot letter — these are assigned during Final Assembly.
 
 **Agent output** (JSON):
 ```json
````
````diff
@@ -202,7 +213,7 @@ Compile the final question JSON. Do NOT include `id`, `x`, `y`, `z`, `options`,
-Write completed questions to `data/domains/.working/{domain-id}-questions.json` after EVERY question completes Step 5. This file is an array of completed question JSONs. If context runs out, the next agent reads this file to know which questions are done and resumes from where it left off.
+Write completed questions to `data/domains/.working/$ARGUMENTS-questions.json` after EVERY question completes Step 5. This file is an array of completed question JSONs. If context runs out, the next agent reads this file to know which questions are done and resumes from where it left off.
+
+## Final Assembly (after all 50 questions complete)
+
+After all questions are generated, assemble the final domain JSON file:
+
+### Assembly Steps
 
-## Assembly (after all 50 questions complete)
+1. **Read working file**: `data/domains/.working/$ARGUMENTS-questions.json`
+2. **Read existing domain file**: `data/domains/$ARGUMENTS.json` to get the existing `domain`, `labels`, and `articles` arrays
+3. **For each question**, assign:
+   - **ID**: First 16 hex characters of SHA-256 hash of `question_text`
+   - **Option slots**: Randomly assign correct answer and distractors to A/B/C/D slots:
+     - Pick a random slot (A, B, C, or D) for the correct answer
+     - Fill remaining slots with the 3 distractors in random order
+     - Record which slot letter contains the correct answer
+4. **Write final domain JSON** to `data/domains/$ARGUMENTS.json`
+
+### Final Domain JSON Structure
+
+```json
+{
+  "domain": {
+    "id": "astrophysics",
+    "name": "Astrophysics",
+    "parent_id": "physics",
+    "level": "sub",
+    "region": {
+      "x_min": 0.042179,
+      "x_max": 0.295656,
+      "y_min": 0.413276,
+      "y_max": 0.67439
+    },
+    "grid_size": 70
+  },
+  "questions": [
+    {
+      "id": "04a772bcef67e50f",
+      "question_text": "What is stellar parallax?",
+      "options": {
+        "A": "The gravitational bending of light from distant stars...",
+        "B": "The redshift observed in a star's light spectrum...",
+        "C": "The apparent shift in a nearby star's position against distant background stars...",
+        "D": "The dimming of a star's brightness as it passes behind another celestial body..."
+      },
+      "correct_answer": "C",
+      "difficulty": 1,
+      "source_article": "Stellar parallax",
+      "domain_ids": ["astrophysics"],
+      "concepts_tested": ["stellar parallax"]
+    }
+  ],
+  "labels": [...],
+  "articles": [...]
+}
+```
 
-After all questions are generated:
+### Assembly Notes
 
-1. Read working file: `data/domains/.working/{domain-id}-questions.json`
-3. Assign IDs, coordinates, and option slots programmatically (handled by caller, NOT this skill)
-4. Write completed questions to working file for the caller to assemble
+- **x, y, z coordinates** are NOT assigned by this skill — they come from the embedding pipeline
+- **Preserve existing data**: Keep the `domain`, `labels`, and `articles` arrays from the existing domain file
+- **Replace questions**: The `questions` array is fully replaced with the newly generated questions
+- **Randomization**: Option slot assignment must be truly random to prevent position bias in answers
 
 ## Important Notes
 
 - **Model**: Use Claude Opus (claude-opus-4-6) for all 5 steps. Question quality is paramount.
 - **One domain at a time**: The caller invokes this skill per domain and can parallelize across domains.
 - **Factual accuracy is non-negotiable**: Steps 1 and 2 MUST verify facts via the Wikipedia article and web searches. Any ambiguity must be resolved before proceeding.
 - **TodoWrite is mandatory**: Every step transition and every completed question MUST be reflected in TodoWrite.
-- **No coordinates or IDs**: This skill produces question content only. Spatial embedding and ID assignment happen in a separate post-processing step.
+- **No coordinates**: This skill produces question content only. Spatial embedding (x, y, z coordinates) happens in a separate post-processing step.
````
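The Final Assembly rules added in this diff (ID = first 16 hex characters of the SHA-256 of `question_text`; random A-D slot assignment) can be sketched directly with the standard library. A minimal Python illustration; the function names are hypothetical, and whether the shown example ID matches the hash is not verified here:

```python
import hashlib
import random


def assign_id(question_text: str) -> str:
    """First 16 hex characters of the SHA-256 hash of question_text."""
    return hashlib.sha256(question_text.encode("utf-8")).hexdigest()[:16]


def assign_slots(correct: str, distractors: list[str]) -> tuple[dict, str]:
    """Pick a random slot (A-D) for the correct answer, fill the
    remaining slots with the 3 distractors in random order, and
    record which slot letter holds the correct answer."""
    slots = ["A", "B", "C", "D"]
    correct_slot = random.choice(slots)
    remaining = [s for s in slots if s != correct_slot]
    shuffled = random.sample(distractors, k=len(distractors))
    options = {correct_slot: correct}
    options.update(zip(remaining, shuffled))
    # Re-key in A..D order so the JSON reads naturally.
    return {k: options[k] for k in slots}, correct_slot
```

Because the ID is a pure function of `question_text`, regenerating a domain yields stable IDs for unchanged questions, while the per-question slot randomization prevents position bias in answers.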