
Commit 6b08fee

Merge branch 'main' into codex/add-tutorial-section-for-utilities
2 parents 2a9c4a3 + 26a61ec commit 6b08fee

File tree

6 files changed: +143 −24 lines


docs/api/psyflow.rst

Lines changed: 8 additions & 0 deletions
```diff
@@ -52,6 +52,14 @@ psyflow.TriggerSender module
    :undoc-members:
    :show-inheritance:
 
+psyflow.LLM module
+------------------
+
+.. automodule:: psyflow.LLM
+   :members:
+   :undoc-members:
+   :show-inheritance:
+
 psyflow.utils module
 --------------------
 
```

docs/index.rst

Lines changed: 2 additions & 1 deletion
```diff
@@ -28,7 +28,8 @@ Start with one of the tutorials below or explore the API reference.
    tutorials/send_trigger
    tutorials/cli_usage
    tutorials/utilities
-
+   tutorials/llm_client
+
 
 .. toctree::
    :maxdepth: 2
```

docs/tutorials/build_stimulus.md

Lines changed: 24 additions & 0 deletions
````diff
@@ -195,6 +195,30 @@ feedback:
     pos: [0, -2]
 ```
 
+### Converting Text Stimuli to Voice
+
+`StimBank` can transform text-based stimuli into spoken audio using the
+`edge-tts` package. Two helper methods handle the heavy lifting for you:
+
+- **`convert_to_voice()`** – Take one or more already registered text stimuli and
+  generate an MP3 file for each. The sounds are stored in an `assets/` folder and
+  new `Sound` stimuli with the suffix `_voice` are automatically registered.
+- **`add_voice()`** – Synthesize arbitrary text and register it immediately as a
+  new voice stimulus.
+
+```python
+# Convert existing text stimuli to speech
+stim_bank.convert_to_voice(["instructions", "feedback"], voice="en-US-AriaNeural")
+
+# Create a brand new voice stimulus
+stim_bank.add_voice(
+    "welcome_voice",
+    "Welcome to the experiment. Press space to start.",
+    voice="en-US-AriaNeural",
+)
+```
+
+
 ### 3. Retrieving Stimuli
 
 #### Getting Individual Stimuli
````
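The `_voice` suffix convention in the diff above can be illustrated with a plain-dictionary sketch. This is a hypothetical stand-in for `StimBank` (the function name and paths below are invented for illustration), not psyflow's actual implementation:

```python
from pathlib import Path

def register_voices(bank, names, assets="assets"):
    """Add a '<name>_voice' entry beside each listed text stimulus.

    In the real StimBank the value would be a Sound stimulus backed by an
    MP3 synthesised with edge-tts; here a file path stands in for it.
    """
    for name in names:
        audio_path = Path(assets) / f"{name}_voice.mp3"
        bank[f"{name}_voice"] = str(audio_path)
    return bank

# Two text stimuli, as in the tutorial's convert_to_voice() example
bank = {"instructions": "Press space to begin", "feedback": "Well done"}
register_voices(bank, ["instructions", "feedback"])
```

The point of the convention is that callers never invent audio names: asking for `"instructions_voice"` always finds the spoken version of `"instructions"`.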

docs/tutorials/build_trialunit.md

Lines changed: 12 additions & 18 deletions
````diff
@@ -30,7 +30,7 @@ With `StimUnit`, you can create complex trial structures using a clean, chainabl
 
 | Purpose | Method | Example |
 |---------|--------|--------|
-| Initialize | `StimUnit(win, label, trigger)` | `trial = StimUnit(win, "cue", trigger)` |
+| Initialize | `StimUnit(label, win, kb, triggersender=sender)` | `trial = StimUnit("cue", win, kb, triggersender=sender)` |
 | Add stimuli | `.add_stim(stim)` | `trial.add_stim(fixation)` |
 | Register start hook | `.on_start(fn)` | `@trial.on_start()` |
 | Register response hook | `.on_response(keys, fn)` | `@trial.on_response(['left', 'right'])` |
@@ -40,7 +40,7 @@ With `StimUnit`, you can create complex trial structures using a clean, chainabl
 | Set auto-close keys | `.close_on(keys)` | `trial.close_on('space')` |
 | Simple display | `.show(duration)` | `trial.show(1.0)` |
 | Response capture | `.capture_response(keys)` | `trial.capture_response(['left', 'right'])` |
-| Full trial control | `.run()` | `trial.run(frame_based=True)` |
+| Full trial control | `.run()` | `trial.run()` |
 | Pause for input | `.wait_and_continue(keys)` | `trial.wait_and_continue(['space'])` |
 | Update state | `.set_state(**kwargs)` | `trial.set_state(correct=True)` |
 | Get state | `.state` or `.to_dict()` | `data = trial.to_dict()` |
@@ -61,15 +61,10 @@ win = visual.Window(size=[1024, 768], color="black", units="deg")
 kb = Keyboard()
 
 # Create a trigger sender (mock mode for testing)
-trigger = TriggerSender(mock=True)
+sender = TriggerSender(mock=True)
 
 # Initialize a trial unit
-trial = StimUnit(
-    win=win,                # PsychoPy window
-    unit_label="cue",       # Label for this trial (used in state keys)
-    triggersender=trigger,  # Optional trigger sender
-    keyboard=kb             # Optional keyboard (auto-created if None)
-)
+trial = StimUnit("cue", win, kb, triggersender=sender)
 ```
 
 For real EEG/MEG experiments, you would use a real trigger function:
@@ -253,7 +248,6 @@ trial.add_stim(fixation, target) \
 
 # Run the trial
 trial.run(
-    frame_based=True,           # Use frame counting for timing
     terminate_on_response=True  # Stop drawing after response
 )
 
@@ -340,7 +334,7 @@ import random
 # Setup
 win = visual.Window(size=[1024, 768], color="black", units="deg")
 kb = Keyboard()
-trigger = TriggerSender(mock=True)
+sender = TriggerSender(mock=True)
 
 # Create stimuli
 fixation = visual.TextStim(win, text="+", height=1.0, color="white")
@@ -366,7 +360,7 @@ def run_trial(condition):
     target_trigger = 12
 
     # Create trial unit
-    trial = StimUnit(win, "choice", trigger, kb)
+    trial = StimUnit("choice", win, kb, triggersender=sender)
 
     # Register response handler
     @trial.on_response(["left", "right"])
@@ -383,7 +377,7 @@ def run_trial(condition):
         unit.add_stim(feedback_incorrect)
 
     # Show fixation
-    fixation_trial = StimUnit(win, "fixation", trigger, kb)
+    fixation_trial = StimUnit("fixation", win, kb, triggersender=sender)
     fixation_trial.add_stim(fixation).show(
         duration=(0.8, 1.2),  # Jittered duration
         onset_trigger=10
@@ -421,7 +415,7 @@ for i, condition in enumerate(conditions):
     core.wait(0.5)  # Inter-trial interval
 
 # Show completion message
-end_trial = StimUnit(win, "end", trigger, kb)
+end_trial = StimUnit("end", win, kb, triggersender=sender)
 end_text = visual.TextStim(
     win,
     text="Experiment complete. Thank you!",
@@ -493,23 +487,23 @@ You can create complex trial structures by nesting `StimUnit` instances:
 ```python
 def run_complex_trial():
     # Fixation phase
-    fix_unit = StimUnit(win, "fixation", trigger)
+    fix_unit = StimUnit("fixation", win, kb, triggersender=sender)
     fix_unit.add_stim(fixation).show(duration=0.5, onset_trigger=1)
 
     # Cue phase
-    cue_unit = StimUnit(win, "cue", trigger)
+    cue_unit = StimUnit("cue", win, kb, triggersender=sender)
     cue_unit.add_stim(cue).show(duration=0.8, onset_trigger=2)
 
     # Target phase with response
-    target_unit = StimUnit(win, "target", trigger)
+    target_unit = StimUnit("target", win, kb, triggersender=sender)
     target_unit.add_stim(target).capture_response(
         keys=["left", "right"],
         duration=2.0,
         onset_trigger=3
     )
 
     # Feedback phase
-    feedback_unit = StimUnit(win, "feedback", trigger)
+    feedback_unit = StimUnit("feedback", win, kb, triggersender=sender)
     if target_unit.state.get("correct", False):
         feedback_unit.add_stim(feedback_correct)
     else:
````
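The chainable interface this diff documents relies on every `StimUnit` method returning the unit itself. As a minimal sketch of that pattern (a toy class for illustration, not psyflow's actual `StimUnit`):

```python
class MiniUnit:
    """Toy illustration of method chaining: every mutator returns self."""

    def __init__(self, label):
        self.label = label
        self.stims = []
        self.state = {}

    def add_stim(self, *stims):
        self.stims.extend(stims)
        return self  # returning self is what enables trial.add_stim(...).run()

    def set_state(self, **kwargs):
        self.state.update(kwargs)
        return self

    def to_dict(self):
        # Prefix keys with the unit label, mirroring how the label is
        # described as being used in state keys
        return {f"{self.label}_{k}": v for k, v in self.state.items()}

trial = MiniUnit("cue").add_stim("fixation", "target").set_state(correct=True)
```

Because each call returns the same object, a whole trial can be declared in one expression, which is what makes the `trial.add_stim(...).capture_response(...).run()` style in the tutorial possible.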

docs/tutorials/getting_started.md

Lines changed: 5 additions & 5 deletions
```diff
@@ -243,8 +243,8 @@ trial = StimUnit("trial_1", win, kb, triggersender=sender)
 trial \
     .add_stim(fixation, target) \
     .on_start(lambda unit: unit.send_trigger(triggers["fix_onset"])) \
-    .show_stim(fixation, duration=0.5) \
-    .show_stim(target, duration=1.0, trigger=triggers["target_onset"]) \
+    .show(fixation, duration=0.5) \
+    .show(target, duration=1.0, trigger=triggers["target_onset"]) \
     .capture_response(
         keys=["left", "right"],
         duration=2.0,
@@ -257,7 +257,7 @@ trial \
         correct_keys=["left"],
     ) \
     .on_end(lambda unit: print(f"Response: {unit.state}")) \
-    .run(frame_based=True)
+    .run()
 
 # Access trial results
 print(f"RT: {trial.state.get('rt')}")
@@ -303,8 +303,8 @@ def run_trial(condition, trial_idx, block):
     # Run trial (simplified)
     trial \
         .add_stim(fixation, target) \
-        .show_stim(fixation, duration=0.5) \
-        .show_stim(target, duration=1.0) \
+        .show(fixation, duration=0.5) \
+        .show(target, duration=1.0) \
         .capture_response(keys=settings.key_list, duration=2.0) \
         .run()
```

docs/tutorials/llm_client.md

Lines changed: 92 additions & 0 deletions
@@ -0,0 +1,92 @@ (new file)

# Using `LLMClient` for AI-Driven Helpers

## Overview

`LLMClient` provides a unified interface for interacting with several large language model (LLM) providers and includes helpers for converting tasks into documentation, reconstructing tasks from documentation, and translating experiment resources.

It supports:

- **Google Gemini** via the GenAI SDK (`provider="gemini"`)
- **OpenAI** models (`provider="openai"`)
- **DeepSeek** models using the OpenAI API format (`provider="deepseek"`)

Custom providers can also be registered programmatically.
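The diff does not show the registration API itself; purely as an illustration of the registry pattern such a client might use (the names `PROVIDERS`, `register_provider`, and `generate` below are invented here, not psyflow's real API):

```python
from typing import Callable, Dict

# Hypothetical provider registry; psyflow's actual registration API may differ.
PROVIDERS: Dict[str, Callable[[str, str], str]] = {}

def register_provider(name: str, generate_fn: Callable[[str, str], str]) -> None:
    """Map a provider name onto a callable(model, prompt) -> completion."""
    PROVIDERS[name] = generate_fn

def generate(provider: str, model: str, prompt: str) -> str:
    """Dispatch a prompt to whichever provider was registered under `provider`."""
    return PROVIDERS[provider](model, prompt)

# A stub "echo" provider, in the spirit of the mock modes used elsewhere in psyflow:
register_provider("echo", lambda model, prompt: f"[{model}] {prompt}")
```

A registry like this is what lets one `generate()` call work identically across Gemini, OpenAI, and DeepSeek backends.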
## Initialization

Create an instance by specifying the provider, API key, and model name:

```python
from psyflow import LLMClient

client = LLMClient(
    provider="openai",
    api_key="YOUR_API_KEY",
    model="gpt-3.5-turbo"
)
```

The client wraps the underlying SDK and exposes common methods regardless of provider.

## Generating Text

Use `generate(prompt, **kwargs)` to obtain a completion from the configured model. Optional keyword arguments are passed to the underlying provider. Setting `deterministic=True` disables sampling randomness.

```python
reply = client.generate(
    "Summarise the Stroop task in one sentence",
    deterministic=True,
    max_tokens=50
)
print(reply)
```
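The exact effect of `deterministic=True` is provider-specific; one plausible reading is that it overrides the sampling parameters with greedy settings before the request is sent. A sketch of that kwarg transform (an assumption for illustration, not the documented implementation):

```python
def apply_determinism(kwargs, deterministic=False):
    """Return a copy of kwargs with sampling randomness disabled when requested."""
    out = dict(kwargs)
    if deterministic:
        # Assumed mapping: greedy decoding via temperature 0 and full top-p.
        out.update(temperature=0.0, top_p=1.0)
    return out

params = apply_determinism({"max_tokens": 50}, deterministic=True)
```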
## Converting Tasks to Documentation

`task2doc()` summarises an existing task into a README. The function loads your task logic and configuration, optionally uses few-shot examples from `add_knowledge()`, and returns the generated Markdown text. If `output_path` is provided, the README is also written to disk.

```python
readme_text = client.task2doc(
    logic_paths=["./src/run_trial.py", "./main.py"],
    config_paths=["./config/config.yaml"],
    deterministic=True,
    output_path="./README.md"
)
```

## Recreating Tasks from Documentation

`doc2task()` performs the reverse operation. Given a README or raw description, it regenerates the key source files. Provide a directory for outputs via `taps_root` and optionally customise the list of expected file names.

```python
files = client.doc2task(
    doc_text="./README.md",
    taps_root="./recreated_task",
    deterministic=True
)
# files is a dict mapping each file name to its saved path
```
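Since `doc2task()` returns a name-to-path mapping, a small helper can sanity-check that every expected file was actually written. The helper below is an illustration of working with such a mapping, not part of psyflow:

```python
import os
import tempfile
from pathlib import Path

def missing_files(files: dict) -> list:
    """Return the names whose saved path does not exist on disk."""
    return [name for name, path in files.items() if not Path(path).exists()]

# Demonstrate with a stand-in mapping shaped like doc2task()'s return value:
tmp = tempfile.mkdtemp()
written = os.path.join(tmp, "main.py")
Path(written).touch()  # pretend this file was regenerated successfully
files = {"main.py": written, "run_trial.py": os.path.join(tmp, "run_trial.py")}
```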
## Translation Utilities

Several helper methods assist with localisation. The base `translate(text, target_language)` method translates arbitrary text while keeping formatting intact. `translate_config()` applies translation to the relevant fields of a psyflow YAML configuration and can write the translated file to disk.

```python
translated = client.translate(
    "Press the space bar when you see the target word",
    target_language="German"
)

new_cfg = client.translate_config(
    target_language="German",
    config="./config/config.yaml",
    output_dir="./i18n"
)
```

These utilities store the last prompt and response for inspection (`last_prompt`, `last_response`) and automatically count tokens for the active model.

## Further Reading

See the API reference for a full description of all attributes and methods provided by [`psyflow.LLMClient`](../api/psyflow#psyflow.LLMClient).
