
Commit d7d804c

dev main page
1 parent c7005c0 commit d7d804c

31 files changed: 2055 additions, 142 deletions

docs/.doctrees/LLMs.doctree: 34.6 KB (binary file not shown)
docs/.doctrees/environment.pickle: 103 KB (binary file not shown)
-3.46 KB (binary file not shown)
docs/.doctrees/index.doctree: 828 Bytes (binary file not shown)
20.9 KB (binary file not shown)
docs/.doctrees/taps.doctree: 3 Bytes (binary file not shown)
docs/.doctrees/text2voice.doctree: 24.5 KB (binary file not shown)

docs/LLMs.html

Lines changed: 484 additions & 0 deletions
Large diffs are not rendered by default.

docs/_sources/LLMs.md.txt

Lines changed: 183 additions & 0 deletions
@@ -0,0 +1,183 @@
## Large Language Model (LLM) Features

Our library offers a lightweight, unified interface for interacting with Large Language Models (LLMs), currently supporting two providers:

- **Gemini**: provides free-tier access to powerful models, ideal for getting started at no cost
- **Deepseek**: a cost-effective alternative accessed via the OpenAI SDK (for users who don't have access to Gemini)

Instead of relying on heavier frameworks like LangChain, we built our own minimal wrapper to keep things simple: no extra dependencies beyond the provider SDKs, a clean and focused API (generate, translate, count_tokens, etc.), and fast, low-overhead execution.

### 1. Verify the Native SDKs
#### 1.1 Google-GenAI (Gemini)
```python
from google import genai

# 1a) Initialize the Gemini client
# (the google-genai SDK takes the API key directly in the Client constructor)
client = genai.Client(api_key="…your Gemini API key…")

# List available model names
model_list = client.models.list()
model_ids = [model.name.split("/")[-1] for model in model_list]
print("Available models:", model_ids)
# Available models: ['embedding-gecko-001', 'gemini-1.0-pro-vision-latest', 'gemini-pro-vision', 'gemini-1.5-pro-latest', 'gemini-1.5-pro-002', 'gemini-1.5-pro', 'gemini-1.5-flash-latest', 'gemini-1.5-flash', 'gemini-1.5-flash-002', 'gemini-1.5-flash-8b', 'gemini-1.5-flash-8b-001', 'gemini-1.5-flash-8b-latest', 'gemini-2.5-pro-exp-03-25', 'gemini-2.5-pro-preview-03-25', 'gemini-2.5-flash-preview-04-17', 'gemini-2.5-flash-preview-05-20', 'gemini-2.5-flash', 'gemini-2.5-flash-preview-04-17-thinking', 'gemini-2.5-flash-lite-preview-06-17', 'gemini-2.5-pro-preview-05-06', 'gemini-2.5-pro-preview-06-05', 'gemini-2.5-pro', 'gemini-2.0-flash-exp', 'gemini-2.0-flash', 'gemini-2.0-flash-001', 'gemini-2.0-flash-lite-001', 'gemini-2.0-flash-lite', 'gemini-2.0-flash-lite-preview-02-05', 'gemini-2.0-flash-lite-preview', 'gemini-2.0-pro-exp', 'gemini-2.0-pro-exp-02-05', 'gemini-exp-1206', 'gemini-2.0-flash-thinking-exp-01-21', 'gemini-2.0-flash-thinking-exp', 'gemini-2.0-flash-thinking-exp-1219', 'gemini-2.5-flash-preview-tts', 'gemini-2.5-pro-preview-tts', 'learnlm-2.0-flash-experimental', 'gemma-3-1b-it', 'gemma-3-4b-it', 'gemma-3-12b-it', 'gemma-3-27b-it', 'gemma-3n-e4b-it', 'embedding-001', 'text-embedding-004', 'gemini-embedding-exp-03-07', 'gemini-embedding-exp', 'aqa', 'imagen-3.0-generate-002', 'veo-2.0-generate-001', 'gemini-2.5-flash-preview-native-audio-dialog', 'gemini-2.5-flash-preview-native-audio-dialog-rai-v3', 'gemini-2.5-flash-exp-native-audio-thinking-dialog', 'gemini-2.0-flash-live-001']

# 1b) Quick echo
response = client.models.generate_content(
    model="gemini-1.5-flash",
    contents="Hello, how are you?"
)
print(response.text)
# I am doing well, thank you for asking! How are you today?
```

#### 1.2 OpenAI / Deepseek
```python
from openai import OpenAI
client = OpenAI(api_key="…your key…", base_url="https://api.deepseek.com")

# 1a) List raw model names
model_resp = client.models.list()
# extract and print their IDs
model_ids = [m.id for m in model_resp.data]
print("Available models:", model_ids)
# Available models: ['deepseek-chat', 'deepseek-reasoner']

# 1b) Quick echo
response = client.chat.completions.create(
    model="deepseek-chat",
    messages=[
        {"role": "system", "content": "You are a helpful assistant"},
        {"role": "user", "content": "Hello"},
    ],
    stream=False
)
print(response.choices[0].message.content)
# Hello! How can I assist you today? 😊
```

### 2. Use the PsyFlow LLMClient Wrapper
```python
from psyflow import LLMClient, LLMUtil
import os
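
# Note: API keys are hard-coded below for clarity. In practice you might read
# them from the environment instead, e.g. os.getenv("GEMINI_API_KEY") or
# os.getenv("DEEPSEEK_API_KEY") (these variable names are illustrative, not a
# psyflow requirement).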
# 2a) Instantiate a client for each provider
gemini = LLMClient("gemini", "…your key…", "gemini-2.0-flash")
deep = LLMClient("deepseek", "…your key…", "deepseek-chat")

# 2b) List via wrapper (should match SDK lists)
print("🔁 Gemini wrapper sees:", gemini.list_models())
print("🔁 Deepseek wrapper sees:", deep.list_models())

# 2c) Echo test via wrapper (this will send a hello to the model)
print("🔊 Gemini wrapper echo:", gemini.test(max_tokens=5))
print("🔊 Deepseek wrapper echo:", deep.test(max_tokens=5))

# 2d) Echo test via wrapper (send a custom message by setting the `ping` parameter)
print("🔊 Gemini wrapper echo:", gemini.test(ping='who are you?', max_tokens=5))
print("🔊 Deepseek wrapper echo:", deep.test(ping='who are you?', max_tokens=5))
```
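
Beyond `list_models()` and `test()`, the wrapper's core methods named in the overview (`generate`, `translate`, `count_tokens`) can be called the same way. The snippet below is only a minimal sketch: the method names come from the text above, but the exact signatures (a single prompt string for `generate`, a `target_language` keyword for `translate`, an integer return from `count_tokens`) are assumptions, so check the psyflow API reference before relying on them.

```python
# Assumed signatures; adjust to the actual psyflow API if they differ.
reply = gemini.generate("Explain the Stroop task in one sentence.")
print(reply)

n_tokens = gemini.count_tokens("Explain the Stroop task in one sentence.")
print("Prompt length in tokens:", n_tokens)

print(deep.translate("Welcome to the experiment", target_language="German"))
```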

### 3. LLM-Powered Task Documentation

Our platform leverages Large Language Models (LLMs) to automatically generate human-readable documentation for cognitive tasks. This feature is designed to help developers, collaborators, and reviewers quickly understand the structure and parameters of a task—without having to dig through source code.

Our `LLMClient` includes a powerful `task2doc()` utility that lets you **automatically generate a detailed `README.md`** file for any PsyFlow-based cognitive task.

`task2doc()` analyzes four types of files:
- `main.py` – overall task and block flow.
- `run_trial.py` – trial-level stimulus and response logic.
- `utils.py` – optional controllers or helpers (if present).
- `config/config.yaml` – all task configuration parameters.

It sends these files, along with a structured instruction, to your selected LLM (e.g., Gemini or DeepSeek) and returns a structured markdown document with:
- Task name and meta info
- Task overview and flow tables
- Configuration tables (e.g., stimuli, timing, triggers)
- Methods section for academic papers

**Example:**
```python
from psyflow.llm import LLMClient

client = LLMClient(provider="gemini", api_key="your-key", model="gemini-2.5-flash")
readme_text = client.task2doc()
```
This creates a complete `README.md` based on your current `./main.py`, `./src/run_trial.py`, `./src/utils.py`, and `./config/config.yaml`. If no `output_path` is specified, the file is saved to `./README.md`.
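
If you want the file written somewhere other than the project root, the `output_path` mentioned above can point elsewhere. A minimal sketch, assuming `output_path` is accepted as a keyword argument (check the psyflow API reference for the exact signature):

```python
# Hypothetical call: save the generated documentation under docs/ instead of ./README.md.
readme_text = client.task2doc(output_path="./docs/README.md")
```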

Each generated `README.md` is organized into the following sections:

1. **Task Name** – Extracted from the configuration.
2. **Meta Information** – A standardized two-column table including fields like version, author, repository, and software requirements.
3. **Task Overview** – A one-paragraph description of the task’s purpose and structure.
4. **Task Flow** – Detailed tables explaining the block-level and trial-level logic, including controller logic if applicable.
5. **Configuration Summary** – Tables for each config section: subject info, window settings, stimuli, timing, triggers, and adaptive parameters.
6. **Methods (for academic writing)** – A well-structured paragraph suitable for use in the Methods section of a scientific manuscript.

This automatic documentation feature reduces the burden on developers, promotes transparency in cognitive task design, and supports open and reproducible science.

### 4. LLM-Powered Localization

The `LLMClient` also supports automatic translation of task configurations through the `translate_config()` method. This localization feature lets your task templates be adapted to other languages while preserving placeholder tokens and formatting. Combined with PsyFlow’s localization-ready structure, it makes tasks easy to localize for global deployment.

`translate_config()` translates the following content in the configuration:
- `subinfo_mapping` labels (e.g., `"age"`, `"gender"`)
- Any `stimuli` entries of type `text` or `textbox` (e.g., instructions or messages)

**Example 1: Translate default config (no file saved)**
This reads the default `./config/config.yaml`, performs the translation in memory, and returns the updated config.

```python
from psyflow.llm import LLMClient

client = LLMClient(provider="deepseek", api_key="your-key", model="deepseek-chat")

translated_config = client.translate_config(target_language="Japanese")
```
No file is saved—useful for dynamic translation workflows.
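
If you do want to persist the in-memory result yourself, a plain YAML dump works. This is a sketch using PyYAML, assuming the returned config is an ordinary dictionary; `translate_config()`'s own `output_dir` option (Example 3) is the built-in way to save.

```python
import yaml

# Write the in-memory translated config to a file of your choosing (illustrative only).
with open("./config/config.ja.yaml", "w", encoding="utf-8") as f:
    yaml.safe_dump(translated_config, f, allow_unicode=True)
```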

**Example 2: Translate a loaded config dictionary (no file saved)**
You can manually load a config and pass it in to apply translation:

```python
from psyflow import load_config
from psyflow.llm import LLMClient

client = LLMClient(provider="deepseek", api_key="your-key", model="deepseek-chat")

loaded = load_config("./config/config.yaml")

translated = client.translate_config(
    target_language="Japanese",
    config=loaded  # work on this in-memory config
)
```

**Example 3: Translate and save to file**
If `output_dir` is specified, the translated config will be saved to disk.

```python
translated = client.translate_config(
    target_language="Japanese",
    config="./config/config.yaml",
    output_dir="./config",
    output_name="config.ja.yaml"
)
```
This writes the translated YAML to `./config/config.ja.yaml`.

**Optional Parameters**
- `prompt`: Customize the translation instruction if needed.
- `deterministic`, `temperature`, `max_tokens`: Control LLM generation behavior (see the sketch below).
- Works directly with `load_config()` output for in-memory editing.
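
A minimal sketch combining these optional parameters is shown below. The parameter names come from the list above, but the specific values and the custom prompt are placeholders, so treat this as illustrative rather than as the authoritative API:

```python
# Illustrative values only; consult the psyflow API reference for supported
# ranges and defaults. Setting deterministic=True would presumably make the
# output reproducible instead of sampling with a temperature.
translated = client.translate_config(
    target_language="Japanese",
    config="./config/config.yaml",
    prompt="Translate participant-facing text only; keep placeholder tokens unchanged.",
    temperature=0.3,
    max_tokens=2048,
)
```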

docs/_sources/future_directions.md.txt

Lines changed: 0 additions & 5 deletions
This file was deleted.
