Commit ea47724
update cli and llm docs
1 parent 8cfa7d1 commit ea47724

2 files changed: +121 -66 lines changed

docs/tutorials/cli_usage.md

Lines changed: 2 additions & 5 deletions
@@ -52,21 +52,18 @@ psyflow-init
 Before copying template files, the CLI checks for existing files or folders with the same names. If any conflicts are found, you will be prompted:
 
 ```
-⚠ Existing file 'main.py' detected. Overwrite? [y/N]:
+⚠ Existing file 'main.py' detected. Overwrite this and all remaining? [y/N]:
 ```
 
-- Enter `y` to proceed and replace the file.
+- Enter `y` to proceed and replace all existing files.
 - Enter `n` (or press Enter) to skip that file and continue with others.
 
 This interactive confirmation prevents unintentional data loss during in-place initialization.
 
 ## 3. How It Works Internally
-
 1. **Locate template**: Uses `importlib.resources` to find the `psyflow.templates` package and the `cookiecutter-psyflow` folder.
 2. **Cookiecutter render**:
    - **New‑directory mode**: Directly runs Cookiecutter into `./<project_name>`.
    - **In‑place mode**: Renders into a temporary directory, then copies files into the current folder.
 3. **Cleanup**: In-place mode deletes the temporary render directory when finished.
 
-> *Tip*: All rendering is done with `no_input=True` so the command never pauses for prompts.
-
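For readers who want to see the shape of the in-place flow described above, here is a minimal sketch assuming the public `cookiecutter` Python API; `render_in_place` and its copy logic are illustrative, not psyflow's actual internals:

```python
# Illustrative sketch only -- not psyflow's source. Assumes the cookiecutter package.
import shutil
import tempfile
from pathlib import Path

from cookiecutter.main import cookiecutter


def render_in_place(template_dir: str) -> None:
    # Render into a temporary directory so a failed render never leaves
    # half-written files in the current working directory.
    tmp = Path(tempfile.mkdtemp())
    try:
        # no_input=True keeps the render from pausing for prompts.
        cookiecutter(template_dir, no_input=True, output_dir=str(tmp))
        rendered = next(tmp.iterdir())  # the single rendered project folder
        for item in rendered.rglob("*"):
            target = Path.cwd() / item.relative_to(rendered)
            if item.is_dir():
                target.mkdir(parents=True, exist_ok=True)
            else:
                shutil.copy2(item, target)  # overwrite prompting omitted for brevity
    finally:
        shutil.rmtree(tmp)  # cleanup: remove the temporary render directory
```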
docs/tutorials/llm_client.md

Lines changed: 119 additions & 61 deletions
@@ -1,99 +1,157 @@
-# Interacting with Large Language Models (LLMs)
+## Overview
 
-`psyflow` provides a powerful and unified `LLMClient` to connect your experiments with various Large Language Models (LLMs). This client can be used for a variety of tasks, including generating text, creating documentation for your task, and even translating content.
+The `LLMClient` class in `psyflow` offers a lightweight, unified interface for interacting with various Large Language Model (LLM) backends, including Google Gemini, OpenAI, Deepseek, and Moonshot. Instead of relying on heavy frameworks like LangChain, we built a minimal wrapper to keep things simple: no extra dependencies beyond provider SDKs, a clean API (e.g., `generate()`, `translate()`, `count_tokens()`), and fast, low-overhead execution.
 
-The `LLMClient` supports multiple providers out-of-the-box:
-- `gemini` (Google)
-- `openai` (OpenAI)
-- `deepseek` (DeepSeek)
+## Supported Providers
 
-## Getting Started: Initializing the Client
+Our library supports flexible, cost-effective access across multiple providers:
 
-First, you need to import the `LLMClient` and initialize it with your provider details. You will need an API key from your chosen provider.
+- **Gemini** (Google GenAI): Free-tier access to powerful models, ideal for getting started at no cost.
+- **OpenAI**: Official OpenAI SDK support for GPT‑series models and fine-tuned endpoints.
+- **Deepseek**: A cost-effective alternative via the OpenAI-compatible SDK for users without Gemini access.
+- **Moonshot**: Another cost-effective, OpenAI-compatible option for users without Gemini access.
 
-```python
-from psyflow.LLM import LLMClient
-import os
+## Key Features
 
-# Make sure to set your API key securely
-# For example, load it from an environment variable
-# api_key = os.environ.get("OPENAI_API_KEY")
+| Feature                | Description                                                            |
+| ---------------------- | ---------------------------------------------------------------------- |
+| Multi-provider support | Out-of-the-box: Gemini, OpenAI, Deepseek, Moonshot                     |
+| Text generation        | `generate()` with sampling and deterministic options                   |
+| Model discovery        | `list_models()` lists IDs from each provider                           |
+| Task documentation     | `task2doc()` auto-creates a structured `README.md`                     |
+| Translation            | `translate()` for strings, `translate_config()` for YAML               |
+| Knowledge management   | `add_knowledge()` & `save_knowledge()` manage few-shot examples        |
+| Error handling         | Raises `LLMAPIError` for failures, missing models, or token overflow   |
 
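The knowledge-management row above names `add_knowledge()` and `save_knowledge()` but the diff shows no usage; a hypothetical sketch follows (the argument shapes and save path are guesses, not the documented signatures):

```python
# Hypothetical usage of the knowledge helpers named in the Key Features table.
# add_knowledge()/save_knowledge() exist per the table; the arguments are assumptions.
from psyflow import LLMClient
import os

client = LLMClient("gemini", os.getenv("GEMINI_KEY"), "gemini-2.0-flash")

# Register a few-shot example for later prompts (shape assumed).
client.add_knowledge("Instruction text should address the participant in second person.")

# Persist accumulated examples for reuse across sessions (path assumed).
client.save_knowledge("./knowledge.json")
```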
-llm_client = LLMClient(
-    provider="openai",
-    api_key="YOUR_API_KEY", # Replace with your actual key
-    model="gpt-3.5-turbo"
-)
-```
+## Quick Reference
+
+| Purpose               | Method                                                   | Example                                                                   |
+| --------------------- | -------------------------------------------------------- | ------------------------------------------------------------------------- |
+| Initialize client     | `LLMClient(provider, api_key, model)`                    | `client = LLMClient("openai", os.getenv("OPENAI_KEY"), "gpt-4o-mini")`    |
+| Generate text         | `generate(prompt, deterministic=False, **kwargs)`        | `resp = client.generate("Hello world", temperature=0.5)`                  |
+| List models           | `list_models()`                                          | `models = client.list_models()`                                           |
+| Smoke-test connection | `test(ping, max_tokens)`                                 | `client.test("Hi", max_tokens=5)`                                         |
+| Auto-generate README  | `task2doc(logic_paths, config_paths, output_path)`       | `client.task2doc(["src/run_trial.py"], ["config/config.yaml"], "./")`     |
+| Translate string      | `translate(text, target_language)`                       | `client.translate("Welcome", "Japanese")`                                 |
+| Translate config YAML | `translate_config(target_language, config, output_dir)`  | `client.translate_config("Spanish", "./config/config.yaml", "./config")`  |
 
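Tying the Quick Reference rows together, here is a short sketch of text generation with error handling, assuming only the signatures shown above (the `LLMAPIError` import path is an assumption):

```python
from psyflow import LLMClient
from psyflow.LLM import LLMAPIError  # import path assumed; the class name comes from the table
import os

client = LLMClient("openai", os.getenv("OPENAI_KEY"), "gpt-4o-mini")

try:
    # Sampled output: extra kwargs such as temperature pass through to the provider.
    creative = client.generate("Explain the Stroop effect in one sentence.", temperature=0.5)
    # Deterministic output: repeat runs should produce the same instruction text.
    stable = client.generate("Explain the Stroop effect in one sentence.", deterministic=True)
    print(creative)
    print(stable)
except LLMAPIError as err:
    # Raised for API failures, missing models, or token overflow (per Key Features).
    print("LLM call failed:", err)
```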
-When you create an `LLMClient` instance, you specify the `provider`, your `api_key`, and the `model` you wish to use.
+## Detailed Usage Guide
 
-## Basic Text Generation
+### 1. Verify Native SDKs
 
-The most fundamental use of the client is to generate text from a prompt using the `generate()` method.
+#### 1.1 Google-GenAI (Gemini)
 
 ```python
-prompt = "Explain the Stroop effect in one sentence."
-response = llm_client.generate(prompt)
-print(response)
+from google import genai
+
+# Initialize the Gemini client (the google-genai SDK takes the key in the Client constructor)
+client = genai.Client(api_key="…your Gemini API key…")
+
+# List available models
+models = client.models.list()
+model_ids = [m.name.split('/')[-1] for m in models]
+print("Available models:", model_ids)
+
+# Quick echo test
+resp = client.models.generate_content(
+    model="gemini-1.5-flash",
+    contents="Hello, how are you?"
+)
+print(resp.text)
+# -> I am doing well... How are you today?
 ```
 
-You can also control the creativity of the response. For a more predictable, less random output, set `deterministic=True`.
+#### 1.2 OpenAI / Deepseek
 
 ```python
-response = llm_client.generate(prompt, deterministic=True)
-print(response)
+from openai import OpenAI
+client = OpenAI(api_key="…your key…", base_url="https://api.deepseek.com")
+
+# List models from Deepseek
+resp = client.models.list()
+ids = [m.id for m in resp.data]
+print("Available models:", ids)
+
+# Quick echo test
+echo = client.chat.completions.create(
+    model="deepseek-chat",
+    messages=[{"role": "user", "content": "Hello"}],
+    stream=False
+)
+print(echo.choices[0].message.content)
+# -> Hello! How can I assist you today?
 ```
 
-## Listing Available Models
-
-If you are not sure which model identifier to use, you can list all available models for your configured provider.
+### 2. Use the Psyflow `LLMClient` Wrapper
 
 ```python
-available_models = llm_client.list_models()
-print(available_models)
+from psyflow import LLMClient
+import os
+
+# Instantiate wrappers for each provider
+gemini = LLMClient("gemini", os.getenv("GEMINI_KEY"), "gemini-2.0-flash")
+deep = LLMClient("deepseek", os.getenv("DEEPSEEK_KEY"), "deepseek-chat")
+
+# List models via wrapper
+print("Gemini sees:", gemini.list_models())
+print("Deepseek sees:", deep.list_models())
+
+# Echo test via wrapper
+gemini_echo = gemini.test(max_tokens=5)
+print("Gemini echo:", gemini_echo)
+deepseek_echo = deep.test(max_tokens=5)
+print("Deepseek echo:", deepseek_echo)
 ```
-This is a great way to explore and find the perfect model for your needs.
 
-## Advanced Usage: Auto-generating Task Documentation
+### 3. LLM-Powered Task Documentation
 
-One of the powerful features of the `LLMClient` is its ability to automatically generate a `README.md` file for your task based on your source code and configuration. This is done with the `task2doc()` method.
+Use `task2doc()` to generate a complete `README.md` for your PsyFlow task:
 
 ```python
-# This assumes you are running from the root of a psyflow project
-readme_content = llm_client.task2doc(
-    logic_paths=["./src/run_trial.py", "./main.py"],
-    config_paths=["./config/config.yaml"],
-    output_path="./" # Save the README.md in the current directory
+client = LLMClient("gemini", os.getenv("GEMINI_KEY"), "gemini-2.5-flash")
+readme = client.task2doc(
+    logic_paths=["main.py", "src/run_trial.py"],
+    config_paths=["config/config.yaml"],
+    output_path="./"
 )
-
-print("README.md has been generated!")
+print("Generated README content:")
+print(readme)
 ```
-This method reads your task logic and configuration, sends it to the LLM with a carefully crafted prompt, and saves the generated documentation.
 
-## Advanced Usage: Translating Content
+This reads your code and config, sends them to the LLM, and writes a structured markdown document with:
+
+- **Meta Information**: version, author, requirements
+- **Task Overview** and **Flow Tables**
+- **Configuration Summaries**: stimuli, timing, triggers
+- **Methods** section ready for manuscripts
+
+### 4. LLM-Powered Localization
 
-The `LLMClient` can also be used to translate text, which is incredibly useful for creating multilingual experiments.
+#### 4.1 In-Memory Translation
 
-### Translating a simple string
-You can translate any string to a target language using the `translate()` method.
 ```python
-english_text = "Welcome to the experiment."
-german_text = llm_client.translate(english_text, target_language="German")
-print(german_text)
-# Expected output: Willkommen zum Experiment.
+client = LLMClient("deepseek", os.getenv("DEEPSEEK_KEY"), "deepseek-chat")
+translated = client.translate_config(
+    target_language="Japanese"
+)
+print(translated)
 ```
 
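For single strings rather than whole configs, the Quick Reference lists `translate(text, target_language)`; a minimal sketch reusing the client above:

```python
# Translate one instruction string (per the translate() row in the Quick Reference).
welcome_ja = client.translate("Welcome to the experiment.", target_language="Japanese")
print(welcome_ja)
```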
-### Translating a configuration file
-You can even translate a whole configuration file using the `translate_config()` method. This is useful for localizing instructions or stimuli defined in your `config.yaml`.
+#### 4.2 Translate and Save
+
 ```python
-# This will translate relevant fields in the config file
-# and save a new file (e.g., config.translated.yaml)
-translated_config = llm_client.translate_config(
+translated = client.translate_config(
     target_language="Spanish",
     config="./config/config.yaml",
-    output_dir="./config"
+    output_dir="./config",
+    output_name="config.es.yaml"
 )
-print("Translated config has been saved!")
+print("Saved to ./config/config.es.yaml")
 ```
-This will automatically find text-based stimuli and other translatable fields in your configuration and translate them.
+
+This updates your translatable YAML fields (labels, stimuli text) and writes the translated copy to the output directory (`config.translated.yaml` by default; here `config.es.yaml`).
+
+```{note}
+I am trying to implement a more robust doc2task pipeline for PsyFlow tasks.
+Stay tuned for updates!
+```
