
Commit f821a5c (parent a1a0500)

Add LLM provider support (Ollama/OpenAI) with new configuration, processor, docs, examples, and updated dependencies.

File tree: 8 files changed (+564 −60 lines)

.env.example

Lines changed: 13 additions & 2 deletions

```diff
@@ -13,5 +13,16 @@ REDIS_PASSWORD=
 JWT_SECRET=your-secret-key-here
 JWT_EXPIRATION=7d
 
-# OpenAI Configuration
-OPENAI_API_KEY=your-openai-api-key-here
+# LLM Provider Configuration
+# Choose which LLM provider to use:
+# - ollama: Local models via Ollama (gpt-oss:20b, deepseek-r1:7b, llama2:latest)
+# - openai: OpenAI cloud API (gpt-4o, gpt-4o-mini, etc.)
+LLM_PROVIDER=ollama
+
+# Ollama Configuration (for local models)
+# Runs models like: gpt-oss:20b, deepseek-r1:7b, llama2:latest
+OLLAMA_URL=http://localhost:11434/v1
+
+# OpenAI Configuration (OPTIONAL - only if using OpenAI proprietary cloud models)
+# OPENAI_API_KEY=sk-your-openai-api-key-here
+# OPENAI_BASE_URL=https://api.openai.com/v1
```
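The switch these variables describe can be summarized in code. The sketch below is hypothetical — the engine's real selection logic lives in `src/core/llm/createAiProcessor.ts` and may differ, and `resolveLlmConfig` is an illustrative name, not an actual API:

```typescript
// Hypothetical sketch of provider selection from the env vars above.
// resolveLlmConfig is an illustrative name, not the engine's actual API.
type LlmConfig = {
  baseURL: string; // OpenAI-compatible endpoint
  apiKey?: string; // required only for the openai provider
};

function resolveLlmConfig(env: Record<string, string | undefined>): LlmConfig {
  const provider = env.LLM_PROVIDER ?? "ollama"; // default matches .env.example
  if (provider === "openai") {
    if (!env.OPENAI_API_KEY) {
      throw new Error("OPENAI_API_KEY is required when LLM_PROVIDER=openai");
    }
    return {
      baseURL: env.OPENAI_BASE_URL ?? "https://api.openai.com/v1",
      apiKey: env.OPENAI_API_KEY,
    };
  }
  // ollama (the default): local server, no API key needed
  return { baseURL: env.OLLAMA_URL ?? "http://localhost:11434/v1" };
}
```

Note that both branches produce an OpenAI-compatible base URL, which is why the commented-out OpenAI lines can simply be left disabled when running locally.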

LLM_PROVIDERS.md

Lines changed: 214 additions & 0 deletions
# LLM Provider Configuration Guide

## Overview

Autobox Engine supports two LLM providers through the Vercel AI SDK:

1. **Ollama** - Local models running on your machine (default)
2. **OpenAI** - OpenAI proprietary cloud API (optional)

## Supported Providers

### 1. **Ollama (Local Models)** - Default ✅

Runs models locally on your machine at `localhost:11434`.

**Supported Models:**
- `gpt-oss:20b` - OpenAI-compatible OSS model
- `deepseek-r1:7b` - DeepSeek reasoning model (local)
- `llama2:latest` - Meta's Llama 2
- Any other model available in your local Ollama installation

**Configuration:**

```bash
LLM_PROVIDER=ollama
OLLAMA_URL=http://localhost:11434/v1
```

**Usage:**

```json
// In your simulation config
{
  "workers": [
    {
      "name": "ANA",
      "llm": { "model": "deepseek-r1:7b" }
    }
  ]
}
```

---
### 2. **OpenAI (Cloud API)** - Optional

Use OpenAI's proprietary cloud models.

**Supported Models:**
- `gpt-4o` - Latest GPT-4 Omni
- `gpt-4o-mini` - Smaller, faster GPT-4 Omni
- `gpt-3.5-turbo` - GPT-3.5 Turbo
- Any official OpenAI model

**Configuration:**

```bash
LLM_PROVIDER=openai
OPENAI_API_KEY=sk-your-real-openai-api-key
# OPENAI_BASE_URL=https://api.openai.com/v1  # Optional
```

**Usage:**

```json
// In your simulation config
{
  "workers": [
    {
      "name": "ANA",
      "llm": { "model": "gpt-4o-mini" }
    }
  ]
}
```

---
## Quick Start

### Option 1: Running with Ollama (Default) ✅

1. **Install Ollama:**

   ```bash
   # macOS
   brew install ollama

   # Or download from https://ollama.com
   ```

2. **Pull models:**

   ```bash
   ollama pull gpt-oss:20b
   ollama pull deepseek-r1:7b
   ollama pull llama2:latest
   ```

3. **Start the Ollama server:**

   ```bash
   ollama serve
   ```

4. **Configure `.env`:**

   ```bash
   LLM_PROVIDER=ollama
   OLLAMA_URL=http://localhost:11434/v1
   ```

5. **Run a simulation:**

   ```bash
   yarn dev --simulation-name=gift_choice_2
   ```

---
### Option 2: Switching to OpenAI Cloud

1. **Update `.env`:**

   ```bash
   LLM_PROVIDER=openai
   OPENAI_API_KEY=sk-your-real-openai-api-key
   ```

2. **Update the simulation config (optional):**

   ```json
   {
     "workers": [
       {
         "name": "ANA",
         "llm": { "model": "gpt-4o-mini" }
       }
     ]
   }
   ```

3. **Run the simulation:**

   ```bash
   yarn dev --simulation-name=gift_choice_2
   ```

---
## Important Notes

### 💡 Provider Comparison

| Feature | Ollama (Local) | OpenAI (Cloud) |
|---------|----------------|----------------|
| Cost    | Free | Paid per token |
| Speed   | Depends on hardware | Fast |
| Privacy | 100% local | Data sent to OpenAI |
| Models  | gpt-oss:20b, deepseek-r1:7b, llama2 | gpt-4o, gpt-4o-mini, etc. |

### 🔧 Per-Agent Configuration

You can specify different models for different agents:

```json
{
  "planner": {
    "llm": { "model": "gpt-oss:20b" }
  },
  "workers": [
    {
      "name": "ANA",
      "llm": { "model": "deepseek-r1:7b" }
    },
    {
      "name": "JOHN",
      "llm": { "model": "llama2:latest" }
    }
  ]
}
```

**Note:** All agents use the same `LLM_PROVIDER` - you cannot mix Ollama and OpenAI in a single simulation.

---
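Since each agent can name its own model, it is handy to know up front which models a config references — for Ollama, each one must be pulled before the run. A hypothetical helper sketched against the JSON shape above (`collectModels` is illustrative, not an engine API):

```typescript
// Hypothetical helper, not part of the engine: gather every model a
// simulation config references, e.g. to know what to `ollama pull` first.
interface LlmRef {
  llm?: { model?: string };
}
interface SimulationConfig {
  planner?: LlmRef;
  orchestrator?: LlmRef;
  evaluator?: LlmRef;
  reporter?: LlmRef;
  workers?: LlmRef[];
}

function collectModels(config: SimulationConfig): string[] {
  const agents: (LlmRef | undefined)[] = [
    config.planner,
    config.orchestrator,
    config.evaluator,
    config.reporter,
    ...(config.workers ?? []),
  ];
  const models = agents
    .map((agent) => agent?.llm?.model)
    .filter((model): model is string => typeof model === "string");
  return [...new Set(models)]; // de-duplicated, first-seen order
}
```

Run against the per-agent example above, it would return `["gpt-oss:20b", "deepseek-r1:7b", "llama2:latest"]`.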
## Troubleshooting

### Error: "Cannot connect to Ollama"

```bash
# Ensure Ollama is running:
ollama serve

# Verify models are available:
ollama list
```

### Error: "Invalid API key" (OpenAI)

```bash
# Verify your .env file:
echo $OPENAI_API_KEY

# The key should start with "sk-"
```

### Error: "Model not found"

```bash
# For Ollama: pull the model first
ollama pull deepseek-r1:7b

# For OpenAI: check that the model name is correct
# Valid: gpt-4o-mini, gpt-4o
# Invalid: gpt-5-nano (doesn't exist on OpenAI cloud)
```

---
## Architecture

Built with the **Vercel AI SDK** (version 5), providing:
- ✅ Unified API for Ollama and OpenAI
- ✅ Type-safe model configuration
- ✅ Automatic streaming support
- ✅ Built-in error handling

See `src/core/llm/createAiProcessor.ts` for the implementation.
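One reason a single processor can back both providers: Ollama's `/v1` endpoint implements the same OpenAI-compatible chat-completions wire format, so only the base URL and credentials vary. A hypothetical sketch of that idea — this is not the contents of `createAiProcessor.ts`, and `buildChatRequest` is an illustrative name:

```typescript
// Illustrative only - not the actual createAiProcessor implementation.
// Both Ollama's /v1 endpoint and OpenAI's API accept this request shape.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function buildChatRequest(
  baseURL: string,
  model: string,
  messages: ChatMessage[],
  apiKey?: string,
): { url: string; headers: Record<string, string>; body: string } {
  const headers: Record<string, string> = { "Content-Type": "application/json" };
  if (apiKey) headers["Authorization"] = `Bearer ${apiKey}`; // OpenAI requires this; Ollama ignores it
  return {
    url: `${baseURL.replace(/\/+$/, "")}/chat/completions`,
    headers,
    body: JSON.stringify({ model, messages }),
  };
}

// The same helper targets either provider:
const viaOllama = buildChatRequest("http://localhost:11434/v1", "llama2:latest", [
  { role: "user", content: "Pick one of the two Hobbit editions." },
]);
// viaOllama.url === "http://localhost:11434/v1/chat/completions"
```

Swapping to OpenAI only changes the first and last arguments, which is exactly the difference the `LLM_PROVIDER` switch captures.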
Lines changed: 88 additions & 0 deletions

```json
[
  {
    "name": "initial_preference_alignment",
    "description": "Measures the initial alignment of Ana and John's gift preferences.",
    "type": "GAUGE",
    "unit": "percentage",
    "tags": [
      {
        "tag": "agent_name",
        "description": "The name of the agent (Ana or John)."
      }
    ]
  },
  {
    "name": "final_agreement_score",
    "description": "Evaluates how much the final gift decision aligns with each agent's preferences.",
    "type": "GAUGE",
    "unit": "percentage",
    "tags": [
      {
        "tag": "agent_name",
        "description": "The name of the agent (Ana or John)."
      }
    ]
  },
  {
    "name": "decision_iterations",
    "description": "Tracks the number of iterations or discussions needed to reach a decision.",
    "type": "COUNTER",
    "unit": "iterations",
    "tags": []
  },
  {
    "name": "influence_score",
    "description": "Measures the influence each agent had on the final gift decision.",
    "type": "GAUGE",
    "unit": "percentage",
    "tags": [
      {
        "tag": "agent_name",
        "description": "The name of the agent (Ana or John)."
      }
    ]
  },
  {
    "name": "compromise_index",
    "description": "Measures how much each agent had to compromise from their initial preferences.",
    "type": "GAUGE",
    "unit": "percentage",
    "tags": [
      {
        "tag": "agent_name",
        "description": "The name of the agent (Ana or John)."
      }
    ]
  },
  {
    "name": "satisfaction_score",
    "description": "Measures each agent's satisfaction with the final gift decision.",
    "type": "GAUGE",
    "unit": "score",
    "tags": [
      {
        "tag": "agent_name",
        "description": "The name of the agent (Ana or John)."
      }
    ]
  },
  {
    "name": "flexibility_score",
    "description": "Measures the willingness of each agent to adapt their preferences during the decision process.",
    "type": "GAUGE",
    "unit": "score",
    "tags": [
      {
        "tag": "agent_name",
        "description": "The name of the agent (Ana or John)."
      }
    ]
  },
  {
    "name": "consensus_level",
    "description": "Indicates the level of agreement between Ana and John on the final gift decision.",
    "type": "GAUGE",
    "unit": "percentage",
    "tags": []
  }
]
```
Lines changed: 75 additions & 0 deletions

```json
{
  "name": "Gift choice",
  "max_steps": 150,
  "timeout_seconds": 120,
  "shutdown_grace_period_seconds": 5,
  "description": "This simulation is about two friends who need to decide together on a gift for a special occasion.",
  "task": "The task is VERY simple and should be completed in seconds: Ana and John need to pick a gift for the 30th birthday of another friend called Maria, who likes books and reading, mainly fantasy books. She is fond of Tolkien, and one book she has never read is 'The Hobbit'. Maria will be happy with either gift. Ana and John should pick between 'The Hobbit: Illustrated Deluxe Edition, HarperCollins, Alan Lee' and 'The Hobbit, or There and Back Again, J.R.R. Tolkien'. Budget, delivery time, availability and NOTHING else matters; they ONLY need to pick one of the two options. There must be no other discussion: they should pick, and that's it.",
  "evaluator": {
    "name": "EVALUATOR",
    "mailbox": {
      "max_size": 400
    },
    "llm": {
      "model": "llama2:latest"
    }
  },
  "reporter": {
    "name": "REPORTER",
    "mailbox": {
      "max_size": 400
    },
    "llm": {
      "model": "llama2:latest"
    }
  },
  "planner": {
    "name": "PLANNER",
    "mailbox": {
      "max_size": 400
    },
    "llm": {
      "model": "llama2:latest"
    }
  },
  "orchestrator": {
    "name": "ORCHESTRATOR",
    "mailbox": {
      "max_size": 400
    },
    "llm": {
      "model": "llama2:latest"
    }
  },
  "workers": [
    {
      "name": "ANA",
      "description": "The Ana agent.",
      "role": "Ana is Maria's friend. She picks quickly, without much deliberation. She will buy the gift.",
      "backstory": "She is passionate about reading and books.",
      "llm": {
        "model": "llama2:latest"
      },
      "mailbox": {
        "max_size": 100
      }
    },
    {
      "name": "JOHN",
      "description": "The John agent.",
      "role": "John is Maria's friend. He picks quickly, without much deliberation.",
      "backstory": "He is passionate about books of any kind.",
      "llm": {
        "model": "llama2:latest"
      },
      "mailbox": {
        "max_size": 100
      }
    }
  ],
  "logging": {
    "verbose": false,
    "log_path": "logs",
    "log_file": "gift_choice.log"
  }
}
```

0 commit comments
