Update README with Gemini configuration examples and provider table.
Simplify findSettingsFile to always use ~/.solenoid/ instead of walking
the directory tree. Add startup logging for resolved model name and API
key presence, and warn on empty Gemini responses.
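
A minimal sketch of what the simplified settings lookup and startup logging described above could look like. All identifiers here are assumptions for illustration, not the actual Solenoid source:

```typescript
import * as os from 'node:os';
import * as path from 'node:path';

// Hypothetical sketch: settings always resolve to ~/.solenoid/,
// with no walking up the directory tree.
export function findSettingsFile(fileName = 'app_settings.yaml'): string {
  return path.join(os.homedir(), '.solenoid', fileName);
}

// Hypothetical startup summary: report the resolved model name and whether
// an API key is present, without ever printing the key itself.
export function startupSummary(modelName: string, env = process.env): string {
  const hasKey = Boolean(env.GOOGLE_GENAI_API_KEY ?? env.GEMINI_API_KEY);
  return `model=${modelName} apiKeyPresent=${hasKey}`;
}
```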
README.md: 37 additions, 14 deletions
```diff
@@ -33,7 +33,7 @@ See [Development](#development) section below for building from source with Poet
 - **Web Research**: Brave Search integration for real-time web queries
 - **MCP Support**: Model Context Protocol for extensible tool integration (stdio and HTTP servers)
 - **Local Memory System**: SQLite + FTS5 + sqlite-vec for hybrid semantic/keyword search with BGE reranking
-- **Configurable Models**: Support for Ollama models via LiteLLM with automatic model pulling
+- **Configurable Models**: Support for Gemini (default) and Ollama models via Google ADK
 - **Customizable Prompts**: All agent prompts configurable via YAML
 - **In-App Settings Editor**: Edit configuration via `/settings` command with YAML validation
 - **Slash Commands**: Extensible command system for quick actions (`/settings`, `/help`, `/clear`)
```
````diff
@@ -178,31 +178,54 @@ All configuration is managed through `app_settings.yaml` in the project root.
 
 ### Model Configuration
 
+Solenoid supports multiple model providers. Gemini is the default and requires no local infrastructure.
+
+#### Gemini (Default)
+
+Set your API key as an environment variable:
+
+```bash
+export GOOGLE_GENAI_API_KEY="your-api-key"
+# or alternatively:
+export GEMINI_API_KEY="your-api-key"
+```
+
 ```yaml
 models:
   default:
-    name: "ministral-3:8b"
-    provider: "ollama_chat"
+    name: "gemini-3-flash-preview"
+    provider: "gemini"
     context_length: 128000
-  agent:
+```
+
+#### Ollama (Local Inference)
+
+For fully local inference using [Ollama](https://ollama.com/):
+
+```yaml
+models:
+  default:
     name: "ministral-3:8b"
+    provider: "ollama_chat"
     context_length: 128000
-  extractor:
-    name: "ministral-3:8b"
 ```
 
-**Model Roles:**
+If a configured Ollama model is not found locally, the application automatically attempts to pull it. Uses model names from the [Ollama library](https://ollama.com/library).
+
+#### Model Roles
+
 - `default`: Fallback model for unspecified roles
 - `agent`: Used by all agent roles (requires function calling support)
 - `extractor`: Used for memory extraction
 
-**Model Requirements:**
-- Models used for the `agent` role must support **function calling** (tool use)
-- Recommended: `ministral-3:8b`, `qwen3:8b`, `llama3.1`, or similar function-calling capable models
-- Uses Ollama model names from the [Ollama library](https://ollama.com/library)
+#### Supported Providers
+
+| Provider | Config Value | Notes |
+|----------|-------------|-------|
+| Gemini | `gemini` | Default. Requires `GOOGLE_GENAI_API_KEY` or `GEMINI_API_KEY` env var |
+| Ollama | `ollama_chat` | Local inference. Requires running Ollama server |
 
-**Automatic Model Pulling:**
-If a configured model is not found in your local Ollama instance, the application automatically attempts to pull it when the agent starts.
+Models used for the `agent` role must support **function calling** (tool use).
 
 ### Search Configuration
 
````
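
The provider values in the table above lend themselves to a small validation step when the config is loaded. A hedged sketch; the identifiers are assumptions, not from the Solenoid codebase:

```typescript
// Hypothetical sketch of validating the `provider` value read from
// app_settings.yaml against the supported providers.
const SUPPORTED_PROVIDERS = ['gemini', 'ollama_chat'] as const;
type Provider = (typeof SUPPORTED_PROVIDERS)[number];

export function parseProvider(value: string): Provider {
  if (!(SUPPORTED_PROVIDERS as readonly string[]).includes(value)) {
    throw new Error(
      `Unsupported provider "${value}"; expected one of: ${SUPPORTED_PROVIDERS.join(', ')}`,
    );
  }
  return value as Provider;
}
```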
```diff
@@ -518,7 +541,7 @@ The executable will be created at `dist/solenoid`. This binary replicates the be
 
 - Python 3.11+
 - Poetry (for dependency management)
-- Ollama (for local LLM inference)
+- Ollama (only required for `ollama_chat` provider)
```
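
Since Ollama is now optional, a startup check might probe the local server only when the `ollama_chat` provider is configured. A sketch: `/api/tags` is Ollama's real model-list endpoint, but the function and its wiring are assumptions:

```typescript
// Hypothetical sketch: probe the Ollama server only when it is actually
// needed, i.e. when the configured provider is "ollama_chat".
export async function ollamaReachable(
  provider: string,
  baseUrl = 'http://localhost:11434',
): Promise<boolean> {
  if (provider !== 'ollama_chat') return true; // Gemini needs no local server
  try {
    const res = await fetch(`${baseUrl}/api/tags`); // lists locally pulled models
    return res.ok;
  } catch {
    return false; // server not running or unreachable
  }
}
```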
```diff
-      agentLogger.debug('[Runner] Skipping empty final event, continuing...');
+      agentLogger.warn(
+        `[Runner] Empty final event from ${event.author} — model may have failed silently (auth error? invalid model name?). Event: ${JSON.stringify({id: event.id, role: event.content?.role, parts: event.content?.parts?.length ?? 0, actions: event.actions})}`
+      );
```
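
One way the empty-final-event condition behind this warning might be detected, sketched under assumed event shapes (the real ADK event type has more fields):

```typescript
// Assumed, simplified event shape for illustration only.
interface Part { text?: string }
interface AgentEvent {
  id: string;
  author: string;
  content?: { role?: string; parts?: Part[] };
}

// An event is "empty" when it carries no parts, or only parts without text.
export function isEmptyFinalEvent(event: AgentEvent): boolean {
  const parts = event.content?.parts ?? [];
  return parts.length === 0 || parts.every((p) => !p.text);
}
```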