**Design decisions:**

- LLM selects MULTIPLE mentions (with priority) instead of just one
- Each mention gets individual planning and tool execution
- User conversation history provides context for personalized replies
- Empty plan `[]` is valid — most replies don't need tools
- Tracks which tools were used for analytics
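A minimal sketch of the data shapes these decisions imply. The field names (`mention_id`, `priority`, `reason`) are illustrative assumptions, not taken from the actual prompts:

```python
# Hypothetical shape of the selection LLM's JSON output -- field names
# here are assumptions for illustration, not the real prompt contract.
import json

selection_response = json.loads("""
[
  {"mention_id": "1701", "priority": 1, "reason": "direct question about the project"},
  {"mention_id": "1702", "priority": 2, "reason": "friendly greeting worth a short reply"}
]
""")

# Multiple mentions may be selected; process them in priority order.
selected = sorted(selection_response, key=lambda m: m["priority"])

# A later planning call returns a tool plan per mention; an empty plan
# is valid -- most replies need no tools at all.
plan_for_reply = []  # e.g. [] or [{"tool": "web_search", "args": {...}}]
assert isinstance(plan_for_reply, list)
```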
### services/llm.py
`LLMClient` class — async client for OpenRouter API.
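A minimal sketch of what such a client might look like, using only the standard library and the public OpenRouter chat-completions endpoint; the class layout and method names are assumptions, not the real implementation:

```python
# Sketch of an async OpenRouter client -- structure is illustrative,
# the real LLMClient may differ.
import asyncio
import json
import urllib.request

class LLMClient:
    """Async client for the OpenRouter chat completions API (sketch)."""
    URL = "https://openrouter.ai/api/v1/chat/completions"

    def __init__(self, api_key: str, model: str) -> None:
        self.api_key = api_key
        self.model = model

    def _request(self, messages: list) -> urllib.request.Request:
        # OpenAI-compatible chat completion body.
        body = json.dumps({"model": self.model, "messages": messages}).encode()
        return urllib.request.Request(
            self.URL, data=body, method="POST",
            headers={"Authorization": f"Bearer {self.api_key}",
                     "Content-Type": "application/json"},
        )

    async def chat(self, messages: list) -> str:
        # urllib is blocking; run it in a worker thread to stay async-friendly.
        def send():
            with urllib.request.urlopen(self._request(messages), timeout=60) as resp:
                return json.loads(resp.read())
        data = await asyncio.to_thread(send)
        return data["choices"][0]["message"]["content"]
```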
- Both services receive `tier_manager` instance via constructor
### tools/registry.py
Tool registry with **auto-discovery** (v1.3).

**How it works:**

- Uses `pkgutil.iter_modules()` to scan all Python files in the `tools/` directory
- Each tool file that exports `TOOL_SCHEMA` is automatically registered
- The tool function must have the same name as `schema["function"]["name"]`

**Exports:**

- `TOOLS` — dict mapping tool names to async functions (auto-populated)
- `TOOLS_SCHEMA` — list of JSON schemas in OpenAI function calling format (auto-populated)
- `get_tools_description()` — generates human-readable tool descriptions for agent prompts
- `refresh_tools()` — re-scans tools at runtime
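The discovery loop described above can be sketched roughly as follows. This is not the registry's actual code, only a self-contained illustration of the `pkgutil` + `TOOL_SCHEMA` convention, demonstrated against a throwaway directory:

```python
# Sketch of auto-discovery: scan a tools directory, register every module
# that exports TOOL_SCHEMA, and bind the function named in the schema.
import importlib.util
import pathlib
import pkgutil
import tempfile

def discover_tools(tools_dir: str):
    tools, schemas = {}, []
    for info in pkgutil.iter_modules([tools_dir]):
        spec = importlib.util.spec_from_file_location(
            info.name, pathlib.Path(tools_dir) / f"{info.name}.py")
        module = importlib.util.module_from_spec(spec)
        spec.loader.exec_module(module)
        schema = getattr(module, "TOOL_SCHEMA", None)
        if schema is None:
            continue  # not a tool file; skip silently
        name = schema["function"]["name"]
        # Function name must match schema["function"]["name"].
        tools[name] = getattr(module, name)
        schemas.append(schema)
    return tools, schemas

# Demo: write a fake tool file into a temp directory and discover it.
tmp = tempfile.mkdtemp()
pathlib.Path(tmp, "echo_tool.py").write_text(
    'TOOL_SCHEMA = {"type": "function", "function": {"name": "echo_tool"}}\n'
    "async def echo_tool(text):\n    return text\n")
TOOLS, TOOLS_SCHEMA = discover_tools(tmp)
```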
**Available tools:**
- `web_search` — real-time web search via OpenRouter plugins
- `generate_image` — image generation using Gemini 3 Pro

**Dynamic tool discovery:** The `get_tools_description()` function automatically generates tool documentation from `TOOLS_SCHEMA`, so a newly added tool appears in the agent's system prompt automatically.
**To add a new tool (zero registry changes needed):**

1. Create `tools/my_tool.py`
2. Add a `TOOL_SCHEMA` constant in OpenAI function calling format
3. Create an async function with a matching name
4. Done! The tool is auto-discovered on startup
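A hypothetical `tools/my_tool.py` following these steps might look like this (tool name, description, and parameters are invented for illustration):

```python
# Hypothetical tools/my_tool.py -- the only file you create; the registry
# picks it up on startup because it exports TOOL_SCHEMA.
TOOL_SCHEMA = {
    "type": "function",
    "function": {
        "name": "my_tool",
        "description": "Example tool that shouts its input.",
        "parameters": {
            "type": "object",
            "properties": {"text": {"type": "string"}},
            "required": ["text"],
        },
    },
}

async def my_tool(text: str) -> str:
    # Function name must equal TOOL_SCHEMA["function"]["name"].
    return text.upper()
```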
| Agent creates plan → executes tools → generates post | Agent selects mentions → plans per mention → generates replies |
| Dynamic tool usage (web search, image generation) | 3 LLM calls per mention (select → plan → reply) |
| Posts to Twitter with optional media | Tracks tools used per reply |
**Agent Architecture:** Both systems use autonomous agents that decide which tools to use based on context. The mention agent can process multiple mentions per batch, creating individual plans for each selected mention.

**Auto-Discovery Tools:** Tools are automatically discovered from the `tools/` directory. Add a new tool file with `TOOL_SCHEMA` and it's available to agents without any registry changes.
This separation keeps the codebase simple while enabling both proactive and reactive behavior.
3. **LLM #1: Selection** — Evaluates all mentions and returns an array of mentions worth replying to (with priority)
4. For EACH selected mention:
   - Gets the user's conversation history from the database
   - **LLM #2: Planning** — Creates a plan (which tools to use)
   - Executes tools (web_search, generate_image)
   - **LLM #3: Reply** — Generates the final reply text
   - Uploads the image if generated, posts the reply
   - Saves the interaction with tools_used tracking
5. Returns a batch summary

**Why agent architecture:** Instead of a single LLM call for all mentions, each mention gets individual attention. The agent can use tools to research topics, generate custom images, and craft contextually appropriate replies. User conversation history enables personalized interactions.
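The select → plan → reply control flow can be sketched as below. All three LLM calls are stubbed with trivial placeholders, and every helper name here is illustrative, not the bot's real API:

```python
# Control-flow sketch of the per-mention agent loop; the llm_* stubs
# stand in for real LLM calls and are invented for this example.
import asyncio

async def llm_select(mentions):          # LLM #1: which mentions to answer
    return [{"mention": m, "priority": i + 1}
            for i, m in enumerate(mentions) if "?" in m["text"]]

async def llm_plan(mention, history):    # LLM #2: tool plan ([] is valid)
    return []

async def llm_reply(mention, history, tool_results):  # LLM #3: reply text
    return f"Thanks, @{mention['user']}!"

async def handle_mentions(mentions, get_history):
    replies = []
    for item in sorted(await llm_select(mentions), key=lambda s: s["priority"]):
        m = item["mention"]
        history = get_history(m["user"])            # per-user conversation context
        plan = await llm_plan(m, history)
        tool_results = [await step() for step in plan]  # execute planned tools
        replies.append((m["user"], await llm_reply(m, history, tool_results)))
    return replies  # batch summary

out = asyncio.run(handle_mentions(
    [{"user": "alice", "text": "what model do you use?"},
     {"user": "bob", "text": "gm"}],
    get_history=lambda user: []))
# Only alice's mention contains a question, so only she gets a reply.
```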
**Configuration:**

- `MENTIONS_INTERVAL_MINUTES` — Time between mention checks (default: 20)
- `MENTIONS_WHITELIST` — Optional list of usernames for testing (empty = all users)
- Requires Twitter API Basic tier or higher for mention access
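One plausible way to read these settings, assuming they come from environment variables (the variable names are from the docs above; the comma-separated whitelist format and the helper name are assumptions):

```python
# Sketch of loading the mentions configuration from the environment;
# parsing details are assumptions, only the variable names are documented.
import os

def load_mentions_config(env=os.environ):
    interval = int(env.get("MENTIONS_INTERVAL_MINUTES", "20"))
    raw = env.get("MENTIONS_WHITELIST", "")
    # An empty whitelist means "reply to all users".
    whitelist = [u.strip() for u in raw.split(",") if u.strip()]
    return interval, whitelist

interval, whitelist = load_mentions_config({})
```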
Generates images using Gemini 3 Pro via OpenRouter, with support for reference images.
**How `assets/` folder works (v1.3):**

- Place reference images in `assets/` folder (supports: png, jpg, jpeg, gif, webp, jfif)
- Bot uses **ALL** reference images (not random selection) for maximum consistency
- Reference images are sent to the model along with the generation prompt
- If `assets/` is empty, images are generated without reference (pure text-to-image)
- Use reference images to maintain consistent character appearance across posts
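Collecting every reference image in `assets/` might look like the following sketch (extension list from the docs; the function name is illustrative):

```python
# Sketch of gathering ALL reference images from assets/ (v1.3 behavior:
# no random subset -- every matching file is used).
import pathlib

ALLOWED = {".png", ".jpg", ".jpeg", ".gif", ".webp", ".jfif"}

def reference_images(assets_dir="assets"):
    root = pathlib.Path(assets_dir)
    if not root.is_dir():
        return []  # empty/missing assets/ -> pure text-to-image generation
    return sorted(p for p in root.iterdir() if p.suffix.lower() in ALLOWED)
```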
324
333
325
-
**Example use case:** Place photos of your bot's character/avatar in `assets/`. The model will use these as reference when generating new images, keeping the visual style consistent.
334
+
**Auto-discovery:** Tool exports `TOOL_SCHEMA` and is automatically available to agents.
335
+
336
+
**Example use case:** Place photos of your bot's character/avatar in `assets/`. The model will use all of them as reference when generating new images, keeping the visual style consistent.
0 commit comments