
Commit 068e185

Merge pull request #29 from elizaOS/remove-local-ai-docs
remove local-ai from docs
2 parents 9b05613 + d4144b2

5 files changed: +113 −236 lines


docs.json

Lines changed: 1 addition & 2 deletions
````diff
@@ -259,8 +259,7 @@
         "plugins/llm/anthropic",
         "plugins/llm/google-genai",
         "plugins/llm/ollama",
-        "plugins/llm/openrouter",
-        "plugins/llm/local-ai"
+        "plugins/llm/openrouter"
       ]
     },
     {
````

plugins/llm.mdx

Lines changed: 107 additions & 50 deletions
````diff
@@ -10,24 +10,33 @@ ElizaOS uses a plugin-based architecture for integrating different Language Mode
 
 ### Model Types
 
-ElizaOS supports three types of model operations:
+ElizaOS supports many types of AI operations. Here are the most common ones:
 
-1. **TEXT_GENERATION** - Generating conversational responses
-2. **EMBEDDING** - Creating vector embeddings for memory and similarity search
-3. **OBJECT_GENERATION** - Structured output generation (JSON/XML)
+1. **TEXT_GENERATION** (`TEXT_SMALL`, `TEXT_LARGE`) - Having conversations and generating responses
+2. **TEXT_EMBEDDING** - Converting text into numbers for memory and search
+3. **OBJECT_GENERATION** (`OBJECT_SMALL`, `OBJECT_LARGE`) - Creating structured data like JSON
+
+Think of it like different tools in a toolbox:
+- **Text Generation** = Having a conversation
+- **Embeddings** = Creating a "fingerprint" of text for finding similar things later
+- **Object Generation** = Filling out forms with specific information
 
 ### Plugin Capabilities
 
-Not all LLM plugins support all model types:
+Not all LLM plugins support all model types. Here's what each can do:
+
+| Plugin | Text Chat | Embeddings | Structured Output | Runs Offline |
+|--------|-----------|------------|-------------------|--------------|
+| OpenAI | ✅ | ✅ | ✅ | ❌ |
+| Anthropic | ✅ | ❌ | ✅ | ❌ |
+| Google GenAI | ✅ | ✅ | ✅ | ❌ |
+| Ollama | ✅ | ✅ | ✅ | ✅ |
+| OpenRouter | ✅ | ❌ | ✅ | ❌ |
 
-| Plugin | Text Generation | Embeddings | Object Generation |
-|--------|----------------|------------|-------------------|
-| OpenAI | ✅ | ✅ | ✅ |
-| Anthropic | ✅ | ❌ | ✅ |
-| Google GenAI | ✅ | ✅ | ✅ |
-| Ollama | ✅ | ✅ | ✅ |
-| OpenRouter | ✅ | ❌ | ✅ |
-| Local AI | ✅ | ✅ | ✅ |
+**Key Points:**
+- 🌟 **OpenAI & Google GenAI** = Do everything (jack of all trades)
+- 💬 **Anthropic & OpenRouter** = Amazing at chat, need help with embeddings
+- 🏠 **Ollama** = Your local hero - does almost everything, no internet needed!
 
 ## Plugin Loading Order
 
````
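Editor's note: the capability matrix added in this hunk can be sanity-checked with a small sketch. The `pluginCapabilities` map and `providersFor` helper below are hypothetical illustrations of the table's contents, not the actual ElizaOS API.

```typescript
// Hypothetical capability map mirroring the table in the diff above.
type Capability = 'text' | 'embedding' | 'object';

const pluginCapabilities: Record<string, Capability[]> = {
  anthropic: ['text', 'object'],                  // no embeddings
  openrouter: ['text', 'object'],                 // no embeddings
  openai: ['text', 'embedding', 'object'],        // full coverage
  'google-genai': ['text', 'embedding', 'object'],// full coverage
  ollama: ['text', 'embedding', 'object'],        // full coverage, offline
};

// Return the plugins (in load order) that can serve a given capability.
function providersFor(capability: Capability, loadOrder: string[]): string[] {
  return loadOrder.filter((p) => pluginCapabilities[p]?.includes(capability));
}

const loadOrder = ['anthropic', 'openrouter', 'openai', 'google-genai', 'ollama'];
console.log(providersFor('embedding', loadOrder)); // ['openai', 'google-genai', 'ollama']
```

With this view, the "Key Points" fall out directly: only the last three providers ever appear for embedding work, which is why they act as fallbacks.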
````diff
@@ -39,57 +48,109 @@ plugins: [
   '@elizaos/plugin-sql',
 
   // Text-only plugins (no embedding support)
-  ...(process.env.ANTHROPIC_API_KEY ? ['@elizaos/plugin-anthropic'] : []),
-  ...(process.env.OPENROUTER_API_KEY ? ['@elizaos/plugin-openrouter'] : []),
-
-  // Embedding-capable plugins last (lowest priority for embedding fallback)
-  ...(process.env.OPENAI_API_KEY ? ['@elizaos/plugin-openai'] : []),
-  ...(process.env.OLLAMA_API_ENDPOINT ? ['@elizaos/plugin-ollama'] : []),
-  ...(process.env.GOOGLE_GENERATIVE_AI_API_KEY ? ['@elizaos/plugin-google-genai'] : []),
-
-  // Fallback when no other LLM is configured
-  ...(!process.env.GOOGLE_GENERATIVE_AI_API_KEY &&
-  !process.env.OLLAMA_API_ENDPOINT &&
-  !process.env.OPENAI_API_KEY
-    ? ['@elizaos/plugin-local-ai']
-    : []),
+  ...(process.env.ANTHROPIC_API_KEY?.trim() ? ['@elizaos/plugin-anthropic'] : []),
+  ...(process.env.OPENROUTER_API_KEY?.trim() ? ['@elizaos/plugin-openrouter'] : []),
+
+  // Embedding-capable plugins (optional, based on available credentials)
+  ...(process.env.OPENAI_API_KEY?.trim() ? ['@elizaos/plugin-openai'] : []),
+  ...(process.env.GOOGLE_GENERATIVE_AI_API_KEY?.trim() ? ['@elizaos/plugin-google-genai'] : []),
+
+  // Ollama as fallback (only if no main LLM providers are configured)
+  ...(process.env.OLLAMA_API_ENDPOINT?.trim() ? ['@elizaos/plugin-ollama'] : []),
 ]
 ```
 
 ### Understanding the Order
 
-1. **Text-only plugins first** - Anthropic and OpenRouter are loaded first for text generation
-2. **Embedding-capable plugins last** - These serve as fallbacks for embedding operations
-3. **Local AI as ultimate fallback** - Only loads when no cloud providers are configured
+Think of it like choosing team players - you pick specialists first, then all-rounders:
+
+1. **Anthropic & OpenRouter go first** - They're specialists! They're great at text generation but can't do embeddings. By loading them first, they get priority for text tasks.
+
+2. **OpenAI & Google GenAI come next** - These are the all-rounders! They can do everything: text generation, embeddings, and structured output. They act as fallbacks for what the specialists can't do.
+
+3. **Ollama comes last** - This is your local backup player! It supports almost everything (text, embeddings, objects) and runs on your computer. Perfect when cloud services aren't available.
+
+### Why This Order Matters
+
+When you ask ElizaOS to do something, it looks for the best model in order:
+
+- **Generate text?** → Anthropic gets first shot (if loaded)
+- **Create embeddings?** → Anthropic can't, so OpenAI steps in
+- **No cloud API keys?** → Ollama handles everything locally
+
+This smart ordering means:
+- You get the best specialized models for each task
+- You always have fallbacks for missing capabilities
+- You can run fully offline with Ollama if needed
+
+### Real Example: How It Works
+
+Let's say you have Anthropic + OpenAI configured:
+
+```
+Task: "Generate a response"
+1. Anthropic: "I got this!" ✅ (Priority 100 for text)
+2. OpenAI: "I'm here if needed" (Priority 50)
+
+Task: "Create embeddings for memory"
+1. Anthropic: "Sorry, can't do that" ❌
+2. OpenAI: "No problem, I'll handle it!" ✅
+
+Task: "Generate structured JSON"
+1. Anthropic: "I can do this!" ✅ (Priority 100 for objects)
+2. OpenAI: "Standing by" (Priority 50)
+```
 
 ## Model Registration
 
-Each LLM plugin registers its models with the runtime during initialization:
+When plugins load, they "register" what they can do. It's like signing up for different jobs:
 
 ```typescript
-// Example from a plugin's init method
-runtime.registerModel({
-  type: ModelType.TEXT_GENERATION,
-  handler: generateText,
-  provider: 'openai',
-  priority: 1
-});
+// Each plugin says "I can do this!"
+runtime.registerModel(
+  ModelType.TEXT_LARGE,  // What type of work
+  generateText,          // How to do it
+  'anthropic',           // Who's doing it
+  100                    // Priority (higher = goes first)
+);
 ```
 
-### Priority System
+### How ElizaOS Picks the Right Model
+
+When you ask ElizaOS to do something, it:
 
-Models are selected based on:
-1. **Explicit provider** - If specified, uses that provider's model
-2. **Priority** - Higher priority models are preferred
-3. **Registration order** - First registered wins for same priority
+1. **Checks what type of work it is** (text? embeddings? objects?)
+2. **Looks at who signed up** for that job
+3. **Picks based on priority** (higher number goes first)
+4. **If tied, first registered wins**
+
+**Example**: You ask for text generation
+- Anthropic registered with priority 100 ✅ (wins!)
+- OpenAI registered with priority 50
+- Ollama registered with priority 10
+
+But for embeddings:
+- Anthropic didn't register ❌ (can't do it)
+- OpenAI registered with priority 50 ✅ (wins!)
+- Ollama registered with priority 10
 
 ## Embedding Fallback Strategy
 
-Since not all plugins support embeddings, ElizaOS uses a fallback strategy:
+Remember: Not all plugins can create embeddings! Here's how ElizaOS handles this:
+
+**The Problem**:
+- You're using Anthropic (great at chat, can't do embeddings)
+- But ElizaOS needs embeddings for memory and search
+
+**The Solution**:
+ElizaOS automatically finds another plugin that CAN do embeddings!
 
 ```typescript
-// If primary plugin (e.g., Anthropic) doesn't support embeddings,
-// the runtime will automatically use the next available embedding provider
+// What happens behind the scenes:
+// 1. "I need embeddings!"
+// 2. "Can Anthropic do it?" → No ❌
+// 3. "Can OpenAI do it?" → Yes ✅
+// 4. "OpenAI, you're up!"
 ```
 
 ### Common Patterns
````
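Editor's note: the priority-plus-registration-order selection this hunk documents can be sketched as a minimal registry. This is an illustrative toy, assuming a simplified in-memory registry; the real ElizaOS runtime API may differ in detail beyond the `registerModel(type, handler, provider, priority)` shape shown in the diff.

```typescript
// Minimal sketch of priority-based model selection with a registration-order
// tie-break, as described in the diff above. Names are illustrative.
type Handler = (input: string) => string;

interface Registration {
  provider: string;
  handler: Handler;
  priority: number;
  order: number; // registration sequence, used as a tie-breaker
}

class ModelRegistry {
  private models = new Map<string, Registration[]>();
  private seq = 0;

  registerModel(type: string, handler: Handler, provider: string, priority: number): void {
    const list = this.models.get(type) ?? [];
    list.push({ provider, handler, priority, order: this.seq++ });
    this.models.set(type, list);
  }

  // Highest priority wins; earliest registration breaks ties.
  resolve(type: string): Registration | undefined {
    const list = this.models.get(type) ?? [];
    return [...list].sort((a, b) => b.priority - a.priority || a.order - b.order)[0];
  }
}

const registry = new ModelRegistry();
registry.registerModel('TEXT_LARGE', (s) => `anthropic:${s}`, 'anthropic', 100);
registry.registerModel('TEXT_LARGE', (s) => `openai:${s}`, 'openai', 50);
registry.registerModel('TEXT_EMBEDDING', (s) => `openai-embed:${s}`, 'openai', 50);

console.log(registry.resolve('TEXT_LARGE')?.provider);     // anthropic
console.log(registry.resolve('TEXT_EMBEDDING')?.provider); // openai
```

Because Anthropic never registers for `TEXT_EMBEDDING`, the embedding lookup simply never sees it; the fallback behaves like ordinary registry resolution rather than a special case.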
````diff
@@ -149,9 +210,6 @@ OPENROUTER_API_KEY=sk-or-...
 OPENROUTER_SMALL_MODEL=google/gemini-2.0-flash-001 # Optional: any available model
 OPENROUTER_LARGE_MODEL=anthropic/claude-3-opus # Optional: any available model
 
-# Local AI (no API key needed)
-```
-
 **Important**: The model names shown are examples. You can use any model available from each provider.
 
 ### Character-Specific Secrets
````
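Editor's note: elsewhere in this PR the env checks change from `process.env.X ?` to `process.env.X?.trim() ?`. The reason is worth a quick sketch: an empty string is already falsy in JavaScript, but a whitespace-only value (say, a stray `"  "` in a `.env` file) is truthy, so the untrimmed check would load a plugin with an unusable key. The `pluginIfConfigured` helper is hypothetical, for illustration only.

```typescript
// Guard pattern from the diff: treat empty or whitespace-only env values as unset.
function pluginIfConfigured(value: string | undefined, plugin: string): string[] {
  return value?.trim() ? [plugin] : [];
}

console.log(pluginIfConfigured('sk-ant-xxx', '@elizaos/plugin-anthropic')); // ['@elizaos/plugin-anthropic']
console.log(pluginIfConfigured('   ', '@elizaos/plugin-openrouter'));       // []
console.log(pluginIfConfigured(undefined, '@elizaos/plugin-openai'));       // []
```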
````diff
@@ -182,7 +240,6 @@ You can also configure API keys per character:
 ### Local/Self-Hosted
 
 - [Ollama Plugin](./ollama) - Run models locally with Ollama
-- [Local AI Plugin](./local-ai) - Fully offline operation
 
 ## Best Practices
 
````