# Base Agent API Reference

The `BaseAgent` class is the foundation for all agents in SpoonOS, providing core functionality for LLM interaction, tool management, and conversation handling.

## Class Definition

```python
from spoon_ai.agents.base import BaseAgent
from spoon_ai.tools import ToolManager
from spoon_ai.llm import LLMManager

class BaseAgent:
    def __init__(
        self,
        name: str,
        system_prompt: str = None,
        llm_manager: LLMManager = None,
        tool_manager: ToolManager = None,
        **kwargs
    ):
        ...
```

## Parameters

### Required Parameters

- **name** (`str`): Unique identifier for the agent

### Optional Parameters

- **system_prompt** (`str`, optional): System prompt that defines agent behavior
- **llm_manager** (`LLMManager`, optional): LLM manager instance for model interactions
- **tool_manager** (`ToolManager`, optional): Tool manager for available tools
- **kwargs**: Additional configuration options

## Methods

### Core Methods

#### `async run(message: str, **kwargs) -> str`

Execute the agent with a user message.

**Parameters:**
- `message` (str): User input message
- `**kwargs`: Additional execution parameters

**Returns:**
- `str`: Agent response

**Example:**
```python
agent = BaseAgent(name="assistant")
response = await agent.run("Hello, how are you?")
print(response)
```

#### `async chat(messages: List[Dict], **kwargs) -> Dict`

Process a conversation with multiple messages.

**Parameters:**
- `messages` (List[Dict]): List of conversation messages
- `**kwargs`: Additional chat parameters

**Returns:**
- `Dict`: Chat response with metadata

**Example:**
```python
messages = [
    {"role": "user", "content": "What's the weather like?"}
]
response = await agent.chat(messages)
```

### Configuration Methods

#### `set_system_prompt(prompt: str)`

Update the agent's system prompt.

**Parameters:**
- `prompt` (str): New system prompt

**Example:**
```python
agent.set_system_prompt("You are a helpful coding assistant.")
```

#### `add_tool(tool: BaseTool)`

Add a tool to the agent's tool manager.

**Parameters:**
- `tool` (BaseTool): Tool instance to add

**Example:**
```python
from spoon_ai.tools import CustomTool

custom_tool = CustomTool()
agent.add_tool(custom_tool)
```

#### `remove_tool(tool_name: str)`

Remove a tool from the agent's tool manager.

**Parameters:**
- `tool_name` (str): Name of the tool to remove

**Example:**
```python
agent.remove_tool("custom_tool")
```

### Information Methods

#### `get_available_tools() -> List[str]`

Get the list of available tool names.

**Returns:**
- `List[str]`: List of tool names

**Example:**
```python
tools = agent.get_available_tools()
print(f"Available tools: {tools}")
```

#### `get_config() -> Dict`

Get the current agent configuration.

**Returns:**
- `Dict`: Agent configuration dictionary

**Example:**
```python
config = agent.get_config()
print(f"Agent config: {config}")
```

## Properties

### `name: str`
Agent's unique identifier (read-only).

### `system_prompt: str`
Current system prompt.

### `llm_manager: LLMManager`
LLM manager instance.

### `tool_manager: ToolManager`
Tool manager instance.

### `config: Dict`
Agent configuration dictionary.

## Events

### `on_message_received(message: str)`
Triggered when the agent receives a message.

### `on_response_generated(response: str)`
Triggered when the agent generates a response.

### `on_tool_executed(tool_name: str, result: Any)`
Triggered when a tool is executed.

### `on_error(error: Exception)`
Triggered when an error occurs.

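To make the event flow concrete, here is a minimal stand-in class (not the real `BaseAgent`) that mimics the callback-attribute pattern these hooks use: handlers are plain callables assigned to the agent and invoked at the corresponding point in the run loop.

```python
# Minimal stand-in illustrating the event-hook pattern.
# This is a sketch, not the SpoonOS implementation.
class EventfulAgent:
    def __init__(self, name):
        self.name = name
        self.on_message_received = None
        self.on_response_generated = None

    def run_sync(self, message):
        if self.on_message_received:
            self.on_message_received(message)
        response = f"echo: {message}"  # a real agent would call the LLM here
        if self.on_response_generated:
            self.on_response_generated(response)
        return response

log = []
agent = EventfulAgent("demo")
agent.on_message_received = lambda m: log.append(("in", m))
agent.on_response_generated = lambda r: log.append(("out", r))
agent.run_sync("hi")
print(log)  # [('in', 'hi'), ('out', 'echo: hi')]
```

Because handlers are simple attributes, leaving one unset (the default `None`) skips that hook entirely.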
## Configuration Schema

```json
{
  "name": "string",
  "system_prompt": "string",
  "config": {
    "max_steps": "integer",
    "temperature": "float",
    "max_tokens": "integer",
    "timeout": "integer"
  },
  "tools": [
    {
      "name": "string",
      "type": "builtin|custom|mcp",
      "enabled": "boolean",
      "config": {}
    }
  ]
}
```

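A concrete configuration conforming to this schema might look like the following (all values, including the tool name `web_search`, are illustrative):

```json
{
  "name": "research_agent",
  "system_prompt": "You are a research assistant.",
  "config": {
    "max_steps": 10,
    "temperature": 0.7,
    "max_tokens": 2000,
    "timeout": 30
  },
  "tools": [
    {
      "name": "web_search",
      "type": "builtin",
      "enabled": true,
      "config": {}
    }
  ]
}
```
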
## Error Handling

### Common Exceptions

#### `AgentError`
Base exception for agent-related errors.

#### `ConfigurationError`
Raised when agent configuration is invalid.

#### `ToolError`
Raised when tool execution fails.

#### `LLMError`
Raised when LLM interaction fails.

### Error Handling Example

```python
from spoon_ai.agents.base import BaseAgent
from spoon_ai.agents.errors import AgentError, ConfigurationError, ToolError

try:
    agent = BaseAgent(name="test_agent")
    response = await agent.run("Hello")
except ConfigurationError as e:
    print(f"Configuration error: {e}")
except ToolError as e:
    print(f"Tool execution error: {e}")
except AgentError as e:
    print(f"Agent error: {e}")
```

## Best Practices

### Initialization
- Always provide a unique name for each agent
- Set appropriate system prompts for your use case
- Configure tools before first use
- Validate configuration before deployment

### Performance
- Reuse agent instances when possible
- Configure appropriate timeouts
- Monitor tool execution times
- Use caching for expensive operations

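For the caching point above, one lightweight option in plain Python (not a SpoonOS API) is `functools.lru_cache`; the function and its return value below are hypothetical placeholders for an expensive lookup:

```python
from functools import lru_cache

@lru_cache(maxsize=128)
def fetch_token_metadata(symbol: str) -> dict:
    # Placeholder for an expensive operation (network call, database query, ...).
    # With lru_cache, repeated calls with the same argument are served from cache.
    return {"symbol": symbol, "decimals": 18}

first = fetch_token_metadata("ETH")
second = fetch_token_metadata("ETH")  # cache hit, no recomputation
print(fetch_token_metadata.cache_info().hits)  # → 1
```

Note that `lru_cache` requires hashable arguments and wraps synchronous functions; for `async` tool calls you would need an async-aware cache instead.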
### Security
- Validate all user inputs
- Sanitize system prompts
- Limit tool permissions
- Monitor agent behavior

### Debugging
- Enable debug logging for troubleshooting
- Use event handlers to monitor agent behavior
- Test with simple inputs first
- Validate tool configurations

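Enabling debug logging is typically a one-liner with Python's standard `logging` module; the logger name `"spoon_ai"` is an assumption here, so check the library's actual logger hierarchy before relying on it:

```python
import logging

# Send log records to the console, then raise verbosity only for the
# (assumed) library logger so other packages stay quiet.
logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger("spoon_ai")
logger.setLevel(logging.DEBUG)

logger.debug("agent initialized")  # visible once DEBUG is enabled
```

Scoping the level to the library logger rather than the root logger keeps third-party debug noise out of your output.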
## Examples

### Basic Agent Setup

```python
from spoon_ai.agents.base import BaseAgent
from spoon_ai.llm import LLMManager
from spoon_ai.tools import ToolManager

# Create LLM manager
llm_manager = LLMManager(
    provider="openai",
    model="gpt-4"
)

# Create tool manager
tool_manager = ToolManager()

# Create agent
agent = BaseAgent(
    name="my_assistant",
    system_prompt="You are a helpful assistant.",
    llm_manager=llm_manager,
    tool_manager=tool_manager
)

# Use agent
response = await agent.run("What can you help me with?")
print(response)
```

### Agent with Custom Configuration

```python
from spoon_ai.agents.base import BaseAgent

agent = BaseAgent(
    name="custom_agent",
    system_prompt="You are a specialized assistant.",
    config={
        "max_steps": 10,
        "temperature": 0.7,
        "max_tokens": 2000,
        "timeout": 30
    }
)

# Add event handlers
def on_message(message):
    print(f"Received: {message}")

def on_response(response):
    print(f"Generated: {response}")

agent.on_message_received = on_message
agent.on_response_generated = on_response

# Use agent
response = await agent.run("Hello, world!")
```

### Multi-turn Conversation

```python
from spoon_ai.agents.base import BaseAgent

agent = BaseAgent(name="conversational_agent")

# Start conversation
messages = [
    {"role": "user", "content": "Hello, I need help with Python."}
]

response1 = await agent.chat(messages)
messages.append({"role": "assistant", "content": response1["content"]})

# Continue conversation
messages.append({"role": "user", "content": "Can you show me a simple example?"})
response2 = await agent.chat(messages)

print(f"Final response: {response2['content']}")
```

## See Also

- [Graph Agent API](./graph-agent.md)
- [ToolCall Agent API](./toolcall-agent.md)
- [Tool Manager API](../tools/base-tool.md)
- [LLM Manager API](../llm/providers.md)
- [Agent Development Guide](../../how-to-guides/build-first-agent.md)
