
Commit 9935daa: README 0.1.8
1 parent b20c074

File tree: 5 files changed, +160 -518 lines


CHANGELOG.md

Lines changed: 11 additions & 6 deletions
@@ -1,9 +1,14 @@
-
 0.1.8
-- Tool Parser Base Class
-- Tool Executor Base Class
-- Execute_tools_on_stream – execute tools while the response is streaming
-- Docstring docs
+- Added base processor classes for extensible tool handling:
+  - ToolParserBase: Abstract base class for parsing LLM responses
+  - ToolExecutorBase: Abstract base class for tool execution strategies
+  - ResultsAdderBase: Abstract base class for managing results
+- Added dual support for OpenAPI and XML tool calling patterns:
+  - XML schema decorator for XML-based tool definitions
+  - XML-specific processors for parsing and execution
+  - Standard processors for OpenAPI function calling
+- Enhanced streaming capabilities:
+  - execute_tools_on_stream: Execute tools in real-time during streaming
 
 0.1.7
-- Streaming Responses with Tool Calls
+- v1 streaming responses

README.md

Lines changed: 149 additions & 57 deletions
@@ -2,10 +2,47 @@
 
 AgentPress is a collection of _simple, but powerful_ utilities that serve as building blocks for creating AI agents. *Plug, play, and customize.*
 
-- **Threads**: Simple message thread handling utilities
-- **Tools**: Flexible tool definition and automatic execution
-- **State Management**: Simple JSON key-value state management
+## How It Works
+
+Each AI agent iteration follows a clear, modular flow:
+
+1. **Message & LLM Handling**
+   - Messages are managed in threads via `ThreadManager`
+   - LLM API calls are made through a unified interface (`llm.py`)
+   - Supports streaming responses for real-time interaction
+
+2. **Response Processing**
+   - LLM returns both content and tool calls
+   - Content is streamed in real-time
+   - Tool calls are parsed using either:
+     - Standard OpenAPI function calling
+     - XML-based tool definitions
+     - Custom parsers (extend `ToolParserBase`)
+
+3. **Tool Execution**
+   - Tools are executed either:
+     - In real-time during streaming (`execute_tools_on_stream`)
+     - After complete response
+     - In parallel or sequential order
+   - Supports both standard and XML tool formats
+   - Extensible through `ToolExecutorBase`
+
+4. **Results Management**
+   - Results from both content and tool executions are handled
+   - Supports different result formats (standard/XML)
+   - Customizable through `ResultsAdderBase`
+
+This modular architecture allows you to:
+- Use standard OpenAPI function calling
+- Switch to XML-based tool definitions
+- Create custom processors by extending base classes
+- Mix and match different approaches
+
+- **Threads**: Simple message thread handling utilities with streaming support
+- **Tools**: Flexible tool definition with both OpenAPI and XML formats
+- **State Management**: Thread-safe JSON key-value state management
 - **LLM Integration**: Provider-agnostic LLM calls via LiteLLM
+- **Response Processing**: Support for both standard and XML-based tool calling
 
 ## Installation & Setup
 
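The "Custom parsers (extend `ToolParserBase`)" bullet above names the base class, but this diff never shows its interface. Below is a minimal, hypothetical sketch of such a parser, assuming an async `parse_response` hook and an `agentpress.base_processors` import path; neither name is confirmed by this commit, and the real base class likely also requires a streaming counterpart, so check the generated `agentpress/` sources before copying it.

```python
# Hypothetical sketch: the import path and method name are assumptions,
# not confirmed by this commit; verify against the generated agentpress/ files.
import re

from agentpress.base_processors import ToolParserBase  # assumed module path


class SlashCommandParser(ToolParserBase):
    """Toy parser that treats lines like '/add 2 2' in the LLM output as tool calls."""

    async def parse_response(self, response) -> dict:  # assumed abstract hook
        content = response.choices[0].message.content or ""
        tool_calls = [
            {"name": name, "arguments": raw_args.split()}
            for name, raw_args in re.findall(r"^/(\w+)\s*(.*)$", content, re.MULTILINE)
        ]
        # Assumed return shape: an assistant message dict plus extracted tool calls.
        return {"role": "assistant", "content": content, "tool_calls": tool_calls}
```

A parser along these lines would slot into the same Response Processing step as the standard OpenAPI and XML parsers described above.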
@@ -19,7 +56,7 @@ pip install agentpress
 agentpress init
 ```
 Creates a `agentpress` directory with all the core utilities.
-Check out [File Overview](#file-overview) for explanations of the generated util files.
+Check out [File Overview](#file-overview) for explanations of the generated files.
 
 3. If you selected the example agent during initialization:
    - Creates an `agent.py` file with a web development agent example
@@ -31,24 +68,31 @@ Check out [File Overview](#file-overview) for explanations of the generated util
 
 ## Quick Start
 
-1. Set up your environment variables (API keys, etc.) in a `.env` file.
-   - OPENAI_API_KEY, ANTHROPIC_API_KEY, GROQ_API_KEY, etc... Whatever LLM you want to use, we use LiteLLM (https://litellm.ai) (Call 100+ LLMs using the OpenAI Input/Output Format) – set it up in your `.env` file.. Also check out the agentpress/llm.py and modify as needed to support your wanted LLM.
+1. Set up your environment variables in a `.env` file:
+```bash
+OPENAI_API_KEY=your_key_here
+ANTHROPIC_API_KEY=your_key_here
+GROQ_API_KEY=your_key_here
+```
 
-2. Create a calculator_tool.py
+2. Create a calculator tool with OpenAPI schema:
 ```python
-from agentpress.tool import Tool, ToolResult, tool_schema
+from agentpress.tool import Tool, ToolResult, openapi_schema
 
 class CalculatorTool(Tool):
-    @tool_schema({
-        "name": "add",
-        "description": "Add two numbers",
-        "parameters": {
-            "type": "object",
-            "properties": {
-                "a": {"type": "number"},
-                "b": {"type": "number"}
-            },
-            "required": ["a", "b"]
+    @openapi_schema({
+        "type": "function",
+        "function": {
+            "name": "add",
+            "description": "Add two numbers",
+            "parameters": {
+                "type": "object",
+                "properties": {
+                    "a": {"type": "number"},
+                    "b": {"type": "number"}
+                },
+                "required": ["a", "b"]
+            }
         }
     })
     async def add(self, a: float, b: float) -> ToolResult:
@@ -59,7 +103,29 @@ class CalculatorTool(Tool):
             return self.fail_response(f"Failed to add numbers: {str(e)}")
 ```
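The hunk above shows only the failure branch of `add`; the success path sits in unchanged lines that the diff omits. For orientation, the full method plausibly reads as below; `self.success_response` is inferred by analogy with the `self.fail_response` call shown here and is not confirmed by this commit.

```python
    async def add(self, a: float, b: float) -> ToolResult:
        try:
            result = a + b
            # success_response is assumed by analogy with fail_response above;
            # verify the helper name against the generated agentpress/tool.py.
            return self.success_response(f"The sum is {result}")
        except Exception as e:
            return self.fail_response(f"Failed to add numbers: {str(e)}")
```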
 
-3. Use the Thread Manager, create a new thread – or access an existing one. Then Add the Calculator Tool, and run the thread. It will automatically use & execute the python function associated with the tool:
+3. Or create a tool with XML schema:
+```python
+from agentpress.tool import Tool, ToolResult, xml_schema
+
+class FilesTool(Tool):
+    @xml_schema(
+        tag_name="create-file",
+        mappings=[
+            {"param_name": "file_path", "node_type": "attribute", "path": "."},
+            {"param_name": "file_contents", "node_type": "content", "path": "."}
+        ],
+        example='''
+        <create-file file_path="path/to/file">
+            File contents go here
+        </create-file>
+        '''
+    )
+    async def create_file(self, file_path: str, file_contents: str) -> ToolResult:
+        # Implementation here
+        pass
+```
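The `create_file` body above is left as a stub in the README. A minimal sketch of one possible implementation, reusing the success/fail response helpers from the calculator example; the directory handling and messages are illustrative, not part of the commit.

```python
import os

# Possible body for FilesTool.create_file above; success_response/fail_response
# follow the pattern used in the calculator example earlier in this README.
async def create_file(self, file_path: str, file_contents: str) -> ToolResult:
    try:
        directory = os.path.dirname(file_path)
        if directory:
            os.makedirs(directory, exist_ok=True)  # create parent folders if needed
        with open(file_path, "w") as f:
            f.write(file_contents)
        return self.success_response(f"Created {file_path}")
    except Exception as e:
        return self.fail_response(f"Failed to create {file_path}: {str(e)}")
```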
+
+4. Use the Thread Manager with streaming and tool execution:
 ```python
 import asyncio
 from agentpress.thread_manager import ThreadManager
@@ -71,67 +137,93 @@ async def main():
     manager.add_tool(CalculatorTool)
 
     # Create a new thread
-    # Alternatively, you could use an existing thread_id like:
-    # thread_id = "existing-thread-uuid"
     thread_id = await manager.create_thread()
 
-    # Add your custom logic here
+    # Add your message
     await manager.add_message(thread_id, {
         "role": "user",
         "content": "What's 2 + 2?"
    })
 
+    # Run with streaming and tool execution
     response = await manager.run_thread(
         thread_id=thread_id,
         system_message={
             "role": "system",
             "content": "You are a helpful assistant with calculation abilities."
         },
-        model_name="gpt-4",
-        use_tools=True,
-        execute_tool_calls=True
+        model_name="anthropic/claude-3-5-sonnet-latest",
+        stream=True,
+        native_tool_calling=True,
+        execute_tools=True,
+        execute_tools_on_stream=True
     )
-    print("Response:", response)
+
+    # Handle streaming response
+    if isinstance(response, AsyncGenerator):
+        async for chunk in response:
+            if hasattr(chunk.choices[0], 'delta'):
+                delta = chunk.choices[0].delta
+                if hasattr(delta, 'content') and delta.content:
+                    print(delta.content, end='', flush=True)
 
 asyncio.run(main())
 ```
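One caveat if the step 4 snippet is run as-is: `AsyncGenerator` and `CalculatorTool` are referenced but never imported. Imports along these lines at the top of the script should make it self-contained; the `calculator_tool` module name assumes the step 2 class was saved in `calculator_tool.py`.

```python
from collections.abc import AsyncGenerator  # bare ABC works with the isinstance() check in step 4

from calculator_tool import CalculatorTool  # assumes step 2 was saved as calculator_tool.py
```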
 
-4. Autonomous Web Developer Agent (the standard example)
-
-When you run `agentpress init` and select the example agent – you will get code for a simple implementation of an AI Web Developer Agent that leverages architecture similar to platforms like our own [Softgen](https://softgen.ai/) Platform.
-
-- **Files Tool**: Allows the agent to create, read, update, and delete files within the workspace.
-- **Terminal Tool**: Enables the agent to execute terminal commands.
-- **State Workspace Management**: The agent has access to a workspace whose state is stored and sent on every request. This state includes all file contents, ensuring the agent knows what it is editing.
-- **User Interaction via CLI**: After each action, the agent pauses and allows the user to provide further instructions through the CLI.
-
-You can find the complete implementation in our [example-agent](agentpress/examples/example-agent/agent.py) directory.
-
-5. Thread Viewer
-
-Run the thread viewer to view messages of threads in a stylised web UI:
+5. View conversation threads in a web UI:
 ```bash
 streamlit run agentpress/thread_viewer_ui.py
 ```
 
-
 ## File Overview
 
-### agentpress/llm.py
-Core LLM API interface using LiteLLM. Supports 100+ LLMs using the OpenAI Input/Output Format. Easy to extend for custom model configurations and API endpoints. `make_llm_api_call()` can be imported to make LLM calls.
-
-### agentpress/thread_manager.py
-Orchestrates conversations between users, LLMs, and tools. Manages message history and automatically handles tool execution when LLMs request them. Tools registered here become available for LLM function calls.
-
-### agentpress/tool.py
-Base infrastructure for LLM-compatible tools. Inherit from `Tool` class and use `@tool_schema` decorator to create tools that are automatically registered for LLM function calling. Returns standardized `ToolResult` responses.
-
-### agentpress/tool_registry.py
-Central registry for tool management. Keeps track of available tools and their schemas, allowing selective function registration. Works with `thread_manager.py` to expose tools to LLMs.
-
-### agentpress/state_manager.py
-Simple key-value based state persistence using JSON files. For maintaining environment state, settings, or other persistent data.
-
+### Core Components
+
+#### agentpress/llm.py
+LLM API interface using LiteLLM. Supports 100+ LLMs with OpenAI-compatible format. Includes streaming, retry logic, and error handling.
+
+#### agentpress/thread_manager.py
+Manages conversation threads with support for:
+- Message history management
+- Tool registration and execution
+- Streaming responses
+- Both OpenAPI and XML tool calling patterns
+
+#### agentpress/tool.py
+Base infrastructure for tools with:
+- OpenAPI schema decorator for standard function calling
+- XML schema decorator for XML-based tool calls
+- Standardized ToolResult responses
+
+#### agentpress/tool_registry.py
+Central registry for tool management:
+- Registers both OpenAPI and XML tools
+- Maintains tool schemas and implementations
+- Provides tool lookup and validation
+
+#### agentpress/state_manager.py
+Thread-safe state persistence:
+- JSON-based key-value storage
+- Atomic operations with locking
+- Automatic file handling
+
+### Response Processing
+
+#### agentpress/llm_response_processor.py
+Handles LLM response processing with support for:
+- Streaming and complete responses
+- Tool call extraction and execution
+- Result formatting and message management
+
+#### Standard Processing
+- `standard_tool_parser.py`: Parses OpenAPI function calls
+- `standard_tool_executor.py`: Executes standard tool calls
+- `standard_results_adder.py`: Manages standard results
+
+#### XML Processing
+- `xml_tool_parser.py`: Parses XML-formatted tool calls
+- `xml_tool_executor.py`: Executes XML tool calls
+- `xml_results_adder.py`: Manages XML results
 
 ## Philosophy
 - **Plug & Play**: Start with our defaults, then customize to your needs.
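To make the `state_manager.py` entry in the File Overview above concrete: the sketch below assumes a `StateManager` class with async `set`/`get` methods over a JSON store file. Those names are inferred from the "JSON-based key-value storage" description, not from this diff, so verify them against the generated `agentpress/state_manager.py` before relying on them.

```python
# Hypothetical usage sketch; constructor and method names are assumptions.
import asyncio

from agentpress.state_manager import StateManager  # assumed class name


async def main():
    state = StateManager("state.json")            # assumed: path of the JSON store
    await state.set("workspace", {"files": []})   # assumed async key-value setter
    workspace = await state.get("workspace")      # assumed async getter
    print(workspace)


asyncio.run(main())
```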
@@ -160,7 +252,7 @@ pip install poetry
 poetry install
 ```
 
-3. For quick testing, you can install directly from the current directory:
+3. For quick testing:
 ```bash
 pip install -e .
 ```

agentpress/agents/simple_web_dev/workspace/index.html

Lines changed: 0 additions & 94 deletions
This file was deleted.
