Commit 6acbbab

📝 modify sdk readme file
2 parents f8a6202 + 0306b77 commit 6acbbab

File tree

11 files changed (+575, -179 lines)

doc/docs/.vitepress/config.mts

Lines changed: 4 additions & 2 deletions

@@ -1,10 +1,10 @@
-import { defineConfig } from 'vitepress'
+import { defineConfig } from 'vitepress'
 
 // https://vitepress.dev/reference/site-config
 export default defineConfig({
   base: '/doc/',
   title: "Nexent Doc",
-  description: "A zero-code platform for auto-generating agents no orchestration, no complex drag-and-drop required.",
+  description: "A zero-code platform for auto-generating agents no orchestration, no complex drag-and-drop required.",
 
   // Add favicon to head
   head: [
@@ -61,6 +61,7 @@ export default defineConfig({
           text: 'Core Modules',
           items: [
             { text: 'Agents', link: '/en/sdk/core/agents' },
+            { text: 'Run agent with agent_run', link: '/en/sdk/core/agent-run' },
             { text: 'Tools', link: '/en/sdk/core/tools' },
             { text: 'Models', link: '/en/sdk/core/models' }
           ]
@@ -180,6 +181,7 @@ export default defineConfig({
           text: '核心模块',
           items: [
             { text: '智能体模块', link: '/zh/sdk/core/agents' },
+            { text: '使用 agent_run 运行智能体', link: '/zh/sdk/core/agent-run' },
             { text: '工具模块', link: '/zh/sdk/core/tools' },
             { text: '模型模块', link: '/zh/sdk/core/models' }
           ]

doc/docs/en/sdk/basic-usage.md

Lines changed: 54 additions & 73 deletions

@@ -37,10 +37,10 @@ The development environment includes the following additional features:
 ### Basic Import
 
 ```python
-import nexent
-from nexent.core import MessageObserver, ProcessType
-from nexent.core.agents import CoreAgent, NexentAgent
-from nexent.core.models import OpenAIModel
+from nexent.core.utils.observer import MessageObserver, ProcessType
+from nexent.core.agents.core_agent import CoreAgent
+from nexent.core.agents.nexent_agent import NexentAgent
+from nexent.core.models.openai_llm import OpenAIModel
 from nexent.core.tools import ExaSearchTool, KnowledgeBaseSearchTool
 ```
 
@@ -64,7 +64,7 @@ model = OpenAIModel(
 ### 🛠️ Adding Tools
 
 ```python
-# Create search tools
+# Create search tool
 search_tool = ExaSearchTool(
     exa_api_key="your-exa-key",
     observer=observer,
@@ -95,46 +95,58 @@ agent = CoreAgent(
 
 ```python
 # Run Agent with your question
-result = agent.run("Your question here")
-
-# Access the final answer
-print(result.final_answer)
+agent.run("Your question here")
 ```
 
-## 🎯 Advanced Usage Patterns
+## 📡 Using agent_run (recommended for streaming)
 
-### 🔧 Custom Tool Integration
+When you need to consume messages as an "event stream" on the server or client, use `agent_run`. It executes the agent in a background thread and continuously yields JSON messages, making it easy to render in UIs and collect logs.
 
-```python
-from nexent.core.tools import BaseTool
-
-class CustomTool(BaseTool):
-    def __init__(self, observer: MessageObserver):
-        super().__init__(observer=observer, name="custom_tool")
-
-    def run(self, input_text: str) -> str:
-        # Your custom tool logic here
-        return f"Processed: {input_text}"
-
-# Add custom tool to agent
-custom_tool = CustomTool(observer=observer)
-agent.tools.append(custom_tool)
-```
+Reference: [Run agent with agent_run](./core/agent-run)
 
-### 📡 Streaming Output Processing
+Minimal example:
 
 ```python
-# Monitor streaming output
-def handle_stream(message: str, process_type: ProcessType):
-    if process_type == ProcessType.MODEL_OUTPUT_THINKING:
-        print(f"🤔 Thinking: {message}")
-    elif process_type == ProcessType.EXECUTION_LOGS:
-        print(f"⚙️ Executing: {message}")
-    elif process_type == ProcessType.FINAL_ANSWER:
-        print(f"✅ Answer: {message}")
-
-# Set observer with custom handler
-observer.set_message_handler(handle_stream)
+import json
+import asyncio
+from threading import Event
+
+from nexent.core.agents.run_agent import agent_run
+from nexent.core.agents.agent_model import AgentRunInfo, AgentConfig, ModelConfig
+from nexent.core.utils.observer import MessageObserver
+
+async def main():
+    observer = MessageObserver(lang="en")
+    stop_event = Event()
+
+    model_config = ModelConfig(
+        cite_name="gpt-4",
+        api_key="<YOUR_API_KEY>",
+        model_name="Qwen/Qwen2.5-32B-Instruct",
+        url="https://api.siliconflow.cn/v1",
+    )
+
+    agent_config = AgentConfig(
+        name="example_agent",
+        description="An example agent",
+        tools=[],
+        max_steps=5,
+        model_name="gpt-4",
+    )
+
+    agent_run_info = AgentRunInfo(
+        query="How many letter r are in strrawberry?",
+        model_config_list=[model_config],
+        observer=observer,
+        agent_config=agent_config,
+        stop_event=stop_event
+    )
+
+    async for message in agent_run(agent_run_info):
+        message_data = json.loads(message)
+        print(message_data)
+
+asyncio.run(main())
 ```
 
 ## 🔧 Configuration Options
@@ -148,8 +160,6 @@ agent = CoreAgent(
     model=model,
     name="my_agent",
     max_steps=10,  # Maximum execution steps
-    temperature=0.7,  # Model creativity level
-    system_prompt="You are a helpful AI assistant."  # Custom system prompt
 )
 ```
 
@@ -161,41 +171,12 @@ search_tool = ExaSearchTool(
     exa_api_key="your-exa-key",
     observer=observer,
     max_results=10,  # Number of search results
-    search_type="neural",  # Search type: neural, keyword, etc.
-    include_domains=["example.com"],  # Limit search to specific domains
-    exclude_domains=["spam.com"]  # Exclude specific domains
-)
-```
-
-## 📊 Error Handling
-
-### 🛡️ Graceful Error Recovery
-
-```python
-try:
-    result = agent.run("Your question")
-    print(f"Success: {result.final_answer}")
-except Exception as e:
-    print(f"Error occurred: {e}")
-    # Handle error appropriately
-```
-
-### 🔧 Tool Error Handling
-
-```python
-# Tools automatically handle errors and provide fallback options
-search_tool = ExaSearchTool(
-    exa_api_key="your-exa-key",
-    observer=observer,
-    max_results=5,
-    fallback_to_keyword=True  # Fallback to keyword search if neural search fails
 )
 ```
 
 ## 📚 More Resources
 
-For more advanced usage patterns and detailed API documentation, please refer to:
-
-- **[Tool Development Guide](./core/tools)** - Detailed tool development standards and examples
-- **[Model Architecture Guide](./core/models)** - Model integration and usage documentation
-- **[Agents](./core/agents)** - Best practices and advanced patterns for agent development
+- **[Run agent with agent_run](./core/agent-run)**
+- **[Tool Development Guide](./core/tools)**
+- **[Model Architecture Guide](./core/models)**
+- **[Agents](./core/agents)**
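The minimal example in the diff above prints each parsed message dict as-is. A small dispatcher over the documented message fields (`type`, `content`) makes the stream readable; the sketch below is self-contained, the sample messages are invented for illustration, and it assumes the `ProcessType` names arrive as uppercase strings as listed on the new agent-run page:

```python
import json

def handle_message(raw: str) -> str:
    """Route one JSON message from the agent_run stream to a display line."""
    data = json.loads(raw)
    msg_type = data.get("type", "unknown")
    content = data.get("content", "")
    # Pretty labels for the common ProcessType values; unknown types fall
    # through and are shown with their raw type name.
    prefixes = {
        "MODEL_OUTPUT_THINKING": "🤔 Thinking",
        "EXECUTION_LOGS": "⚙️ Executing",
        "FINAL_ANSWER": "✅ Answer",
        "ERROR": "❌ Error",
    }
    label = prefixes.get(msg_type, msg_type)
    return f"[{label}] {content}"

# Sample messages in the documented shape (invented for illustration)
stream = [
    json.dumps({"type": "EXECUTION_LOGS", "content": "counting letters"}),
    json.dumps({"type": "FINAL_ANSWER", "content": "3"}),
]
for raw in stream:
    print(handle_message(raw))
```

In a real consumer this `handle_message` would sit inside the `async for message in agent_run(...)` loop in place of the bare `print`.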

doc/docs/en/sdk/core/agent-run.md

Lines changed: 166 additions & 0 deletions

@@ -0,0 +1,166 @@
+# Run agent with agent_run (Streaming)
+
+`agent_run` provides a concise and thread-friendly way to run an agent while exposing real-time streaming output via `MessageObserver`. It is ideal for server-side or frontend event stream rendering, as well as MCP tool integration scenarios.
+
+## Quick Start
+
+```python
+import json
+import asyncio
+import logging
+from threading import Event
+
+from nexent.core.agents.run_agent import agent_run
+from nexent.core.agents.agent_model import (
+    AgentRunInfo,
+    AgentConfig,
+    ModelConfig
+)
+from nexent.core.utils.observer import MessageObserver
+
+
+async def main():
+    # 1) Create message observer (for receiving streaming messages)
+    observer = MessageObserver(lang="en")
+
+    # 2) External stop flag (useful to interrupt from UI)
+    stop_event = Event()
+
+    # 3) Configure model
+    model_config = ModelConfig(
+        cite_name="gpt-4",  # Model alias (custom, referenced by AgentConfig)
+        api_key="<YOUR_API_KEY>",
+        model_name="Qwen/Qwen2.5-32B-Instruct",
+        url="https://api.siliconflow.cn/v1",
+        temperature=0.3,
+        top_p=0.9
+    )
+
+    # 4) Configure Agent
+    agent_config = AgentConfig(
+        name="example_agent",
+        description="An example agent that can execute Python code and search the web",
+        prompt_templates=None,
+        tools=[],
+        max_steps=5,
+        model_name="gpt-4",  # Corresponds to model_config.cite_name
+        provide_run_summary=False,
+        managed_agents=[]
+    )
+
+    # 5) Assemble run info
+    agent_run_info = AgentRunInfo(
+        query="How many letter r are in strrawberry?",  # Example question
+        model_config_list=[model_config],
+        observer=observer,
+        agent_config=agent_config,
+        mcp_host=None,  # Optional: MCP service addresses
+        history=None,  # Optional: chat history
+        stop_event=stop_event
+    )
+
+    # 6) Run with streaming and consume messages
+    async for message in agent_run(agent_run_info):
+        message_data = json.loads(message)
+        message_type = message_data.get("type", "unknown")
+        content = message_data.get("content", "")
+        print(f"[{message_type}] {content}")
+
+    # 7) Read final answer (if any)
+    final_answer = observer.get_final_answer()
+    if final_answer:
+        print(f"\nFinal Answer: {final_answer}")
+
+
+if __name__ == "__main__":
+    logging.disable(logging.CRITICAL)
+    asyncio.run(main())
+```
+
+Tip: Store sensitive config such as `api_key` in environment variables or a secrets manager, not in code.
+
+## Message Stream Format and Handling
+
+Internally, `agent_run` executes the agent in a background thread and continuously yields JSON strings from the `MessageObserver` message buffer. You can parse these fields for categorized display or logging.
+
+- Important fields
+  - `type`: message type (corresponds to `ProcessType`)
+  - `content`: text content
+  - `agent_name`: optional, which agent produced this message
+
+Common `type` values (from `ProcessType`):
+- `AGENT_NEW_RUN`: new task started
+- `STEP_COUNT`: step updates
+- `MODEL_OUTPUT_THINKING` / `MODEL_OUTPUT_CODE`: model thinking/code snippets
+- `PARSE`: code parsing results
+- `EXECUTION_LOGS`: Python execution logs
+- `FINAL_ANSWER`: final answer
+- `ERROR`: error information
+
+## Configuration Reference
+
+### ModelConfig
+
+- `cite_name`: model alias (referenced by `AgentConfig.model_name`)
+- `api_key`: model service API key
+- `model_name`: model invocation name
+- `url`: base URL of the model service
+- `temperature` / `top_p`: sampling params
+
+### AgentConfig
+
+- `name`: agent name
+- `description`: agent description
+- `prompt_templates`: optional, Jinja template dict
+- `tools`: tool configuration list (see ToolConfig)
+- `max_steps`: maximum steps
+- `model_name`: model alias (corresponds to `ModelConfig.cite_name`)
+- `provide_run_summary`: whether sub-agents provide a run summary
+- `managed_agents`: list of sub-agent configurations
+
+### Pass Chat History (optional)
+
+You can pass historical messages via `AgentRunInfo.history`, and Nexent will write them into internal memory:
+
+```python
+from nexent.core.agents.agent_model import AgentHistory
+
+history = [
+    AgentHistory(role="user", content="Hi"),
+    AgentHistory(role="assistant", content="Hello, how can I help you?"),
+]
+
+agent_run_info = AgentRunInfo(
+    # ... other fields omitted
+    history=history,
+)
+```
+
+## MCP Tool Integration (optional)
+
+If you provide `mcp_host` (a list of MCP service addresses), Nexent will automatically pull remote tools through `ToolCollection.from_mcp` and inject them into the agent:
+
+```python
+agent_run_info = AgentRunInfo(
+    # ... other fields omitted
+    mcp_host=["http://localhost:3000"],
+)
+```
+
+Friendly error messages (EN/ZH) will be produced if the connection fails.
+
+## Interrupt Execution
+
+During execution, you can trigger interruption via `stop_event.set()`:
+
+```python
+stop_event.set()  # The agent will gracefully stop after the current step completes
+```
+
+## Relation to CoreAgent
+
+- `agent_run` is a wrapper over `NexentAgent` and `CoreAgent`, responsible for:
+  - Constructing `CoreAgent` (including models and tools)
+  - Injecting history into memory
+  - Driving streaming execution and forwarding buffered messages from `MessageObserver`
+- You can also use `CoreAgent.run(stream=True)` directly to handle streaming yourself (see `core/agents.md`); `agent_run` provides a more convenient threaded, JSON-message oriented interface.

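The new agent-run page describes `agent_run` as running the agent in a background thread while yielding JSON strings from a message buffer. That thread-to-async bridge can be sketched in a self-contained form; the worker function and message shapes below are stand-ins for illustration, not Nexent's actual internals:

```python
import asyncio
import json
import queue
import threading

def run_agent_in_thread(out: "queue.Queue[str | None]") -> None:
    """Stand-in for the agent loop: push JSON messages, then a None sentinel."""
    for step in range(3):
        out.put(json.dumps({"type": "STEP_COUNT", "content": f"step {step + 1}"}))
    out.put(json.dumps({"type": "FINAL_ANSWER", "content": "done"}))
    out.put(None)  # Sentinel: tells the consumer the stream is finished

async def message_stream():
    """Yield worker-thread messages without blocking the event loop."""
    buf: "queue.Queue[str | None]" = queue.Queue()
    threading.Thread(target=run_agent_in_thread, args=(buf,), daemon=True).start()
    loop = asyncio.get_running_loop()
    while True:
        # queue.Queue.get blocks, so run it in the default executor
        msg = await loop.run_in_executor(None, buf.get)
        if msg is None:
            break
        yield msg

async def main():
    async for raw in message_stream():
        print(json.loads(raw)["type"])

asyncio.run(main())
```

A `stop_event` in this pattern would simply be checked inside the worker loop between steps, which is consistent with the page's note that the agent stops gracefully after the current step completes.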