Commit bc7ae04

added session1
1 parent 59f81a8 commit bc7ae04

7 files changed: +400 -0 lines changed
Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
OPENAI_API_KEY=xxxxx
OPENAI_API_BASE=https://ark.cn-beijing.volces.com/api/v3
MODEL_NAME=deepseek-v3-1-terminus
Lines changed: 150 additions & 0 deletions
@@ -0,0 +1,150 @@
# Session 1 - Adapting a Standard LangChain Agent for AgentKit

## Overview

This workshop demonstrates how to take a standard agent built with LangChain, apply a lightweight adaptation with the AgentKit Python SDK, and deploy it to the Volcengine AgentKit platform with a single command.

### Agent Adaptation Guide

We compare `langchain_agent.py` (the native implementation) with `agent.py` (the AgentKit-adapted version). The adaptation requires only the following **3** minimal changes:

1. **Import the SDK and initialize the app**
   ```python
   # Import the AgentKit SDK
   from agentkit.apps import AgentkitSimpleApp

   # Initialize the application instance
   app = AgentkitSimpleApp()
   ```

2. **Mark the entrypoint function**
   Use the `@app.entrypoint` decorator to mark your main logic function.
   ```python
   @app.entrypoint
   async def run(payload: dict, headers: dict):
       # Your business logic...
   ```

3. **Return results via the standard protocol**
   Replace output that was printed directly to the console with `yield`ed event data in standard JSON format.
   ```python
   # Native LangChain: print(chunk)
   # AgentKit adaptation:
   yield json.dumps(event_data)
   ```

These changes are non-invasive: your existing chain definitions, tool definitions, and prompt logic require no modification at all.
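
Putting the three changes together, a minimal adapted file looks roughly like the sketch below (the AgentKit calls mirror `agent.py` in this commit; the echo logic is a placeholder standing in for your real LangChain agent):

```python
import json

from agentkit.apps import AgentkitSimpleApp  # change 1: import the SDK

app = AgentkitSimpleApp()  # change 1: initialize the application instance


@app.entrypoint  # change 2: mark the entrypoint function
async def run(payload: dict, headers: dict):
    prompt = payload.get("prompt", "")
    # ... invoke your existing LangChain agent here ...
    event_data = {"content": {"parts": [{"text": f"echo: {prompt}"}]}}
    yield json.dumps(event_data)  # change 3: yield standard JSON events


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)  # same host/port as agent.py
```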

## Core Features

1. **Build a LangChain Agent**: build a ReAct agent with tool-calling capability using the LangChain 1.0 standard paradigm.
2. **Fast AgentKit adaptation**: turn the local agent into a production-grade microservice via the SDK, without modifying the core chain logic.
3. **One-command cloud deployment**: use the AgentKit CLI to package the code, build the image, and sync environment variables automatically.

## Agent Capabilities

The agent provides the following basic capabilities:

- **Automated reasoning**: based on the ReAct paradigm, it analyzes the user's question and plans the order of tool calls.
- **Tool calling**
  - `get_word_length`: computes the length of a word.
  - `add_numbers`: adds two numbers.
- **Streaming responses**: supports the standard SSE protocol and streams the reasoning process and final result in real time (see the example event after this list).
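
Each streamed event is an ADK-style payload; the shape below is taken from the entrypoint in `agent.py`, with an illustrative text value:

```python
# One streamed event as yielded by the entrypoint (the text value is illustrative)
event_data = {
    "content": {
        "parts": [
            {"text": "The word 'AgentKit' has 8 letters, and 10 + 20 = 30."}
        ]
    }
}
```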

## Directory Layout

```bash
session1/
├── agent.py             # The adapted AgentKit application (core file)
├── langchain_agent.py   # The original native LangChain script (for comparison)
├── local_client.py      # Local streaming test client
├── agentkit.yaml        # Deployment configuration
├── .env                 # Environment variable file (synced automatically on deployment)
└── README.md            # This document
```

## Running Locally

### Prerequisites

1. **Install dependencies**
   ```bash
   uv sync
   source .venv/bin/activate
   ```

2. **Configure environment variables** (a quick sanity check follows after this list)
   ```bash
   cp .env.sample .env
   # Edit .env and fill in the required values such as OPENAI_API_KEY
   ```
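
Before starting the agent, you can verify that the required variables are actually loaded. This is a minimal sketch; the file name `check_env.py` is only a suggestion, and the variable names follow `.env.sample` in this commit:

```python
# check_env.py - quick sanity check for the required environment variables
# (a minimal sketch; variable names follow .env.sample in this commit)
import os

from dotenv import load_dotenv

load_dotenv()  # reads .env from the current directory

for name in ("OPENAI_API_KEY", "OPENAI_API_BASE", "MODEL_NAME"):
    value = os.getenv(name)
    print(f"{name}: {'set' if value else 'MISSING'}")
```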
80+
81+
### 调试方法
82+
83+
**方式一:运行原生脚本** (验证 Agent 逻辑)
84+
```bash
85+
uv run langchain_agent.py
86+
```
87+
88+
**方式二:运行 AgentKit 服务** (模拟生产环境)
89+
```bash
90+
# 启动服务 (监听 8000 端口)
91+
uv run agent.py
92+
93+
# 在新终端运行客户端测试
94+
uv run local_client.py
95+
```

## Deploying with AgentKit

Deployment is fully automated and syncs your `.env` environment variables automatically.

### 1. Initialize the configuration

```bash
agentkit config
```

This command walks you through selecting a project space, image repository, and related settings, and generates `agentkit.yaml`.

### 2. Launch the deployment

```bash
agentkit launch
```

> **Important**: `agentkit launch` automatically reads the `.env` file in your local project root and injects all of its environment variables into the cloud Runtime environment. You do **not** need to configure sensitive values such as `OPENAI_API_KEY` or `MODEL_NAME` in the console by hand; the CLI handles the environment sync and keeps the cloud runtime identical to your local setup.

### 3. Test online

Once deployment completes, you can invoke the cloud agent directly from the CLI:

```bash
# <URL> is the service endpoint printed by the launch command
agentkit invoke --url <URL> 'Hello, can you calculate 10 + 20 and tell me the length of the word "AgentKit"?'
```

## Example Prompts

- "What is the length of the word 'Volcengine'?"
- "Calculate 123 + 456."
- "Hello, can you calculate 10 + 20 and tell me the length of the word 'AgentKit'?"
- "Tell me a fun fact about Python." (This agent has no general knowledge; it will try its tools or decline to answer.)

## What to Expect

- **Local script**: the terminal prints the ReAct reasoning chain directly, showing the agent's thought process and final answer.
- **HTTP service**: the client receives SSE streaming events containing detailed status information such as `on_llm_chunk` (LLM reasoning), `on_tool_start` (tool call started), and `on_tool_end` (tool call finished), providing a rich interactive experience.

## FAQ

- **Q: Why do I get an invalid API key error?**
  - A: Make sure the `OPENAI_API_KEY` (or your model provider's API key) in `.env` is correct, and that `OPENAI_API_BASE` matches the provider you are using (e.g. Volcengine Ark or OpenAI).

- **Q: Environment variables did not take effect on deployment?**
  - A: Make sure the `.env` file sits in the project root directory from which you run `agentkit launch`. The CLI finds and syncs that file automatically.

- **Q: The agent cannot answer general-knowledge questions?**
  - A: This sample agent mainly demonstrates tool calling and does not include a general knowledge base. To answer general-knowledge questions, extend the agent's tool set (see the sketch after this list) or connect it to a knowledge base.
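
To illustrate, extending the tool set only means defining another `@tool` function and adding it to the `tools` list passed to `create_agent` in `agent.py`. The tool below is a hypothetical example, not part of this commit:

```python
from langchain_core.tools import tool

@tool
def reverse_word(word: str) -> str:
    """Returns the word spelled backwards."""  # hypothetical example tool
    return word[::-1]

# Then include it when creating the agent, e.g.:
# agent = create_agent(model=..., tools=[get_word_length, add_numbers, reverse_word], ...)
```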

## License

This project is licensed under the Apache 2.0 License.
Lines changed: 93 additions & 0 deletions
@@ -0,0 +1,93 @@
import os
from langchain_openai import ChatOpenAI
from langchain.agents import create_agent
from langchain_core.tools import tool
from dotenv import load_dotenv
from agentkit.apps import AgentkitSimpleApp
import logging
import asyncio
import json


# Load environment variables (especially OPENAI_API_KEY)
load_dotenv()
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

# 1. Define tools
@tool
def get_word_length(word: str) -> int:
    """Returns the length of a word."""
    return len(word)

@tool
def add_numbers(a: int, b: int) -> int:
    """Adds two numbers together."""
    return a + b

# Create the agent
# Fix: Ensure environment variables are mapped to the correct arguments
agent = create_agent(
    model=ChatOpenAI(
        model=os.getenv("MODEL_NAME", "gpt-4o-mini"),
        api_key=os.getenv("OPENAI_API_KEY"),
        base_url=os.getenv("OPENAI_API_BASE", "https://api.openai.com/v1"),
        temperature=0
    ),
    tools=[get_word_length, add_numbers],
    system_prompt="You are a helpful assistant. You have access to tools to help answer questions."
)


app = AgentkitSimpleApp()
@app.entrypoint
async def run(payload: dict, headers: dict):
    prompt = payload.get("prompt")
    user_id = headers.get("user_id")
    session_id = headers.get("session_id")

    # Default values if still missing
    user_id = user_id or "default_user"
    session_id = session_id or "default_session"

    logger.info(
        f"Running agent with prompt: {prompt}, user_id: {user_id}, session_id: {session_id}"
    )

    inputs = {"messages": [{"role": "user", "content": prompt}]}

    # stream returns an iterator of updates
    # To get the final result, we can just iterate or use invoke
    async for chunk in agent.astream(inputs, stream_mode="updates"):
        # chunk is a dict with node names as keys and state updates as values
        for node, state in chunk.items():
            logger.debug(f"--- Node: {node} ---")

            if "messages" in state:
                last_msg = state["messages"][-1]

                for block in last_msg.content_blocks:
                    # The returned event_data must follow the ADK event spec:
                    # https://google.github.io/adk-docs/events/#identifying-event-origin-and-type
                    event_data = {
                        "content": {
                            "parts": [
                                {"text": block.get("text") or block.get("reasoning")}
                            ]
                        }
                    }
                    yield json.dumps(event_data)

@app.ping
def ping() -> str:
    return "pong!"

async def local_test():
    """Helper to run the agent locally without server"""
    print("Running local test...")
    query = "What is the length of the word 'LangChain' and what is that length plus 5?"
    print(f"Query: {query}")
    async for event in run({"prompt": query}, {"user_id": "1", "session_id": "1"}):
        print(f"Received event: {event}")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
Lines changed: 85 additions & 0 deletions
@@ -0,0 +1,85 @@
import os
from langchain_openai import ChatOpenAI
from langchain.agents import create_agent
from langchain_core.tools import tool
from dotenv import load_dotenv
import logging
import asyncio
import json


# Load environment variables (especially OPENAI_API_KEY)
load_dotenv()
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

# 1. Define tools
@tool
def get_word_length(word: str) -> int:
    """Returns the length of a word."""
    return len(word)

@tool
def add_numbers(a: int, b: int) -> int:
    """Adds two numbers together."""
    return a + b

# Create the agent
# Fix: Ensure environment variables are mapped to the correct arguments
agent = create_agent(
    model=ChatOpenAI(
        model=os.getenv("MODEL_NAME", "gpt-4o-mini"),
        api_key=os.getenv("OPENAI_API_KEY"),
        base_url=os.getenv("OPENAI_API_BASE", "https://api.openai.com/v1"),
        temperature=0
    ),
    tools=[get_word_length, add_numbers],
    system_prompt="You are a helpful assistant. You have access to tools to help answer questions."
)


async def run(payload: dict, headers: dict):
    prompt = payload.get("prompt")
    user_id = headers.get("user_id")
    session_id = headers.get("session_id")

    # Default values if still missing
    user_id = user_id or "default_user"
    session_id = session_id or "default_session"

    logger.info(
        f"Running agent with prompt: {prompt}, user_id: {user_id}, session_id: {session_id}"
    )

    inputs = {"messages": [{"role": "user", "content": prompt}]}

    # stream returns an iterator of updates
    # To get the final result, we can just iterate or use invoke
    async for chunk in agent.astream(inputs, stream_mode="updates"):
        # chunk is a dict with node names as keys and state updates as values
        for node, state in chunk.items():
            logger.debug(f"--- Node: {node} ---")

            if "messages" in state:
                last_msg = state["messages"][-1]

                for block in last_msg.content_blocks:
                    event_data = {
                        "content": {
                            "parts": [
                                {"text": block.get("text")}
                            ]
                        }
                    }
                    yield json.dumps(event_data)

async def local_test():
    """Helper to run the agent locally without server"""
    print("Running local test...")
    query = "What is the length of the word 'LangChain' and what is that length plus 5?"
    print(f"Query: {query}")
    async for event in run({"prompt": query}, {"user_id": "1", "session_id": "1"}):
        print(f"Received event: {event}")

if __name__ == "__main__":
    asyncio.run(local_test())
Lines changed: 54 additions & 0 deletions
@@ -0,0 +1,54 @@
import requests
import json
import sys

def main():
    # Target URL
    url = "http://localhost:8000/invoke"

    # Payload parameters
    payload = {
        "prompt": "Hello, can you calculate 10 + 20 and tell me the length of the word 'AgentKit'?"
    }

    # Headers
    headers = {
        "user_id": "test_user_001",
        "session_id": "session_test_001"
    }

    print(f"Sending POST request to {url} with payload: {payload}")

    try:
        response = requests.post(url, json=payload, headers=headers, stream=True)

        if response.status_code == 200:
            print("\n--- Streaming Response ---")
            for line in response.iter_lines():
                if line:
                    decoded_line = line.decode('utf-8')
                    if decoded_line.startswith("data: "):
                        decoded_line = decoded_line[6:]

                    if decoded_line.strip() == "[DONE]":
                        continue

                    # Expecting SSE or JSON lines depending on implementation
                    # The agent code yields json.dumps(event_data)
                    try:
                        data = json.loads(decoded_line)
                        print(json.dumps(data, indent=2, ensure_ascii=False))
                    except json.JSONDecodeError:
                        print(f"Raw: {decoded_line}")
            print("\n--- End of Stream ---")
        else:
            print(f"Request failed with status code: {response.status_code}")
            print(response.text)

    except requests.exceptions.ConnectionError:
        print("Error: Could not connect to the server. Is the agent running on port 8000?")
    except Exception as e:
        print(f"An error occurred: {e}")

if __name__ == "__main__":
    main()
