
Commit 7bc7def: added session1
1 parent 59f81a8 commit 7bc7def

7 files changed: +374 -0 lines changed

.env.sample
Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
OPENAI_API_KEY=xxxxx
OPENAI_API_BASE=https://ark.cn-beijing.volces.com/api/v3
MODEL_NAME=deepseek-v3-1-terminus
README.md
Lines changed: 124 additions & 0 deletions
@@ -0,0 +1,124 @@
# Session 1 - Adapting a Standard LangChain Agent for AgentKit

This workshop demonstrates how to take a standard Agent built with LangChain, adapt it lightly with the AgentKit SDK Python, and deploy it to the Volcengine AgentKit platform with a single command.

## Core Goals

1. **Build a LangChain Agent**: Use the standard LangChain 1.0 paradigm to build a ReAct Agent with tool-calling capability.
2. **Adapt it to AgentKit quickly**: Turn the local Agent into a production-grade microservice with only a few lines of code changed.
3. **Deploy to the cloud in one step**: Use the AgentKit CLI to simplify deployment, including automatic environment-variable synchronization.

## Agent Adaptation Guide: From Local to Cloud

We compare `langchain_agent.py` (the native implementation) with `agent.py` (the AgentKit-adapted version); the adaptation turns out to be simple and direct.

### What Changes

Only the following **3 small changes** are required:

1. **Import the SDK and initialize the app**
   ```python
   # Import the AgentKit SDK
   from agentkit.apps import AgentkitSimpleApp

   # Initialize the application instance
   app = AgentkitSimpleApp()
   ```

2. **Mark the entry function**
   Decorate your main logic function with `@app.entrypoint`.
   ```python
   @app.entrypoint
   async def run(payload: dict, headers: dict):
       # Your business logic...
   ```

3. **Return events in the standard protocol**
   Replace output that was printed directly to the console with `yield`ed Event data in standard JSON format.
   ```python
   # Native LangChain: print(chunk)
   # AgentKit adaptation:
   yield json.dumps(event_data)
   ```

These changes are non-invasive: your existing Chain definitions, Tool definitions, and Prompt logic require no modification. A condensed sketch combining all three changes is shown below.
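Putting the three changes together, `agent.py` in this session boils down to the condensed sketch below (tool definitions omitted for brevity; the yielded event shape follows the ADK event convention referenced in `agent.py`):

```python
import json
import os
from dotenv import load_dotenv
from langchain.agents import create_agent
from langchain_openai import ChatOpenAI
from agentkit.apps import AgentkitSimpleApp

load_dotenv()

# The LangChain agent is created exactly as in langchain_agent.py.
agent = create_agent(
    model=ChatOpenAI(
        model=os.getenv("MODEL_NAME", "gpt-4o-mini"),
        api_key=os.getenv("OPENAI_API_KEY"),
        base_url=os.getenv("OPENAI_API_BASE", "https://api.openai.com/v1"),
    ),
    tools=[],  # your existing @tool functions go here, unchanged
    system_prompt="You are a helpful assistant.",
)

app = AgentkitSimpleApp()            # change 1: initialize the AgentKit app

@app.entrypoint                      # change 2: mark the entry function
async def run(payload: dict, headers: dict):
    inputs = {"messages": [{"role": "user", "content": payload.get("prompt")}]}
    async for chunk in agent.astream(inputs, stream_mode="updates"):
        for node, state in chunk.items():
            if "messages" in state:
                for block in state["messages"][-1].content_blocks:
                    # change 3: yield ADK-style JSON events instead of printing
                    yield json.dumps({"content": {"parts": [{"text": block.get("text")}]}})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)   # serve the entrypoint on port 8000
```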
## Directory Layout

```bash
session1/
├── agent.py             # The AgentKit-adapted app (core file)
├── langchain_agent.py   # The native LangChain script before adaptation (for comparison)
├── local_client.py      # Local streaming test client
├── agentkit.yaml        # Deployment configuration file
├── .env                 # Environment variable file (synced automatically on deploy)
└── README.md            # This documentation
```

## Local Development and Debugging

### 1. Install Dependencies and Configure the Environment

```bash
# Install dependencies
uv sync
source .venv/bin/activate

# Configure environment variables
cp .env.sample .env
# Edit the .env file and fill in required values such as OPENAI_API_KEY
```
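For reference, the sample environment file included with this session defines the three variables the agent reads; replace the placeholder key with your own credentials:

```bash
OPENAI_API_KEY=xxxxx
OPENAI_API_BASE=https://ark.cn-beijing.volces.com/api/v3
MODEL_NAME=deepseek-v3-1-terminus
```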
### 2. Run Locally

**Option 1: Run the native script** (verifies the Agent logic)
```bash
uv run langchain_agent.py
```

**Option 2: Run the AgentKit service** (simulates the production environment)
```bash
# Start the service (listens on port 8000)
uv run agent.py

# Run the test client in a new terminal
uv run local_client.py
```
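Under the hood, `local_client.py` simply POSTs the prompt to the local `/invoke` endpoint with `user_id` and `session_id` headers and prints the streamed events. A condensed sketch of that request:

```python
import requests

# Stream events from the locally running AgentKit service (see local_client.py).
response = requests.post(
    "http://localhost:8000/invoke",
    json={"prompt": "Hello, can you calculate 10 + 20?"},
    headers={"user_id": "test_user_001", "session_id": "session_test_001"},
    stream=True,
)
for line in response.iter_lines():
    if line:
        # Each non-empty line carries a JSON event (possibly prefixed with "data: ").
        print(line.decode("utf-8"))
```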
## Deploying to the AgentKit Platform

Deployment is fully automated: the AgentKit CLI packages the code, builds the image, and publishes the service.

### 1. Initialize the Configuration

```bash
agentkit config
```

This command walks you through selecting a project space, image registry, and other settings, and generates `agentkit.yaml`.

### 2. Launch the Deployment (with automatic .env sync)

```bash
agentkit launch
```

> **Key feature**
> `agentkit launch` automatically reads the `.env` file in your local project root and injects all of its environment variables into the cloud Runtime environment.
>
> This means you do **not** need to configure sensitive values such as `OPENAI_API_KEY` or `MODEL_NAME` in the console by hand; the CLI handles the environment sync for you, keeping the cloud runtime consistent with your local setup.

### 3. Test Online

Once deployed, you can call the cloud Agent directly from the CLI:

```bash
# <URL> is the service endpoint printed by the launch command
agentkit invoke 'Hello, can you calculate 10 + 20 and tell me the length of the word "AgentKit"?'
```

## Summary

In this session you learned:
- How to build a standard LangChain Agent.
- How to adapt it to AgentKit in minutes with `AgentkitSimpleApp`.
- How to use the AgentKit CLI's `.env` sync feature for a painless move to the cloud.
agent.py
Lines changed: 93 additions & 0 deletions
@@ -0,0 +1,93 @@
import os
from langchain_openai import ChatOpenAI
from langchain.agents import create_agent
from langchain_core.tools import tool
from dotenv import load_dotenv
from agentkit.apps import AgentkitSimpleApp
import logging
import asyncio
import json


# Load environment variables (especially OPENAI_API_KEY)
load_dotenv()
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

# 1. Define tools
@tool
def get_word_length(word: str) -> int:
    """Returns the length of a word."""
    return len(word)

@tool
def add_numbers(a: int, b: int) -> int:
    """Adds two numbers together."""
    return a + b

# Create the agent
# Fix: Ensure environment variables are mapped to the correct arguments
agent = create_agent(
    model=ChatOpenAI(
        model=os.getenv("MODEL_NAME", "gpt-4o-mini"),
        api_key=os.getenv("OPENAI_API_KEY"),
        base_url=os.getenv("OPENAI_API_BASE", "https://api.openai.com/v1"),
        temperature=0
    ),
    tools=[get_word_length, add_numbers],
    system_prompt="You are a helpful assistant. You have access to tools to help answer questions."
)


app = AgentkitSimpleApp()

@app.entrypoint
async def run(payload: dict, headers: dict):
    prompt = payload.get("prompt")
    user_id = headers.get("user_id")
    session_id = headers.get("session_id")

    # Default values if still missing
    user_id = user_id or "default_user"
    session_id = session_id or "default_session"

    logger.info(
        f"Running agent with prompt: {prompt}, user_id: {user_id}, session_id: {session_id}"
    )

    inputs = {"messages": [{"role": "user", "content": prompt}]}

    # stream returns an iterator of updates
    # To get the final result, we can just iterate or use invoke
    async for chunk in agent.astream(inputs, stream_mode="updates"):
        # chunk is a dict with node names as keys and state updates as values
        for node, state in chunk.items():
            logger.debug(f"--- Node: {node} ---")

            if "messages" in state:
                last_msg = state["messages"][-1]

                for block in last_msg.content_blocks:
                    # The returned event_data must follow the ADK event spec:
                    # https://google.github.io/adk-docs/events/#identifying-event-origin-and-type
                    event_data = {
                        "content": {
                            "parts": [
                                {"text": block.get("text") or block.get("reasoning")}
                            ]
                        }
                    }
                    yield json.dumps(event_data)

@app.ping
def ping() -> str:
    return "pong!"

async def local_test():
    """Helper to run the agent locally without server"""
    print("Running local test...")
    query = "What is the length of the word 'LangChain' and what is that length plus 5?"
    print(f"Query: {query}")
    async for event in run({"prompt": query}, {"user_id": "1", "session_id": "1"}):
        print(f"Received event: {event}")

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
langchain_agent.py
Lines changed: 85 additions & 0 deletions
@@ -0,0 +1,85 @@
import os
from langchain_openai import ChatOpenAI
from langchain.agents import create_agent
from langchain_core.tools import tool
from dotenv import load_dotenv
import logging
import asyncio
import json


# Load environment variables (especially OPENAI_API_KEY)
load_dotenv()
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

# 1. Define tools
@tool
def get_word_length(word: str) -> int:
    """Returns the length of a word."""
    return len(word)

@tool
def add_numbers(a: int, b: int) -> int:
    """Adds two numbers together."""
    return a + b

# Create the agent
# Fix: Ensure environment variables are mapped to the correct arguments
agent = create_agent(
    model=ChatOpenAI(
        model=os.getenv("MODEL_NAME", "gpt-4o-mini"),
        api_key=os.getenv("OPENAI_API_KEY"),
        base_url=os.getenv("OPENAI_API_BASE", "https://api.openai.com/v1"),
        temperature=0
    ),
    tools=[get_word_length, add_numbers],
    system_prompt="You are a helpful assistant. You have access to tools to help answer questions."
)


async def run(payload: dict, headers: dict):
    prompt = payload.get("prompt")
    user_id = headers.get("user_id")
    session_id = headers.get("session_id")

    # Default values if still missing
    user_id = user_id or "default_user"
    session_id = session_id or "default_session"

    logger.info(
        f"Running agent with prompt: {prompt}, user_id: {user_id}, session_id: {session_id}"
    )

    inputs = {"messages": [{"role": "user", "content": prompt}]}

    # stream returns an iterator of updates
    # To get the final result, we can just iterate or use invoke
    async for chunk in agent.astream(inputs, stream_mode="updates"):
        # chunk is a dict with node names as keys and state updates as values
        for node, state in chunk.items():
            logger.debug(f"--- Node: {node} ---")

            if "messages" in state:
                last_msg = state["messages"][-1]

                for block in last_msg.content_blocks:
                    event_data = {
                        "content": {
                            "parts": [
                                {"text": block.get("text")}
                            ]
                        }
                    }
                    yield json.dumps(event_data)

async def local_test():
    """Helper to run the agent locally without server"""
    print("Running local test...")
    query = "What is the length of the word 'LangChain' and what is that length plus 5?"
    print(f"Query: {query}")
    async for event in run({"prompt": query}, {"user_id": "1", "session_id": "1"}):
        print(f"Received event: {event}")

if __name__ == "__main__":
    asyncio.run(local_test())
local_client.py
Lines changed: 54 additions & 0 deletions
@@ -0,0 +1,54 @@
import requests
import json
import sys

def main():
    # Target URL
    url = "http://localhost:8000/invoke"

    # Payload parameters
    payload = {
        "prompt": "Hello, can you calculate 10 + 20 and tell me the length of the word 'AgentKit'?"
    }

    # Headers
    headers = {
        "user_id": "test_user_001",
        "session_id": "session_test_001"
    }

    print(f"Sending POST request to {url} with payload: {payload}")

    try:
        response = requests.post(url, json=payload, headers=headers, stream=True)

        if response.status_code == 200:
            print("\n--- Streaming Response ---")
            for line in response.iter_lines():
                if line:
                    decoded_line = line.decode('utf-8')
                    if decoded_line.startswith("data: "):
                        decoded_line = decoded_line[6:]

                    if decoded_line.strip() == "[DONE]":
                        continue

                    # Expecting SSE or JSON lines depending on implementation
                    # The agent code yields json.dumps(event_data)
                    try:
                        data = json.loads(decoded_line)
                        print(json.dumps(data, indent=2, ensure_ascii=False))
                    except json.JSONDecodeError:
                        print(f"Raw: {decoded_line}")
            print("\n--- End of Stream ---")
        else:
            print(f"Request failed with status code: {response.status_code}")
            print(response.text)

    except requests.exceptions.ConnectionError:
        print("Error: Could not connect to the server. Is the agent running on port 8000?")
    except Exception as e:
        print(f"An error occurred: {e}")

if __name__ == "__main__":
    main()
pyproject.toml
Lines changed: 12 additions & 0 deletions
@@ -0,0 +1,12 @@
[project]
name = "session1"
version = "0.1.0"
description = "Add your description here"
readme = "README.md"
requires-python = ">=3.12"
dependencies = [
    "langchain>=1.1.3",
    "langchain-openai>=1.1.3",
    "python-dotenv>=1.2.1",
    "agentkit-sdk-python>=0.2.0"
]
requirements.txt
Lines changed: 3 additions & 0 deletions
@@ -0,0 +1,3 @@
langchain>=1.1.3
langchain-openai>=1.1.3
python-dotenv>=1.2.1
