
Commit 6cb7f90

Authored by strickvl, Claude, and Copilot
Add Qwen-Agent example to agent framework integrations (#4086)
* Add Qwen-Agent and lagent examples to agent framework integrations

  This commit adds two new agent framework examples with Chinese language support:

  1. **Qwen-Agent**: Alibaba Cloud's agent framework
     - Custom tool registration with `@register_tool` decorator
     - Calculator tool example demonstrating function calling
     - Support for OpenAI API and DashScope
     - MCP integration capabilities
     - Comprehensive Chinese and English documentation
  2. **lagent**: InternLM's lightweight ReAct agent framework
     - ReAct reasoning pattern implementation
     - Built-in Python interpreter for code execution
     - Support for OpenAI and InternLM models
     - Lightweight design requiring minimal code
     - Full Chinese and English documentation

  Both examples follow the established patterns:
  - ZenML pipeline integration with ExternalArtifact
  - Docker settings with UV package installer
  - Error handling with success/error states
  - Formatted response output
  - Production-ready artifact management

  The main README.md has been updated to include both frameworks in the frameworks overview table, and a new "International Frameworks" section highlighting Chinese language support has been added.

  🤖 Generated with [Claude Code](https://claude.com/claude-code)

  Co-Authored-By: Claude <[email protected]>

* Update examples/agent_framework_integrations/README.md

  Co-authored-by: Copilot <[email protected]>

* Apply suggestion from @strickvl
* Fix and test Qwen-Agent
* deployers
* Delete lagent example
* Clean up main README
* Linting fixes

---------

Co-authored-by: Claude <[email protected]>
Co-authored-by: Copilot <[email protected]>
1 parent 9187250 commit 6cb7f90

File tree

5 files changed: +316 −0 lines changed


examples/agent_framework_integrations/README.md

Lines changed: 1 addition & 0 deletions
```diff
@@ -63,6 +63,7 @@ python run.py
 | [LlamaIndex](llama_index/) | 📚 Functions | Function agents, Async execution | llama-index, openai |
 | [OpenAI Agents SDK](openai_agents_sdk/) | 🏗️ Structured | Official OpenAI agents, Structured execution | openai-agents, openai |
 | [PydanticAI](pydanticai/) | ✅ Type-Safe | Type-safe agents, Validation | pydantic-ai, openai |
+| [Qwen-Agent](qwen-agent/) | 🧠 Function Call | Custom tools, MCP integration, Qwen models | qwen-agent, openai |
 | [Semantic Kernel](semantic-kernel/) | 🧩 Plugins | Plugin architecture, Microsoft ecosystem | semantic-kernel, openai |

 ## 🎯 Core Patterns
```
Lines changed: 113 additions & 0 deletions
`examples/agent_framework_integrations/qwen-agent/README.md` (new file)

# Qwen-Agent + ZenML

## Chinese Quick Start

Qwen-Agent is Alibaba's open-source agent framework, supporting function calling, tool use, and multimodal capabilities.

### Installation and Setup

```bash
export OPENAI_API_KEY="your-api-key-here"
cd qwen-agent/
uv venv --python 3.11
source .venv/bin/activate
uv pip install -r requirements.txt
```

Initialize ZenML and log in:

```bash
zenml init
zenml login
```

Run the example pipeline:

```bash
python run.py
```

### Deploying the Pipeline (Real-Time Service)

Use ZenML's deployment feature to deploy the pipeline as a real-time HTTP service:

```bash
# Deploy the pipeline as an HTTP service (referencing the agent_pipeline symbol in run.py)
zenml pipeline deploy run.agent_pipeline --name qwen-agent
```

Invoke the service via the CLI:

```bash
zenml deployment invoke qwen-agent --query="Calculate 12*8, then add 10"
```

Invoke the service via the HTTP API:

```bash
curl -X POST http://localhost:8000/invoke \
  -H "Content-Type: application/json" \
  -d '{"parameters": {"query": "Please calculate 15*7 and add 42"}}'
```

> Tips:
> - This example uses an OpenAI-compatible endpoint (configured in `qwen_agent_impl.py`) and requires the `OPENAI_API_KEY` environment variable.
> - To use Alibaba Cloud DashScope instead, set `DASHSCOPE_API_KEY` and adjust the LLM configuration accordingly, following the Qwen-Agent documentation.

---
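The DashScope tip can be made concrete. Below is a minimal sketch of an alternative LLM config: the `"dashscope"` value for `model_server` and the `qwen-max` model name follow upstream Qwen-Agent documentation, not this commit, so treat both as assumptions to verify against the Qwen-Agent docs.

```python
import os

# Hypothetical alternative to llm_cfg in qwen_agent_impl.py, targeting
# Alibaba Cloud DashScope instead of an OpenAI-compatible endpoint.
dashscope_llm_cfg = {
    "model": "qwen-max",  # assumed DashScope-hosted Qwen model name
    "model_server": "dashscope",  # Qwen-Agent's shorthand for DashScope (per upstream docs)
    "api_key": os.environ.get("DASHSCOPE_API_KEY"),
    "generate_cfg": {},
}
```

Swapping this dict in for `llm_cfg` when constructing the `Assistant` is the only change the tip implies; the tool registration and pipeline code stay the same.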
## English Documentation

Qwen-Agent integrated with ZenML for production-grade agent orchestration.

### Run

```bash
export OPENAI_API_KEY="your-api-key-here"
uv venv --python 3.11
source .venv/bin/activate
uv pip install -r requirements.txt
```

Initialize ZenML and log in:

```bash
zenml init
zenml login
```

Run the pipeline locally:

```bash
python run.py
```

### Pipeline Deployment

Deploy this agent as a real-time HTTP service:

```bash
# Deploy the pipeline as an HTTP service (referencing the agent_pipeline symbol in run.py)
zenml pipeline deploy run.agent_pipeline --name qwen-agent
```

Invoke via the CLI:

```bash
zenml deployment invoke qwen-agent --query="Calculate 25*4 and add 10"
```

Invoke via the HTTP API:

```bash
curl -X POST http://localhost:8000/invoke \
  -H "Content-Type: application/json" \
  -d '{"parameters": {"query": "Add 42 to 7*15"}}'
```
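For programmatic access, the same HTTP call can be sketched in Python with only the standard library. The endpoint path, port, and payload shape are taken from the curl example; `build_invoke_request` is a hypothetical helper, and the service must already be deployed for the request to actually succeed.

```python
import json
import urllib.request

def build_invoke_request(query: str) -> urllib.request.Request:
    """Build the POST request the deployed pipeline endpoint expects."""
    body = json.dumps({"parameters": {"query": query}}).encode("utf-8")
    return urllib.request.Request(
        "http://localhost:8000/invoke",  # assumed local deployment address
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_invoke_request("Add 42 to 7*15")
# urllib.request.urlopen(req) would send it once the service is running.
```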
## Features

- Custom tool registration with the `@register_tool` decorator
- Function calling with Qwen models
- Support for OpenAI-compatible APIs and DashScope
- Built-in calculator tool for arithmetic operations
- Production-ready agent orchestration with ZenML

## About Qwen-Agent

Qwen-Agent is an open-source agent framework by Alibaba Cloud, built on the Qwen language models. It provides:

- 🔧 Easy tool creation and registration
- 🧠 Advanced function calling
- 🌐 MCP (Model Context Protocol) integration
- 📚 RAG (Retrieval-Augmented Generation) support
- 💻 Code interpreter capabilities

Learn more at the [official repository](https://github.com/QwenLM/Qwen-Agent).
Lines changed: 84 additions & 0 deletions
`examples/agent_framework_integrations/qwen-agent/qwen_agent_impl.py` (new file)

```python
"""Qwen-Agent implementation with custom tools.

This module demonstrates how to create a Qwen-Agent with custom tools
for use in ZenML pipelines.
"""

import os

import json5
from qwen_agent.agents import Assistant
from qwen_agent.tools.base import BaseTool, register_tool


@register_tool("calculator")
class Calculator(BaseTool):
    """A simple calculator tool for basic arithmetic operations."""

    name = "calculator"
    description = "Performs basic arithmetic operations (add, subtract, multiply, divide)"
    parameters = [
        {
            "name": "operation",
            "type": "string",
            "description": "The operation to perform: 'add', 'subtract', 'multiply', or 'divide'",
            "required": True,
        },
        {
            "name": "a",
            "type": "number",
            "description": "The first number",
            "required": True,
        },
        {
            "name": "b",
            "type": "number",
            "description": "The second number",
            "required": True,
        },
    ]

    def call(self, params: str, **kwargs) -> str:
        # Qwen-Agent passes tool arguments as a JSON string; json5 tolerates
        # mild deviations the LLM may produce.
        data = json5.loads(params)
        operation = data.get("operation")
        a = float(data.get("a", 0))
        b = float(data.get("b", 0))

        if operation == "divide" and b == 0:
            return "Error: Division by zero"

        ops = {
            "add": lambda x, y: x + y,
            "subtract": lambda x, y: x - y,
            "multiply": lambda x, y: x * y,
            "divide": lambda x, y: x / y,
        }
        if operation not in ops:
            return f"Error: Unknown operation '{operation}'"

        result = ops[operation](a, b)
        return str(result)


# Initialize the LLM - uses the OpenAI API by default
llm_cfg = {
    "model": "gpt-4o-mini",
    # An explicit OpenAI base URL ensures consistent behavior across environments
    "model_server": "https://api.openai.com/v1",
    "api_key": os.environ.get("OPENAI_API_KEY"),
    "generate_cfg": {
        # Generation params can go here, e.g. "top_p": 0.8
    },
}

system_instruction = (
    "You are a helpful math assistant. Use the 'calculator' tool to perform arithmetic. "
    "Respond with only the final numeric result unless the user requests detailed steps."
)

agent = Assistant(
    llm=llm_cfg,
    system_message=system_instruction,
    function_list=["calculator"],
)
```
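The tool's contract can be exercised in isolation, without Qwen-Agent installed. Below is a minimal sketch of the same dispatch logic as `Calculator.call`, using stdlib `json` in place of `json5` (sufficient for strictly valid JSON) and a bare function instead of the `BaseTool` subclass:

```python
import json

def calculator_call(params: str) -> str:
    """Standalone replica of Calculator.call's dispatch logic."""
    data = json.loads(params)
    operation = data.get("operation")
    a = float(data.get("a", 0))
    b = float(data.get("b", 0))

    # Guard division explicitly so the model gets a readable error string
    if operation == "divide" and b == 0:
        return "Error: Division by zero"

    ops = {
        "add": lambda x, y: x + y,
        "subtract": lambda x, y: x - y,
        "multiply": lambda x, y: x * y,
        "divide": lambda x, y: x / y,
    }
    if operation not in ops:
        return f"Error: Unknown operation '{operation}'"
    return str(ops[operation](a, b))

print(calculator_call('{"operation": "multiply", "a": 12, "b": 8}'))  # → 96.0
```

This mirrors how Qwen-Agent invokes a tool: arguments arrive as a single JSON string and the return value is a string handed back to the model.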
Lines changed: 4 additions & 0 deletions
`examples/agent_framework_integrations/qwen-agent/requirements.txt` (new file)

```text
qwen-agent>=0.0.10
openai>=1.0.0
zenml[local]
json5>=0.9
```
Lines changed: 114 additions & 0 deletions
`examples/agent_framework_integrations/qwen-agent/run.py` (new file)

```python
"""ZenML Pipeline for Qwen-Agent.

This pipeline demonstrates how to integrate Qwen-Agent with ZenML
for orchestration and artifact management.
"""

import os
from typing import Annotated, Any, Dict

from qwen_agent_impl import agent  # Use the local agent implementation

from zenml import pipeline, step
from zenml.config import DockerSettings, PythonPackageInstaller

docker_settings = DockerSettings(
    python_package_installer=PythonPackageInstaller.UV,
    requirements="requirements.txt",  # relative to the pipeline directory
    environment={
        # Propagate API keys into the container so either OpenAI-compatible
        # endpoints or Alibaba DashScope can be used interchangeably,
        # depending on the LLM config.
        "OPENAI_API_KEY": os.getenv("OPENAI_API_KEY"),
        "DASHSCOPE_API_KEY": os.getenv("DASHSCOPE_API_KEY"),
    },
)


@step
def run_qwen_agent(
    query: str,
) -> Annotated[Dict[str, Any], "agent_results"]:
    """Execute the Qwen-Agent with the given query."""
    try:
        messages = [{"role": "user", "content": query}]
        last_batch: list[Any] = []
        for batch in agent.run(messages=messages):
            # run() yields incremental message lists; keep the latest
            # for the final assistant output
            last_batch = batch

        final_text = ""
        if last_batch:
            last_msg = last_batch[-1]
            final_text = (
                last_msg.get("content", str(last_msg))
                if isinstance(last_msg, dict)
                else str(last_msg)
            )

        return {
            "query": query,
            "response": final_text,
            "status": "success",
        }
    except Exception as e:
        return {
            "query": query,
            "response": f"Agent error: {str(e)}",
            "status": "error",
        }
```
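The streaming loop in `run_qwen_agent` relies on `agent.run()` yielding progressively updated message lists, with the final yield holding the complete assistant reply. That consumption pattern can be sketched standalone with a stand-in generator (`fake_run` is hypothetical and only imitates the yield shape, not Qwen-Agent itself):

```python
from typing import Any, Dict, Iterator, List

def fake_run(messages: List[Dict[str, Any]]) -> Iterator[List[Dict[str, Any]]]:
    """Stand-in for agent.run(): yields growing snapshots of the reply."""
    for text in ("105", "105 + 42", "105 + 42 = 147"):
        # Each yield is the message list so far, with partial assistant content
        yield [{"role": "assistant", "content": text}]

last_batch: List[Dict[str, Any]] = []
for batch in fake_run([{"role": "user", "content": "15*7 + 42?"}]):
    last_batch = batch  # only the final snapshot matters

final_text = last_batch[-1].get("content", "") if last_batch else ""
print(final_text)  # → 105 + 42 = 147
```

Keeping only the last batch, as the step does, discards the intermediate streaming states and leaves the finished answer for the artifact store.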
```python
@step
def format_qwen_response(
    agent_data: Dict[str, Any],
) -> Annotated[str, "formatted_response"]:
    """Format the Qwen-Agent results into a readable summary."""
    query = agent_data["query"]
    response = agent_data["response"]
    status = agent_data["status"]

    if status == "error":
        formatted = f"""❌ QWEN-AGENT ERROR
{"=" * 40}

Query: {query}
Error: {response}
"""
    else:
        formatted = f"""🤖 QWEN-AGENT RESPONSE
{"=" * 40}

Query: {query}

Response:
{response}

🧠 Powered by Qwen-Agent (Alibaba Cloud)
"""

    return formatted.strip()


@pipeline(settings={"docker": docker_settings}, enable_cache=False)
def agent_pipeline(
    query: str = "Calculate the result of 15 multiplied by 7, then add 42 to it.",
) -> str:
    """ZenML pipeline that orchestrates the Qwen-Agent.

    Returns:
        Formatted agent response
    """
    # Run the Qwen-Agent with the provided query
    agent_results = run_qwen_agent(query=query)

    # Format the results
    summary = format_qwen_response(agent_results)

    return summary


if __name__ == "__main__":
    print("🚀 Running Qwen-Agent pipeline...")
    run_result = agent_pipeline()
    print("Pipeline completed successfully!")
    print("Check the ZenML dashboard for detailed results and artifacts.")
```

0 commit comments
