English | 中文
🚀 A powerful Go framework for building intelligent agent systems that transforms how you create AI applications. Build autonomous agents that think, remember, collaborate, and act with unprecedented ease.
✨ Why tRPC-Agent-Go?
- 🧠 Intelligent Reasoning: Advanced hierarchical planners and multi-agent orchestration
- 🧰 Rich Tool Ecosystem: Seamless integration with external APIs, databases, and services
- 💾 Persistent Memory: Long-term state management and contextual awareness
- 🔗 Multi-Agent Collaboration: Chain, parallel, and graph-based agent workflows
- 📊 Production Ready: Built-in telemetry, tracing, and enterprise-grade reliability
- ⚡ High Performance: Optimized for scalability and low latency
Perfect for building:
- 🤖 Customer Support Bots - Intelligent agents that understand context and solve complex queries
- 📊 Data Analysis Assistants - Agents that query databases, generate reports, and provide insights
- 🔧 DevOps Automation - Smart deployment, monitoring, and incident response systems
- 💼 Business Process Automation - Multi-step workflows with human-in-the-loop capabilities
- 🧠 Research & Knowledge Management - RAG-powered agents for document analysis and Q&A
// Chain agents for complex workflows
pipeline := chainagent.New("pipeline",
	chainagent.WithSubAgents([]agent.Agent{
		analyzer, processor, reporter,
	}))

// Or run them in parallel
parallel := parallelagent.New("concurrent",
	parallelagent.WithSubAgents(tasks))

// Persistent memory with search
memory := memorysvc.NewInMemoryService()
agent := llmagent.New("assistant",
	llmagent.WithTools(memory.Tools()),
	llmagent.WithModel(model))

// Memory service managed at runner level
runner := runner.NewRunner("app", agent,
	runner.WithMemoryService(memory))
// Agents remember context across sessions

// Any function becomes a tool
calculator := function.NewFunctionTool(
	calculate,
	function.WithName("calculator"),
	function.WithDescription("Math operations"))

// MCP protocol support
mcpTool := mcptool.New(serverConn)

// OpenTelemetry integration
runner := runner.NewRunner("app", agent,
	runner.WithTelemetry(telemetry.Config{
		TracingEnabled: true,
		MetricsEnabled: true,
	}))
- Use Cases
- Key Features
- Documentation
- Quick Start
- Examples
- Architecture Overview
- Using Built-in Agents
- Future Enhancements
- Contributing
- Acknowledgements
Ready to dive into tRPC-Agent-Go? Our documentation covers everything from basic concepts to advanced techniques, helping you build powerful AI applications with confidence. Whether you're new to AI agents or an experienced developer, you'll find detailed guides, practical examples, and best practices to accelerate your development journey.
🎬 See it in Action: [Demo GIF placeholder - showing agent reasoning and tool usage]
- ✅ Go 1.24.1 or later
- 🔑 LLM provider API key (OpenAI, DeepSeek, etc.)
- 💡 5 minutes to build your first intelligent agent
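Prefer to pull the framework into an existing module instead of cloning the repo? The module path matches the import paths used throughout this README:

```bash
go get trpc.group/trpc-go/trpc-agent-go
```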
Get started in 3 simple steps:
# 1️⃣ Clone and setup
git clone https://github.com/trpc-group/trpc-agent-go.git
cd trpc-agent-go
# 2️⃣ Configure your LLM
export OPENAI_API_KEY="your-api-key-here"
export OPENAI_BASE_URL="your-base-url-here" # Optional
# 3️⃣ Run your first agent! 🎉
cd examples/runner
go run . -model="gpt-4o-mini" -streaming=true
What you'll see:
- 💬 Interactive chat with your AI agent
- ⚡ Real-time streaming responses
- 🧮 Tool usage (calculator + time tools)
- 🔄 Multi-turn conversations with memory
Try asking: "What's the current time? Then calculate 15 * 23 + 100"

Here is a complete, minimal program that wires a model, a calculator tool, and a runner together:
package main

import (
	"context"
	"fmt"
	"log"

	"trpc.group/trpc-go/trpc-agent-go/agent/llmagent"
	"trpc.group/trpc-go/trpc-agent-go/model"
	"trpc.group/trpc-go/trpc-agent-go/model/openai"
	"trpc.group/trpc-go/trpc-agent-go/runner"
	"trpc.group/trpc-go/trpc-agent-go/tool"
	"trpc.group/trpc-go/trpc-agent-go/tool/function"
)

func main() {
	// Create model.
	modelInstance := openai.New("deepseek-chat")

	// Create tool.
	calculatorTool := function.NewFunctionTool(
		calculator,
		function.WithName("calculator"),
		function.WithDescription("Execute addition, subtraction, multiplication, and division. "+
			"Parameters: a, b are numeric values, op takes values add/sub/mul/div; "+
			"returns result as the calculation result."),
	)

	// Enable streaming output.
	genConfig := model.GenerationConfig{
		Stream: true,
	}

	// Create Agent.
	agent := llmagent.New("assistant",
		llmagent.WithModel(modelInstance),
		llmagent.WithTools([]tool.Tool{calculatorTool}),
		llmagent.WithGenerationConfig(genConfig),
	)

	// Create Runner.
	runner := runner.NewRunner("calculator-app", agent)

	// Execute conversation.
	ctx := context.Background()
	events, err := runner.Run(ctx,
		"user-001",
		"session-001",
		model.NewUserMessage("Calculate what 2+3 equals"),
	)
	if err != nil {
		log.Fatal(err)
	}

	// Process event stream.
	for event := range events {
		if event.Object == "chat.completion.chunk" && len(event.Choices) > 0 {
			fmt.Print(event.Choices[0].Delta.Content)
		}
	}
	fmt.Println()
}
func calculator(ctx context.Context, req calculatorReq) (calculatorRsp, error) {
	var result float64
	switch req.Op {
	case "add", "+":
		result = req.A + req.B
	case "sub", "-":
		result = req.A - req.B
	case "mul", "*":
		result = req.A * req.B
	case "div", "/":
		if req.B == 0 {
			return calculatorRsp{}, fmt.Errorf("division by zero")
		}
		result = req.A / req.B
	default:
		return calculatorRsp{}, fmt.Errorf("unsupported op %q", req.Op)
	}
	return calculatorRsp{Result: result}, nil
}

type calculatorReq struct {
	A  float64 `json:"A" jsonschema:"description=First numeric operand,required"`
	B  float64 `json:"B" jsonschema:"description=Second numeric operand,required"`
	Op string  `json:"Op" jsonschema:"description=Operation type,enum=add,enum=sub,enum=mul,enum=div,required"`
}

type calculatorRsp struct {
	Result float64 `json:"result"`
}
The examples directory contains runnable demos covering every major feature.
- examples/agenttool – Wrap agents as callable tools; shows both streaming and non-streaming patterns.
- examples/multitools – Orchestration of multiple tools.
- examples/duckduckgo – Web search tool integration.
- examples/filetoolset – File operations as tools.
- examples/fileinput – Provide files as inputs.
2. LLM-Only Agent (examples/llmagent)
- Wrap any chat-completion model as an LLMAgent.
- Configure system instructions, temperature, max tokens, etc.
- Receive incremental event.Event updates while the model streams (a configuration sketch follows below).
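For a taste of the configuration surface, here is a minimal sketch that reuses only the options shown elsewhere in this README (llmagent.WithInstruction, llmagent.WithGenerationConfig); other generation parameters are omitted rather than guessed:

```go
// Sketch: an LLM-only agent with a system instruction and streaming output.
// Assumes the same imports as the quick-start program above.
chat := llmagent.New(
	"chat-assistant",
	llmagent.WithModel(openai.New("gpt-4o-mini")),
	llmagent.WithInstruction("Answer concisely; show your reasoning only when asked."),
	llmagent.WithGenerationConfig(model.GenerationConfig{Stream: true}),
)
```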
3. Multi-Agent Runners (examples/multiagent)
- ChainAgent – linear pipeline of sub-agents.
- ParallelAgent – run sub-agents concurrently and merge results.
- CycleAgent – iterate until a termination condition is met.
4. Graph Agent (examples/graph)
- GraphAgent – demonstrates building and executing complex, conditional workflows using the graph and agent/graph packages.
- Shows how to construct a graph-based agent, manage state safely, implement conditional routing, and orchestrate execution with the Runner.
5. Memory (examples/memory)
- In‑memory and Redis memory services with CRUD, search and tool integration.
- How to configure, call tools and customize prompts.
6. Knowledge (examples/knowledge)
- Basic RAG example: load sources, embed to a vector store, and search.
- How to use conversation context and tune loading/concurrency options.
7. Telemetry & Tracing (examples/telemetry)
- OpenTelemetry hooks across model, tool and runner layers.
- Export traces to OTLP endpoint for real-time analysis.
8. MCP Integration (examples/mcptool)
- Wrapper utilities around trpc-mcp-go, an implementation of the Model Context Protocol (MCP).
- Provides structured prompts, tool calls, resource and session messages that follow the MCP specification.
- Enables dynamic tool execution and context-rich interactions between agents and LLMs.
9. Debug Web Demo (examples/debugserver)
- Launches a debug server that exposes ADK-compatible HTTP endpoints.
- Front-end: google/adk-web connects via /run_sse and streams agent responses in real time.
- Great starting point for building your own chat UI.
Other notable examples:
- examples/humaninloop – Human-in-the-loop workflows.
- examples/codeexecution – Secure code execution.
See the individual README.md files in each example folder for usage details.
Architecture
- 🚀 Runner orchestrates the entire execution pipeline with session management
- 🤖 Agent processes requests using multiple specialized components
- 🧠 Planner determines the optimal strategy and tool selection
- 🛠️ Tools execute specific tasks (API calls, calculations, web searches)
- 💾 Memory maintains context and learns from interactions
- 📚 Knowledge provides RAG capabilities for document understanding
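A rough wiring sketch of how these pieces connect, reusing the constructors from the snippets above (the memory-service alias memorysvc follows the earlier feature snippet; its exact import path is not repeated here):

```go
// Sketch: the Runner owns session/memory services and drives the agent.
memory := memorysvc.NewInMemoryService()

assistant := llmagent.New("assistant",
	llmagent.WithModel(openai.New("gpt-4o-mini")),
	llmagent.WithTools(memory.Tools()), // expose memory operations as tools
)

r := runner.NewRunner("support-app", assistant,
	runner.WithMemoryService(memory),
)

// Each Run call is scoped to a user and a session; events stream back while
// the agent plans, calls tools, and produces the final answer.
events, _ := r.Run(ctx, "user-42", "session-1",
	model.NewUserMessage("Summarize what we discussed yesterday."))
for ev := range events { /* handle streaming events */ }
```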
Key packages:

| Package | Responsibility |
|---|---|
| `agent` | Core execution unit, responsible for processing user input and generating responses. |
| `runner` | Agent executor, responsible for managing execution flow and connecting Session/Memory Service capabilities. |
| `model` | Supports multiple LLM models (OpenAI, DeepSeek, etc.). |
| `tool` | Provides various tool capabilities (Function, MCP, DuckDuckGo, etc.). |
| `session` | Manages user session state and events. |
| `memory` | Records user long-term memory and personalized information. |
| `knowledge` | Implements RAG knowledge retrieval capabilities. |
| `planner` | Provides Agent planning and reasoning capabilities. |
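These package names map directly onto import paths under the module root, as used in the quick-start program above:

```go
import (
	"trpc.group/trpc-go/trpc-agent-go/agent/llmagent" // LLM-backed agent.
	"trpc.group/trpc-go/trpc-agent-go/model"          // Model abstractions and messages.
	"trpc.group/trpc-go/trpc-agent-go/model/openai"   // OpenAI-compatible model client.
	"trpc.group/trpc-go/trpc-agent-go/runner"         // Execution flow and session wiring.
	"trpc.group/trpc-go/trpc-agent-go/tool/function"  // Plain functions as tools.
)
```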
For most applications you do not need to implement the agent.Agent
interface yourself. The framework already ships with several ready-to-use
agents that you can compose like Lego bricks:

| Agent | Purpose |
|---|---|
| `LLMAgent` | Wraps an LLM chat-completion model as an agent. |
| `ChainAgent` | Executes sub-agents sequentially. |
| `ParallelAgent` | Executes sub-agents concurrently and merges output. |
| `CycleAgent` | Loops over a planner + executor until stop signal. |
// 1. Create a base LLM agent.
base := llmagent.New(
	"assistant",
	llmagent.WithModel(openai.New("gpt-4o-mini")),
)

// 2. Create a second LLM agent with a different instruction.
translator := llmagent.New(
	"translator",
	llmagent.WithInstruction("Translate everything to French"),
	llmagent.WithModel(openai.New("gpt-3.5-turbo")),
)

// 3. Combine them in a chain.
pipeline := chainagent.New(
	"pipeline",
	chainagent.WithSubAgents([]agent.Agent{base, translator}),
)

// 4. Run through the runner for sessions & telemetry.
run := runner.NewRunner("demo-app", pipeline)
events, _ := run.Run(ctx, "user-1", "sess-1",
	model.NewUserMessage("Hello!"))
for ev := range events { /* ... */ }
The composition API lets you nest chains, cycles, or parallels to build complex workflows without low-level plumbing.
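For instance, a hypothetical workflow might fan work out to two analyst agents in parallel and then hand the merged output to a summarizer. The sketch below assumes marketAnalyst, riskAnalyst, and summarizer are LLMAgents built as shown above, and that parallelagent.WithSubAgents accepts a slice of agents just as chainagent.WithSubAgents does:

```go
// Sketch: nest a ParallelAgent inside a ChainAgent.
fanout := parallelagent.New("analysts",
	parallelagent.WithSubAgents([]agent.Agent{marketAnalyst, riskAnalyst}),
)

workflow := chainagent.New("report-workflow",
	chainagent.WithSubAgents([]agent.Agent{fanout, summarizer}),
)

// Run it like any other agent.
run := runner.NewRunner("report-app", workflow)
```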
We ❤️ contributions! Join our growing community of developers building the future of AI agents.
- 🐛 Report bugs or suggest features via Issues
- 📖 Improve documentation - help others learn faster
- 🔧 Submit PRs - bug fixes, new features, or examples
- 💡 Share your use cases - inspire others with your agent applications
# Fork & clone the repo
git clone https://github.com/YOUR_USERNAME/trpc-agent-go.git
cd trpc-agent-go
# Run tests to ensure everything works
go test ./...
go vet ./...
# Make your changes and submit a PR! 🎉
📋 Please read CONTRIBUTING.md for detailed guidelines and coding standards.
Special thanks to Tencent's business units including Tencent Yuanbao, Tencent Video, Tencent News, IMA, and QQ Music for their invaluable support and real-world validation. Production usage drives framework excellence! 🚀
Inspired by amazing frameworks like ADK, Agno, CrewAI, AutoGen, and many others. Standing on the shoulders of giants! 🙏
Licensed under the Apache 2.0 License - see LICENSE file for details.