Commit 78729be

feat: Implement multi-agent Chain of Thought system
- Add specialized agents (Planner, Research, Reasoning, Synthesis)
- Modify RAG and LocalRAG agents to use multi-agent CoT
- Add test file for multi-agent system
- Update README with new CoT documentation
- Make local model the default option
1 parent a106d83 commit 78729be

File tree: 5 files changed (+555, -258 lines)

agentic_rag/README.md

Lines changed: 63 additions & 25 deletions
@@ -190,50 +190,88 @@ python rag_agent.py --query "Can you explain the DaGAN Approach proposed in the
 
 ## 2. Chain of Thought (CoT) Support
 
-The system implements Chain of Thought prompting, allowing the LLMs to break down complex queries into steps and show their reasoning process. This feature can be activated in several ways:
+The system implements an advanced multi-agent Chain of Thought system, allowing complex queries to be broken down and processed through multiple specialized agents. This feature enhances the reasoning capabilities of both local and cloud-based models.
 
-### 1. Using the API
+### Multi-Agent System
 
+The CoT system consists of four specialized agents:
+
+1. **Planner Agent**: Breaks down complex queries into clear, manageable steps
+2. **Research Agent**: Gathers and analyzes relevant information from knowledge bases
+3. **Reasoning Agent**: Applies logical analysis to information and draws conclusions
+4. **Synthesis Agent**: Combines multiple pieces of information into a coherent response
+
+### Using CoT
+
+You can activate the multi-agent CoT system in several ways:
+
+1. **Command Line**:
+```bash
+# Using local Mistral model (default)
+python local_rag_agent.py --query "your query" --use-cot
+
+# Using OpenAI model
+python rag_agent.py --query "your query" --use-cot
+```
+
+2. **Testing the System**:
+```bash
+# Test with local model (default)
+python tests/test_new_cot.py
+
+# Test with OpenAI model
+python tests/test_new_cot.py --model openai
+```
+
+3. **API Endpoint**:
 ```http
 POST /query
 Content-Type: application/json
 
 {
-    "query": "your question here",
+    "query": "your query",
     "use_cot": true
 }
 ```
 
-### 2. Using Command Line
+### Example Output
 
-```bash
-# Using local Mistral model with CoT
-python local_rag_agent.py --query "your question" --use-cot
+When CoT is enabled, the system will show:
+- The initial plan for answering the query
+- Research findings for each step
+- Reasoning process and conclusions
+- Final synthesized answer
+- Sources used from the knowledge base
 
-# Using OpenAI with CoT
-python rag_agent.py --query "your question" --use-cot
+Example:
 ```
+Step 1: Planning
+- Break down the technical components
+- Identify key features
+- Analyze implementation details
 
-### 3. Programmatically
+Step 2: Research
+[Research findings for each step...]
 
-```python
-# Initialize agents with CoT enabled
-local_agent = LocalRAGAgent(vector_store, use_cot=True)
-openai_agent = RAGAgent(vector_store, openai_api_key, use_cot=True)
-```
+Step 3: Reasoning
+[Logical analysis and conclusions...]
 
-When CoT is enabled, the system will:
-1. Break down complex queries into logical steps
-2. Show the reasoning process for each step
-3. Use available context more effectively by explaining how it relates to each step
-4. Arrive at more reliable answers through structured thinking
+Final Answer:
+[Comprehensive response synthesized from all steps...]
+
+Sources used:
+- document.pdf (pages: 1, 2, 3)
+- implementation.py
+```
 
-This is particularly useful for:
-- Complex analytical questions
-- Multi-step reasoning problems
-- Questions requiring detailed explanations
-- Queries that need careful consideration of multiple pieces of context
+### Benefits
 
+The multi-agent CoT approach offers several advantages:
+- More structured and thorough analysis of complex queries
+- Better integration with knowledge bases
+- Transparent reasoning process
+- Improved answer quality through specialized agents
+- Works with both local and cloud-based models
 
 ## Annex: API Endpoints
 
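The plan → research → reason → synthesize flow documented above can be sketched as a plain loop. This is a minimal illustration with a stub model; `StubLLM` and `run_cot` are hypothetical names for illustration, not part of this repository:

```python
# Sketch of the multi-agent CoT loop described above: plan, research each
# step, reason over the findings, then synthesize a final answer.
# StubLLM and run_cot are illustrative stand-ins, not repo names.

class StubLLM:
    """Minimal .invoke-style model that echoes a canned reply."""
    def invoke(self, prompt: str) -> str:
        return f"[model reply to: {prompt[:40]}...]"

def run_cot(llm, query: str) -> dict:
    # 1. Planner: break the query into steps (a fixed toy plan here)
    steps = ["Break down the technical components", "Identify key features"]
    # 2/3. Research and reasoning, one pass per planned step
    reasoning = []
    for step in steps:
        findings = llm.invoke(f"Research: {step}")
        reasoning.append(llm.invoke(f"Reason about '{step}' given {findings}"))
    # 4. Synthesizer: combine the per-step conclusions
    answer = llm.invoke(f"Synthesize answer to '{query}' from {len(reasoning)} steps")
    return {"plan": steps, "reasoning": reasoning, "answer": answer}

result = run_cot(StubLLM(), "Explain the DaGAN approach")
print(result["answer"])
```

Swapping `StubLLM` for a real chat model yields the staged output shown in the README example above.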

agentic_rag/agents/agent_factory.py

Lines changed: 160 additions & 0 deletions
@@ -0,0 +1,160 @@

```python
from typing import List, Dict, Any
from pydantic import BaseModel, Field
from langchain_openai import ChatOpenAI
from langchain.prompts import ChatPromptTemplate

class Agent(BaseModel):
    """Base agent class with common properties"""
    name: str
    role: str
    description: str

    class Config:
        # Subclasses attach runtime attributes (llm, vector_store) that are
        # not declared fields; pydantic rejects such assignment by default.
        extra = "allow"

class PlannerAgent(Agent):
    """Agent responsible for breaking down problems and planning steps"""
    def __init__(self, llm):
        super().__init__(
            name="Planner",
            role="Strategic Planner",
            description="Breaks down complex problems into manageable steps"
        )
        self.llm = llm

    def plan(self, query: str, context: List[Dict[str, Any]] = None) -> str:
        if context:
            template = """You are a strategic planning agent. Your role is to break down complex problems into clear, manageable steps.

Given the following context and query, create a step-by-step plan to answer the question.
Each step should be clear and actionable.

Context:
{context}

Query: {query}

Plan:"""
            context_str = "\n\n".join([f"Context {i+1}:\n{item['content']}" for i, item in enumerate(context)])
        else:
            template = """You are a strategic planning agent. Your role is to break down complex problems into clear, manageable steps.

Given the following query, create a step-by-step plan to answer the question.
Each step should be clear and actionable.

Query: {query}

Plan:"""
            context_str = ""

        prompt = ChatPromptTemplate.from_template(template)
        messages = prompt.format_messages(query=query, context=context_str)
        response = self.llm.invoke(messages)
        return response.content

class ResearchAgent(Agent):
    """Agent responsible for gathering and analyzing information"""
    def __init__(self, llm, vector_store):
        super().__init__(
            name="Researcher",
            role="Information Gatherer",
            description="Gathers and analyzes relevant information from knowledge bases"
        )
        self.llm = llm
        self.vector_store = vector_store

    def research(self, query: str, step: str) -> List[Dict[str, Any]]:
        # Query all collections
        pdf_results = self.vector_store.query_pdf_collection(query)
        repo_results = self.vector_store.query_repo_collection(query)

        # Combine results
        all_results = pdf_results + repo_results

        if not all_results:
            return []

        # Have the LLM analyze and summarize findings
        template = """You are a research agent. Your role is to analyze information and extract relevant details.

Given the following research step and context, summarize the key findings that are relevant to this step.

Step: {step}

Context:
{context}

Key Findings:"""

        context_str = "\n\n".join([f"Source {i+1}:\n{item['content']}" for i, item in enumerate(all_results)])
        prompt = ChatPromptTemplate.from_template(template)
        messages = prompt.format_messages(step=step, context=context_str)
        response = self.llm.invoke(messages)

        return [{"content": response.content, "metadata": {"source": "Research Summary"}}]

class ReasoningAgent(Agent):
    """Agent responsible for logical reasoning and analysis"""
    def __init__(self, llm):
        super().__init__(
            name="Reasoner",
            role="Logic and Analysis",
            description="Applies logical reasoning to information and draws conclusions"
        )
        self.llm = llm

    def reason(self, query: str, step: str, context: List[Dict[str, Any]]) -> str:
        template = """You are a reasoning agent. Your role is to apply logical analysis to information and draw conclusions.

Given the following step, context, and query, apply logical reasoning to reach a conclusion.
Show your reasoning process clearly.

Step: {step}

Context:
{context}

Query: {query}

Reasoning:"""

        context_str = "\n\n".join([f"Context {i+1}:\n{item['content']}" for i, item in enumerate(context)])
        prompt = ChatPromptTemplate.from_template(template)
        messages = prompt.format_messages(step=step, query=query, context=context_str)
        response = self.llm.invoke(messages)
        return response.content

class SynthesisAgent(Agent):
    """Agent responsible for combining information and generating the final response"""
    def __init__(self, llm):
        super().__init__(
            name="Synthesizer",
            role="Information Synthesizer",
            description="Combines multiple pieces of information into a coherent response"
        )
        self.llm = llm

    def synthesize(self, query: str, reasoning_steps: List[str]) -> str:
        template = """You are a synthesis agent. Your role is to combine multiple pieces of information into a clear, coherent response.

Given the following query and reasoning steps, create a final comprehensive answer.
The answer should be well-structured and incorporate the key points from each step.

Query: {query}

Reasoning Steps:
{steps}

Final Answer:"""

        steps_str = "\n\n".join([f"Step {i+1}:\n{step}" for i, step in enumerate(reasoning_steps)])
        prompt = ChatPromptTemplate.from_template(template)
        messages = prompt.format_messages(query=query, steps=steps_str)
        response = self.llm.invoke(messages)
        return response.content

def create_agents(llm, vector_store=None):
    """Create and return the set of specialized agents"""
    return {
        "planner": PlannerAgent(llm),
        "researcher": ResearchAgent(llm, vector_store) if vector_store else None,
        "reasoner": ReasoningAgent(llm),
        "synthesizer": SynthesisAgent(llm)
    }
```
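Every agent in the file flattens its retrieved chunks into a numbered context string before prompting. A standalone illustration of that join (the two result dicts are made-up sample data):

```python
# Illustrates the "Source {i+1}:\n{content}" join used by ResearchAgent;
# the two result dicts below are made-up sample data, not repo content.
results = [
    {"content": "DaGAN uses depth-aware attention.", "metadata": {"source": "paper.pdf"}},
    {"content": "Training uses paired video frames.", "metadata": {"source": "repo"}},
]
context_str = "\n\n".join(
    f"Source {i+1}:\n{item['content']}" for i, item in enumerate(results)
)
print(context_str)  # two numbered sources separated by a blank line
```

The same pattern (with `Context {i+1}` or `Step {i+1}` labels) feeds the planner, reasoner, and synthesizer prompts.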
