# Add AG2 (formerly AutoGen) multi-agent examples #15
**VasiliyRad** wants to merge 4 commits into `fetchai:main` from `VasiliyRad:vasiliyr/03112026`.
**`ag2-agents/payment-approval/.env.example`** (new file)

```
OPENAI_API_KEY=your-openai-api-key-here

# Optional
LLM_MODEL=gpt-4o-mini
OPENAI_BASE_URL=https://api.openai.com/v1
```
**`ag2-agents/payment-approval/README.md`** (new file)

````markdown
# AG2 Two-Agent Payment Approval

Demonstrates AG2's `human_input_mode="ALWAYS"` pattern as an approval gate
before triggering a Skyfire payment — the first example in this repo requiring
explicit user confirmation before a financial action.

Two agents collaborate:
- **researcher** — investigates the recipient and produces a risk assessment
- **payment_executor** — presents the assessment, pauses for human confirmation,
  then executes or aborts based on the response

## Key AG2 Features

- **`human_input_mode="ALWAYS"`** — executor pauses before every response; human
  types "yes" to proceed or "no" to abort — no custom routing logic needed
- **Two-agent `initiate_chat`** — researcher hands off to executor via the
  natural conversation flow; the shared message history carries the assessment

## Quick Start

```bash
cd ag2-agents/payment-approval
pip install -r requirements.txt
cp .env.example .env
python main.py
```
````
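Both agents stop on a sentinel string in the last message, which AG2 checks via the `is_termination_msg` callback. The predicate is just a function over a message dict; a minimal, framework-free sketch of the same pattern (the standalone function name here is illustrative, not from the PR):

```python
# Sentinel-based termination check, as wired into AG2 via is_termination_msg.
# The (msg.get("content") or "") idiom guards against messages whose content is None.
def is_termination_msg(msg: dict, sentinel: str = "TERMINATE") -> bool:
    return sentinel in (msg.get("content") or "")

print(is_termination_msg({"content": "Payment aborted. TERMINATE"}))  # True
print(is_termination_msg({"content": None}))                          # False
print(is_termination_msg({"content": "Risk: low. ASSESSMENT COMPLETE"},
                         sentinel="ASSESSMENT COMPLETE"))             # True
```

The same predicate shape serves both agents; only the sentinel differs ("ASSESSMENT COMPLETE" for the researcher, "TERMINATE" for the executor).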
**`ag2-agents/payment-approval/main.py`** (new file)

```python
"""
AG2 two-agent payment approval with human-in-the-loop gate.

researcher — investigates the recipient and produces a risk assessment
executor — presents the assessment and waits for explicit human confirmation
           before executing the Skyfire payment

human_input_mode="ALWAYS" on executor is the approval gate: the agent
pauses before every response so the human can type "yes" to proceed or
"no" to abort. No custom routing logic required.
"""
import os

from dotenv import load_dotenv
from autogen import ConversableAgent

load_dotenv()

llm_config = {
    "config_list": [{
        "model": os.getenv("LLM_MODEL", "gpt-4o-mini"),
        "api_key": os.environ["OPENAI_API_KEY"],
        "base_url": os.getenv("OPENAI_BASE_URL", "https://api.openai.com/v1"),
    }],
    "temperature": 0.2,
    "cache_seed": None,
}

researcher = ConversableAgent(
    name="researcher",
    system_message=(
        "You are a payment risk analyst. Investigate the payment recipient using available "
        "tools: check their Fetch.ai address history, reputation, and any known flags. "
        "Produce a concise risk assessment with a clear recommendation (proceed / do not proceed). "
        "End your assessment with ASSESSMENT COMPLETE."
    ),
    llm_config=llm_config,
    is_termination_msg=lambda m: "ASSESSMENT COMPLETE" in (m.get("content") or ""),
)

executor = ConversableAgent(
    name="payment_executor",
    system_message=(
        "You handle payment execution. Present the researcher's risk assessment clearly, "
        "state the exact payment details (recipient, amount, reason), then ask the human "
        "to confirm. If the human approves, call the skyfire_send tool to execute the payment. "
        "If the human declines, acknowledge and terminate. End with TERMINATE."
    ),
    llm_config=llm_config,
    human_input_mode="ALWAYS",  # pauses before every response — the human types yes/no
    is_termination_msg=lambda m: "TERMINATE" in (m.get("content") or ""),
)


def run_payment_approval(recipient: str, amount: float, reason: str) -> None:
    researcher.initiate_chat(
        executor,
        message=(
            f"Payment request: {amount} USDC to {recipient} — reason: '{reason}'. "
            "Investigate the recipient and produce a risk assessment."
        ),
        max_turns=6,
    )


if __name__ == "__main__":
    run_payment_approval(
        recipient="alice.fetch",
        amount=50.0,
        reason="research report delivery",
    )
```
**`ag2-agents/payment-approval/requirements.txt`** (new file)

```
ag2[openai]>=0.11.0
python-dotenv>=1.0.0
```
**Payment-approval test scaffolding** (likely `tests/conftest.py`, judging by its docstring; new file)

```python
"""
Ensure payment-approval/ is on sys.path so 'import main' works in tests.
"""
import sys
import os

parent = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
if parent not in sys.path:
    sys.path.insert(0, parent)
```
**Payment-approval unit tests** (new file)

```python
import os

os.environ.setdefault("OPENAI_API_KEY", "test-key")


def test_agents_instantiate():
    """Verify agent setup without initiating a chat."""
    import main as m
    assert m.researcher.name == "researcher"
    assert m.executor.name == "payment_executor"


def test_executor_human_input_mode():
    """Executor must have human_input_mode=ALWAYS — the approval gate."""
    import main as m
    assert m.executor.human_input_mode == "ALWAYS"


def test_researcher_termination_condition():
    import main as m
    assert m.researcher._is_termination_msg({"content": "Risk: low. ASSESSMENT COMPLETE"}) is True
    assert m.researcher._is_termination_msg({"content": "Still investigating..."}) is False


def test_executor_termination_condition():
    import main as m
    assert m.executor._is_termination_msg({"content": "Payment aborted. TERMINATE"}) is True
    assert m.executor._is_termination_msg({"content": "Please confirm."}) is False
```
**`ag2-agents/research-synthesis-team/.env.example`** (new file)

```
OPENAI_API_KEY=your-openai-api-key-here
AGENTVERSE_API_KEY=your-agentverse-api-key-here

# Optional
LLM_MODEL=gpt-4o-mini
OPENAI_BASE_URL=https://api.openai.com/v1
AGENT_PORT=8008
AGENTVERSE_URL=https://agentverse.ai
MCP_SERVER_URL=
```
**`ag2-agents/research-synthesis-team/Dockerfile`** (new file)

```dockerfile
FROM python:3.11-slim

WORKDIR /app

COPY . .

RUN apt-get update && apt-get install -y gcc \
    && pip install --no-cache-dir --upgrade pip \
    && pip install --no-cache-dir -r requirements.txt \
    && apt-get clean && rm -rf /var/lib/apt/lists/*

ENV PYTHONUNBUFFERED=1
EXPOSE 8008
CMD ["python", "main.py"]
```
**`ag2-agents/research-synthesis-team/README.md`** (new file)

````markdown
# AG2 Research Synthesis Team

A multi-agent research pipeline using [AG2](https://github.com/ag2ai/ag2) (formerly AutoGen)
integrated with the Fetch.ai uAgents ecosystem via the A2A protocol.

## Architecture

Four specialists collaborate under GroupChat with LLM-driven speaker selection, wrapped as
an A2A executor and exposed as a discoverable agent on Agentverse.

```
GroupChat (AG2)
├── web_researcher — searches and gathers information
├── financial_analyst — analyses market and economic aspects
├── tech_analyst — evaluates technical feasibility
└── synthesizer — produces the final report
        ↓
SingleA2AAdapter (Fetch.ai uagents-adapter)
        ↓
Agentverse (discoverable at port 8008)
```

## Quick Start

```bash
pip install -r requirements.txt
cp .env.example .env  # add OPENAI_API_KEY and AGENTVERSE_API_KEY
python main.py
```

## AG2 Features Demonstrated

- **`GroupChat` with `speaker_selection_method="auto"`** — LLM-driven dynamic speaker selection
- **Native MCP client** — optional connection to Fetch.ai's MCP gateway for web search tools
- **Pattern B (A2A Outbound)** — same integration pattern as LangChain and Google ADK examples
````
**`ag2-agents/research-synthesis-team/agent_executor.py`** (new file)

```python
"""
Wraps the AG2 GroupChat workflow as a LangChain-compatible AgentExecutor
for use with SingleA2AAdapter (Pattern B — matches LangChain/ADK examples).
"""
import asyncio

from autogen import LLMConfig
from workflow import run_research


class AG2ResearchExecutor:
    """Drop-in AgentExecutor interface for SingleA2AAdapter."""

    def __init__(self, llm_config: LLMConfig, mcp_url: str | None = None):
        self.llm_config = llm_config
        self.mcp_url = mcp_url

    def invoke(self, inputs: dict) -> dict:
        topic = inputs.get("input", "")
        result = asyncio.run(run_research(topic, self.llm_config, self.mcp_url))
        return {"output": result}
```
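`invoke` is the whole contract the adapter needs: a dict in, a dict out, with `asyncio.run` bridging the adapter's synchronous call into the async workflow. A self-contained sketch of that contract, using a stand-in coroutine since `workflow.py` is not part of this diff (all names below are illustrative):

```python
import asyncio


async def run_research(topic: str) -> str:
    # Stand-in for the real AG2 GroupChat workflow (workflow.py, not shown in this PR)
    return f"report on {topic}"


class ResearchExecutor:
    """Minimal invoke() shape expected by the adapter: dict in, dict out."""

    def invoke(self, inputs: dict) -> dict:
        topic = inputs.get("input", "")
        # asyncio.run creates an event loop per call, so invoke itself stays synchronous
        return {"output": asyncio.run(run_research(topic))}


print(ResearchExecutor().invoke({"input": "quantum computing"}))
# {'output': 'report on quantum computing'}
```

One per-call event loop is fine for a request/response adapter; it would not suit a caller that already runs inside an event loop, since `asyncio.run` refuses to nest.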
**`ag2-agents/research-synthesis-team/agents.py`** (new file)

```python
"""
AG2 (formerly AutoGen) research synthesis team.
Four specialists collaborate under GroupChat with LLM-driven speaker selection.
"""
from autogen import AssistantAgent, LLMConfig


def build_agents(llm_config: LLMConfig) -> list[AssistantAgent]:
    web_researcher = AssistantAgent(
        name="web_researcher",
        system_message=(
            "You are a research specialist. Search and gather comprehensive information "
            "on the assigned topic using available tools. Cite sources clearly."
        ),
        llm_config=llm_config,
    )
    financial_analyst = AssistantAgent(
        name="financial_analyst",
        system_message=(
            "You are a financial analyst. Analyse market data, trends, and economic "
            "implications of the research topic. Be quantitative when possible."
        ),
        llm_config=llm_config,
    )
    tech_analyst = AssistantAgent(
        name="tech_analyst",
        system_message=(
            "You are a technology analyst. Evaluate technical aspects, feasibility, "
            "and innovation potential of the research topic."
        ),
        llm_config=llm_config,
    )
    synthesizer = AssistantAgent(
        name="synthesizer",
        system_message=(
            "You are a synthesis expert. Once all specialists have contributed, "
            "produce a final structured report combining all perspectives. "
            "Format as Markdown with sections: Summary, Financial Analysis, "
            "Technical Analysis, Conclusions. End with TERMINATE."
        ),
        llm_config=llm_config,
        is_termination_msg=lambda m: "TERMINATE" in (m.get("content") or ""),
    )
    return [web_researcher, financial_analyst, tech_analyst, synthesizer]
```
**`ag2-agents/research-synthesis-team/main.py`** (new file)

```python
"""
Fetch.ai uAgent exposing the AG2 research team via A2A protocol (Pattern B).
Discoverable on Agentverse; callable from ASI:One or other uAgents.
"""
import os

from dotenv import load_dotenv
from uagents_adapter import SingleA2AAdapter
from autogen import LLMConfig

from agent_executor import AG2ResearchExecutor

load_dotenv()

llm_config = LLMConfig(
    config_list=[{
        "model": os.getenv("LLM_MODEL", "gpt-4o-mini"),
        "api_key": os.getenv("OPENAI_API_KEY", ""),
        "base_url": os.getenv("OPENAI_BASE_URL", "https://api.openai.com/v1"),
    }],
    temperature=0.3,
    cache_seed=None,
)

executor = AG2ResearchExecutor(
    llm_config=llm_config,
    mcp_url=os.getenv("MCP_SERVER_URL"),  # optional: Fetch.ai MCP gateway
)

adapter = SingleA2AAdapter(
    agent_executor=executor,
    name="AG2 Research Synthesis Team",
    description=(
        "Multi-agent research team using AG2 (formerly AutoGen). "
        "Four specialists (web researcher, financial analyst, tech analyst, synthesizer) "
        "collaborate to produce comprehensive research reports on any topic."
    ),
    port=int(os.getenv("AGENT_PORT", "8008")),
    agentverse_url=os.getenv("AGENTVERSE_URL", "https://agentverse.ai"),
    mailbox_api_key=os.getenv("AGENTVERSE_API_KEY", ""),
)

if __name__ == "__main__":
    adapter.run()
```
**`ag2-agents/research-synthesis-team/requirements.txt`** (new file)

```
ag2[openai,mcp]>=0.11.0
mcp>=1.0.0
uagents>=0.20.0
uagents-adapter>=0.4.0
python-dotenv>=1.0.0
```
**Research-synthesis-team test scaffolding** (likely `tests/conftest.py`, judging by its docstring; new file)

```python
"""
Ensure the project root (research-synthesis-team/) is on sys.path so that
local modules (agents.py, agent_executor.py, workflow.py) are importable.
Also evict the installed 'agents' package (OpenAI Agents SDK) that would
otherwise shadow the local agents.py.
"""
import sys
import os

parent = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
if parent not in sys.path:
    sys.path.insert(0, parent)

# Evict the installed 'agents' package from sys.modules cache so
# 'from agents import build_agents' finds our local agents.py instead.
for key in list(sys.modules.keys()):
    if key == "agents" or key.startswith("agents."):
        mod = sys.modules[key]
        if hasattr(mod, "__file__") and mod.__file__ and parent not in (mod.__file__ or ""):
            del sys.modules[key]
```
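The eviction loop matters because pytest may import the installed `agents` distribution before this conftest runs, and a cached `sys.modules` entry would win over the local `agents.py` regardless of `sys.path` order. A stdlib-only illustration of the mechanism, using a fabricated cached module and a hypothetical project path:

```python
import sys
import types

# Simulate an installed third-party package already cached under the contested name
fake = types.ModuleType("agents")
fake.__file__ = "/site-packages/agents/__init__.py"  # hypothetical install path
sys.modules["agents"] = fake

project_root = "/home/user/research-synthesis-team"  # hypothetical project dir

# Same eviction logic as the conftest: drop cached entries whose source file
# does not live under the project root, so the next import resolves locally.
for key in list(sys.modules.keys()):
    if key == "agents" or key.startswith("agents."):
        mod = sys.modules[key]
        if getattr(mod, "__file__", None) and project_root not in mod.__file__:
            del sys.modules[key]

print("agents" in sys.modules)  # False
```

After eviction, `from agents import build_agents` triggers a fresh import that follows `sys.path`, where the project root was inserted first.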
**`ag2-agents/research-synthesis-team/tests/test_agent_executor.py`** (new file, 14 additions)

```python
from unittest.mock import patch

from autogen import LLMConfig

TEST_LLM = LLMConfig(
    {"model": "gpt-4o-mini", "api_key": "test", "base_url": "https://api.openai.com/v1"}
)


def test_executor_invoke_calls_run_research():
    from agent_executor import AG2ResearchExecutor
    executor = AG2ResearchExecutor(llm_config=TEST_LLM)
    with patch("agent_executor.asyncio.run", return_value="Mock research report") as mock_run:
        result = executor.invoke({"input": "quantum computing"})
    mock_run.assert_called_once()
    assert result == {"output": "Mock research report"}
```
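Patching `asyncio.run` is what keeps this test offline: the workflow coroutine is created but never awaited, so no LLM call is made. The same technique in a self-contained form (function and module names here are illustrative, not from the PR):

```python
import asyncio
from unittest.mock import patch


async def expensive_workflow(topic: str) -> str:
    # Would normally drive a multi-agent chat and hit the LLM API
    return f"real report on {topic}"


def invoke(inputs: dict) -> dict:
    return {"output": asyncio.run(expensive_workflow(inputs.get("input", "")))}


# Patch asyncio.run itself: expensive_workflow() still constructs a coroutine
# object, but it is never executed (Python emits a "never awaited" warning).
with patch("asyncio.run", return_value="Mock research report") as mock_run:
    result = invoke({"input": "quantum computing"})

mock_run.assert_called_once()
print(result)  # {'output': 'Mock research report'}
```

Patching at the `asyncio.run` seam rather than inside the workflow keeps the test agnostic to how the coroutine is implemented, which is the choice the PR's test makes with `patch("agent_executor.asyncio.run", ...)`.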
**Research-synthesis-team agent tests** (likely `tests/test_agents.py`; new file)

```python
"""Unit tests — no LLM calls, no network, no uAgents runtime."""
import os

os.environ.setdefault("OPENAI_API_KEY", "test-key")

from autogen import LLMConfig

TEST_LLM_CONFIG = LLMConfig(
    {"model": "gpt-4o-mini", "api_key": "test", "base_url": "https://api.openai.com/v1"}
)


def test_agents_instantiate():
    from agents import build_agents
    agents = build_agents(TEST_LLM_CONFIG)
    assert len(agents) == 4
    names = [a.name for a in agents]
    assert "web_researcher" in names
    assert "financial_analyst" in names
    assert "tech_analyst" in names
    assert "synthesizer" in names


def test_synthesizer_termination():
    from agents import build_agents
    agents = build_agents(TEST_LLM_CONFIG)
    synthesizer = next(a for a in agents if a.name == "synthesizer")
    assert synthesizer._is_termination_msg({"content": "Report done. TERMINATE"}) is True
    assert synthesizer._is_termination_msg({"content": "Still analysing..."}) is False


def test_executor_instantiates():
    from autogen import UserProxyAgent
    executor = UserProxyAgent(
        name="executor",
        human_input_mode="NEVER",
        code_execution_config=False,
        is_termination_msg=lambda m: "TERMINATE" in (m.get("content") or ""),
    )
    assert executor.name == "executor"
```