This project demonstrates a collaborative multi-agent system built with the Agent2Agent SDK (A2A) and OpenAI, where a top-level Auditor agent coordinates the workflow to verify facts. The Critic agent gathers evidence via live internet searches using DuckDuckGo through the Model Context Protocol (MCP), while the Reviser agent analyzes and refines the conclusion using internal reasoning alone. The system showcases how agents with distinct roles and tools can collaborate under orchestration.
Tip
✨ No configuration needed — run it with a single command.
- Docker Desktop 4.43.0+ or Docker Engine installed.
- A laptop or workstation with a GPU (e.g., a MacBook) for running open models locally. If you don't have a GPU, you can alternatively use Docker Offload.
- If you're using Docker Engine on Linux or Docker Desktop on Windows, ensure that the Docker Model Runner requirements are met (specifically that GPU support is enabled) and the necessary drivers are installed.
- If you're using Docker Engine on Linux, ensure you have Docker Compose 2.38.1 or later installed.
- An OpenAI API Key 🔑.
Create a `secret.openai-api-key` file containing your OpenAI API key:

```
sk-...
```
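On macOS or Linux, one way to create this file from the shell (substitute your real key for the placeholder):

```shell
# Write the API key (placeholder shown) into the secret file read by Compose.
printf 'sk-...' > secret.openai-api-key
```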
Then run:
```sh
docker compose up --build
```

Everything runs from the container. Open http://localhost:8080 in your browser and chat with the agents.
By default, this project uses OpenAI to handle LLM inference. If you'd prefer to use a local LLM instead, run:
```sh
docker compose -f compose.dmr.yaml up
```

Using Docker Offload with GPU support, you can run the same demo with a larger model that takes advantage of a more powerful GPU on the remote instance:
```sh
docker compose -f compose.dmr.yaml -f compose.offload.yaml up --build
```

This system performs multi-agent fact verification, coordinated by an Auditor:
- 🧑⚖️ Auditor:
  - Orchestrates the process from input to verdict.
  - Delegates tasks to the Critic and Reviser agents.
- 🧠 Critic:
  - Uses DuckDuckGo via MCP to gather real-time external evidence.
- ✍️ Reviser:
  - Refines and verifies the Critic's conclusions using only reasoning.
🧠 In the local setup, all agents use Docker Model Runner for LLM-based inference.
Example question:
“Is the universe infinite?”
| File/Folder | Purpose |
|---|---|
| `compose.yaml` | Launches the app and the MCP DuckDuckGo gateway |
| `Dockerfile` | Builds the agent container |
| `src/AgentKit` | Agent runtime |
| `agents/*.yaml` | Agent definitions |
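The wiring in `compose.yaml` can be sketched roughly as follows. This is an illustrative outline only: the service names, gateway image, and command flags are assumptions, not copied from the actual file.

```yaml
# Illustrative sketch -- see the real compose.yaml for the actual wiring.
services:
  app:                          # agent runtime (Auditor, Critic, Reviser)
    build: .
    ports:
      - "8080:8080"             # UI served at http://localhost:8080
    secrets:
      - openai-api-key
    depends_on:
      - mcp-gateway

  mcp-gateway:                  # MCP gateway exposing DuckDuckGo search as a tool
    image: example/mcp-gateway  # hypothetical image name
    command: ["--servers", "duckduckgo"]

secrets:
  openai-api-key:
    file: ./secret.openai-api-key
```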
```mermaid
flowchart TD
    input[📝 User Question] --> auditor[🧑⚖️ Auditor Sequential Agent]
    auditor --> critic[🧠 Critic Agent]
    critic -->|uses| mcp[MCP Gateway<br/>DuckDuckGo Search]
    mcp --> duck[🌐 DuckDuckGo API]
    duck --> mcp --> critic
    critic --> reviser[(✍️ Reviser Agent<br/>No tools)]
    auditor --> reviser
    reviser --> auditor
    auditor --> result[✅ Final Answer]
    critic -->|inference| model[(🧠 Docker Model Runner<br/>LLM)]
    reviser -->|inference| model
    subgraph Infra
        mcp
        model
    end
```
- The Auditor is a Sequential Agent: it coordinates the Critic and Reviser agents to verify user-provided claims.
- The Critic agent performs live web searches through DuckDuckGo using an MCP-compatible gateway.
- The Reviser agent refines the Critic’s conclusions using internal reasoning alone.
- All agents run inference through a Docker-hosted Model Runner, enabling fully containerized LLM reasoning.
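An agent definition in `agents/*.yaml` might look roughly like the sketch below. The field names and values here are hypothetical, chosen to illustrate the role split described above; consult the actual files for the real schema.

```yaml
# Hypothetical agent definition -- field names are illustrative, not the real schema.
name: critic
model: ai/gemma3                # served by Docker Model Runner (assumed model name)
instruction: |
  Gather evidence for or against the user's claim using web search,
  then state a preliminary conclusion with sources.
tools:
  - mcp: duckduckgo             # search tool exposed by the MCP gateway
```

The Reviser's definition would look the same minus the `tools` section, since it reasons without external input.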
| Agent | Tools Used | Role Description |
|---|---|---|
| Auditor | ❌ None | Coordinates the entire fact-checking workflow and delivers the final answer. |
| Critic | ✅ DuckDuckGo via MCP | Gathers evidence to support or refute the claim. |
| Reviser | ❌ None | Refines and finalizes the answer without external input. |
To stop and remove containers and volumes:

```sh
docker compose down -v
```