A lightweight Python package for a config-driven multi-agent network: one Orchestrator plans and calls specialist Agents; you run two commands—start the network and query it.
| Command | What it does |
|---|---|
| `python scripts/startup.py` | Starts the orchestrator + all agents (ports from config). Logs query, plan, and each agent call in the terminal. |
| `python scripts/query_cli.py "Your question"` | Sends a query, prints step-by-step agent iteration and the final answer. Use `--trace` to see URLs and request/response bodies. |
One-time setup: create a Postgres DB, set `.env`, then run `python scripts/migrate.py` once.
```shell
# 1. Install
cd multi-agent-langchain
python3 -m venv venv && source venv/bin/activate
pip install -e .

# 2. Config
cp config/env/.env.example config/env/.env
# Set POSTGRES_APP_URL, OPENAI_API_KEY (and optionally CHROMA_PATH, POSTGRES_* for tools)

# 3. DB (once)
# Create a Postgres database, then:
PYTHONPATH=. python scripts/migrate.py

# 4. Run
# Terminal 1 – start network
PYTHONPATH=. python scripts/startup.py

# Terminal 2 – send a query
PYTHONPATH=. python scripts/query_cli.py "What are the safety guidelines for product X?"
```

You’ll see the query, the plan, each `→ agent` / `← agent` line, and the final answer in the CLI and in the startup terminal logs.
- Orchestrator (FastAPI): receives a query → plans steps (LLM) → calls agents over HTTP → persists requests/plans/step_results in Postgres → synthesizes the final answer.
- Agents (FastAPI, one process per agent): LangChain agents with a system prompt, guardrails, and tools (e.g. `query_facts`, `search_docs`). Ports and config come from a domain JSON file.
- Config: one JSON per domain (orchestrator + agents + data_sources) and one `.env`. No code changes for new use cases; edit config only.
Detailed flow of how a query becomes a final answer. Arrows show direction and content of each call.
Summary
| Step | From → To | Message |
|---|---|---|
| ① | User/CLI → Orchestrator | POST /query with { query } |
| ② | Orchestrator → Agent (per step) | POST /invoke with { task, context } (context = original query + prior step results) |
| ③ | Agent → Orchestrator | HTTP 200 with { result, status, latency_ms } |
| ④ | Orchestrator → User/CLI | Response with { request_id, status, final_answer } |
Agents run in separate processes (one per port). The orchestrator calls them over HTTP in the order of the plan; each agent may use its tools (Postgres, Chroma) before returning. The orchestrator then synthesizes the final answer from all step results and returns it to the client.
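The sequential execution above can be sketched as follows. This is a simplified sketch, not the actual executor: `run_plan` and the `call_agent` callable (which would stand in for the HTTP `POST /invoke` to each agent) are hypothetical names for illustration.

```python
def run_plan(query, plan, call_agent):
    """Execute plan steps in order, feeding prior results into each step's context.

    `plan` is a list of {"agent": ..., "task": ...} dicts; `call_agent(agent, task,
    context)` performs the HTTP call to that agent's /invoke endpoint.
    """
    step_results = []
    for step in plan:
        # Context = original query plus everything produced by earlier steps.
        context = {"query": query, "prior_results": list(step_results)}
        response = call_agent(step["agent"], step["task"], context)
        step_results.append({"agent": step["agent"], "result": response["result"]})
    return step_results
```

The orchestrator would then hand `step_results` to the LLM for final-answer synthesis.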
```
multi-agent-langchain/
├── config/domains/       # Domain JSON (e.g. manufacturing.json)
├── config/env/           # .env (secrets)
├── src/
│   ├── core/             # Config loader, contracts
│   ├── data_access/      # Postgres + Chroma clients
│   ├── tools/            # query_facts, search_docs
│   ├── agent/            # Agent FastAPI (POST /invoke)
│   ├── orchestrator/     # Planner, executor, reporter, FastAPI (POST /query)
│   └── gateway/          # Optional reverse proxy
├── migrations/versions/  # One SQL file: app.requests, app.plans, app.step_results
├── scripts/
│   ├── startup.py        # Start orchestrator + agents
│   ├── query_cli.py      # Send query, print steps + answer
│   └── migrate.py        # Run DB migration (once)
├── pyproject.toml
└── README.md
```
| Command | Description |
|---|---|
| `PYTHONPATH=. python scripts/migrate.py` | Run the DB migration once (needs `POSTGRES_APP_URL`). |
| `PYTHONPATH=. python scripts/startup.py` | Start orchestrator + agents. Flags: `--no-kill`, `--background`, `--list-ports`, `--config <path>`. |
| `PYTHONPATH=. python scripts/query_cli.py "question"` | Send a query; prints `request_id`, steps, and the final answer. |
| `PYTHONPATH=. python scripts/query_cli.py "question" --trace` | Same, plus the full URL and request/response for each HTTP call. |
Run these from the project root; or use `pip install -e .` and omit `PYTHONPATH=.`.
- Domain JSON (`config/domains/<id>.json`): `domain_id`, `orchestrator` (name, port, system_prompt, guardrails, tool_names), `agents[]`, `data_sources[]`, `env_file_path`.
- `.env` (path in JSON): `POSTGRES_APP_URL` (required), `OPENAI_API_KEY` (required), `CHROMA_PATH` and `POSTGRES_*` for tools.
- Orchestrator: `POST /query` → `{ "query": "..." }` → `{ "request_id", "status", "final_answer", "error"? }`. `GET /health`.
- Agent: `POST /invoke` → `{ "task", "context"? }` → `{ "result", "status", "latency_ms" }`. `GET /health`.
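A minimal client for the orchestrator contract might look like this; a sketch only, using the stdlib `urllib` (the base URL and port are assumptions here — use the orchestrator port from your domain JSON):

```python
import json
import urllib.request

ORCHESTRATOR_URL = "http://localhost:8000"  # assumption: port comes from your domain JSON


def build_payload(query: str) -> bytes:
    """Serialize the POST /query request body per the contract above."""
    return json.dumps({"query": query}).encode()


def ask(query: str) -> dict:
    """Send a query to the orchestrator and return the parsed response.

    Per the contract, the response carries request_id, status, final_answer
    (and error on failure).
    """
    req = urllib.request.Request(
        f"{ORCHESTRATOR_URL}/query",
        data=build_payload(query),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

In practice `scripts/query_cli.py` does this for you; the sketch just shows the wire format.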
To support a new domain (e.g. HR, support, internal docs):
- Add a domain config
  Create `config/domains/<domain_id>.json` (e.g. `hr.json`). Copy the structure from `config/domains/manufacturing.json`:
  - `domain_id`, `domain_name`, `env_file_path`
  - `orchestrator`: `name`, `port`, `system_prompt`, `guardrails`, `tool_names` (the orchestrator usually has `tool_names: []`)
  - `agents`: list of agents; each has `name`, `port`, `system_prompt`, `guardrails`, `tool_names`
  - `data_sources`: list of `{ "id", "type", "engine", "connection_id" }`; for Chroma add `"collection_name"`
  - `session_store`: `{ "type": "postgres", "connection_id": "POSTGRES_APP_URL" }`
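A hypothetical `hr.json` following that structure might look like this; all names, ports, prompts, and env vars below are illustrative placeholders, not part of the shipped config:

```json
{
  "domain_id": "hr",
  "domain_name": "HR Assistant",
  "env_file_path": "config/env/hr.env",
  "orchestrator": {
    "name": "hr_orchestrator",
    "port": 8000,
    "system_prompt": "Plan steps and delegate to these agents by name: researcher, writer.",
    "guardrails": ["Do not fabricate data."],
    "tool_names": []
  },
  "agents": [
    {
      "name": "researcher",
      "port": 8001,
      "system_prompt": "Find relevant HR policy passages. Use only the provided tools.",
      "guardrails": ["Cite the source document."],
      "tool_names": ["search_docs"]
    }
  ],
  "data_sources": [
    {
      "id": "docs",
      "type": "vector_db",
      "engine": "chroma",
      "connection_id": "CHROMA_PATH",
      "collection_name": "hr_policies"
    }
  ],
  "session_store": { "type": "postgres", "connection_id": "POSTGRES_APP_URL" }
}
```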
- Environment
  Use the same `.env` or a new one (e.g. `config/env/hr.env`) and set `env_file_path` in the JSON. Ensure `POSTGRES_APP_URL`, `OPENAI_API_KEY`, and any `connection_id` env vars used in `data_sources` are set.
- Run with that domain

  ```shell
  PYTHONPATH=. python scripts/startup.py --config config/domains/hr.json
  PYTHONPATH=. python scripts/query_cli.py "Your HR question"
  ```
No changes to orchestrator or agent code — only new config. If you need a new capability (e.g. call an external API, read from another DB), add a new tool (see below) and reference it in the right agents’ `tool_names`.
Tools are LangChain tools that agents can call. The registry in `src/tools/registry.py` maps `tool_names` from config to actual tool instances, injecting data clients where needed.
Step 1 – Implement the tool
- Add a module under `src/tools/` (e.g. `src/tools/rel_db/query.py` or a new subpackage).
- Create a factory function that returns a LangChain tool (use `@tool` from `langchain_core.tools`). The factory can take a client (DB URL, retriever, etc.) so the registry can inject it.
Example (conceptually like `query_facts`):

```python
# src/tools/my_tool/thing.py
from langchain_core.tools import tool

def create_my_tool(some_client):  # client comes from build_clients()
    @tool
    def my_tool(arg: str) -> str:
        """Description for the LLM: what this tool does and when to use it."""
        # use some_client, return a string
        return "result"

    return my_tool
```

Step 2 – Register the tool
- In `src/tools/registry.py`, in `get_tools(tool_names, clients)`: for each `name` in `tool_names`, if `name == "my_tool"`, get the right client from `clients` (keyed by `data_sources[].id`), call your factory, and append the result to `result`.
Example:

```python
elif name == "my_tool":
    client = clients.get("my_data_source_id")  # id from config data_sources
    if client is None:
        continue
    result.append(create_my_tool(client))
```

Step 3 – Wire config
- In your domain JSON, ensure the tool’s data source exists under `data_sources` (so `build_clients` fills `clients["my_data_source_id"]`).
- Add `"my_tool"` to the `tool_names` list of any agent that should use it.
Agents receive only the tools listed in their `tool_names`; the orchestrator does not run tools itself.
To tailor the package to a concrete use case (e.g. “HR policy answers”, “support ticket summarization”):
- Define the roles
  Decide which agents you need (e.g. “researcher”, “analyst”, “writer”) and what each is responsible for. One agent per role is a good default.
- Define data sources
  In `data_sources`, list every DB or vector store the agents need:
  - Postgres: `{ "id": "hr_db", "type": "rel_db", "engine": "postgres", "connection_id": "POSTGRES_HR_URL" }`
  - Chroma: `{ "id": "docs", "type": "vector_db", "engine": "chroma", "connection_id": "CHROMA_PATH", "collection_name": "hr_policies" }`

  Set the corresponding env vars in `.env`.
- Assign tools per agent
  In each agent’s `tool_names`, list only the tools that role should use (e.g. researcher: `["search_docs", "query_facts"]`; writer: `[]`). Use the same tool names you register in `src/tools/registry.py`.
- Write prompts and guardrails
  - Orchestrator `system_prompt`: instruct it to understand the query, plan steps, delegate to the right agents by name, and synthesize a final answer. Mention the list of agent names.
  - Each agent’s `system_prompt`: role, responsibility, and “use only the provided tools”.
  - Guardrails: a short list of rules (e.g. “Do not fabricate data.”, “Max 500 words.”). These are passed to the agent runtime; keep them clear and enforceable.
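For illustration, an orchestrator `system_prompt` written to these guidelines might read as follows; a made-up example, not the shipped prompt, and the agent names are placeholders:

```
You are the orchestrator for the HR domain. Understand the user's query,
break it into steps, and delegate each step to one of these agents by name:
researcher, writer. After all steps complete, synthesize a single final
answer from the step results. Do not answer from your own knowledge when a
tool-equipped agent can verify the facts.
```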
- Optional: new tools
  If the use case needs a new capability (e.g. call an API, read from another system), add the tool in `src/tools/` and register it as in “Defining new tools” above, then add it to the right agents’ `tool_names`.
- Test
  Run `startup.py` with your domain config and send representative queries via `query_cli.py`. Use `--trace` to inspect requests and responses. Adjust prompts, guardrails, or tool assignments until behavior matches the use case.
MIT.
