A multi-agent system built with Agno, deployable to Railway.
```shell
# Clone the repo
git clone https://github.com/agno-agi/demo.git agno-demo
cd agno-demo

# Add OPENAI_API_KEY
cp example.env .env
# Edit .env and add your key

# Start the application
docker compose up -d --build

# Load documents for the knowledge agent
docker exec -it agno-demo-api python -m agents.knowledge.scripts.load_knowledge
```

Confirm AgentOS is running at http://localhost:8000/docs.
- Open os.agno.com and log in
- Add OS → Local → http://localhost:8000
- Click "Connect"
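If you would rather verify the server from a script than a browser, here is a minimal sketch (it assumes the default port 8000; `/docs` is the endpoint mentioned above):

```python
import urllib.request

# Sketch: check whether AgentOS is reachable by fetching the docs
# page. Adjust the base URL if you changed PORT in .env.
def agentos_up(base: str = "http://localhost:8000") -> bool:
    try:
        with urllib.request.urlopen(f"{base}/docs", timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False
```

A `False` return means the container is not up yet or is listening on a different port.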
Requires:

- Railway CLI
- `OPENAI_API_KEY` set in your environment

```shell
railway login
./scripts/railway_up.sh
```

The script provisions PostgreSQL, configures environment variables, and deploys your application.
- Open os.agno.com
- Click "Add OS" → "Live"
- Enter your Railway domain
```shell
railway logs --service agno-demo   # View logs
railway open                       # Open dashboard
railway up --service agno-demo -d  # Update after changes
```

To stop services:

```shell
railway down --service agno-demo
railway down --service pgvector
```

Load data and knowledge:
```shell
# Knowledge agent — Agno documentation
railway run python -m agents.knowledge.scripts.load_knowledge

# Dash — table schemas, validated queries, and business rules
railway run python -m agents.dash.scripts.load_knowledge

# Dash — F1 data (1950-2020)
railway run python -m agents.dash.scripts.load_data

# Scout — source metadata, routing rules, and patterns
railway run python -m agents.scout.scripts.load_knowledge
```

View logs:

```shell
railway logs --service agno-demo
```

Run commands in production:

```shell
railway run python -m app.main  # CLI mode
```

Redeploy after changes:

```shell
railway up --service agno-demo -d
```

Open dashboard:

```shell
railway open
```

| Agent | Description |
|---|---|
| Knowledge | Answers questions about Agno using Agentic RAG |
| MCP | Queries the Agno docs using the Agno MCP Server |
| Dash | Self-learning data analyst — queries structured data with SQL and learns from validated results |
| Gcode | Lightweight coding agent — writes, reviews, and iterates on code in an isolated workspace |
| Pal | Personal knowledge agent — remembers preferences, saves notes, and manages user context |
| Scout | Enterprise knowledge agent — searches and synthesizes internal documents |
| Seek | Deep research agent — conducts multi-source research and produces structured reports |
| Guard | HITL demo — confirms restarts, collects user input, runs external diagnostics |
| Relay | User feedback demo — collects structured preferences via ask_user before planning |
| Sentinel | Approvals demo — gates refunds, deletions, and exports behind approval workflows |
### Knowledge — framework Q&A using RAG

Searches embedded Agno documentation using hybrid vector + keyword search. Answers developer questions about the Agno framework with working code examples.

Knowledge: `agno_knowledge_agent_docs`

Operations:

```shell
docker exec -it agno-demo-api python -m agents.knowledge.scripts.load_knowledge
```

Try it:
- "What is Agno?"
- "Tell me about Learning Machines"
- "Summarize the key features of Agno"
### MCP — framework expert via live docs

Queries docs.agno.com directly through MCP, so answers always reflect the latest documentation. No local knowledge base needed.

Tools: MCP connection to `docs.agno.com/mcp`
Try it:
- "What is Agno?"
- "Tell me about Learning Machines"
- "Summarize the key features of Agno"
### Dash — self-learning data analyst

SQL-based data agent that provides insights, not just query results. Ships with Formula 1 race data (1950–2020). Learns from errors, type gotchas, and user corrections.

Tools: SQL queries, schema introspection, Exa web search
Knowledge: `dash_knowledge`, `dash_learnings`

Operations:

```shell
# Load table schemas, validated queries, and business rules
docker exec -it agno-demo-api python -m agents.dash.scripts.load_knowledge

# Load F1 data (1950-2020)
docker exec -it agno-demo-api python -m agents.dash.scripts.load_data
```

Try it:
- "Who won the most F1 races in 2019?"
- "Show the points gap between Hamilton and Verstappen by season"
- "Which circuits had the most DNFs in 2020?"
### Gcode — lightweight coding agent

Writes, reviews, and iterates on code in an isolated `/workspace` directory. Each project is a git repo with worktree-based task isolation. Learns project conventions and gotchas as it works.

Tools: `CodingTools`, `ReasoningTools`
Knowledge: `gcode_knowledge`, `gcode_learnings`
Try it:
- "Build a command-line todo app: add, list, done, delete. Persist to JSON. Write pytest tests and run them."
- "Create a Python library that parses and evaluates math expressions like '2 + 3 * (4 - 1)', with tests."
- "Build a URL shortener with an in-memory store: shorten, resolve, list stats. Include a demo script."
### Pal — personal knowledge agent

Saves notes, bookmarks, people, and projects into `pal_`-prefixed tables it creates on demand. Over time the database becomes a structured map of the user's world. Also searches the web via Exa.

Tools: SQL (dynamic table creation), Exa web search
Knowledge: `pal_knowledge`, `pal_learnings`
Try it:
- "Save a note: quarterly planning meeting moved to Friday 3pm"
- "What notes have I saved recently?"
- "Save a bookmark: https://docs.agno.com -- Agno documentation"
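The dynamic-table pattern described above can be sketched in a few lines. This is illustrative only: Pal actually targets PostgreSQL, and the column layout here is an assumption, not Pal's real schema.

```python
import sqlite3

# Illustrative sketch of the dynamic-table pattern (Pal itself uses
# PostgreSQL; table columns here are assumptions). Each content kind
# gets its own pal_-prefixed table, created on first use.
conn = sqlite3.connect(":memory:")

def save(kind: str, text: str) -> None:
    table = f"pal_{kind}"  # every table is pal_-prefixed
    conn.execute(
        f'CREATE TABLE IF NOT EXISTS "{table}" (id INTEGER PRIMARY KEY, text TEXT)'
    )
    conn.execute(f'INSERT INTO "{table}" (text) VALUES (?)', (text,))

save("notes", "quarterly planning meeting moved to Friday 3pm")
print(conn.execute("SELECT text FROM pal_notes").fetchall())
```

Saving a new kind ("bookmarks", "people", ...) simply creates another `pal_`-prefixed table, which is how the database grows into a map of the user's world.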
### Scout — enterprise knowledge navigator

Finds answers — not just files — across internal documents. Navigates folder structures, reads full documents, and remembers which search paths work. Learns routing rules and source locations over time.

Tools: File operations, content search, source navigation, Exa web search
Knowledge: `scout_knowledge`, `scout_learnings`

Operations:

```shell
# Load source metadata, routing rules, and patterns
docker exec -it agno-demo-api python -m agents.scout.scripts.load_knowledge

# Recreate from scratch
docker exec -it agno-demo-api python -m agents.scout.scripts.load_knowledge --recreate
```

Try it:
- "What is our PTO policy?"
- "Find the incident response runbook"
- "How do I request a new laptop?"
### Seek — deep research agent

Conducts exhaustive multi-source research and produces structured, well-sourced reports. Uses web search, company/people research, URL crawling, and parallel search — all via Exa. Learns which sources are reliable and what research patterns work.

Tools: Web search, company research, people search, URL crawling, parallel search (all via Exa)
Knowledge: `seek_knowledge`, `seek_learnings`
Try it:
- "What is Agno?"
- "Compare agno to other agent frameworks"
- "How do I build the best agentic system?"
### Guard — human-in-the-loop demo

IT operations helpdesk agent demonstrating all three HITL patterns: confirmation (service restarts), user input (ticket priority), and external execution (diagnostics).

Tools: `restart_service`, `create_support_ticket`, `run_diagnostic`, `UserFeedbackTools`
Try it:
- "The auth service is timing out, can you check it?"
- "Create a support ticket for the slow dashboard"
- "Run diagnostics on the payment service"
### Relay — user feedback demo

Planning concierge that collects structured preferences via `ask_user` before making recommendations. Demonstrates Agno's `UserFeedbackTools` for presenting choices with predefined options.

Tools: `UserFeedbackTools` (`ask_user`)
Try it:
- "Plan a weekend trip to Tokyo"
- "Help me plan a team dinner"
- "Organize a birthday party for 20 people"
### Sentinel — approvals demo

Compliance and finance agent that gates sensitive operations behind approval workflows. Refunds and account deletions require explicit approval; data exports and reports are logged to an audit trail.

Tools: `process_refund`, `delete_user_account`, `export_customer_data`, `generate_report`
Try it:
- "Process a $150 refund for customer C-1234"
- "Delete the account for user U-5678"
- "Export all data for customer C-9012"
| Team | Description |
|---|---|
| Research Team | Coordinates Seek and Scout to combine external research with internal knowledge |
| Workflow | Description |
|---|---|
| Daily Brief | Scheduled morning briefing — calendar, email, news, and priorities |
### Add your own agent

- Create `agents/my_agent/` with three files:

  `agents/my_agent/agent.py`:

  ```python
  from agno.agent import Agent
  from agno.models.openai import OpenAIResponses

  from db import get_postgres_db

  my_agent = Agent(
      id="my-agent",
      name="My Agent",
      model=OpenAIResponses(id="gpt-5.2"),
      db=get_postgres_db(),
      instructions="You are a helpful assistant.",
      add_datetime_to_context=True,
      add_history_to_context=True,
      read_chat_history=True,
      num_history_runs=5,
      markdown=True,
      enable_agentic_memory=True,
  )
  ```

  `agents/my_agent/__init__.py`:

  ```python
  from agents.my_agent.agent import my_agent
  ```

  `agents/my_agent/__main__.py`:

  ```python
  from agents.my_agent.agent import my_agent

  my_agent.print_response("Hello!", stream=True)
  ```

- Register it in `app/main.py`:

  ```python
  from agents.my_agent import my_agent

  agent_os = AgentOS(
      agents=[..., my_agent],
      ...
  )
  ```

- Add quick prompts to `app/config.yaml` using the agent's `id`.

- Restart:

  ```shell
  docker compose restart
  ```
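The quick-prompt step above depends on the schema of `app/config.yaml`, which is defined by this repo. As a purely hypothetical sketch (the field names here are assumptions — copy the structure of the entries already in the file), an entry keyed by the agent's `id` might look like:

```yaml
# Hypothetical shape only -- mirror the existing entries in app/config.yaml.
my-agent:
  quick_prompts:
    - "Hello!"
    - "What can you do?"
```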
### Add tools to an agent

Agno includes 100+ tool integrations. See the full list.

```python
from agno.tools.slack import SlackTools
from agno.tools.google_calendar import GoogleCalendarTools

my_agent = Agent(
    ...,
    tools=[
        SlackTools(),
        GoogleCalendarTools(),
    ],
)
```

### Add dependencies
- Edit `pyproject.toml`
- Regenerate requirements: `./scripts/generate_requirements.sh`
- Rebuild: `docker compose up -d --build`
### Use a different model provider

- Add your API key to `.env` (e.g., `ANTHROPIC_API_KEY`)
- Update agents to use the new provider:

  ```python
  from agno.models.anthropic import Claude

  model=Claude(id="claude-sonnet-4-5")
  ```

- Add the `anthropic` dependency in `pyproject.toml`
For development without Docker:
```shell
# Install uv
curl -LsSf https://astral.sh/uv/install.sh | sh

# Setup environment
./scripts/venv_setup.sh
source .venv/bin/activate

# Start PostgreSQL (required)
docker compose up -d agno-demo-db

# Run the app
python -m app.main
```

| Variable | Required | Default | Description |
|---|---|---|---|
| `OPENAI_API_KEY` | Yes | - | OpenAI API key |
| `GITHUB_TOKEN` | No | - | GitHub PAT for Gcode to clone private repos (setup) |
| `PORT` | No | `8000` | API server port |
| `DB_HOST` | No | `localhost` | Database host |
| `DB_PORT` | No | `5432` | Database port |
| `DB_USER` | No | `ai` | Database user |
| `DB_PASS` | No | `ai` | Database password |
| `DB_DATABASE` | No | `ai` | Database name |
| `RUNTIME_ENV` | No | `prd` | Set to `dev` for auto-reload |
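As an illustration of how the defaults in the table fit together, the `DB_*` variables resolve into a single connection URL. This is a sketch, not the repo's actual `db.py`; the `postgresql+psycopg` scheme is an assumption, so check `db.py` for the driver actually used.

```python
import os

# Sketch: resolve the DB_* variables from the table above, falling
# back to the documented defaults when a variable is unset.
def postgres_url() -> str:
    host = os.getenv("DB_HOST", "localhost")
    port = os.getenv("DB_PORT", "5432")
    user = os.getenv("DB_USER", "ai")
    password = os.getenv("DB_PASS", "ai")
    database = os.getenv("DB_DATABASE", "ai")
    # URL scheme is an assumption; see db.py for the real one.
    return f"postgresql+psycopg://{user}:{password}@{host}:{port}/{database}"

print(postgres_url())
```

With nothing set, this yields the all-defaults URL pointing at the local `agno-demo-db` container.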
### Giving Gcode access to GitHub
Gcode can clone and push to GitHub repos when you provide a Fine-grained Personal Access Token. Do not use a Classic PAT — fine-grained tokens let you scope access to specific repos.
- Go to GitHub → Settings → Developer settings → Personal access tokens → Fine-grained tokens
- Click Generate new token
| Field | Value |
|---|---|
| Token name | gcode (or whatever helps you identify it) |
| Expiration | 90 days (set a calendar reminder to rotate) |
| Repository access | Only select repositories — pick the repos Gcode should work on |
| Permission | Access | Why |
|---|---|---|
| Contents | Read and write | Clone, read files, commit, push |
| Metadata | Read-only | Required by GitHub for all token operations |
That's it — two permissions. Add more only if needed (e.g., Pull requests read/write for opening PRs).
Add it to your `.env` file:

```
GITHUB_TOKEN=github_pat_xxxxxxxxxxxxxxxxxxxxx
```

The container's git credential helper reads this from the environment at runtime. The token is never written to disk — Gcode just runs `git clone https://github.com/...` and authentication happens transparently.
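For the curious, git credential helpers speak a simple key=value protocol on stdout. A hypothetical helper that serves the token from the environment could look like this (a sketch, not the container's actual helper script; GitHub largely ignores the username when a PAT is supplied as the password):

```python
import os

# Hypothetical credential-helper sketch (not the container's real
# script): git invokes the helper and reads key=value lines on stdout,
# so the token never needs to be written to disk.
def credential_response(token: str) -> str:
    return f"username=x-access-token\npassword={token}\n"

print(credential_response(os.getenv("GITHUB_TOKEN", "github_pat_example")), end="")
```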
When a token expires, generate a new one with the same settings, update `.env`, and restart: `docker compose up -d`.