A simple, containerized chatbot demo using Agno AgentOS.
This project is a chatbot designed to interact with agents running on Agno AgentOS (multiple AI agents with MongoDB storage).
Depending on the selected configuration, the agent can use different LLM models, tools, and instructions. The chatbot can handle both text and images. The available tools include Search (using DuckDuckGo) and MCP (using Airbnb).
This project is intended for exploration and community sharing.
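To illustrate the tool-selection idea described above, here is a minimal, hypothetical sketch (the registry layout and function names are assumptions for illustration, not this project's actual code) mapping a user's tool choice to an agent configuration:

```python
# Hypothetical sketch: map the user's tool selection to an agent config.
# The tool names mirror the project's options (Search, MCP); the dict
# layout and function names are assumptions, not the project's real code.
TOOL_REGISTRY = {
    "search": {"provider": "DuckDuckGo", "kind": "web_search"},
    "mcp": {"provider": "Airbnb", "kind": "mcp_server"},
}

def build_agent_config(model: str, tools: list[str]) -> dict:
    """Assemble a config dict for the selected model and tools."""
    unknown = [t for t in tools if t not in TOOL_REGISTRY]
    if unknown:
        raise ValueError(f"unknown tools: {unknown}")
    return {"model": model, "tools": [TOOL_REGISTRY[t] for t in tools]}

config = build_agent_config("ministral-3:8b", ["search"])
print(config["tools"][0]["provider"])  # DuckDuckGo
```

The real selection logic lives in the backend (see `backend/app/core/agent.py` in the project structure below for the actual implementation).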
- Interaction with Agents running on Agno AgentOS server (backend)
- Selection of different LLM models, tools, and instructions for the agent
- Text and image management
- Available tools:
- Search (DuckDuckGo)
- MCP (Airbnb)
Analyse image
Search the web
Airbnb MCP
- More MCPTools examples
- TTS (Text-to-Speech) Implementation
- STT (Speech-to-Text) Implementation
- Streaming AI Agent Responses
- Upload a PDF or Word file to interact with its content
- Linux (AMD & Nvidia) or Windows with WSL2 (Nvidia)
- Docker installed
- Docker Compose installed
To install the project on AMD GPUs, use `compose.amd.yml`:

```shell
docker compose -f compose.amd.yml build --no-cache
docker compose -f compose.amd.yml up -d
```

To install the project on NVIDIA GPUs, use `compose.nvidia.yml`:

```shell
docker compose -f compose.nvidia.yml build --no-cache
docker compose -f compose.nvidia.yml up -d
```

Check the running containers:

```shell
docker ps
```

You should see 4 services:
- mongodb_service
- ollama_service
- backend_agentos
- frontend_streamlit
Example output:

```
IMAGE                COMMAND                  STATUS          PORTS                                             NAMES
streamlit-frontend   "streamlit run app/m…"   Up 14 minutes   0.0.0.0:8501->8501/tcp, [::]:8501->8501/tcp       frontend_streamlit
streamlit-backend    "fastapi run app/mai…"   Up 14 minutes   0.0.0.0:8000->8000/tcp, [::]:8000->8000/tcp       backend_agentos
ollama/ollama:rocm   "/bin/ollama serve"      Up 17 minutes   0.0.0.0:11434->11434/tcp, [::]:11434->11434/tcp   ollama_service
mongo:latest         "docker-entrypoint.s…"   Up 17 minutes   0.0.0.0:27017->27017/tcp, [::]:27017->27017/tcp   mongodb_service
```
Once the services are installed and running, an LLM model is required.
You can change the names and options of the LLM models in the file `backend/app/config.yaml`.
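A hypothetical sketch of what such a configuration could look like (the key names and values below are assumptions for illustration, not the project's actual schema):

```yaml
# Hypothetical example; key names are assumptions, not the actual schema.
models:
  default: ministral-3:8b
  vision: devstral-small-2:24b
options:
  temperature: 0.7
  num_ctx: 8192
```

Check `backend/app/config.yaml` in the repository for the real keys and defaults.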
For the basic configuration, follow these steps:
Check the models available in your Ollama container:
```shell
docker exec -it ollama_service ollama list
```

Download a model to your Ollama container:
```shell
docker exec -it ollama_service ollama pull devstral-small-2:24b
docker exec -it ollama_service ollama pull ministral-3:8b
```

To uninstall the project, use the same compose file as for setup.
AMD:

```shell
docker compose -f compose.amd.yml down
```

NVIDIA:
```shell
docker compose -f compose.nvidia.yml down
```

| Technology | Description | License | Documentation |
|---|---|---|---|
| Python | A high-level programming language. | MIT | Docs |
| Docker | A platform for containerization. | Apache 2.0 | Docs |
| MongoDB | A document-oriented, operational database. | SSPL | Docs |
| Ollama | A tool for running large language models locally. | MIT | Docs |
| Agno | A unified stack for multi-agent systems. | Apache 2.0 | Docs |
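Ollama, listed above, also exposes a local REST API; `GET /api/tags` returns the models pulled into the container. A minimal stdlib sketch parsing such a response (the sample payload below is illustrative, not output from this project):

```python
import json

# Illustrative sample of the JSON shape returned by Ollama's GET /api/tags;
# the model names are placeholders, not pulled from a live container.
sample = json.dumps({
    "models": [
        {"name": "ministral-3:8b", "size": 4800000000},
        {"name": "devstral-small-2:24b", "size": 14000000000},
    ]
})

def model_names(tags_json: str) -> list[str]:
    """Extract the model names from an /api/tags response body."""
    return [m["name"] for m in json.loads(tags_json).get("models", [])]

print(model_names(sample))  # ['ministral-3:8b', 'devstral-small-2:24b']
```

Against a running container, the same data is available at `http://localhost:11434/api/tags`.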
```
project/
│
├── backend/
│   ├── app/
│   │   ├── core/
│   │   │   └── agent.py
│   │   ├── utils/
│   │   │   └── param.py
│   │   └── main.py
│   ├── requirements.txt
│   ├── Dockerfile
│   └── config.yaml
│
├── frontend/
│   ├── app/
│   │   ├── assets/
│   │   │   └── image/
│   │   │       └── logo.jpg
│   │   ├── files/
│   │   ├── pages/
│   │   │   ├── chatbot.py
│   │   │   └── pdf.py
│   │   ├── utils/
│   │   │   ├── file.py
│   │   │   ├── curl.py
│   │   │   ├── time.py
│   │   │   ├── utils.py
│   │   │   └── widget.py
│   │   └── main.py
│   ├── requirements.txt
│   └── Dockerfile
│
├── compose.amd.yml
├── compose.nvidia.yml
└── README.md
```
This project has been tested on the following hardware configuration:
- OS: Linux Solus
- CPU: AMD Ryzen 5 9600X
- GPU: AMD Radeon Pro 9700 AI & AMD RX 9070
- Storage: 2.5" SATA SSD
No authentication (password or JWT verification key) is configured for service connections (MongoDB or AgentOS).
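If you want to add basic authentication to MongoDB, the official `mongo` image supports the documented `MONGO_INITDB_ROOT_USERNAME` and `MONGO_INITDB_ROOT_PASSWORD` environment variables. A sketch of what the compose service could look like (the exact service definition here is an assumption; adapt it to `compose.amd.yml` / `compose.nvidia.yml`):

```yaml
# Sketch only: adds root credentials to the MongoDB service.
# Adapt to the actual service definition in the compose files.
services:
  mongodb_service:
    image: mongo:latest
    environment:
      MONGO_INITDB_ROOT_USERNAME: admin
      MONGO_INITDB_ROOT_PASSWORD: change-me
    ports:
      - "27017:27017"
```

The backend's MongoDB connection string would then need the same credentials (e.g. `mongodb://admin:change-me@mongodb_service:27017`).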
This project is open source, and the code may be used and modified. However, the author disclaims all responsibility, and no technical support is provided.


