A backend system that connects a Discord chatbot with a Retrieval-Augmented Generation (RAG) agent powered by Google Gemini.
The backend is responsible for API design, LLM integration, logging, observability, and containerized deployment.
This project focuses on backend architecture and operational excellence rather than frontend UI or model training.
Discord Bot
-> HTTP (POST /api/rag-query)
-> FastAPI Backend (Docker)
-> Prompt-based RAG logic
-> Google Gemini API
- Receive user questions from a Discord bot
- Process Python-related queries using a Gemini-powered RAG agent
- Return structured responses
- Provide logging and observability
- Support containerized deployment via Docker
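The "structured responses" and "logging and observability" points might look like the sketch below, using only the standard library. The field names and the JSON-line log format are assumptions, and the Gemini call is stubbed.

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("rag-backend")

def handle_query(user_id: str, query: str) -> dict:
    start = time.perf_counter()
    # The real backend would call the Gemini-powered RAG agent here
    answer = f"(stubbed answer for: {query})"
    response = {"user_id": user_id, "query": query, "answer": answer}
    # Observability: emit each request as one structured JSON log line
    logger.info(json.dumps({
        "event": "rag_query",
        "user_id": user_id,
        "latency_ms": round((time.perf_counter() - start) * 1000, 2),
    }))
    return response
```

Logging one JSON object per request keeps the logs machine-parseable, which makes it easy to feed them into whatever observability stack the deployment uses.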
Note: you need to create a Discord bot first in the Discord Developer Portal.
pip install -r requirements.txt
Create a .env file in the project root with:
GEMINI_API_KEY=your_api_key
DISCORD_TOKEN=your_discord_bot_token
uvicorn app.main:app --reload --env-file .env
python app/discord_bot.py
In Discord, type !py followed by your question to ask the bot.
Example:
!py What is a Python decorator?
Note: you may need to install Docker Desktop first.
Make sure your .env file in the project folder contains:
GEMINI_API_KEY=your_api_key
DISCORD_TOKEN=your_discord_bot_token
With Docker Desktop running, open a terminal, navigate to the project folder, and build the image:
docker build -t discord-rag-backend .
In the same terminal, run:
docker run --env-file .env -p 8000:8000 discord-rag-backend
Keep that terminal open, then open a new terminal in the project folder and test the endpoint (the ^ line continuations below are for Windows cmd; use \ on macOS/Linux):
curl -X POST http://localhost:8000/api/rag-query ^
-H "Content-Type: application/json" ^
-d "{\"user_id\":\"test\",\"query\":\"What is a Python decorator?\"}"
You should get a JSON answer in response.
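The same smoke test can be run from Python with only the standard library. This is a sketch of the request the curl command sends; the helper names are hypothetical, and actually sending it requires the container from the previous step to be running.

```python
import json
import urllib.request

def build_rag_request(user_id: str, query: str,
                      base_url: str = "http://localhost:8000") -> urllib.request.Request:
    # Build the same POST the curl command sends to /api/rag-query
    payload = json.dumps({"user_id": user_id, "query": query}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/api/rag-query",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def post_rag_query(user_id: str, query: str) -> dict:
    # Send the request and decode the JSON response (needs the backend running)
    with urllib.request.urlopen(build_rag_request(user_id, query)) as resp:
        return json.loads(resp.read())
```

For example, post_rag_query("test", "What is a Python decorator?") should return the same JSON body the curl command prints.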
Open another terminal in the project folder and start the bot:
python app/discord_bot.py
Open Discord and type !py followed by your question to ask the bot.